[Figure: illustration of the law of large numbers – a graph showing how the proportions of heads and tails in a sequence of coin flips tend to even out as more and more flips are performed (e.g. over 10 sets of 100 flips).]

Let X be a real-valued random variable, and let $X_1, X_2, X_3, \ldots$ be an infinite sequence of independent and identically distributed copies of X. Let $\overline{X}_n := \frac{1}{n}(X_1 + \ldots + X_n)$ be the empirical averages of this sequence. A fundamental theorem in probability theory is the law of large numbers, which comes in both a weak and a strong form:

Weak law of large numbers. Suppose that the first moment ${\bf E}|X|$ of X is finite. Then $\overline{X}_n$ converges in probability to ${\bf E} X$: for every $\varepsilon > 0$, $\lim_{n \to \infty} {\bf P}(|\overline{X}_n - {\bf E} X| \geq \varepsilon) = 0$.

Strong law of large numbers. Suppose that the first moment ${\bf E}|X|$ of X is finite. Then $\overline{X}_n$ converges almost surely to ${\bf E} X$: ${\bf P}(\lim_{n \to \infty} \overline{X}_n = {\bf E} X) = 1$.
[The concepts of convergence in probability and almost sure convergence in probability theory are specialisations of the concepts of convergence in measure and pointwise convergence almost everywhere in measure theory.]

(If one strengthens the first moment assumption to that of finiteness of the second moment ${\bf E}|X|^2$, then much more precise results than the law of large numbers become available, such as the central limit theorem, but I will not discuss those results here.)

The weak law is easy to prove, but the strong law (which of course implies the weak law, by Egoroff's theorem) is more subtle, and in fact the proof of this law (assuming just finiteness of the first moment) usually only appears in advanced graduate texts. So I thought I would present a proof here of both laws, which proceeds by the standard techniques of the moment method and truncation. The emphasis in this exposition will be on motivation and methods rather than brevity and strength of results; there do exist proofs of the strong law in the literature that have been compressed down to the size of one page or less, but this is not my goal here.
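To make the statement concrete, here is a small simulation (a sketch, not part of the original exposition; the coin-flip distribution, seed, and sample sizes are arbitrary choices) showing the empirical averages of fair coin flips settling down towards the mean 1/2:

```python
import random

def empirical_average(n, rng):
    """Average of n fair coin flips (1 = heads, 0 = tails)."""
    return sum(rng.random() < 0.5 for _ in range(n)) / n

rng = random.Random(0)
for n in (10, 100, 1000, 10000):
    avg = empirical_average(n, rng)
    print(f"n = {n:>5}: average = {avg:.4f}, deviation from 1/2 = {abs(avg - 0.5):.4f}")
```

The weak and strong laws guarantee, in their respective senses, that the deviation column tends to zero as n grows.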
— The moment method —

The moment method seeks to control the tail probabilities of a random variable X (i.e. the probability that it fluctuates far from its mean) by means of moments, and in particular the zeroth, first or second moment. The reason that this method is so effective is because the first few moments can often be computed rather precisely. The first moment method usually employs Markov's inequality

${\bf P}(|X| \geq \lambda) \leq \frac{1}{\lambda} {\bf E}|X| \quad (1)$

(which follows by taking expectations of the pointwise inequality $\lambda 1_{|X| \geq \lambda} \leq |X|$), whereas the second moment method employs some variant of Chebyshev's inequality, such as

${\bf P}(|X| \geq \lambda) \leq \frac{1}{\lambda^2} {\bf E}|X|^2 \quad (2)$

(note that (2) is just (1) applied to the random variable $|X|^2$ and the threshold $\lambda^2$).

Generally speaking, to compute the first moment one usually employs linearity of expectation

${\bf E}(X_1 + \ldots + X_n) = {\bf E} X_1 + \ldots + {\bf E} X_n,$

whereas to compute the second moment one also needs to understand covariances (which are particularly simple if one assumes pairwise independence), thanks to identities such as

${\bf E}(X_1 + \ldots + X_n)^2 = \sum_{i=1}^n {\bf E} X_i^2 + 2 \sum_{1 \leq i < j \leq n} {\bf E} X_i X_j$

or the normalised variant

${\bf Var}(X_1 + \ldots + X_n) = \sum_{i=1}^n {\bf Var}(X_i) + 2 \sum_{1 \leq i < j \leq n} {\bf Cov}(X_i, X_j). \quad (3)$
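Markov's and Chebyshev's inequalities (1), (2) can be sanity-checked numerically (a sketch; the exponential distribution, seed, and thresholds are arbitrary choices, and the moments ${\bf E}|X| = 1$, ${\bf E}|X|^2 = 2$ used for the bounds are the known exact values for that distribution):

```python
import random

rng = random.Random(1)
# Exponential(1) sample: E|X| = 1 and E|X|^2 = 2 exactly.
sample = [rng.expovariate(1.0) for _ in range(100_000)]

for lam in (2.0, 4.0, 8.0):
    tail = sum(x >= lam for x in sample) / len(sample)  # empirical P(|X| >= lam)
    markov = 1.0 / lam        # (1): E|X| / lam
    chebyshev = 2.0 / lam**2  # (2): E|X|^2 / lam^2
    assert tail <= markov and tail <= chebyshev
    print(f"lam = {lam}: tail = {tail:.4f} <= Markov {markov:.4f}, Chebyshev {chebyshev:.4f}")
```

Note that for large $\lambda$ the second moment bound (2) is sharper than (1), which is the tradeoff the text exploits below.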
Higher moments can in principle give more precise information, but often require stronger assumptions on the objects being studied, such as joint independence.

Here is a basic application of the first moment method:

Borel-Cantelli lemma. Let $E_1, E_2, E_3, \ldots$ be a sequence of events such that $\sum_{n=1}^\infty {\bf P}(E_n)$ is finite. Then almost surely, only finitely many of the events $E_n$ occur.

Proof. Let $I(E_n)$ denote the indicator function of the event $E_n$. The number of events that occur is $\sum_{n=1}^\infty I(E_n)$, and by linearity of expectation (and monotone convergence) the expectation of this quantity is $\sum_{n=1}^\infty {\bf P}(E_n)$, which is finite by hypothesis. By Markov's inequality (1), the probability that at least $\lambda$ of the events occur is at most $\frac{1}{\lambda} \sum_{n=1}^\infty {\bf P}(E_n)$. Letting $\lambda \to \infty$, we conclude that almost surely only finitely many of the events occur. □

Returning to the law of large numbers, the first moment method gives the following tail bound:

Lemma 1 (First moment tail bound). If ${\bf E}|X|$ is finite, then

${\bf P}( |\overline{X}_n| \geq \lambda ) \leq \frac{{\bf E}|X|}{\lambda}.$
Proof. By the triangle inequality, $|\overline{X}_n| \leq \overline{|X|}_n$, and by linearity of expectation the mean of $\overline{|X|}_n$ is ${\bf E}|X|$. The claim now follows from Markov's inequality (1). □

Lemma 1 is not strong enough by itself to prove the law of large numbers in either weak or strong form – in particular, it does not show any improvement as n gets large – but it will be useful to handle one of the error terms in those proofs. We can get stronger bounds than Lemma 1 – in particular, bounds which improve with n – at the expense of stronger assumptions on X.
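The advertised improvement with n can be previewed empirically: applying the second moment method to an average of n independent draws yields a tail bound that shrinks like 1/n (a sketch; the uniform distribution, seed, threshold, and trial counts are arbitrary choices):

```python
import random

def tail_prob(n, lam, trials, rng):
    """Estimate P(|average of n Uniform(0,1) draws - 1/2| >= lam)."""
    hits = 0
    for _ in range(trials):
        avg = sum(rng.random() for _ in range(n)) / n
        hits += abs(avg - 0.5) >= lam
    return hits / trials

rng = random.Random(2)
for n in (10, 40, 160):
    p = tail_prob(n, lam=0.1, trials=2000, rng=rng)
    bound = (1 / 12) / (n * 0.1**2)  # Var(Uniform(0,1)) = 1/12, divided by n * lam^2
    print(f"n = {n:>3}: empirical tail = {p:.4f}, second moment bound = {bound:.4f}")
```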
Lemma 2 (Second moment tail bound). If ${\bf E}|X|^2$ is finite, then

${\bf P}( |\overline{X}_n - {\bf E} X| \geq \lambda ) \leq \frac{{\bf E}|X - {\bf E} X|^2}{n \lambda^2}.$

Proof. A standard computation, exploiting (3) and the pairwise independence of the $X_i$, shows that the variance of the empirical average $\overline{X}_n$ equals $\frac{1}{n} {\bf Var}(X)$. The claim now follows from Chebyshev's inequality (2). □

In the opposite direction, there is the zeroth moment method, more commonly known as the union bound

${\bf P}(E_1 \vee \ldots \vee E_n) \leq \sum_{j=1}^n {\bf P}(E_j),$

or equivalently (to explain the terminology "zeroth moment")

${\bf E}(X_1 + \ldots + X_n)^0 \leq {\bf E} X_1^0 + \ldots + {\bf E} X_n^0$

for any non-negative random variables $X_1, \ldots, X_n \geq 0$ (with the convention that $x^0 = 0$ when $x = 0$). Applying this to the empirical averages, we obtain the zeroth moment tail estimate

${\bf P}(\overline{X}_n \neq 0) \leq n {\bf P}(X \neq 0). \quad (4)$

Just as the second moment bound (Lemma 2) is only useful when one has good control on the second moment (or variance) of X, the zeroth moment tail estimate (4) is only useful when we have good control on the zeroth moment ${\bf E}|X|^0 = {\bf P}(X \neq 0)$, i.e. when X is mostly zero.

— Truncation —

The second moment tail bound (Lemma 2) already gives the weak law of large numbers in the case when X has finite second moment (or equivalently, finite variance). In general, if all one knows about X is that it has finite first moment, then we cannot conclude that X has finite second moment. However, we can perform a truncation
$X = X_{\leq N} + X_{> N} \quad (5)$

of X at any desired threshold N, where $X_{\leq N} := X 1_{|X| \leq N}$ and $X_{> N} := X 1_{|X| > N}$. The first term $X_{\leq N}$ is bounded in magnitude by N, and hence also we have finite variance:

${\bf E}|X_{\leq N}|^2 \leq N {\bf E}|X|. \quad (6)$

The second term $X_{> N}$ has small first moment; indeed, by the dominated convergence theorem we have

${\bf E}|X_{> N}| \to 0 \text{ as } N \to \infty. \quad (7)$

By the triangle inequality, we conclude that the first term $X_{\leq N}$ has expectation close to that of X:

${\bf E} X_{\leq N} \to {\bf E} X \text{ as } N \to \infty. \quad (8)$
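The truncation estimates (6) and (7) can be observed on a heavy-tailed sample with finite first moment but infinite variance (a sketch; the Pareto distribution, seed, and thresholds are arbitrary choices):

```python
import random

rng = random.Random(3)
# Pareto with tail index 1.5: finite first moment, infinite second moment.
sample = [rng.paretovariate(1.5) for _ in range(200_000)]
mean_abs = sum(sample) / len(sample)  # empirical E|X|

prev_tail = float("inf")
for N in (10, 100, 1000):
    tail_mean = sum(x for x in sample if x > N) / len(sample)      # E|X_{>N}|
    trunc_sq = sum(x * x for x in sample if x <= N) / len(sample)  # E|X_{<=N}|^2
    assert trunc_sq <= N * mean_abs  # (6): E|X_{<=N}|^2 <= N E|X|
    assert tail_mean < prev_tail     # (7): tail contribution shrinks as N grows
    prev_tail = tail_mean
    print(f"N = {N:>4}: E|X_>N| = {tail_mean:.4f}, E|X_<=N|^2 = {trunc_sq:.2f} <= {N * mean_abs:.1f}")
```

Even though ${\bf E}|X|^2$ is infinite here, every truncation has finite variance, which is what allows Lemma 2 to be applied to the truncated part.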
These are all the tools we need to prove the weak law of large numbers:

Proof of weak law. Let $\varepsilon, \delta > 0$ be arbitrary; it suffices to show that $\overline{X}_n = {\bf E} X + O(\varepsilon)$ with probability $1 - O(\delta)$ once n is sufficiently large. From (7), (8), we can find a threshold N (depending on $\varepsilon$ and $\delta$) such that ${\bf E}|X_{>N}| = O(\varepsilon \delta)$ and ${\bf E} X_{\leq N} = {\bf E} X + O(\varepsilon)$; we then use (5) to split each $X_i$ as $(X_i)_{\leq N} + (X_i)_{>N}$.

From the first moment tail bound (Lemma 1), we know that the empirical average of the $(X_i)_{>N}$ is $O(\varepsilon)$ with probability $1 - O(\delta)$. From the second moment tail bound (Lemma 2) and (6), we know that the empirical average of the $(X_i)_{\leq N}$ is ${\bf E} X_{\leq N} + O(\varepsilon) = {\bf E} X + O(\varepsilon)$ with probability $1 - O(\delta)$, once n is sufficiently large depending on N, $\varepsilon$, $\delta$. Adding the two averages, the claim follows. □

— The strong law —

The strong law can be proven by pushing the above methods a bit further, and using a few more tricks.

The first trick is to observe that to prove the strong law, it suffices to do so for non-negative random variables $X \geq 0$; indeed, a general random variable X of finite first moment can be written as the difference $X = \max(X, 0) - \max(-X, 0)$ of two non-negative random variables of finite first moment.

Once X is non-negative, we see that the empirical averages $\overline{X}_n$ cannot decrease too quickly in n: since the numerator $X_1 + \ldots + X_n$ is non-decreasing in n, we have $\overline{X}_m \geq (1 - O(\varepsilon)) \overline{X}_n$ whenever $n \leq m \leq (1 + \varepsilon) n$.
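The quasimonotonicity invoked in the next step rests on nothing more than the numerator $X_1 + \ldots + X_n$ being non-decreasing for non-negative summands, and can be checked exhaustively on a sample (a sketch; the exponential sample and its size are arbitrary choices):

```python
import random

rng = random.Random(4)
xs = [rng.expovariate(1.0) for _ in range(1000)]  # a non-negative sample

# Prefix sums S[n] = x_1 + ... + x_n and running averages A[n-1] = S[n] / n.
S = [0.0]
for x in xs:
    S.append(S[-1] + x)
A = [S[n] / n for n in range(1, len(S))]

# For m <= n, S_m <= S_n gives A_m = S_m / m <= S_n / m = (n / m) A_n,
# so averages over nearby indices differ by at most a 1 + O(eps) factor.
for m in range(1, len(A) + 1):
    for n in range(m, len(A) + 1):
        assert A[m - 1] <= (n / m) * A[n - 1] + 1e-12
print("quasimonotonicity verified for all 1 <= m <= n <=", len(A))
```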
Because of this quasimonotonicity, we can sparsify the set of n for which we need to prove the strong law. More precisely, it suffices to show

Strong law of large numbers, reduced version. Let X be a non-negative random variable with ${\bf E} X$ finite, and let $1 \leq n_1 \leq n_2 \leq n_3 \leq \ldots$ be a lacunary sequence of integers (thus $n_{j+1}/n_j \geq c$ for some $c > 1$ and all sufficiently large j). Then the averages $\overline{X}_{n_j}$ converge almost surely to ${\bf E} X$.
Indeed, if we could prove the reduced version, then on applying that version to the lacunary sequence $n_j := \lfloor (1 + \varepsilon)^j \rfloor$ and invoking the quasimonotonicity, we would see that almost surely the empirical averages $\overline{X}_n$ eventually deviate from ${\bf E} X$ by at most a multiplicative factor of $1 + O(\varepsilon)$; setting $\varepsilon := 1/m$ for $m = 1, 2, 3, \ldots$ (and using the fact that a countable intersection of almost sure events remains almost sure) then gives the full strong law.

[This sparsification trick is philosophically related to the dyadic pigeonhole principle; see an old short story of mine on this latter topic. One could easily sparsify further, so that the lacunarity constant c is large instead of small, but this turns out not to help us too much in what follows.]

Now that we have sparsified the sequence, it becomes economical to apply the Borel-Cantelli lemma. Indeed, by many applications of that lemma we see that it suffices to show that
$\sum_{j=1}^\infty {\bf P}\big( |\overline{X}_{n_j} - {\bf E} X| \geq \varepsilon \big) < \infty$

for non-negative X of finite first moment, any lacunary sequence $1 \leq n_1 \leq n_2 \leq \ldots$ and any $\varepsilon > 0$.

[If we did not first sparsify the sequence, the Borel-Cantelli lemma would have been too expensive to apply; see Remark 2 below. Generally speaking, Borel-Cantelli is only worth applying when one expects the probabilities of the events in question to be summably small.]

At this point we go back and apply the methods that already worked to give the weak law. Namely, to estimate each of the tail probabilities ${\bf P}(|\overline{X}_{n_j} - {\bf E} X| \geq \varepsilon)$, we perform a truncation (5) at some threshold $N_j$ depending on j, splitting $X = X_{\leq N_j} + X_{> N_j}$.

We should at least pick $N_j$ large enough so that ${\bf E} X_{\leq N_j} = {\bf E} X + O(\varepsilon)$, as in (8), so that the error caused by replacing the mean of X by that of its truncation is negligible. The second moment tail bound (Lemma 2), together with (6), then bounds the contribution of the truncated parts $(X_i)_{\leq N_j}$ by $O\big( \frac{1}{\varepsilon^2} \frac{1}{n_j} {\bf E}|X_{\leq N_j}|^2 \big)$.

Now we look at the contribution of the tail parts $(X_i)_{> N_j}$. The first moment tail bound (Lemma 1) would give a bound of $O( \frac{1}{\varepsilon} {\bf E} X_{> N_j} )$ here, but this need not be summable in j. But there is one last card to play, which is the zeroth moment method tail estimate (4). As mentioned earlier, this bound is lousy in general – but is very good when X is mostly zero, which is precisely the situation with $X_{> N_j}$: the empirical average of the $(X_i)_{> N_j}$ is non-zero with probability at most $n_j {\bf P}(X > N_j)$.

Putting this all together, we see that

${\bf P}\big( |\overline{X}_{n_j} - {\bf E} X| \geq \varepsilon \big) \ll \frac{1}{\varepsilon^2} \frac{1}{n_j} {\bf E}|X_{\leq N_j}|^2 + n_j {\bf P}(X > N_j).$

Summing this in j, we see that we will be done as soon as we figure out how to choose $N_j$ so that

$\sum_{j=1}^\infty \frac{1}{n_j} {\bf E}|X_{\leq N_j}|^2$
and
$\sum_{j=1}^\infty n_j {\bf P}(X > N_j)$

are both finite. (As usual, we have a tradeoff: making the $N_j$ larger makes the second quantity easier to control, at the expense of the first, and vice versa.)

Based on the discussion earlier, it is natural to try setting $N_j := n_j$. With this choice, one can interchange the sum and the expectation and exploit the lacunarity of the $n_j$: for each fixed $x > 0$ one has $\sum_{j: n_j \geq x} \frac{1}{n_j} \ll \frac{1}{x}$ and $\sum_{j: n_j < x} n_j \ll x$ by summing geometric series, and hence

$\sum_{j=1}^\infty \frac{1}{n_j} {\bf E}|X_{\leq n_j}|^2 \ll {\bf E} X \quad \text{and} \quad \sum_{j=1}^\infty n_j {\bf P}(X > n_j) \ll {\bf E} X$

(where the implied constant here depends on the sequence $n_1, n_2, \ldots$, and in particular on the lacunarity constant c). Since ${\bf E} X$ is finite, both quantities are finite, and the strong law follows. □

Remark 1. The above proof in fact shows that the strong law of large numbers holds even if one only assumes pairwise independence of the $X_n$, rather than joint independence.

Remark 2. It is essential that the random variables $X_1, X_2, X_3, \ldots$ are reused from one empirical average $\overline{X}_n$ to the next, in order to obtain the crucial quasimonotonicity property; without that property the sparsification step is unavailable, and summing the tail probabilities over all n (rather than over a lacunary subsequence) need not converge under a first moment assumption alone.

Remark 3. From the perspective of interpolation theory, one can view the above argument as an interpolation argument, establishing an $L^1$ estimate by interpolating between an $L^2$ estimate (Lemma 2) and an $L^0$ estimate (the zeroth moment tail estimate (4)).

Remark 4. By viewing the sequence $X_1, X_2, X_3, \ldots$ as a stationary process, and thus as a special case of a measure-preserving system, one can view the weak and strong laws of large numbers as special cases of the mean and pointwise ergodic theorems respectively.

Source: terrytao.wordpress.com