A Natural Introduction to Probability Theory (ISBN 978-3-7643-8723-5). Author: Ronald Meester. The book discovers the theory along the way, rather than presenting it matter-of-factly at the beginning; contains many original and surprising examples; gives a rigorous study without any measure theory; is compactly written, but nevertheless very readable; and takes a probabilistic approach, appealing to intuition, introducing technical machinery only when necessary.
Contents: Front Matter (pages i-xi); Random Variables and Random Vectors; Random Walk; Limit Theorems; Continuous Random Variables and Vectors; Infinitely Many Repetitions.

Indeed, the probability that the point ends up in any given interval of length r should be r. The following striking example tells us that there is no hope that such an approach could work without problems.
Suppose that we want to select a random point from the surface of the unit sphere in three dimensions. For any subset A of the surface of the unit sphere, we would like to have the property that the probability that the chosen point is in A is proportional to the area of A. Agreeing on the reasonableness of this idea, we now turn to the Polish mathematicians Banach and Tarski.
They constructed a most remarkable subset of the surface of the unit sphere. This subset, we call it A, has the following property.
But now comes the unbelievable fact. This contradicts our earlier observation, and we are in deep trouble. This very counterintuitive example shows that it may, after all, not be so easy to assign probabilities to all subsets of a given sample space.
What to do? We shall resolve this problem by insisting that the collection of events which receive a probability has a certain structure; not all sets will receive a probability. This is the only possible solution: we have to restrict the collection of events. We have noted in this intermezzo that very natural sample spaces can be uncountable, and that in such an uncountable sample space, certain strange subsets can exist to which we cannot assign a probability.
In this chapter we suggest a set up which allows us to do this. An experiment in the discrete setting consisted of a sample space and a probability mass function, assigning a certain probability to each outcome in the sample space. In the current continuous setting, this is impossible, but we can do something similar, replacing sums by appropriate integrals.
Here are some examples of how this could work. Example 5. Suppose we want to model the choice of a completely random point in the interval (0, 1). There are uncountably many points in (0, 1), and hence we cannot list its elements.
It is reasonable to assume that this probability should be equal to the length of I. Writing |I| for the length of an interval I, we therefore assign the number |I| to the interval I.
You should think of this number as the probability that a completely random point in the unit interval ends up in I. Another way of formulating this assignment of probabilities is the following. When we write things this way, there is no need to restrict ourselves to intervals contained in (0, 1). So not all subsets of R have a probability now, something we already anticipated in the Intermezzo. The sum has been replaced by an integral, and the probability mass function has been replaced by the function f.
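The assignment above can be checked by simulation: sample many completely random points in the unit interval and count how often they land in a given interval I. A minimal sketch (the interval (0.2, 0.5) and the sample size are illustrative choices, not from the text):

```python
import random

random.seed(0)
n = 100_000
# Simulate n completely random points in the unit interval and count
# how many land in I = (0.2, 0.5), an interval of length 0.3.
hits = sum(1 for _ in range(n) if 0.2 < random.random() < 0.5)
estimate = hits / n  # should be close to |I| = 0.3
```

The empirical frequency approaches the length of the interval, as the text's assignment of probabilities predicts.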
This is illustrated by the next example. Suppose we play darts on a big circular board with radius 1 meter. Suppose that we are only interested in the distance between the arrow and the midpoint of this board, not in the exact position of the arrow. To compute this, we assume that we hit the board in a completely random place and that we never miss the board. These assumptions imply that the probability of hitting the board within the circle of radius t is the area of this smaller circle with radius t, divided by the area of the full circle with radius 1, that is, πt²/π = t².
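The darts computation can be verified numerically. A uniform point on the disc can be obtained by rejection sampling from the enclosing square (the helper name and the value t = 0.6 are illustrative):

```python
import math
import random

random.seed(1)

def random_distance():
    # Sample a uniform point on the unit disc by rejection from the
    # square [-1, 1] x [-1, 1], and return its distance to the midpoint.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return math.hypot(x, y)

n = 100_000
t = 0.6
freq = sum(1 for _ in range(n) if random_distance() <= t) / n
# The probability of hitting within radius t should be t**2 = 0.36.
```

The observed frequency matches t², the area ratio derived in the example.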
It is, however, also possible to study this example in two dimensions: Example 5. Consider the set-up of the previous example, but now we are interested in the exact position of the arrow, not only in the distance to the midpoint. Then the probability that the arrow ends up in a given region A should be proportional to the area of A. We formalise this as follows. In fact, this assignment of probabilities corresponds to the experiment of choosing an arbitrary point in the disc with radius 1.
However, not all experiments that arise in this way are meaningful from a probabilistic point of view. This density is very often used to model the waiting time for a certain event to happen, like the waiting time for the next customer to arrive in a shop, or the waiting time for the next earthquake in California. We now sketch the reason for this. The argument goes via a suitable discretisation of the time axis, as follows.
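The discretisation argument can be sketched numerically: divide the time axis into slots of length 1/m, toss in each slot a coin with heads probability λ/m, and let m grow. The probability of waiting longer than time t then tends to e^(−λt), the tail of the exponential density. A minimal sketch (the parameter values λ = 2 and t = 1.5 are illustrative):

```python
import math

lam, t = 2.0, 1.5
for m in (10, 100, 10_000):
    p = lam / m              # heads probability per slot of length 1/m
    k = int(t * m)           # number of slots up to time t
    survival = (1 - p) ** k  # P(no heads in the first k slots)
# As m grows, the geometric "survival" probability converges to exp(-lam*t).
exact = math.exp(-lam * t)
```

For m = 10,000 the geometric approximation and the exponential tail agree to several decimal places, which is the content of the discretisation argument.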
In the above terminology, the event we are waiting for is the occurrence of heads. It is perhaps a good idea to recall the dominated convergence theorem from Chapter 3 at this point. Theorem 5. Lemma 5. Consider an experiment with sample space R^d and density f. For (a), observe that for disjoint events A1, A2, ...
as follows from elementary additivity properties of the Riemann integral. Properties (c)-(e) also follow from elementary properties of the Riemann integral; see the forthcoming Exercise 5. For (f) we apply Theorem 5. Give the details of the proof of (c)-(e). Write down exactly what properties of the Riemann integral you use.
So for instance, we want to be able to talk about the probability that X takes a value between any two given numbers. Since a continuous random variable takes each individual value with probability 0, the probability that X takes a value in a certain interval does not change when we include or exclude endpoints of the interval. Note that a continuous random variable X does not have a unique density. For example, if we have a density of X, then we can change this density at a single point to obtain another density of X.
Perhaps you wonder why continuous random variables with a given density g exist. In fact, this is easy to see. This construction is often used, but we shall see soon that interesting random variables are certainly not always constructed like this. The point of the lemma is that we do not assume from the outset that X is a continuous random variable.
Rather, this follows as a conclusion. Then X is a continuous random variable with density f. To see this, note that by Lemma 5. It now follows from 5. A random variable with a uniform distribution on [a, b] can be interpreted as the outcome of a completely random point from [a, b]. The interval [a, b] need not be closed. Show that this is a density, using the fact that the density of the standard normal distribution integrates to 1.
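One standard way to construct a random variable with a given density is to apply the inverse of the distribution function to a uniform random variable; this is a sketch of that construction (the exponential density is an illustrative choice, not the book's particular example):

```python
import math
import random

random.seed(2)
lam = 1.0
n = 100_000
# If U is uniform on (0, 1), then X = -ln(1 - U)/lam has distribution
# function 1 - exp(-lam*x), hence the exponential density lam*exp(-lam*x).
samples = [-math.log(1 - random.random()) / lam for _ in range(n)]
mean = sum(samples) / n  # should be close to 1/lam = 1
```

The sample mean is close to 1/λ, consistent with the constructed density.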
The reason that the normal distribution is very important already became apparent in the central limit theorem of Chapter 4. In the next chapter, a much more general statement will be proved, again involving the normal distribution.
We have come across this density already in Example 5. Here is a fairly natural example where the Cauchy distribution arises. Denote the intersection of the line with the x-axis by X, 0.
We claim that X is a continuous random variable with a Cauchy distribution. This is an example of a random variable X which is not the identity map. Example 5. Consider the darts Examples 5. We can reformulate this now in terms of continuous random variables.
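The claim can be checked by simulation. Assuming the line passes through a fixed point at height 1 with a uniformly random angle in (−π/2, π/2), it meets the x-axis at X = tan(angle), and X then has the Cauchy density 1/(π(1 + x²)); a sketch:

```python
import math
import random

random.seed(3)
n = 200_000
# A line through (0, 1) at a uniform angle in (-pi/2, pi/2) intersects
# the x-axis at X = tan(angle); X should then be Cauchy distributed.
xs = [math.tan(random.uniform(-math.pi / 2, math.pi / 2)) for _ in range(n)]
# For the Cauchy distribution, P(X <= 1) = 1/2 + arctan(1)/pi = 3/4.
freq = sum(1 for x in xs if x <= 1) / n
```

The simulated frequency matches the Cauchy distribution function at 1, supporting the claim.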
Indeed, the sample space in Example 5. Then X represents the distance to the midpoint, and we have seen in Example 5. The expectation of Xn can be computed similarly. At this point, we only note the following. More properties follow in Section 5. Theorem 5. It is easy to see, using the symmetry of f around 0, that the expectation of X is equal to 0.
We claim that X has no expectation. Can you verify this? Suppose that I show you two envelopes, both containing a certain amount of money.
You have no information whatsoever about the amounts of money in the envelopes. You choose one and this time you do open the envelope. Suppose you see that the envelope contains x euros, say. Does it make sense to do so? More precisely, is there a decision algorithm that enables you to end up with the highest amount of money, with probability strictly larger than 1/2?
It is perhaps surprising that such an algorithm does exist, and we describe it here now. We can write down the joint distribution of (X1, X2).
Investigate what happens when we assume that the amounts in the two envelopes are themselves random variables. Suppose that we have no information at all about the amounts in the two envelopes. In particular, we do not know whether the amounts are random, independent, etcetera.
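The randomized-threshold idea can be simulated. Draw a threshold Y from any distribution with positive density on (0, ∞), open an envelope, and switch exactly when the observed amount falls below Y; the win probability is then 1/2 + (1/2)P(a < Y < b), which exceeds 1/2. A sketch (the amounts 10 and 25 and the exponential threshold are illustrative assumptions):

```python
import random

random.seed(4)
a, b = 10.0, 25.0  # hypothetical unknown amounts in the two envelopes
n = 100_000
wins = 0
for _ in range(n):
    # Pick an envelope uniformly at random and open it.
    x, other = (a, b) if random.random() < 0.5 else (b, a)
    y = random.expovariate(1 / 20)  # threshold Y (mean 20; any positive density works)
    chosen = other if x < y else x  # switch exactly when the amount is below Y
    wins += chosen == max(a, b)
win_prob = wins / n  # strictly above 1/2
```

With these numbers the win probability is around 0.66, visibly better than fair coin-flipping, whatever the (unknown) amounts are.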
What kind of distributions of Y would be useful for the decision making procedure? This is to say that the integral of f over this set exists. As in the case of continuous random variables, once we have that X1, ... In concrete cases, this will be understood without further comments. We refer to the distribution of the vector as the joint distribution, and to the distributions of its individual components as the marginal distributions.
Example 5. We conclude that Y has an exponential distribution with parameter 1. Hence 5. We say, as before, that f factorises. You may want to look at the proof of Theorem 2. Consider Example 5. Note that X and Y are not independent, although it appears that the joint density is a product of a function of x alone and a function of y alone. Do you see why? This expression looks rather obscure and unattractive, but the point of this density will become clear now.
We do this with a trick: we use our knowledge of one-dimensional normal distributions. Next, we want to compute the marginal distribution of X and Y. Well, this we have, in fact, already done in the previous calculation. Indeed, we showed that when we integrate out x, the result is the density of a standard normal distribution, and therefore, Y has a standard normal distribution.
Since f(x, y) is symmetric in x and y, it follows immediately that X also has a standard normal distribution. As in the discrete setting, the joint distribution does determine the marginals, but not the other way around.
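The "integrate out x" step can be verified numerically. Assuming the bivariate normal density takes the usual form with correlation ρ (the book's exact example may differ; ρ = 0.5 and x = 0.7 are illustrative), integrating over one variable should return the standard normal density in the other:

```python
import math

rho = 0.5

def f(x, y):
    # Assumed standard bivariate normal density with correlation rho.
    c = 2 * math.pi * math.sqrt(1 - rho ** 2)
    e = -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))
    return math.exp(e) / c

# Integrate out y at a fixed x by a Riemann sum over [-10, 10].
x, h, lim = 0.7, 0.001, 10.0
total = sum(f(x, -lim + i * h) for i in range(int(2 * lim / h))) * h
std_normal = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
```

The marginal obtained numerically agrees with the standard normal density, as claimed.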
Let X have density f. This is typically the case, and we illustrate this with some examples. Example 5. It is also possible to write down a few general results, which reduce the amount of work in many cases. Without loss of generality, we assume that g is non-decreasing. The result is a consequence of the classical change of variable theorem from calculus.
Let X and g be as in Example 5. We can rederive the result from Example 5. Substituting this in Theorem 5. The proof is similar to the one-dimensional case and follows from the classical change of variables formula from calculus. Can you explain why this is not an important issue here? There is one more piece of theory associated with functions of random variables.
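As an illustration of the change-of-variable idea (my example, not necessarily the book's): take X uniform on (0, 1) and g(x) = x², which is increasing there. The formula gives Y = g(X) the density 1/(2√y) on (0, 1), so P(Y ≤ 1/4) = √(1/4) = 1/2, which a simulation confirms:

```python
import random

random.seed(5)
n = 100_000
# X uniform on (0, 1), Y = X**2; by the change-of-variable formula
# Y has density 1/(2*sqrt(y)) on (0, 1), hence P(Y <= 1/4) = 1/2.
freq = sum(1 for _ in range(n) if random.random() ** 2 <= 0.25) / n
```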
In the discrete context, when X and Y are independent random variables, so are g(X) and h(Y), for any functions g and h; see Theorem 2. When the random variables are continuous, this should also be the case. In order to prove this, we need a very weak condition which we will call regularity. Readers who are not interested in these details can safely assume that all functions in this book and of practical use are regular, and apply the forthcoming Theorem 5.
Sums of Random Variables. Roughly, a function is regular if preimages of intervals can be written as unions of pairwise disjoint intervals. Suppose that X and Y are independent continuous random variables; is X + Y again a continuous random variable? The answer is yes, and there are various ways to see this.
It then follows from Theorem 5. This trick can be used for many other functions of X and Y , and shows that all these functions lead to continuous random variables. This runs as follows. Suppose that X1 ,. Since T and Xn are independent, we can use 5. We shall come across this distribution in Chapter 7. There are various ways to prove this. We write f for the joint density of X and Y. The following Theorem 5. In the proof of this theorem, we will need the following lemma, the continuous analogue of Exercise 2.
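Taking both summands exponential with parameter λ (an illustrative choice consistent with the distribution referred to above) the density of X + Y can be computed by the convolution formula and checked against λ²z·e^(−λz):

```python
import math

lam = 1.0

def f(x):
    # Exponential density with parameter lam.
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

# Density of X + Y at z by the convolution formula:
# integrate f(x) * f(z - x) over x in (0, z).
z, h = 2.0, 0.0005
conv = sum(f(i * h) * f(z - i * h) for i in range(int(z / h))) * h
exact = lam ** 2 * z * math.exp(-lam * z)  # gamma-type density
```

The numerical convolution matches the closed-form density, and this distribution is the one the text promises to revisit in Chapter 7.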
Let X be a continuous random variable with density f. Let g be such that g(X) is a continuous or discrete random variable. We will only give the proof in the case where g(X) is a continuous random variable.
For general g we use a trick that is very useful in general, when we want to extend a result from positive to general functions. Now from Theorem 5. More About the Expectation; Variance. Example 5. According to Theorem 5. This formula is very useful for computations. This can be done with partial integration. Compute the variance of an exponentially distributed random variable. In this case the two notions are equivalent. The following example shows that there are very natural random variables which are neither discrete nor continuous.
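The variance computation for the exponential distribution can be checked numerically: partial integration gives E X = 1/λ and E X² = 2/λ², hence Var X = 1/λ². A sketch with λ = 2 (an illustrative value):

```python
import math

lam = 2.0
h, lim = 0.0005, 40.0
xs = [i * h for i in range(1, int(lim / h))]
# E[g(X)] = integral of g(x) * f(x) dx, with f(x) = lam * exp(-lam * x):
m1 = sum(x * lam * math.exp(-lam * x) for x in xs) * h       # E X = 1/lam
m2 = sum(x * x * lam * math.exp(-lam * x) for x in xs) * h   # E X^2 = 2/lam**2
var = m2 - m1 ** 2  # should be 1/lam**2 = 0.25
```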
Call this waiting time X. What would be a reasonable distribution for X? Now observe that the waiting time X is not discrete, since it can take any value in R. We can view X as a mixture of a discrete and a continuous random variable. Let Y be a discrete random variable, and Z be a continuous random variable. Suppose that all random variables are independent. The expectation of the mixture X in 5. Motivate your suggestion. Mixtures are natural examples of random variables which are neither discrete nor continuous.
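A mixture is easy to simulate, and its expectation is the corresponding mixture of expectations. A sketch under illustrative assumptions: with probability p = 0.3 the waiting time is the discrete value 0 (no wait at all), otherwise it is an exponential waiting time Z with mean 1, so E X = 0.3·0 + 0.7·1 = 0.7:

```python
import random

random.seed(6)
p = 0.3
n = 200_000
total = 0.0
for _ in range(n):
    # With probability p take the discrete value 0,
    # otherwise an exponential waiting time with mean 1.
    total += 0.0 if random.random() < p else random.expovariate(1.0)
mean = total / n  # E X = p * 0 + (1 - p) * 1 = 0.7
```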
The simplest example is in dimension two. Clearly, X, X is not a discrete random vector, and it is not a mixture of a continuous and discrete random vector either. Nevertheless, we claim that the vector X, X has no joint density. To see this, we proceed by contradiction. Suppose, therefore, that it has joint density f.
Show that for the same X as in Example 5. As in the discrete setting, we would like to talk about the conditional distribution of Y given that X takes the value x, say. This means that the theory as developed in Chapter 2, is not useful here. But clearly, we would like to have a notion of conditional distributions, even in the continuous case. How can we do that? There are, in fact, several possible approaches to this problem.
Under certain weak regularity conditions (which I do not specify here), this approach leads to the following computation, assuming that X and Y have a joint density. However, relation 5. alone is not enough; we must demand more relations to essentially guarantee uniqueness.
Let X and Y be discrete random variables. The following result was not stated in Chapter 2, but for our current purpose it is very useful: for any y and A. The second assertion is proved by contradiction. We now return to the continuous setting. Let X have a density fX and let Y be any discrete or continuous random variable. Let X have density fX. Since the integrand is non-negative and piecewise continuous, this means that the integrand must be piecewise equal to 0.
Let X have density fX and let Y be some random variable. Note that this approach is quite general in the sense that we have not required (X, Y) to be a continuous random vector. In Example 5. The general approach in practice is often to guess the form of the conditional distribution, and after that verify 5.
Most practical cases will be covered by the following special case in which X and Y have a joint density. It will be understood from now on that all relevant densities are regular. Compare this with 5. To check this, we simply plug this formula into 5. Hence, according to Lemma 5. Make sure that you understand the last remark by writing down the corresponding discrete formulas. Let X and Y have joint density f.
Recall the joint density in Example 5. Let (X, Y) have the standard bivariate normal distribution of Example 5. Find the conditional density and conditional expectation of Y given X. Suppose that (X, Y) has a joint density and that X and Y are independent. In the next example, there is no joint density, and yet we will be able to compute conditional probabilities. Consider the random vector (X, X) in Example 5. Clearly, when x > 0, this probability should be 1/2, by symmetry. We can now verify this by verifying 5.
Do a similar computation for the case x < 0. The Law of Large Numbers. Note that we are not talking about a single conditional probability, but only about a conditional probability for all outcomes of the random variable X simultaneously. It will come as no surprise that Theorem 4. It is not so useful to repeat the proofs here completely. In fact, it is a very good exercise to do this for yourself. In the proof of Theorem 4. Prove Theorem 4. Show that X is a continuous random variable, and compute its density and expectation. Exercise 5.
Exercise 5. Let Z be a standard normal random variable. Two people agreed to meet each other on a particular day, between 5 and 6 pm. They arrive independently at a uniform time between 5 and 6, and each waits for 15 minutes. What is the probability that they meet each other? Are X and Y independent? Find the marginal distributions of X and Y and compute their covariance.
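The meeting problem is a classic two-dimensional uniform computation: measuring time in hours after 5 pm, the two arrival times are uniform on (0, 1) and the pair meets when the arrivals differ by at most 1/4, which has probability 1 − (3/4)² = 7/16. A quick Monte Carlo check:

```python
import random

random.seed(7)
n = 200_000
# Arrival times uniform on (0, 1); they meet when the arrivals
# differ by at most a quarter of an hour.
meets = sum(1 for _ in range(n)
            if abs(random.random() - random.random()) <= 0.25) / n
# Exact answer: 1 - (3/4)**2 = 7/16 = 0.4375.
```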
Exercise 5. Explain why this is called the lack of memory property of the exponential distribution. Compute the expected waiting time. Let X and Y be independent exponentially distributed random variables with parameter 1. Let U and V be independent and uniformly distributed on (0, 1). Let X, Y and Z be independent uniform (0, 1) random variables. Compute cov(X, Y). Compute E Y. Let X and Y be independent and uniformly distributed on (0, 1).
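The lack of memory property says P(X > s + t | X > s) = P(X > t): having already waited s, the remaining wait is distributed as a fresh waiting time. For the exponential distribution this is immediate from the form of the tail (the values of λ, s, t below are illustrative):

```python
import math

lam, s, t = 1.5, 2.0, 1.0
# P(X > s + t | X > s) = exp(-lam*(s+t)) / exp(-lam*s)
#                      = exp(-lam*t) = P(X > t).
lhs = math.exp(-lam * (s + t)) / math.exp(-lam * s)
rhs = math.exp(-lam * t)
```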
Compute the density of Z. Let X1, X2 and X3 be independent uniform (0, 1) random variables. What is the probability that we can form a triangle with three sticks of length X1, X2 and X3? What is the probability that it intersects a line? In order to answer this question, we have to make a few assumptions. Think of the law of large numbers. Let X be uniform on (0, 1).
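For the triangle exercise: three sticks form a triangle exactly when the two shorter ones together outreach the longest, and for independent uniform (0, 1) lengths the probability works out to 1/2, which a simulation confirms:

```python
import random

random.seed(8)
n = 200_000

def forms_triangle():
    a, b, c = sorted(random.random() for _ in range(3))
    return a + b > c  # the two shorter sticks must outreach the longest

freq = sum(forms_triangle() for _ in range(n)) / n  # exact answer is 1/2
```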
So, for instance, we would like to talk about the probability that the average outcomes converge to an expectation.