Probability is a mathematical concept that quantifies the likelihood or chance of an event occurring, expressed as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty.
(a) An Experiment: An action or operation resulting in two or more outcomes is called an experiment.
(b) Sample Space: The set of all possible outcomes of an experiment is called the sample space, denoted by S. An element of S is called a sample point.
(c) Event: Any subset of the sample space is an event.
(d) Simple Event: An event is simple if it is a singleton subset of the sample space S.
(e) Compound Event: An event corresponding to the joint occurrence of two or more simple events.
(f) Equally Likely Events: Several simple events are equally likely if there is no reason for one event to occur in preference to any other event.
(g) Exhaustive Events: All the possible outcomes taken together in which an experiment can result are said to be exhaustive.
(h) Mutually Exclusive or Disjoint Events: If two events cannot occur simultaneously, they are mutually exclusive. If A and B are mutually exclusive, then A∩B=∅.
(i) Complement of an Event: The complement of an event A, denoted by Ā, A′ or Aᶜ, is the set of all sample points of the space other than the sample points in A.
2. Mathematical Definition of Probability:
Let the outcomes of an experiment consist of n exhaustive, mutually exclusive, and equally likely cases. Then the sample space S has n sample points. If an event A consists of m sample points (0 ≤ m ≤ n), then the probability of event A, denoted by P(A), is defined to be m/n, i.e. P(A) = m/n.
Let S={a1,a2,…,an} be the sample space
(a) P(S) = n/n = 1, corresponding to the certain event.
(b) P(∅) = 0/n = 0, corresponding to the null (impossible) event ∅.
(c) If Ai = {ai}, i = 1, …, n, then Ai is the event corresponding to the single sample point ai, and P(Ai) = 1/n.
(d) 0 ≤ P(A) ≤ 1
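As a quick numeric illustration of the classical definition (a hypothetical single-die example, not part of the original notes; the event chosen is arbitrary):

```python
from fractions import Fraction

# One roll of a fair die: 6 exhaustive, mutually exclusive, equally likely outcomes.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}                      # event "an even number turns up" (m = 3 sample points)

# Classical definition: P(A) = m/n = |A| / |S|
print(Fraction(len(A), len(S)))    # 1/2
```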
3. Odds Against and Odds in Favour of an Event:
Let there be m+n equally likely, mutually exclusive, and exhaustive cases out of which an event A can occur in m cases and does not occur in n cases.
Then by definition, the probability of the occurrence of event A = P(A) = m/(m+n).
The probability of non-occurrence of event A = P(A′) = n/(m+n).
P(A) : P(A′) = m : n
Thus, the odds in favour of the occurrence of event A are defined by m : n, i.e. P(A) : P(A′), and the odds against the occurrence of event A are defined by n : m, i.e. P(A′) : P(A).
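A minimal sketch of the probability-to-odds relationship, assuming illustrative counts m = 3 favourable and n = 2 unfavourable cases:

```python
m, n = 3, 2                 # favourable and unfavourable cases (assumed values)

P_A = m / (m + n)           # P(A)  = 3/5
P_A_prime = n / (m + n)     # P(A') = 2/5

print(f"odds in favour of A: {m}:{n}")   # P(A) : P(A')
print(f"odds against A:      {n}:{m}")   # P(A') : P(A)
```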
4. Addition Theorem:
(a) If A and B are any events in S, then P(A∪B)=P(A)+P(B)−P(A∩B)
Since the probability of an event is a non-negative number, it follows that P(A∪B)≤P(A)+P(B)
For three events A, B, and C in S, we have P(A∪B∪C)=P(A)+P(B)+P(C)−P(A∩B)−P(B∩C)−P(C∩A)+P(A∩B∩C)
The general form of the addition theorem (principle of inclusion-exclusion) for n events A1, A2, …, An is:
P(A1∪A2∪…∪An) = ∑ P(Ai) − ∑ P(Ai∩Aj) + ∑ P(Ai∩Aj∩Ak) − … + (−1)^(n−1) P(A1∩A2∩…∩An), where the sums are taken over all indices with i < j, i < j < k, and so on.
(b) If A and B are mutually exclusive, then P(A∩B)=0 so that P(A∪B)=P(A)+P(B).
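The addition theorem can be checked by brute-force enumeration; a small sketch with two arbitrarily chosen die events:

```python
from fractions import Fraction

S = set(range(1, 7))            # one roll of a fair die
A = {2, 4, 6}                   # "even number"
B = {3, 6}                      # "multiple of 3"

def P(E):
    return Fraction(len(E), len(S))

lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)    # P(A∪B) = P(A) + P(B) − P(A∩B)
print(lhs, rhs, lhs == rhs)     # 2/3 2/3 True
```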
5. Conditional Probability:
If A and B are any events in S, then the conditional probability of B relative to A, i.e. the probability of occurrence of B when A has occurred, is given by
P(B|A) = P(B∩A)/P(A), provided P(A) ≠ 0
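A short sketch of conditional probability by counting sample points, using a hypothetical two-dice event pair:

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # two dice: 36 equally likely sample points
A = {s for s in S if sum(s) >= 10}        # "the sum is at least 10"
B = {s for s in S if s[0] == 6}           # "the first die shows 6"

def P(E):
    return Fraction(len(E), len(S))

print(P(B & A) / P(A))                    # P(B|A) = P(B∩A)/P(A) = 3/6 = 1/2
```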
6. Multiplication Theorem:
Independent Event:
If A and B are two independent events, then the occurrence of B has no effect on the probability of occurrence of A.
When events are independent: P(A|B) = P(A) and P(B|A) = P(B).
Then P(A∩B) = P(A)·P(B), i.e. P(AB) = P(A)·P(B).
When events are not independent:
The probability of the simultaneous happening of two events A and B is equal to the probability of A multiplied by the conditional probability of B with respect to A (or the probability of B multiplied by the conditional probability of A with respect to B), i.e. P(A∩B) = P(A)·P(B|A) = P(B)·P(A|B)
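Both cases of the multiplication theorem in one sketch (the card and coin counts are standard, but the example itself is illustrative):

```python
from fractions import Fraction

# Dependent events: draw two cards without replacement.
# A = "first card is an ace", B = "second card is an ace".
P_A = Fraction(4, 52)
P_B_given_A = Fraction(3, 51)
print(P_A * P_B_given_A)                 # P(A∩B) = P(A)·P(B|A) = 1/221

# Independent events: heads on each of two tosses of a fair coin.
print(Fraction(1, 2) * Fraction(1, 2))   # P(A∩B) = P(A)·P(B) = 1/4
```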
7. Total Probability Theorem:
Let B1, B2, B3, …, Bn be n mutually exclusive and exhaustive events of an experiment, and let A be an event that can occur with any of them.
The total probability of the occurrence of event A is: P(A) = P(A∩B1) + P(A∩B2) + … + P(A∩Bn) = ∑_{i=1}^{n} P(A∩Bi)
Thus, P(A) = P(B1)P(A|B1) + P(B2)P(A|B2) + … + P(Bn)P(A|Bn) = ∑_{i=1}^{n} P(Bi)P(A|Bi)
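A minimal total-probability sketch with a hypothetical two-urn setup (the urn probabilities below are assumed purely for illustration):

```python
from fractions import Fraction

# B1, B2: picking urn 1 or urn 2 (mutually exclusive and exhaustive).
# A: "a white ball is drawn".
P_B = [Fraction(1, 2), Fraction(1, 2)]           # P(B1), P(B2)
P_A_given_B = [Fraction(3, 5), Fraction(1, 4)]   # P(A|B1), P(A|B2)

# Total probability: P(A) = Σ P(Bi)·P(A|Bi)
P_A = sum(p * q for p, q in zip(P_B, P_A_given_B))
print(P_A)    # 17/40
```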
8. Bayes' Theorem or Inverse Probability:
Let A1,A2,…,An be n mutually exclusive and exhaustive events of the sample space S and A be an event that can occur with any of the events Ai.
P(Ai|A) = P(Ai)P(A|Ai) / ∑_{j=1}^{n} P(Aj)P(A|Aj)
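Continuing the same hypothetical urn setup used above, Bayes' theorem reverses the conditioning: given that a white ball was drawn (event A), the probability that it came from urn 1 is

```python
from fractions import Fraction

P_prior = [Fraction(1, 2), Fraction(1, 2)]       # P(A1), P(A2)
P_A_given = [Fraction(3, 5), Fraction(1, 4)]     # P(A|A1), P(A|A2)

P_A = sum(p * q for p, q in zip(P_prior, P_A_given))   # total probability of A
print(P_prior[0] * P_A_given[0] / P_A)                 # P(A1|A) = 12/17
```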
9. Binomial Distribution for Repeated Trials:
Binomial Experiment:
Any experiment with only two outcomes is known as a binomial experiment.
The outcomes of such an experiment are known as success and failure.
The probability of success is denoted by p, and the probability of failure is denoted by q. Then p + q = 1.
If the binomial experiment is repeated n times, then
(a) Probability of exactly r successes in n trials:
P(X = r) = C(n, r) p^r q^(n−r), where C(n, r) = n!/(r!(n−r)!)
(b) Probability of at most r successes in n trials:
P(X ≤ r) = ∑_{l=0}^{r} C(n, l) p^l q^(n−l)
(c) Probability of at least r successes in n trials:
P(X ≥ r) = ∑_{l=r}^{n} C(n, l) p^l q^(n−l)
(d) Probability of having 1st success at the r-th trial:
P(X = r) = p·q^(r−1)
The binomial distribution's mean, variance, and standard deviation are np, npq, and √(npq) respectively.
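A sketch of the binomial formulas above with assumed values n = 10 and p = 0.3 (the helper name binom_pmf is ad hoc):

```python
from math import comb, sqrt

n, p = 10, 0.3
q = 1 - p

def binom_pmf(r):
    """P(X = r) = C(n, r) p^r q^(n-r)"""
    return comb(n, r) * p**r * q**(n - r)

print(binom_pmf(4))                              # exactly 4 successes
print(sum(binom_pmf(l) for l in range(0, 3)))    # at most 2 successes, P(X <= 2)
print(n * p, n * p * q, sqrt(n * p * q))         # mean, variance, standard deviation
```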
10. Poisson Distribution:
The limiting form of binomial distribution under the following conditions is known as Poisson distribution:
(a) The number of trials is indefinitely large i.e. n→∞.
(b) Probability of success for each trial is indefinitely small i.e. p→0.
(c) np = λ, a finite positive number.
Then, the Poisson distribution is given by:
P(X = r) = e^(−λ) λ^r / r!
The Poisson distribution's mean, variance, and standard deviation are λ, λ, and √λ respectively.
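A small check of the limiting relationship: with large n and p = λ/n, the binomial probability approaches the Poisson probability (λ, r, and n below are assumed values):

```python
from math import comb, exp, factorial

lam, r = 2.0, 3

poisson = exp(-lam) * lam**r / factorial(r)   # P(X = r) = e^(-λ) λ^r / r!

n = 10_000                                    # "large" number of trials
p = lam / n                                   # "small" success probability, np = λ
binom = comb(n, r) * p**r * (1 - p)**(n - r)

print(poisson, binom)                         # both ≈ 0.1804
```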
11. Continuous Probability Distributions:
(a) Uniform Distribution: A continuous random variable X is said to follow a uniform distribution over (a,b) if its probability density function is given by
f(x) = 1/(b−a) for a ≤ x ≤ b, and f(x) = 0 elsewhere.
The mean and variance of the uniform distribution are (a+b)/2 and (b−a)²/12 respectively.
(b) Normal Distribution: A continuous random variable X is said to follow a normal distribution with parameters μ and σ2 if its probability density function is given by
f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)) for −∞ < x < ∞.
The normal distribution's mean, variance, and standard deviation are μ, σ2, and σ respectively.
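A minimal sketch evaluating both density functions at a point, with assumed parameters (a = 2, b = 6 for the uniform; μ = 0, σ = 1 for the normal; the helper names are ad hoc):

```python
from math import exp, pi, sqrt

def uniform_pdf(x, a, b):
    """f(x) = 1/(b-a) on [a, b], 0 elsewhere."""
    return 1 / (b - a) if a <= x <= b else 0.0

def normal_pdf(x, mu, sigma):
    """f(x) = (1/(sigma*sqrt(2*pi))) * exp(-(x - mu)^2 / (2*sigma^2))"""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

print(uniform_pdf(3.0, a=2, b=6))      # 0.25; mean = (2+6)/2 = 4, variance = 16/12
print(normal_pdf(0.0, mu=0, sigma=1))  # ≈ 0.3989, the peak of the standard normal density
```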
A Probability Distribution spells out how a total probability of 1 is distributed over several values of a random variable.
If p represents a person's chance of success in any venture and M the sum of money which he will receive in case of success, then his expectation or probable value = pM.
Mean of Binomial Probability Distribution (BPD) = np ; variance of BPD = npq.
12. Standard Normal Distribution:
A normal distribution with a mean of 0 and a standard deviation of 1 is called a standard normal distribution. Its probability density function is given by φ(x) = (1/√(2π)) e^(−x²/2) for −∞ < x < ∞.
If X follows a normal distribution with mean μ and standard deviation σ, then the random variable Z = (X−μ)/σ follows a standard normal distribution.
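A sketch of standardization with assumed values μ = 100 and σ = 15; the standard normal CDF is computed from the error function:

```python
from math import erf, sqrt

def standard_normal_cdf(z):
    """P(Z <= z) for Z ~ N(0, 1), via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# If X ~ N(mu = 100, sigma = 15), then P(X <= 130) = P(Z <= (130 - 100)/15) = P(Z <= 2)
z = (130 - 100) / 15
print(standard_normal_cdf(z))    # ≈ 0.9772
```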
13. Exponential Distribution:
A continuous random variable X is said to follow an exponential distribution with parameter λ if its probability density function is given by f(x)=λe−λx for x≥0 and f(x)=0 for x<0.
The mean and variance of the exponential distribution are 1/λ and 1/λ² respectively.
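A short sketch with an assumed rate λ = 0.5:

```python
from math import exp

lam = 0.5                                   # assumed rate parameter

def exponential_pdf(x):
    """f(x) = lam * e^(-lam*x) for x >= 0, 0 otherwise."""
    return lam * exp(-lam * x) if x >= 0 else 0.0

print(exponential_pdf(2.0))                 # 0.5 * e^(-1) ≈ 0.1839
print(1 / lam, 1 / lam ** 2)                # mean = 2, variance = 4
```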
14. Some Important Points to Note:
(a) Let A and B be two events; then (see the verification sketch after this list):
P(A)+P(Aˉ)=1
P(A+B)=1−P(AˉBˉ)
P(A|B) = P(AB)/P(B)
P(A+B)=P(AB)+P(AˉB)+P(ABˉ)
A ⊂ B ⟹ P(A) ≤ P(B)
P(AˉB)=P(B)−P(AB)
P(AB) ≤ P(A), P(B) ≤ P(A+B) ≤ P(A) + P(B)
P(AB)=P(A)+P(B)−P(A+B)
P(Exactly one event) = P(AˉB)+P(ABˉ)=P(A)+P(B)−2P(AB)=P(A+B)−P(AB)
P(Neither A nor B) = P(AˉBˉ)=1−P(A+B)
P(Aˉ+Bˉ)=1−P(AB)
(b) Number of exhaustive cases of tossing n coins simultaneously
(or of tossing a coin n times) = 2^n
(c) Number of exhaustive cases of throwing n dice simultaneously
(or of throwing one die n times) = 6^n
(d) Playing Cards
Total cards: 52 (26 red, 26 black)
Four suits: Hearts, Diamonds, Spades, Clubs - 13 cards each
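The identities listed in point (a) above can be spot-checked by enumerating a small sample space; a sketch with two arbitrarily chosen events on a pair of dice:

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))     # two dice: 36 equally likely points
A = {s for s in S if s[0] % 2 == 0}         # "first die is even"
B = {s for s in S if sum(s) == 7}           # "the sum is 7"

def P(E):
    return Fraction(len(E), len(S))

assert P(A) + P(S - A) == 1                          # P(A) + P(A') = 1
assert P(A | B) == 1 - P((S - A) & (S - B))          # P(A+B) = 1 - P(A'B')
assert P((S - A) & B) == P(B) - P(A & B)             # P(A'B) = P(B) - P(AB)
assert P(A & B) == P(A) + P(B) - P(A | B)            # P(AB) = P(A) + P(B) - P(A+B)
print("all identities hold")
```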