Conditional Probability of Continuous Random Variable



5.2.3 Conditioning and Independence

Here, we will discuss conditioning for continuous random variables. In particular, we will discuss the conditional PDF, conditional CDF, and conditional expectation. We have discussed conditional probability for discrete random variables before, and the ideas behind conditional probability for continuous random variables are very similar. The difference lies in the fact that we need to work with probability density in the continuous case. Nevertheless, we would like to emphasize again that there is only one main formula regarding conditional probability, which is \begin{align} \nonumber P(A|B)=\frac{P(A \cap B)}{P(B)}, \textrm{ when } P(B)>0. \end{align} Any other formula regarding conditional probability can be derived from this formula. In fact, for some problems we only need to apply it directly. You have already used this in Example 5.17. As another example, if you have two random variables $X$ and $Y$, you can write \begin{align} \nonumber P(X \in C|Y \in D)=\frac{P(X \in C, Y \in D)}{P(Y \in D)}, \textrm{ where } C, D \subset \mathbb{R}. \end{align} However, sometimes we need to use the concepts of conditional PDFs and CDFs. The formulas for conditional PDFs and CDFs of continuous random variables are very similar to those of discrete random variables. Since there are no new fundamental ideas in this section, we provide the main formulas and guidelines and then work on examples; we do not spend much time deriving formulas.

Nevertheless, to give you the basic idea of how such formulas are derived, we start by deriving the conditional CDF and PDF of a random variable $X$ given that $X \in I=[a,b]$. Consider a continuous random variable $X$, and suppose that we know that the event $A$ that $X \in I=[a,b]$ has occurred. The conditional CDF of $X$ given $A$, denoted by $F_{X|A}(x)$ or $F_{X| a \leq X \leq b}(x)$, is \begin{align} \nonumber F_{X|A}(x) &=P(X \leq x|A)\\ \nonumber &=P(X \leq x|a \leq X \leq b)\\ \nonumber &=\frac{P(X \leq x, a \leq X \leq b)}{P(A)}. \end{align} Now if $x < a$, then $F_{X|A}(x)=0$. On the other hand, if $a \leq x \leq b$, we have \begin{align} \nonumber F_{X|A}(x)&=\frac{P(X \leq x, a \leq X \leq b)}{P(A)}\\ \nonumber &=\frac{P(a \leq X \leq x)}{P(A)}\\ \nonumber &=\frac{F_X(x)-F_X(a)}{F_X(b)-F_X(a)}. \end{align} Finally, if $x>b$, then $F_{X|A}(x)=1$. Thus, we obtain \begin{equation} \nonumber F_{X|A}(x) = \left\{ \begin{array}{l l} 1 & \quad x>b \\ & \quad \\ \frac{F_X(x)-F_X(a)}{F_X(b)-F_X(a)} & \quad a \leq x<b \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} Note that since $X$ is a continuous random variable, we do not need to be careful about endpoints; changing $x>b$ to $x \geq b$ does not make a difference in the above formula.

To obtain the conditional PDF of $X$ given $A$, denoted by $f_{X|A}(x)$, we can differentiate $F_{X|A}(x)$. We obtain \begin{equation} \nonumber f_{X|A}(x) = \left\{ \begin{array}{l l} \frac{f_X(x)}{P(A)} & \quad a \leq x<b \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation}

It is insightful to derive the above formula for $f_{X|A}(x)$ directly from the definition of the PDF for continuous random variables. Recall that the PDF of $X$ can be defined as \begin{align} \nonumber f_X(x)=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta)}{\Delta}. \end{align} Then the conditional PDF of $X$ given $A$ is \begin{align} \nonumber f_{X|A}(x)&=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta|A)}{\Delta}\\ \nonumber &=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta,A)}{\Delta P(A)}\\ \nonumber &=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta,a \leq X \leq b)}{\Delta P(A)}. \end{align} Now consider two cases. If $a \leq x<b$, then \begin{align} \nonumber f_{X|A}(x)&=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta,a \leq X \leq b)}{\Delta P(A)}\\ \nonumber &=\frac{1}{P(A)}\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta)}{\Delta}\\ \nonumber &=\frac{f_X(x)}{P(A)}. \end{align} On the other hand, if $x<a$ or $x \geq b$, then \begin{align} \nonumber f_{X|A}(x)&=\lim_{\Delta \rightarrow 0^+} \frac{P(x<X \leq x+\Delta,a \leq X \leq b)}{\Delta P(A)}\\ \nonumber &=0. \end{align}

If $X$ is a continuous random variable, and $A$ is the event that $a < X < b$ (where possibly $b=\infty$ or $a=-\infty$), then \begin{equation} \nonumber F_{X|A}(x) = \left\{ \begin{array}{l l} 1 & \quad x>b \\ & \quad \\ \frac{F_X(x)-F_X(a)}{F_X(b)-F_X(a)} & \quad a \leq x<b \\ & \quad \\ 0 & \quad x<a \end{array} \right. \end{equation} \begin{equation} \nonumber f_{X|A}(x) = \left\{ \begin{array}{l l} \frac{f_X(x)}{P(A)} & \quad a \leq x<b \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation}
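As a numerical illustration of the boxed formulas, here is a small Python sketch. The standard normal and the interval $[a,b]=[-1,1]$ are arbitrary stand-ins chosen for the example; any continuous distribution works the same way.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

X = stats.norm()            # stand-in distribution for X (any continuous law works)
a, b = -1.0, 1.0            # conditioning event A = {a <= X <= b}
P_A = X.cdf(b) - X.cdf(a)   # P(A) = F_X(b) - F_X(a)

def F_cond(x):
    """Conditional CDF F_{X|A}(x): 0 below a, 1 above b, rescaled CDF in between."""
    return np.clip((X.cdf(x) - X.cdf(a)) / P_A, 0.0, 1.0)

def f_cond(x):
    """Conditional PDF f_{X|A}(x) = f_X(x)/P(A) on [a, b], and 0 elsewhere."""
    return X.pdf(x) / P_A if a <= x <= b else 0.0

# Sanity checks: f_{X|A} integrates to 1 over [a, b], and F_{X|A}(b) = 1.
print(quad(f_cond, a, b)[0])   # ~ 1.0
print(F_cond(b))               # 1.0
```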

The conditional expectation and variance are defined by replacing the PDF by conditional PDF in the definitions of expectation and variance. In general, for a random variable $X$ and an event $A$, we have the following:

\begin{align}%\label{} \nonumber &E[X|A]=\int_{-\infty}^{\infty}xf_{X|A}(x)dx, \\ \nonumber &E[g(X)|A]=\int_{-\infty}^{\infty}g(x)f_{X|A}(x)dx, \\ \nonumber &\textrm{Var}(X|A)=E[X^2|A]-(E[X|A])^2 \end{align}

Example
Let $X \sim Exponential(1)$.

  1. Find the conditional PDF and CDF of $X$ given $X>1$.
  2. Find $E[X|X>1]$.
  3. Find Var$(X|X>1)$.
  • Solution
      1. Let $A$ be the event that $X>1$. Then \begin{align} \nonumber P(A)&= \int_{1}^{\infty}e^{-x}dx \\ \nonumber &=\frac{1}{e}. \end{align} Thus, using $f_{X|A}(x)=\frac{f_X(x)}{P(A)}$, we obtain \begin{equation} \nonumber f_{X|X>1}(x) = \left\{ \begin{array}{l l} e^{-x+1} & \quad x>1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} For $x>1$, we have \begin{align} \nonumber F_{X|A}(x)&=\frac{F_X(x)-F_X(1)}{P(A)}\\ \nonumber &=1-e^{-x+1}. \end{align} Thus, \begin{equation} \nonumber F_{X|A}(x) = \left\{ \begin{array}{l l} 1-e^{-x+1} & \quad x>1 \\ & \quad \\ 0 & \quad \textrm{otherwise} \end{array} \right. \end{equation}
      2. We have \begin{align}%\label{} \nonumber E[X|X>1]&= \int_{1}^{\infty}xf_{X|X>1}(x)dx \\ \nonumber &=\int_{1}^{\infty}xe^{-x+1}dx\\ \nonumber &=e \int_{1}^{\infty}xe^{-x}dx\\ \nonumber &=e\bigg[-e^{-x}-xe^{-x} \bigg]_{1}^{\infty}\\ \nonumber &=e \frac{2}{e}\\ \nonumber &=2. \end{align}
      3. We have \begin{align}%\label{} \nonumber E[X^2|X>1]&= \int_{1}^{\infty}x^2f_{X|X>1}(x)dx \\ \nonumber &=\int_{1}^{\infty}x^2e^{-x+1}dx\\ \nonumber &=e \int_{1}^{\infty}x^2e^{-x}dx\\ \nonumber &=e \bigg[-2e^{-x}-2xe^{-x}-x^2e^{-x} \bigg]_{1}^{\infty}\\ \nonumber &=e \frac{5}{e}\\ \nonumber &=5. \end{align} Thus, \begin{align}%\label{} \nonumber \textrm{Var}(X|X>1)&=E[X^2|X>1]-(E[X|X>1])^2 \\ \nonumber &=5-4=1. \end{align}
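As a sanity check on this example, the following Python sketch computes $E[X|X>1]$ and $\textrm{Var}(X|X>1)$ both by integrating the conditional PDF and by simple Monte Carlo (sample size and seed are arbitrary choices).

```python
import numpy as np
from scipy.integrate import quad

# E[X | X > 1] and E[X^2 | X > 1] by integrating f_{X|A}(x) = e^{-x+1} on (1, inf)
f_cond = lambda x: np.exp(-x + 1)
m1 = quad(lambda x: x * f_cond(x), 1, np.inf)[0]
m2 = quad(lambda x: x**2 * f_cond(x), 1, np.inf)[0]
print(m1, m2 - m1**2)           # ~ 2.0 and ~ 1.0

# Monte Carlo cross-check: condition on A = {X > 1} by discarding other samples
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=2_000_000)
kept = x[x > 1]
print(kept.mean(), kept.var())  # ~ 2.0 and ~ 1.0
```

The answers also follow from the memoryless property: given $X>1$, the overshoot $X-1$ is again $Exponential(1)$, so the conditional mean is $1+1=2$ and the conditional variance is $1$.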


Conditioning by Another Random Variable:

If $X$ and $Y$ are two jointly continuous random variables, and we obtain some information regarding $Y$, we should update the PDF and CDF of $X$ based on the new information. In particular, if we get to observe the value of the random variable $Y$, how should we update the PDF and CDF of $X$? Recall that in the discrete case, the conditional PMF of $X$ given $Y=y$ is given by

\begin{align}%\label{} \nonumber P_{X|Y}(x_i|y_j)&=\frac{P_{XY}(x_i,y_j)}{P_Y(y_j)}. \end{align} Now, if $X$ and $Y$ are jointly continuous, the conditional PDF of $X$ given $Y$ is given by \begin{align}%\label{} \nonumber f_{X|Y}(x|y)=\frac{f_{XY}(x,y)}{f_Y(y)}. \end{align} This means that if we get to observe $Y=y$, then we need to use the above conditional density for the random variable $X$. To get an intuition about the formula, note that by definition, for small $\Delta_x$ and $\Delta_y$ we should have \begin{align}%\label{} \nonumber f_{X|Y}(x|y) &\approx \frac{P(x \leq X \leq x+\Delta_x | y \leq Y \leq y+\Delta_y)}{\Delta_x} \hspace{20pt} \textrm{(definition of PDF)}\\ \nonumber &=\frac{P(x \leq X \leq x+\Delta_x , y \leq Y \leq y+\Delta_y)}{P(y \leq Y \leq y+\Delta_y) \Delta_x}\\ \nonumber &\approx \frac{f_{XY}(x,y) \Delta_x \Delta_y}{f_Y(y) \Delta_y \Delta_x}\\ \nonumber &=\frac{f_{XY}(x,y)}{f_Y(y)}. \end{align} Similarly, we can write the conditional PDF of $Y$, given $X=x$, as \begin{align}%\label{} \nonumber f_{Y|X}(y|x)=\frac{f_{XY}(x,y)}{f_X(x)}. \end{align}

For two jointly continuous random variables $X$ and $Y$, we can define the following conditional concepts:

  1. The conditional PDF of $X$ given $Y=y$: \begin{align} \nonumber f_{X|Y}(x|y)=\frac{f_{XY}(x,y)}{f_Y(y)} \end{align}
  2. The conditional probability that $X \in A$ given $Y=y$: \begin{align} \nonumber P(X \in A|Y=y)=\int_{A} f_{X|Y}(x|y) dx \end{align}
  3. The conditional CDF of $X$ given $Y=y$: \begin{align} \nonumber F_{X|Y}(x|y)=P(X \leq x|Y=y)=\int_{-\infty}^{x} f_{X|Y}(u|y) du \end{align}

Example Let $X$ and $Y$ be two jointly continuous random variables with joint PDF \begin{equation} \nonumber f_{XY}(x,y) = \left\{ \begin{array}{l l} \frac{x^2}{4}+\frac{y^2}{4}+\frac{xy}{6} & \quad 0 \leq x \leq 1, 0 \leq y \leq 2 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} For $0 \leq y \leq 2$, find

  1. the conditional PDF of $X$ given $Y=y$;
  2. $P(X<\frac{1}{2}|Y=y)$.
  • Solution
      1. Let us first find the marginal PDF of $Y$. We have \begin{align}%\label{} \nonumber f_Y(y)&=\int_{0}^{1} \frac{x^2}{4}+\frac{y^2}{4}+\frac{xy}{6} \hspace{5pt} dx \\ \nonumber &=\frac{3y^2+y+1}{12}, \hspace{20pt} \textrm{for }0 \leq y \leq 2. \end{align} Thus, for $0 \leq y \leq 2$, we obtain \begin{align} \nonumber f_{X|Y}(x|y)&=\frac{f_{XY}(x,y)}{f_Y(y)}\\ \nonumber &=\frac{3x^2+3y^2+2xy}{3y^2+y+1}, \hspace{20pt} \textrm{for }0 \leq x \leq 1. \end{align} Thus, for $0 \leq y \leq 2$, we have \begin{equation} \nonumber f_{X|Y}(x|y) = \left\{ \begin{array}{l l} \frac{3x^2+3y^2+2xy}{3y^2+y+1} & \quad 0 \leq x \leq 1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation}
      2. We have \begin{align}%\label{} \nonumber P\left(X<\frac{1}{2}|Y=y\right) &=\int_{0}^{\frac{1}{2}} \frac{3x^2+3y^2+2xy}{3y^2+y+1} \hspace{5pt} dx \\ \nonumber &=\frac{1}{3y^2+y+1} \bigg[ x^3+yx^2+3y^2x \bigg]_{0}^{\frac{1}{2}}\\ \nonumber &=\frac{\frac{3}{2}y^2+\frac{y}{4}+\frac{1}{8}}{3y^2+y+1}. \end{align} Note that, as we expect, $P\left(X<\frac{1}{2}|Y=y\right)$ depends on $y$.
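The result is easy to check numerically. The sketch below integrates the joint PDF with SciPy at the arbitrary test point $y=1.5$:

```python
# Numerical check of the example: compute f_Y(y) and P(X < 1/2 | Y = y)
# by integration, then compare with the closed-form answers above.
from scipy.integrate import quad

def f_XY(x, y):                     # joint PDF on 0 <= x <= 1, 0 <= y <= 2
    return x**2 / 4 + y**2 / 4 + x * y / 6

y = 1.5
f_Y, _ = quad(lambda x: f_XY(x, y), 0, 1)           # marginal f_Y(y)
p, _ = quad(lambda x: f_XY(x, y) / f_Y, 0, 0.5)     # P(X < 1/2 | Y = y)

print(f_Y, (3 * y**2 + y + 1) / 12)                          # should agree
print(p, (1.5 * y**2 + y / 4 + 1 / 8) / (3 * y**2 + y + 1))  # should agree
```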

Conditional expectation and variance are similarly defined. Given $Y=y$, we need to replace $f_X(x)$ by $f_{X|Y}(x|y)$ in the formulas for expectation:

For two jointly continuous random variables $X$ and $Y$, we have:

  1. Expected value of $X$ given $Y=y$: \begin{align} \nonumber E[X|Y=y]=\int_{-\infty}^{\infty} xf_{X|Y}(x|y) dx \end{align}
  2. Conditional LOTUS: \begin{align} \nonumber E[g(X)|Y=y]=\int_{-\infty}^{\infty} g(x)f_{X|Y}(x|y) dx \end{align}
  3. Conditional variance of $X$ given $Y=y$: \begin{align} \nonumber Var(X|Y=y)=E[X^2|Y=y]-(E[X|Y=y])^2 \end{align}

Example
Let $X$ and $Y$ be as in Example 5.21. Find $E[X|Y=1]$ and Var$(X|Y=1)$.

  • Solution
    • \begin{align} \nonumber E[X|Y=1]&=\int_{-\infty}^{\infty} xf_{X|Y}(x|1) dx\\ \nonumber &=\int_{0}^{1} x\frac{3x^2+3y^2+2xy}{3y^2+y+1}|_{y=1} \hspace{5pt} dx\\ \nonumber &=\int_{0}^{1} x\frac{3x^2+3+2x}{3+1+1} \hspace{5pt} dx \hspace{30pt} (y=1)\\ \nonumber &=\frac{1}{5} \int_{0}^{1} 3x^3+2x^2+3x \hspace{5pt} dx \\ \nonumber &=\frac{7}{12}, \end{align} \begin{align} \nonumber E[X^2|Y=1]&=\int_{-\infty}^{\infty} x^2f_{X|Y}(x|1) dx\\ \nonumber &=\frac{1}{5} \int_{0}^{1} 3x^4+2x^3+3x^2 \hspace{5pt} dx \\ \nonumber &=\frac{21}{50}. \end{align} So we have \begin{align} \nonumber \textrm{Var}(X|Y=1)&=E[X^2|Y=1]-(E[X|Y=1])^2\\ \nonumber &=\frac{21}{50}-\left(\frac{7}{12}\right)^2\\ \nonumber &=\frac{287}{3600}. \end{align}
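A quick numerical confirmation of these two answers, by integrating the conditional PDF found in the previous example (an illustrative check only):

```python
# Confirm E[X | Y = 1] = 7/12 and Var(X | Y = 1) = 287/3600 numerically.
from scipy.integrate import quad

def f_cond(x, y=1.0):               # f_{X|Y}(x|y) for 0 <= x <= 1
    return (3 * x**2 + 3 * y**2 + 2 * x * y) / (3 * y**2 + y + 1)

m1, _ = quad(lambda x: x * f_cond(x), 0, 1)      # E[X | Y = 1]
m2, _ = quad(lambda x: x**2 * f_cond(x), 0, 1)   # E[X^2 | Y = 1]

print(m1, 7 / 12)                   # ~ 0.58333
print(m2 - m1**2, 287 / 3600)       # ~ 0.07972
```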


Independent Random Variables:

When two jointly continuous random variables are independent, we must have \begin{align}%\label{} \nonumber f_{X|Y}(x|y)=f_X(x). \end{align} That is, knowing the value of $Y$ does not change the PDF of $X$. Since $f_{X|Y}(x|y)=\frac{f_{XY}(x,y)}{f_Y(y)}$, we conclude that for two independent continuous random variables we must have \begin{align}%\label{} \nonumber f_{XY}(x,y)=f_X(x)f_Y(y). \end{align}

Two continuous random variables $X$ and $Y$ are independent if \begin{align}%\label{} \nonumber f_{XY}(x,y)=f_X(x) f_Y(y), \hspace{10pt} \textrm{ for all }x,y. \end{align} Equivalently, $X$ and $Y$ are independent if \begin{align}%\label{} \nonumber F_{XY}(x,y)=F_X(x) F_Y(y), \hspace{10pt} \textrm{ for all }x,y. \end{align} If $X$ and $Y$ are independent, we have \begin{align}%\label{} \nonumber &E[XY]=EX EY, \\ \nonumber &E[g(X)h(Y)]=E[g(X)]E[h(Y)]. \end{align}

Suppose that we are given the joint PDF $f_{XY}(x,y)$ of two random variables $X$ and $Y$. If we can write \begin{align}%\label{} \nonumber f_{XY}(x,y)=f_1(x)f_2(y), \end{align} then $X$ and $Y$ are independent.


Example
Determine whether $X$ and $Y$ are independent:

  1. $f_{XY}(x,y) = \left\{ \begin{array}{l l} 2e^{-x-2y} & \quad x,y>0 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right.$
  2. $f_{XY}(x,y) = \left\{ \begin{array}{l l} 8xy & \quad 0<x<y<1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right.$
  • Solution
      1. We can write \begin{align}%\label{} \nonumber f_{XY}(x,y)=\big[e^{-x}u(x)\big]\big[2e^{-2y}u(y)\big], \end{align} where $u(x)$ is the unit step function: \begin{align}%\label{} \nonumber u(x)= \left\{ \begin{array}{l l} 1 & \quad x \geq 0 \\ 0 & \quad \text{otherwise} \end{array} \right. \end{align} Thus, we conclude that $X$ and $Y$ are independent.
      2. For this case, it does not seem that we can write $f_{XY}(x,y)$ as a product of some $f_1(x)$ and $f_2(y)$. Note that the given region $0<x<y<1$ enforces that $x<y$. That is, we always have $X<Y$. Thus, we conclude that $X$ and $Y$ are not independent. To show this, we can obtain the marginal PDFs of $X$ and $Y$ and show that $f_{XY}(x,y) \neq f_X(x) f_Y(y), \textrm{ for some }x,y$. We have, for $0 \leq x \leq 1$, \begin{align}%\label{} \nonumber f_X(x)&=\int_{x}^{1}8xy dy \\ \nonumber &=4x(1-x^2). \end{align} Thus, \begin{equation} \nonumber f_X(x) = \left\{ \begin{array}{l l} 4x(1-x^2) & \quad 0<x<1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} Similarly, we obtain \begin{equation} \nonumber f_Y(y) = \left\{ \begin{array}{l l} 4y^3 & \quad 0<y<1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} As we see, $f_{XY}(x,y)\neq f_X(x) f_Y(y)$, thus $X$ and $Y$ are NOT independent.
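In code, the factorization test amounts to comparing $f_{XY}(x,y)$ with $f_X(x)f_Y(y)$; a single point of disagreement already proves dependence. The test point below is an arbitrary choice:

```python
# Part 2 of the example: the joint PDF vanishes whenever x > y, but the
# product of the marginals does not, so X and Y cannot be independent.
def f_XY(x, y):
    return 8 * x * y if 0 < x < y < 1 else 0.0

def f_X(x):
    return 4 * x * (1 - x**2) if 0 < x < 1 else 0.0

def f_Y(y):
    return 4 * y**3 if 0 < y < 1 else 0.0

x, y = 0.75, 0.25                    # a point with x > y, so f_XY(x, y) = 0
print(f_XY(x, y), f_X(x) * f_Y(y))   # 0.0 vs ~0.082 -> NOT independent
```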


Example
Consider the unit disc \begin{align}%\label{} \nonumber D=\{(x,y)|x^2+y^2 \leq 1\}. \end{align} Suppose that we choose a point $(X,Y)$ uniformly at random in $D$. That is, the joint PDF of $X$ and $Y$ is given by \begin{equation} \nonumber f_{XY}(x,y) = \left\{ \begin{array}{l l} c & \quad (x,y) \in D \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation}

  1. Find the constant $c$.
  2. Find the marginal PDFs $f_X(x)$ and $f_Y(y)$.
  3. Find the conditional PDF of $X$ given $Y=y$, where $-1 \leq y \leq 1$.
  4. Are $X$ and $Y$ independent?
  • Solution
      1. We have \begin{align}%\label{} \nonumber 1&=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x,y)dxdy\\ \nonumber &=\iint \limits_{D} c \hspace{5pt} dxdy\\ \nonumber &=c (\textrm{area of } D)\\ \nonumber &=c (\pi). \end{align} Thus, $c=\frac{1}{\pi}$.
      2. For $-1 \leq x \leq 1$, we have \begin{align}%\label{} \nonumber f_X(x)&=\int_{-\infty}^{\infty} f_{XY}(x,y)dy\\ \nonumber &=\int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \frac{1}{\pi}\hspace{5pt} dy\\ \nonumber &=\frac{2}{\pi}\sqrt{1-x^2}. \end{align} Thus, \begin{equation} \nonumber f_X(x) = \left\{ \begin{array}{l l} \frac{2}{\pi}\sqrt{1-x^2} & \quad -1 \leq x \leq 1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation} Similarly, \begin{equation} \nonumber f_Y(y) = \left\{ \begin{array}{l l} \frac{2}{\pi}\sqrt{1-y^2} & \quad -1 \leq y \leq 1 \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{equation}
      3. We have \begin{align}%\label{} \nonumber f_{X|Y}(x|y)&=\frac{f_{XY}(x,y)}{f_Y(y)}\\ \nonumber &=\left\{ \begin{array}{l l} \frac{1}{2\sqrt{1-y^2}} & \quad -\sqrt{1-y^2} \leq x \leq \sqrt{1-y^2} \\ & \quad \\ 0 & \quad \text{otherwise} \end{array} \right. \end{align} Note that the above equation indicates that, given $Y=y$, $X$ is uniformly distributed on $[-\sqrt{1-y^2},\sqrt{1-y^2}]$. We write \begin{align}%\label{} \nonumber X|Y=y \hspace{5pt} \sim \hspace{5pt} Uniform(-\sqrt{1-y^2},\sqrt{1-y^2}). \end{align}
      4. No, $X$ and $Y$ are not independent, because $f_{XY}(x,y)\neq f_X(x) f_Y(y)$: on $D$, the joint PDF equals the constant $\frac{1}{\pi}$, while the product $f_X(x)f_Y(y)=\frac{4}{\pi^2}\sqrt{1-x^2}\sqrt{1-y^2}$ is not constant.
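The disc example is also easy to simulate. The sketch below (sample size, seed, and the test value $y_0=0.5$ are arbitrary choices) rejection-samples uniform points in $D$ and checks that, near a fixed $y_0$, the $X$-values behave like a $Uniform(-\sqrt{1-y_0^2},\sqrt{1-y_0^2})$ sample:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(4_000_000, 2))
pts = pts[(pts**2).sum(axis=1) <= 1]      # keep only points inside the unit disc

y0, eps = 0.5, 0.01                       # condition on Y close to y0
x_slice = pts[np.abs(pts[:, 1] - y0) < eps, 0]

h = np.sqrt(1 - y0**2)                    # chord endpoints are -h and +h
print(x_slice.mean(), 0.0)                # ~ 0, the mean of Uniform(-h, h)
print(x_slice.var(), (2 * h)**2 / 12)     # both ~ the variance of Uniform(-h, h)
```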


Law of Total Probability:

Now, we'll discuss the law of total probability for continuous random variables. This is completely analogous to the discrete case. In particular, the law of total probability, the law of total expectation (law of iterated expectations), and the law of total variance can be stated as follows:

Law of Total Probability:

\begin{align}\label{eq:LOTP-cont} P(A)=\int_{-\infty}^{\infty}P(A|X=x)f_X(x) \hspace{5pt} dx \hspace{20pt} (5.16) \end{align}


Law of Total Expectation:

\begin{align}\label{eq:LOTE-cont} \nonumber E[Y]&=\int_{-\infty}^{\infty}E[Y|X=x]f_X(x) \hspace{5pt} dx \hspace{20pt} (5.17)\\ &=E[E[Y|X]] \end{align}


Law of Total Variance:

\begin{align}\label{eq:LOTV-cont} \textrm{Var}(Y)=E[\textrm{Var}(Y|X)]+\textrm{Var}(E[Y|X]) \hspace{20pt} (5.18) \end{align}

Let's look at some examples.


Example
Let $X$ and $Y$ be two independent $Uniform(0,1)$ random variables. Find $P(X^3+Y>1)$.

  • Solution
    • Using the law of total probability (Equation 5.16), we can write \begin{align} \nonumber P(X^3+Y>1)&= \int_{-\infty}^{\infty}P(X^3+Y>1|X=x)f_X(x) \hspace{5pt} dx\\ \nonumber &=\int_{0}^{1}P(x^3+Y>1|X=x) \hspace{5pt} dx\\ \nonumber &=\int_{0}^{1}P(Y>1-x^3) \hspace{5pt} dx &\textrm{(since $X$ and $Y$ are independent)} \\ \nonumber &=\int_{0}^{1}x^3 \hspace{5pt} dx &\textrm{(since $Y \sim Uniform(0,1)$)}\\ \nonumber &=\frac{1}{4}. \end{align}
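A quick Monte Carlo check of this answer (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(size=(2, 1_000_000))   # independent Uniform(0,1) samples
print(np.mean(x**3 + y > 1))              # ~ 0.25
```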


Example
Suppose $X \sim Uniform(1,2)$ and, given $X=x$, $Y$ is an exponential random variable with parameter $\lambda=x$, so we can write \begin{align} \nonumber Y|X=x \hspace{10pt} \sim \hspace{10pt} Exponential(x). \end{align} We sometimes write this as \begin{align} \nonumber Y|X \hspace{10pt} \sim \hspace{10pt} Exponential(X). \end{align}

  1. Find $EY$.
  2. Find $Var(Y)$.
  • Solution
      1. We use the law of total expectation (Equation 5.17) to find $EY$. Remember that if $Y \sim Exponential(\lambda)$, then $EY=\frac{1}{\lambda}$. Thus, we conclude \begin{align} \nonumber E[Y|X=x]=\frac{1}{x}. \end{align} Using the law of total expectation, we have \begin{align} \nonumber EY&=\int_{-\infty}^{\infty} E[Y|X=x] f_X(x)dx\\ \nonumber &=\int_{1}^{2} E[Y|X=x] \cdot 1 \hspace{5pt} dx\\ \nonumber &=\int_{1}^{2}\frac{1}{x}dx\\ \nonumber &= \ln 2. \end{align} Another way to write the above calculation is \begin{align} \nonumber EY&=E[E[Y|X]] &(\textrm{law of total expectation})\\ \nonumber &=E\left[\frac{1}{X}\right] &\left(\textrm{since } E[Y|X]=\frac{1}{X}\right)\\ \nonumber &=\int_{1}^{2}\frac{1}{x}dx\\ \nonumber &= \ln 2. \end{align}
      2. To find $Var(Y)$, we can write \begin{align} \nonumber Var(Y)&=E[Y^2]-(E[Y])^2\\ &=E[Y^2]-(\ln 2)^2\\ &=E\big[E[Y^2|X]\big]-(\ln 2)^2 & (\textrm{law of total expectation})\\ &=E\left[\frac{2}{X^2}\right]-(\ln 2)^2 & \big(\textrm{since } Y|X \sim Exponential(X)\big)\\ &=\int_{1}^{2}\frac{2}{x^2}dx-(\ln 2)^2 \\ &=1-(\ln 2)^2. \end{align} Another way to find $Var(Y)$ is to apply the law of total variance: \begin{align} \textrm{Var}(Y)=E[\textrm{Var}(Y|X)]+\textrm{Var}(E[Y|X]). \end{align} Since $Y|X \sim Exponential(X)$, we conclude \begin{align} &E[Y|X]=\frac{1}{X},\\ &Var(Y|X)=\frac{1}{X^2}. \end{align} Therefore \begin{align} \textrm{Var}(Y)&=E\left[\frac{1}{X^2}\right]+\textrm{Var}\left(\frac{1}{X}\right)\\ &=E\left[\frac{1}{X^2}\right]+E\left[\frac{1}{X^2}\right]-\left(E\left[\frac{1}{X}\right]\right)^2\\ &=E\left[\frac{2}{X^2}\right]-(\ln 2)^2\\ &=1-(\ln 2)^2. \end{align}
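The hierarchical description $Y|X \sim Exponential(X)$ lends itself to direct simulation. The sketch below (sample size and seed arbitrary) confirms both answers:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(1, 2, size=2_000_000)   # X ~ Uniform(1, 2)
y = rng.exponential(scale=1 / x)        # Y | X = x ~ Exponential(rate x), i.e. scale 1/x

print(y.mean(), np.log(2))              # ~ 0.693
print(y.var(), 1 - np.log(2)**2)        # ~ 0.520
```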


Source: https://www.probabilitycourse.com/chapter5/5_2_3_conditioning_independence.php
