
Commit cc4365f

Tom's July 27 edits of likelihood Bayes lecture
1 parent ec8652d commit cc4365f

1 file changed

Lines changed: 61 additions & 7 deletions

lectures/likelihood_bayes.md
@@ -185,21 +185,40 @@ l_seq_f = np.cumprod(l_arr_f, axis=1)

 ## Likelihood Ratio Process and Bayes’ Law

-Let $\pi_t$ be a Bayesian posterior defined as
+Let $\pi_{t+1}$ be a Bayesian posterior probability defined as

 $$
-\pi_t = {\rm Prob}(q=f|w^t)
+\pi_{t+1} = {\rm Prob}(q=f|w^{t+1})
 $$

 The likelihood ratio process is a principal actor in the formula that governs the evolution
 of the posterior probability $\pi_t$, an instance of **Bayes' Law**.

-Bayes’ law implies that $\{\pi_t\}$ obeys the recursion
+Bayes' law is just the following application of the standard formula for conditional probability:
+
+$$
+{\rm Prob}(q=f|w^{t+1}) = \frac{ {\rm Prob}(q=f|w^{t}) \, f(w_{t+1})}{ {\rm Prob}(q=f|w^{t}) \, f(w_{t+1}) + \left(1 - {\rm Prob}(q=f|w^{t})\right) g(w_{t+1})}
+$$
+
+or
+
+$$
+\pi_{t+1} = \frac{ \pi_t f(w_{t+1})}{ \pi_t f(w_{t+1}) + (1 - \pi_t) g(w_{t+1})}
+$$ (eq:bayes150)
+
+Evidently, the above equation asserts that
+
+$$
+{\rm Prob}(q=f|w^{t+1}) = \frac{{\rm Prob}(q=f|w^{t}) \, f(w_{t+1})}{{\rm Prob}(w_{t+1} \mid w^{t})}
+$$
+
+Dividing both the numerator and the denominator on the right side of equation {eq}`eq:bayes150` by $g(w_{t+1})$ implies the recursion
 ```{math}
 :label: eq_recur1
-\pi_t=\frac{\pi_{t-1} l_t(w_t)}{\pi_{t-1} l_t(w_t)+1-\pi_{t-1}}
+\pi_{t+1}=\frac{\pi_{t} l_{t+1}(w_{t+1})}{\pi_{t} l_{t+1}(w_{t+1})+1-\pi_{t}}
 ```

 with $\pi_{0}$ being a Bayesian prior probability that $q = f$,
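
To make the recursion {eq}`eq_recur1` concrete, here is a minimal sketch of a single posterior update. The Beta densities below are illustrative stand-ins for $f$ and $g$, not necessarily the densities the lecture actually uses.

```python
from scipy.stats import beta

# Illustrative stand-ins for f and g -- swap in the lecture's densities as needed
f = beta(1, 1).pdf
g = beta(3, 1.2).pdf

def update(π, w):
    "One Bayes' law update of the posterior that q = f, written via the likelihood ratio."
    l = f(w) / g(w)                 # likelihood ratio l(w_{t+1})
    return π * l / (π * l + 1 - π)

π0, w1 = 0.5, 0.6
print(update(π0, w1))               # posterior π_1 after observing w_1
```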
@@ -277,6 +296,16 @@ and the initial prior $\pi_{0}$

 Formula {eq}`eq_Bayeslaw103` generalizes formula {eq}`eq_recur1`.

+```{note}
+Formula {eq}`eq_Bayeslaw103` can also be derived by starting from the formula for conditional probability
+
+$$
+\pi_{t+1} \equiv {\rm Prob}(q=f|w^{t+1}) = \frac{ \pi_0 f(w^{t+1})}{ \pi_0 f(w^{t+1}) + (1 - \pi_0) g(w^{t+1})},
+$$
+
+where $f(w^{t+1})$ and $g(w^{t+1})$ denote the joint densities of the history $w^{t+1}$ under $f$ and $g$, and then dividing the numerator and the denominator on the right side by $g(w^{t+1})$.
+```

 Formula {eq}`eq_Bayeslaw103` can be regarded as a one-step revision of prior probability $\pi_0$ after seeing
 the batch of data $\left\{ w_{i}\right\} _{i=1}^{t+1}$.
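
As a numerical sanity check of the note above, the following sketch (reusing the hypothetical Beta stand-ins for $f$ and $g$) verifies that iterating the recursion {eq}`eq_recur1` reproduces the one-shot revision of the prior by the cumulative likelihood ratio.

```python
import numpy as np
from scipy.stats import beta

# Hypothetical stand-ins for f and g, as in the earlier sketch
f = beta(1, 1).pdf
g = beta(3, 1.2).pdf

rng = np.random.default_rng(0)
w = beta(1, 1).rvs(size=50, random_state=rng)  # a history w^t drawn from f

# Route 1: iterate the recursion period by period
π = π0 = 0.5
for wt in w:
    l = f(wt) / g(wt)
    π = π * l / (π * l + 1 - π)

# Route 2: revise the prior once, using the cumulative likelihood ratio
L = np.prod(f(w) / g(w))
π_batch = π0 * L / (π0 * L + 1 - π0)

print(np.isclose(π, π_batch))  # True: the two routes agree
```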
@@ -378,8 +407,33 @@ np.abs(π_seq - π_seq_f).max() < 1e-10
 ```

 We thus conclude that the likelihood ratio process is a key ingredient of the formula {eq}`eq_Bayeslaw103` for
-a Bayesian's posteior probabilty that nature has drawn history $w^t$ as repeated draws from density
-$g$.
+a Bayesian's posterior probability that nature has drawn history $w^t$ as repeated draws from density
+$f$.
+
+## Another timing protocol
+
+Let's study how the posterior probability $\pi_t = {\rm Prob}(q=f|w^{t})$ behaves when nature generates the
+history $w^t = w_1, w_2, \ldots, w_t$ under a different timing protocol.
+
+Above we assumed that before time $1$ nature somehow chose to draw $w^t$ as an iid sequence from **either** $f$ **or** $g$.
+
+Nature's decision about whether to draw from $f$ or $g$ was thus **permanent**.
+
+We now assume another timing protocol in which before **each period** $t = 1, 2, \ldots$ nature flips an unfair coin and with probability
+$x \in (0,1)$ draws from $f$ in period $t$ and with probability $1 - x$ draws from $g$.
+
+Under this timing protocol, it is appropriate to interpret the Bayesian prior $\pi_0$ as the statistician's opinion about nature's $x$.
+
+Let's write some Python code to study how $\pi_t$ behaves for various values of nature's mixing probability $x$.
+
+**Note to Humphrey**: please write code to do this and give three example simulations. In these simulations, set $x = .5$ and set $\pi_0$ at three values -- .25, .5, and .75. It should be fun to watch $\pi_t$ converge to $x$!
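
As a possible starting point for the requested simulations, here is a minimal sketch under stated assumptions: the Beta densities are hypothetical stand-ins for $f$ and $g$, and the settings $x = .5$ and $\pi_0 \in \{.25, .5, .75\}$ follow the note above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

# Hypothetical stand-ins for f and g -- swap in the lecture's densities as needed
F, G = beta(1, 1), beta(3, 1.2)

def simulate_mixture_path(x, π0, T, rng):
    "Posterior path {π_t} when each w_t is drawn from f w.p. x and from g w.p. 1 - x."
    π = np.empty(T + 1)
    π[0] = π0
    for t in range(T):
        dist = F if rng.random() < x else G        # nature's period-t coin flip
        w = dist.rvs(random_state=rng)
        l = F.pdf(w) / G.pdf(w)                    # likelihood ratio
        π[t + 1] = π[t] * l / (π[t] * l + 1 - π[t])
    return π

rng = np.random.default_rng(42)
T, x = 1000, 0.5
for π0 in (0.25, 0.5, 0.75):
    plt.plot(simulate_mixture_path(x, π0, T, rng), label=fr"$\pi_0 = {π0}$")
plt.axhline(x, linestyle="--", color="black", label="$x$")
plt.xlabel("$t$")
plt.ylabel(r"$\pi_t$")
plt.legend()
plt.show()
```

Where $\pi_t$ settles down under this protocol is exactly what these simulations are meant to reveal.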
@@ -811,7 +865,7 @@ Notice how the conditional variance approaches $0$ for $\pi_{t-1}$ near either

 The conditional variance is nearly zero only when the agent is almost sure that $w_t$ is drawn from $F$, or is almost sure it is drawn from $G$.

-## Sequels
+## Related Lectures

 This lecture has been devoted to building some useful infrastructure that will help us understand inferences that are the foundations of
 results described in {doc}`this lecture <odu>` and {doc}`this lecture <wald_friedman>` and {doc}`this lecture <navy_captain>`.
