|
7 | 7 | <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes"> |
8 | 8 |
|
9 | 9 | <meta name="author" content="Tom Cunningham"> |
10 | | -<meta name="dcterms.date" content="2026-01-26"> |
| 10 | +<meta name="dcterms.date" content="2026-01-29"> |
11 | 11 | <meta name="description" content="Tom Cunningham blog"> |
12 | 12 |
|
13 | 13 | <title>The Bayesian Interpretation of Experiments | Tom Cunningham – Tom Cunningham</title> |
|
139 | 139 | <link rel="stylesheet" href="../styles.css"> |
140 | 140 | <meta name="twitter:title" content="The Bayesian Interpretation of Experiments | Tom Cunningham"> |
141 | 141 | <meta name="twitter:description" content="Tom Cunningham blog"> |
142 | | -<meta name="twitter:image" content="tecunningham.github.io/posts/2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-1-1.png"> |
143 | | -<meta name="twitter:image-height" content="883"> |
144 | | -<meta name="twitter:image-width" content="1105"> |
145 | | -<meta name="twitter:card" content="summary_large_image"> |
| 142 | +<meta name="twitter:card" content="summary"> |
146 | 143 | </head> |
147 | 144 |
|
148 | 145 | <body class="nav-fixed quarto-light"> |
@@ -215,7 +212,7 @@ <h1 class="title">The Bayesian Interpretation of Experiments</h1> |
215 | 212 | <div> |
216 | 213 | <div class="quarto-title-meta-heading">Published</div> |
217 | 214 | <div class="quarto-title-meta-contents"> |
218 | | - <p class="date">January 26, 2026</p> |
| 215 | + <p class="date">January 29, 2026</p> |
219 | 216 | </div> |
220 | 217 | </div> |
221 | 218 |
|
@@ -257,55 +254,10 @@ <h1>Bayesian vs Classical Inference</h1> |
257 | 254 | <li><strong>Bayesian.</strong> You set <code>posterior</code> to be a weighted average of <code>outcome</code> and of <code>prior</code>. The prior represents your best estimate of the effect before the experiment ran. The position of the posterior between those two points depends on the relative tightness of the two distributions: if the confidence intervals from your experiment are tight relative to the uncertainty in your prior then the posterior will be closer to the outcome; if instead the confidence intervals are wide relative to your prior then the posterior will end up closer to your prior.</li> |
258 | 255 | </ol> |
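The precision-weighted averaging described above can be sketched for the normal-normal case. This is a minimal Python illustration (the numbers are hypothetical, not from the post): the posterior mean weights the prior mean and the experimental outcome by their inverse variances.

```python
# Normal prior + normal experimental estimate => normal posterior whose
# mean is a precision-weighted average of the two (conjugate update).

def posterior_normal(prior_mean, prior_sd, outcome, outcome_se):
    """Posterior mean and sd for a normal prior and a normal likelihood."""
    w_prior = 1.0 / prior_sd**2      # precision of the prior
    w_outcome = 1.0 / outcome_se**2  # precision of the experiment
    mean = (w_prior * prior_mean + w_outcome * outcome) / (w_prior + w_outcome)
    sd = (w_prior + w_outcome) ** -0.5
    return mean, sd

# Tight experiment relative to the prior: posterior hugs the outcome.
print(posterior_normal(0.0, 1.0, 2.0, 0.2))
# Wide experiment relative to the prior: posterior stays near the prior.
print(posterior_normal(0.0, 0.2, 2.0, 1.0))
```

The second call flips which distribution is tighter, reproducing the two regimes described in the list above.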
259 | 256 | <p>Graphically we can show the three distributions:</p> |
260 | | -<div class="cell" data-layout-align="center"> |
261 | | -<div class="cell-output-display"> |
262 | | -<div class="quarto-figure quarto-figure-center"> |
263 | | -<figure class="figure"> |
264 | | -<p><img src="2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-1-1.png" class="img-fluid quarto-figure quarto-figure-center figure-img" width="384"></p> |
265 | | -</figure> |
266 | | -</div> |
267 | | -</div> |
268 | | -</div> |
269 | 257 | </section> |
270 | 258 | <section id="classical-inference-as-constrained-prior" class="level1"> |
271 | 259 | <h1>Classical Inference as Constrained Prior</h1> |
272 | 260 | <p>Suppose our prior was a spike at zero and otherwise uniform. This prior will cause Bayesian inference to behave similarly to classical inference: when the outcome is small the posterior will be heavily influenced by the spike, and so will shrink to be very near zero. As the outcome becomes larger, at some point it will escape the gravity of the central spike, and we’ll have <code>posterior~=outcome</code>.</p> |
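The spike-and-slab behaviour can be made concrete with a short Python sketch (a wide normal stands in for the "otherwise uniform" part of the prior; the spike probability and slab width are illustrative choices, not values from the post):

```python
import math

def normal_pdf(x, sd):
    """Density of a mean-zero normal at x."""
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def spike_slab_posterior_mean(outcome, se, p_spike=0.5, slab_sd=5.0):
    """Posterior mean under a spike-at-zero prior mixed with a wide
    normal slab. p_spike and slab_sd are hypothetical parameters."""
    # Marginal likelihood of the outcome under each prior component.
    like_spike = normal_pdf(outcome, se)
    like_slab = normal_pdf(outcome, math.sqrt(slab_sd**2 + se**2))
    post_spike = (p_spike * like_spike /
                  (p_spike * like_spike + (1 - p_spike) * like_slab))
    # Conditional on the slab, the outcome is shrunk only slightly.
    slab_mean = outcome * slab_sd**2 / (slab_sd**2 + se**2)
    return (1 - post_spike) * slab_mean  # the spike contributes zero

for outcome in [0.5, 1.0, 2.0, 4.0]:
    print(outcome, round(spike_slab_posterior_mean(outcome, se=1.0), 3))
```

Small outcomes are pulled almost entirely to zero by the spike; large outcomes escape it and land near the raw estimate, matching the description above.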
273 | | -<div class="cell" data-layout-align="center"> |
274 | | -<div class="cell-output-display"> |
275 | | -<div class="quarto-figure quarto-figure-center"> |
276 | | -<figure class="figure"> |
277 | | -<p><img src="2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-2-1.png" class="img-fluid quarto-figure quarto-figure-center figure-img" width="192"></p> |
278 | | -</figure> |
279 | | -</div> |
280 | | -</div> |
281 | | -</div> |
282 | | -<div class="cell" data-layout-align="center"> |
283 | | -<div class="cell-output-display"> |
284 | | -<div class="quarto-figure quarto-figure-center"> |
285 | | -<figure class="figure"> |
286 | | -<p><img src="2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-3-1.png" class="img-fluid quarto-figure quarto-figure-center figure-img" width="192"></p> |
287 | | -</figure> |
288 | | -</div> |
289 | | -</div> |
290 | | -</div> |
291 | | -<div class="cell" data-layout-align="center"> |
292 | | -<div class="cell-output-display"> |
293 | | -<div class="quarto-figure quarto-figure-center"> |
294 | | -<figure class="figure"> |
295 | | -<p><img src="2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-4-1.png" class="img-fluid quarto-figure quarto-figure-center figure-img" width="192"></p> |
296 | | -</figure> |
297 | | -</div> |
298 | | -</div> |
299 | | -</div> |
300 | | -<div class="cell" data-layout-align="center"> |
301 | | -<div class="cell-output-display"> |
302 | | -<div class="quarto-figure quarto-figure-center"> |
303 | | -<figure class="figure"> |
304 | | -<p><img src="2020-08-06-bayesian-interpretation-of-experiments_files/figure-html/unnamed-chunk-5-1.png" class="img-fluid quarto-figure quarto-figure-center figure-img" width="192"></p> |
305 | | -</figure> |
306 | | -</div> |
307 | | -</div> |
308 | | -</div> |
309 | 261 | <p><strong>The point:</strong> the two graphs at the bottom of the figure are similar: i.e., using a “stat-sig rule” is a not-too-bad approximation of Bayesian inference when you have a fat-tailed prior (and in most cases your prior should be fat-tailed).</p> |
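The claim that a "stat-sig rule" approximates Bayesian inference under a fat-tailed prior can be checked numerically. A minimal self-contained sketch, using a Cauchy prior as one example of a fat-tailed prior and simple grid integration (the prior scale, grid size, and cutoff are illustrative assumptions):

```python
import math

def statsig_estimate(outcome, se, z=1.96):
    """Classical 'stat-sig rule': keep the outcome only if significant."""
    return outcome if abs(outcome / se) > z else 0.0

def cauchy_posterior_mean(outcome, se, scale=1.0, grid=2000, lim=50.0):
    """Posterior mean under a fat-tailed (Cauchy) prior, computed by
    numeric integration on a grid; scale/grid/lim are arbitrary choices."""
    num = den = 0.0
    for i in range(grid):
        theta = -lim + (2 * lim) * (i + 0.5) / grid
        prior = 1.0 / (math.pi * scale * (1 + (theta / scale) ** 2))
        like = math.exp(-0.5 * ((outcome - theta) / se) ** 2)
        num += theta * prior * like
        den += prior * like
    return num / den

for outcome in [1.0, 3.0]:
    print(outcome, statsig_estimate(outcome, 1.0),
          round(cauchy_posterior_mean(outcome, 1.0), 2))
```

An insignificant outcome is zeroed out by the rule and shrunk heavily by the fat-tailed prior; a clearly significant outcome is kept by the rule and only mildly shrunk, so the two procedures roughly agree at both ends.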
310 | 262 | </section> |
311 | 263 | <section id="applications" class="level1"> |
|