Commit 801a78b

Update chunk names
[ci skip]
1 parent 6693c33 commit 801a78b

2 files changed: 19 additions & 19 deletions

vignettes/loo2-moment-matching.Rmd

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ regression model as in the case study.
 ## Coding the Stan model
 
 Here is the Stan code for fitting the Poisson regression model, which
-we will use for modelling the number of roaches.
+we will use for modeling the number of roaches.
 
 ```{r stancode}
 stancode <- "

vignettes/loo2-weights.Rmd

Lines changed: 18 additions & 18 deletions
@@ -56,7 +56,7 @@ weights or the similar pseudo-BMA weights.
 In addition to the __loo__ package we will also load the __rstanarm__ package
 for fitting the models.
 
-```{r, setup, message=FALSE}
+```{r setup, message=FALSE}
 library(rstanarm)
 library(loo)
 ```
@@ -68,7 +68,7 @@ example as follows:
 
 > A popular hypothesis has it that primates with larger brains produce more energetic milk, so that brains can grow quickly. ... The question here is to what extent energy content of milk, measured here by kilocalories, is related to the percent of the brain mass that is neocortex. ... We'll end up needing female body mass as well, to see the masking that hides the relationships among the variables.
 
-```{r}
+```{r data}
 data(milk)
 d <- milk[complete.cases(milk),]
 d$neocortex <- d$neocortex.perc /100
@@ -78,7 +78,7 @@ str(d)
 We repeat the analysis in Chapter 6 of _Statistical Rethinking_ using the
 following four models (here we use the default weakly informative priors in __rstanarm__, while flat priors were used in _Statistical Rethinking_).
 
-```{r, results="hide"}
+```{r fits, results="hide"}
 fit1 <- stan_glm(kcal.per.g ~ 1, data = d, seed = 2030)
 fit2 <- update(fit1, formula = kcal.per.g ~ neocortex)
 fit3 <- update(fit1, formula = kcal.per.g ~ log(mass))
@@ -94,7 +94,7 @@ by the __rstanarm__ package (a wrapper around `waic` from the __loo__ package),
 which allows us to just pass in our fitted model objects instead of first
 extracting the log-likelihood values.
 
-```{r}
+```{r waic}
 waic1 <- waic(fit1)
 waic2 <- waic(fit2)
 waic3 <- waic(fit3)
@@ -117,7 +117,7 @@ needed values to pass to the __loo__ package. (Like __rstanarm__, some other R
 packages for fitting Stan models, e.g. __brms__, also provide similar methods
 for interfacing with the __loo__ package.)
 
-```{r}
+```{r loo}
 # note: the loo function accepts a 'cores' argument that we recommend specifying
 # when working with bigger datasets
@@ -136,7 +136,7 @@ lpd_point <- cbind(
 With `loo` we don't get any warnings for models 3 and 4, but for illustration of
 good results, we display the diagnostic details for these models anyway.
 
-```{r}
+```{r print-loo}
 print(loo3)
 print(loo4)
 ```
@@ -149,7 +149,7 @@ Next we compute and compare 1) WAIC weights, 2) Pseudo-BMA weights without
 Bayesian bootstrap, 3) Pseudo-BMA+ weights with Bayesian bootstrap, and 4)
 Bayesian stacking weights.
 
-```{r}
+```{r weights}
 waic_wts <- exp(waics) / sum(exp(waics))
 pbma_wts <- pseudobma_weights(lpd_point, BB=FALSE)
 pbma_BB_wts <- pseudobma_weights(lpd_point) # default is BB=TRUE
@@ -173,7 +173,7 @@ bunch of times, but you can imagine that instead we would have ten alternative
 models with about the same predictive performance. WAIC weights for such a
 scenario would be close to the following:
 
-```{r}
+```{r waic_wts_demo}
 waic_wts_demo <-
   exp(waics[c(1,1,1,1,1,1,1,1,1,1,2,3,4)]) /
   sum(exp(waics[c(1,1,1,1,1,1,1,1,1,1,2,3,4)]))
@@ -192,7 +192,7 @@ very similar models (in this toy example repeated models) to share their weight
 while more unique models keep their original weights. In our example
 we can see this difference clearly:
 
-```{r}
+```{r stacking_weights}
 stacking_weights(lpd_point[,c(1,1,1,1,1,1,1,1,1,1,2,3,4)])
 ```
 Using stacking, the weight for the best model stays essentially unchanged.
@@ -208,7 +208,7 @@ McElreath describes as follows:
 We build models predicting the total number of tools given the log
 population size and the contact rate (high vs. low).
 
-```{r}
+```{r Kline}
 data(Kline)
 d <- Kline
 d$log_pop <- log(d$population)
@@ -219,7 +219,7 @@ str(d)
 We start with a Poisson regression model with the log population size,
 the contact rate, and an interaction term between them (priors are
 informative priors as in _Statistical Rethinking_).
-```{r, results="hide"}
+```{r fit10, results="hide"}
 fit10 <-
   stan_glm(
     total_tools ~ log_pop + contact_high + log_pop * contact_high,
@@ -234,7 +234,7 @@ fit10 <-
 Before running other models, we check whether Poisson is good choice
 as the conditional observation model.
 
-```{r}
+```{r loo10}
 loo10 <- loo(fit10)
 print(loo10)
 ```
@@ -249,7 +249,7 @@ We can compute LOO more accurately by running Stan again for the
 leave-one-out folds with high $k$ estimates. When using __rstanarm__
 this can be done by specifying the `k_threshold` argument:
 
-```{r}
+```{r loo10-threshold}
 loo10 <- loo(fit10, k_threshold=0.7)
 print(loo10)
 ```
@@ -258,7 +258,7 @@ In this case we see that there is not much difference, and thus it is relatively
 safe to continue.
 
 As a comparison we also compute WAIC:
-```{r}
+```{r waic10}
 waic10 <- waic(fit10)
 print(waic10)
 ```
@@ -269,7 +269,7 @@ optimistic. We recommend using the PSIS-LOO results instead.
 To assess whether the contact rate and interaction term are useful, we can make
 a comparison to models without these terms.
 
-```{r, results="hide"}
+```{r contact_high, results="hide"}
 fit11 <- update(fit10, formula = total_tools ~ log_pop + contact_high)
 fit12 <- update(fit10, formula = total_tools ~ log_pop)
 ```
@@ -288,7 +288,7 @@ lpd_point <- cbind(
 ```
 
 For comparison we'll also compute WAIC values for these additional models:
-```{r}
+```{r waic-contact_high}
 waic11 <- waic(fit11)
 waic12 <- waic(fit12)
 waics <- c(
@@ -304,7 +304,7 @@ Finally, we compute 1) WAIC weights, 2) Pseudo-BMA weights without
 Bayesian bootstrap, 3) Pseudo-BMA+ weights with Bayesian bootstrap, and
 4) Bayesian stacking weights.
 
-```{r}
+```{r weights-contact_high}
 waic_wts <- exp(waics) / sum(exp(waics))
 pbma_wts <- pseudobma_weights(lpd_point, BB=FALSE)
 pbma_BB_wts <- pseudobma_weights(lpd_point) # default is BB=TRUE
@@ -336,7 +336,7 @@ model objects from __rstanarm__) as well as fitted model objects from
 other packages (e.g. __brms__) that do the preparation work for the user
 (see, e.g., the examples at `help("loo_model_weights", package = "rstanarm")`).
 
-```{r}
+```{r loo_model_weights}
 # using list of loo objects
 loo_list <- list(loo10, loo11, loo12)
 loo_model_weights(loo_list)
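
All 19 changes follow the same pattern: each knitr chunk in the vignettes gains an explicit label as the first token of its header (and in the setup chunk, the comma between `r` and the label is dropped so the token is read as the chunk label rather than an option). A minimal before/after sketch, assuming knitr's standard chunk-header syntax; labeled chunks get stable names for figure and cache files in place of auto-generated ones such as `unnamed-chunk-1`:

````
<!-- before: unlabeled chunk; knitr auto-generates a name -->
```{r}
waic1 <- waic(fit1)
```

<!-- after: chunk labeled "waic"; no comma between r and the label -->
```{r waic}
waic1 <- waic(fit1)
```
````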

0 commit comments
