# JAGS -- Just Another Gibbs Sampler
This chapter focuses on a very simple model -- one for which JAGS is overkill.
This allows us to get familiar with JAGS and the various tools for investigating
JAGS models in a simple setting before moving on to more interesting models.
## What JAGS is
JAGS (Just Another Gibbs Sampler) implements an MCMC algorithm called
Gibbs sampling to sample from the posterior distribution of a Bayesian model.
We will interact with JAGS from within R using the following packages:
* R2jags -- interface between R and JAGS
* coda -- general tools for analyzing and graphing MCMC algorithms
* bayesplot -- a number of useful plots using `ggplot2`
* CalvinBayes -- includes some of the functions from Kruschke's text and other things to make our lives better.
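For reference, a typical setup chunk loads all four at once (a minimal sketch; the chunk label is my own):
```{r ch08-packages, message = FALSE}
library(R2jags)       # interface between R and JAGS
library(coda)         # output analysis and diagnostics for MCMC
library(bayesplot)    # ggplot2-based plots of MCMC output
library(CalvinBayes)  # functions from Kruschke's text and other conveniences
```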
### JAGS documentation
You can find JAGS documentation at <http://people.stat.sc.edu/hansont/stat740/jags_user_manual.pdf>.
This can be useful if you need to find out particulars about things like
the distributions that are available in JAGS.
### Updating C and CLANG
Based on limited testing, it appears that things are good to go and you
should not need to do this.
~~To use the newest versions of JAGS, Stan, and the R packages that accompany
them, we need to use a newer version of some software than is standard for
<rstudio.calvin.edu>. I have taken care of this at the system level,
and that may suffice, but if things don't work in your account,
take the following steps:~~
1. ~~Open a terminal with Tools > Terminal > New Terminal~~
2. ~~Copy-and-paste this into the terminal window.~~
```{bash, eval = FALSE}
echo "source scl_source enable devtoolset-7 llvm-toolset-7" >> ~/.bashrc
```
~~This tells the server to use a newer version of C++ and CLANG.~~
3. ~~Close the terminal~~
4. ~~Restart R with Session > Restart R~~
~~**You should only need to go through these steps once.**~~
## Example 1: estimating a proportion
### The Model
```{r ch08-fig-bernoulli-model, echo = FALSE}
knitr::include_graphics("images/Bernoulli-model.png")
```
That is
\begin{align*}
Y_i &\sim {\sf Bern}(\theta)
\\
\theta &\sim {\sf Beta}(a, b)
\end{align*}
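Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior is available in closed form: if we observe $z$ successes in $N$ trials, then
\begin{align*}
\theta \mid y &\sim {\sf Beta}(a + z, \; b + N - z)
\end{align*}
With the ${\sf Beta}(1, 1)$ prior used below and the data in this example ($z = 15$, $N = 50$), the posterior is ${\sf Beta}(16, 36)$; this gives us a known answer against which to check JAGS.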
### Load Data
The data sets provided as csv files by Kruschke also live in the `CalvinBayes`
package, so you can load this data set with
```{r ch08-glimpse-s15N50}
library(CalvinBayes)
data("z15N50")
glimpse(z15N50)
```
We see that the data are coded as 50 0's and 1's in a variable named `y`.
(You should use better names when creating your own data sets.)
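As a quick sanity check, we can confirm the counts encoded in the name `z15N50` (15 successes in 50 trials):
```{r ch08-count-check}
sum(z15N50$y)    # number of 1's (should be 15)
nrow(z15N50)     # number of attempts (should be 50)
```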
### Specify the model
There are at least three R packages that provide an interface to JAGS:
rjags, R2jags, and runjags. We will primarily use R2jags.
Kruschke primarily uses rjags.
The main advantage of R2jags is that
we can specify the model by creating a special kind of function.[^08-1]
This avoids the need to create temporary files (as rjags requires)
and keeps things tidier in our R Markdown documents.
[^08-1]: This is a bit of a trick that R2jags uses. The function created is never
run. The code is inspected and taken as the description of the model.
If you were to run the function, all it would do is create R formulas.
The main part of the model description is the same in either style, but notice
that when using the function style, we do not need to include `model{ ... }`
in our description. Here's how we describe our simple model.
```{r ch08-bern-model}
bern_model <- function() {
  for (i in 1:N) {
    y[i] ~ dbern(theta)  # each response is Bernoulli with fixed parameter theta
  }
  theta ~ dbeta(1, 1)    # prior for theta
}
```
Some things to note:
* `dbern()` and `dbeta()` are JAGS functions. The JAGS distribution
functions are similar to, but not identical to, the ones in R. (R doesn't
have a Bernoulli distribution at all, for example.) Sometimes the parameterizations
are different. Importantly, JAGS doesn't have named arguments, so the
arguments must go in the order JAGS requires. Notice that we are only
giving the distribution name and its parameters. (So the first argument
that R requires is not part of this in JAGS.)
* JAGS is also not vectorized the way R is, so we will need to write
some explicit for loops to say "do this to every that". In the example
above, the for loop says that for each row of the data (`i in 1:N`),
the response (`y[i]`) is Bernoulli with parameter $\theta$ (`dbern(theta)`).
### Run the model
`R2jags::jags()` can be used to run our JAGS model. We need to specify
three things:
(1) the model we are using (as defined above),
(2) the data we are using,
(3) the parameters we want saved in the posterior sampling.
(`theta` is the only parameter in this model,
but in larger models, we might choose to save only some of the parameters).
The data do not need to be in a data frame, and this usually means a bit
more work on our part to tell JAGS things like how much data there is.
We will prepare all the information JAGS needs about the data in a **list**
using `list()`.
There are some additional, optional things we might want to control as well.
More on those later. For now, let's fit the model using the default values
for everything else.
```{r ch08-load-r2jags}
# Load the R2jags package
library(R2jags)
# Make the same "random" choices each time this is run.
# This makes the Rmd file stable so you can comment on specific results.
set.seed(123)
# Fit the model
bern_jags <-
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta")
  )
```
Let's take a quick look at what we have.
```{r ch08-bern-jags-look}
bern_jags
```
Some notes on the output above:
* `3 chains`: The Gibbs sampler was run 3 times with 3 different starting
values. Each chain ran for 2000 steps, but only the last 1000 steps were saved.
* `n.sims = 3000` (1000 in each of 3 chains).
* `mu.vect`: We see that the average value of `theta` in our
posterior sample is
`r round(mean(bern_jags$BUGSoutput$sims.list$theta),3)`.
* `n.eff = 3000` is the number of effective samples.
In this case, JAGS is being very efficient,
as we would expect since it is just sampling directly from the
posterior distribution.
* `Rhat = 1`: This is a check for possible convergence problems.
If an MCMC sampler has converged, `Rhat` will be 1. So if the value
we see is not very close to 1, that is a sign of problems. Any value
greater than 1.1 is a cause for concern.
* We'll talk more about deviance later.
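If you want these numbers programmatically (to check `Rhat` in a script, say), they are stored in the `BUGSoutput` component of the fitted object; a small sketch:
```{r ch08-bugsoutput}
# summary matrix: mean, sd, quantiles, Rhat, and n.eff for each saved quantity
bern_jags$BUGSoutput$summary
# just the Rhat values
bern_jags$BUGSoutput$summary[, "Rhat"]
```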
## Extracting information from a JAGS run
### posterior()
We can plot the posterior distribution, using `posterior()` to extract
the posterior samples as a data frame. Since we know the posterior
distribution should be Beta(16, 36), we'll add that to our plot as a
reference to see how well our posterior sample is doing.
```{r ch08-berg-jags-post}
library(CalvinBayes)
head(posterior(bern_jags))
gf_dhistogram(~ theta, data = posterior(bern_jags), bins = 50) %>%
  gf_dens(~ theta, size = 1.5, alpha = 0.8) %>%
  gf_dist("beta", shape1 = 16, shape2 = 36, color = "red")
```
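Because `posterior()` returns an ordinary data frame, base R tools apply directly. For example, a central 90% credible interval for `theta` is just a pair of quantiles:
```{r ch08-theta-ci}
quantile(posterior(bern_jags)$theta, probs = c(0.05, 0.95))
```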
### Side note: posterior sampling and the grid method
It is also possible to generate posterior samples when we use the grid method.
The mosaic package includes the `resample()` function, which samples rows
of a data frame with replacement using specified probabilities
(given by the posterior, for example). Here's how that works.
```{r ch08-grid-resampling}
Grid <-
  expand.grid(
    theta = seq(0, 1, by = 0.001)
  ) %>%
  mutate(
    prior = dbeta(theta, 1, 1),
    likelihood = dbinom(15, 50, theta),
    posterior = prior * likelihood,
    posterior = posterior / sum(posterior) / 0.001
  )
Posterior <- resample(Grid, size = 5000, prob = Grid$posterior)
gf_dhistogram(~ theta, data = Posterior, bins = 50) %>%
  gf_dens(~ theta, size = 1.5, alpha = 0.8) %>%
  gf_dist("beta", shape1 = 16, shape2 = 36, color = "red")
```
### Using coda
The `coda` package provides output analysis and diagnostics for
MCMC algorithms. In order to use it, we must convert our JAGS object
into something `coda` recognizes.
We do this with the `as.mcmc()` function.
```{r ch08-bern-mcmc, fig.width = 7, fig.height = 5}
bern_mcmc <- as.mcmc(bern_jags)
plot(bern_mcmc)
```
**Note:** Kruschke uses `rjags` without `R2jags`, so he does this step using
`rjags::coda.samples()` instead of `as.mcmc()`. Both functions result
in the same thing -- posterior samples in a format that `coda` expects, but
they have different starting points.
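Beyond `plot()`, coda provides numerical summaries and diagnostics; two that we will use often:
```{r ch08-coda-summary}
summary(bern_mcmc)        # numerical summaries of the posterior samples
effectiveSize(bern_mcmc)  # coda's estimate of the effective sample size
```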
### Using bayesplot
The mcmc object we extracted with `as.mcmc()` can be used by the
utilities in the `bayesplot` package. Here, for example, is the `bayesplot`
plot of the posterior distribution for theta.
By default, a vertical line segment is drawn at the median of the posterior
distribution.
```{r ch08-mcmc-areas, fig.height = 2}
library(bayesplot)
mcmc_areas(
  bern_mcmc,
  pars = c("theta"),  # make a plot for the theta parameter
  prob = 0.90         # shade the central 90%
)
```
One advantage of `bayesplot` is that the plots use the `ggplot2` system and
so interoperate well with `ggformula`.
```{r ch08-mcmc-trace}
mcmc_trace(bern_mcmc, pars = "theta")
# separate the chains using facets and modify the color scheme
mcmc_trace(bern_mcmc, pars = "theta") %>%
  gf_facet_grid(Chain ~ .) %>%
  gf_refine(scale_color_viridis_d())
```
We will encounter additional plots from `bayesplot` as we go along.
### Using Kruschke's functions
I have put (modified versions of) some of the functions from Kruschke's book
into the `CalvinBayes` package so that you don't have to source his files
to use them.
#### diag_mcmc() [diagMCMC()] {-}
```{r ch08-diagmcmc, fig.height = 4}
diag_mcmc(bern_mcmc, par = "theta")
```
#### plot_post() [plotPost()] {-}
The `plot_post()` function takes as its first argument a vector of posterior
sampled values for one of the parameters. We can extract such a vector
in a couple of different ways:
```{r ch08-plot_post, fig.keep = "last", eval = 2 }
# these produce the same output
plot_post(bern_mcmc[, "theta"], main = "theta", xlab = expression(theta))
plot_post(posterior(bern_jags)$theta, main = "theta", xlab = expression(theta))
```
There are a number of options that allow you to add some additional
information to the plot. Specifying `quietly = TRUE` will turn off
the numerical display that `plot_post()` generates along with the plot.
Here is an example.[^08-2]
```{r ch08-plot_post-args, fig.height = 4}
plot_post(bern_mcmc[, "theta"], main = "theta", xlab = expression(theta),
          cenTend = "median", compVal = 0.5, ROPE = c(0.45, 0.55),
          credMass = 0.90, quietly = TRUE)
```
[^08-2]: We'll have more to say about what ROPE means later.
## Optional arguments to jags()
### Number and size of chains
Sometimes we want to use more or longer chains (or fewer or shorter chains
if we are doing a quick preliminary check before running longer chains later).
`jags()` has four arguments for this:
* `n.chains`: number of chains
* `n.iter`: number of iterations per chain
* `n.burnin`: number of burn-in steps per chain
* `n.thin`: keep one sample per `n.thin` iterations.
The default value of `n.thin` is chosen so that about 1000 values per chain are saved.
In the example below, `n.thin` is $(5000 - 1000) / 1000 = 4$, so we end up with only
4000 samples (1000 per chain) rather than the 16000 post-burn-in iterations you might have expected.
```{r ch08-bern-jags2, results = "hide"}
set.seed(76543)
bern_jags2 <-
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    n.chains = 4, n.iter = 5000, n.burnin = 1000
  )
bern_jags2
```
Setting `n.thin = 1` will save them all.
```{r ch08-bern-jags2a, results = "hide"}
set.seed(76543)
bern_jags2a <-
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    n.chains = 4, n.iter = 5000, n.burnin = 1000,
    n.thin = 1
  )
bern_jags2a
```
### Starting point for chains
We can also control the starting point for the chains. Starting different
chains at quite different parameter values can help
* verify that the MCMC algorithm is not overly sensitive to where we are
starting from, and
* ensure that the MCMC algorithm has explored the posterior distribution
sufficiently.
On the other hand, if we start a chain too far from the peak of the
posterior distribution, the chain may have trouble converging.
We can provide either specific starting points for each chain or a function
that generates random starting points.
```{r ch08-bern-jags3, results = "hide", fig.height = 2}
gf_dist("beta", shape1 = 3, shape2 = 3)
set.seed(2345)
bern_jags3 <-
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    # start each chain by sampling from the prior
    inits = function() list(theta = rbeta(1, 3, 3))
  )
bern_jags4 <-
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    # choose a specific starting point for each chain
    inits = list(
      list(theta = 0.5), list(theta = 0.7), list(theta = 0.9)
    )
  )
mcmc_trace(as.mcmc(bern_jags4), pars = "theta")
```
It is a good sign that our three traces look very similar and overlap a lot.
This indicates that the chains are mixing well and not overly affected
by their starting point.
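We can quantify this with coda's Gelman-Rubin diagnostic, the statistic behind `Rhat`; a sketch:
```{r ch08-gelman}
gelman.diag(as.mcmc(bern_jags4))  # point estimates near 1 indicate convergence
```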
### Running chains in parallel
Although this model runs very quickly, other models may take considerably
longer.
We can use `jags.parallel()` in place of `jags()`
to take advantage of multiple cores to run more than one chain at a
time. `jags.seed` can be used to set the seed for the parallel
random number generator used. (Note: `set.seed()` does not work when
using `jags.parallel()` and `jags.seed` has no effect when using `jags()`.)
```{r ch08-bern-jags5}
library(R2jags)
bern_jags5 <-
  jags.parallel(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    n.chains = 4, n.iter = 5000, n.burnin = 1000,
    jags.seed = 12345
  )
```
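If you want to see whether parallelization is paying off on your machine, wrapping each call in `system.time()` gives a rough comparison (a sketch, not run here; timings will vary by machine):
```{r ch08-timing, eval = FALSE}
# sequential chains
system.time(
  jags(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    n.chains = 4, n.iter = 5000, n.burnin = 1000
  )
)
# chains run in parallel
system.time(
  jags.parallel(
    data = list(y = z15N50$y, N = nrow(z15N50)),
    model.file = bern_model,
    parameters.to.save = c("theta"),
    n.chains = 4, n.iter = 5000, n.burnin = 1000,
    jags.seed = 12345
  )
)
```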
## Example 2: comparing two proportions
We have seen this situation before when we compared two coins.
This time we'll be a little more personal and compare two people.
We will also work with a data set in a slightly different form.
But the main point will be to see how we describe this familiar model
to JAGS.
### The data
Suppose we want to compare Reginald and Tony's abilities to hit a target (with a
dart, perhaps). For each attempt, we record two pieces of information: the
person making the attempt (the subject) and whether the attempt succeeded
(0 or 1).
Kruschke provides a data frame for this, but the names he uses are not
good practice, so let's rename them to be more like what you might see
in a real data set.
```{r ch08-z6N8z2N7}
library(mosaic)
head(z6N8z2N7)
# Let's do some renaming
Target <- z6N8z2N7 %>%
  rename(hit = y, subject = s)
df_stats(hit ~ subject, data = Target, props, attempts = length)
```
Reginald was more successful than Tony, but neither had very many attempts.
### The model
Now our model is that each person has his own success rate --
we have **two $\theta$'s**,
one for Reginald and one for Tony.
```{r ch08-model-image02, echo = FALSE}
knitr::include_graphics("images/Bernoulli-model2.png")
```
We express this as
\begin{align*}
Y_i|s &\sim {\sf Bern}(\theta_{s})
\\
\theta_s &\sim {\sf Beta}(a, b)
\end{align*}
### Describing the model to JAGS
```{r ch08-bern2-model}
bern2_model <- function() {
  for (i in 1:Nobs) {
    # each response is Bernoulli with the appropriate theta
    hit[i] ~ dbern(theta[subject[i]])
  }
  for (s in 1:Nsub) {
    theta[s] ~ dbeta(2, 2)  # prior for each theta
  }
}
```
JAGS will also need access to four pieces of information from our data set:
* a vector of `hit` values
* a vector of `subject` values -- coded as integers 1 and 2, so that `subject[i]`
makes sense to JAGS. (In general, JAGS is much less fluid in handling data than
R is, so we often need to do some manual data conversion for JAGS.)
* `Nobs` -- the total number of observations
* `Nsub` -- the number of subjects
We will prepare these as a list.
```{r ch08-target-list}
TargetList <-
  list(
    Nobs = nrow(Target),
    Nsub = 2,
    hit = Target$hit,
    subject = as.numeric(as.factor(Target$subject))
  )
TargetList
```
### Fitting the model
```{r ch08-bern2-jags, results = "hide"}
bern2_jags <-
  jags(
    data = TargetList,
    model.file = bern2_model,
    parameters.to.save = "theta"
  )
```
### Inspecting the results
```{r ch08-bern2-mcmc, fig.height = 5, fig.width = 8}
bern2_mcmc <- as.mcmc(bern2_jags)
# Kruschke diagnostic plots
diag_mcmc(bern2_mcmc)
# bayesplot plots
mcmc_acf(bern2_mcmc)
mcmc_acf_bar(bern2_mcmc)
mcmc_pairs(bern2_mcmc, pars = c("theta[1]", "theta[2]"))
mcmc_combo(bern2_mcmc)
mcmc_combo(bern2_mcmc, combo = c("dens", "dens_overlay", "trace", "scatter"),
pars = c("theta[1]", "theta[2]"))
```
Here is a list of `mcmc_` functions available:
```{r ch08-apropos}
apropos("^mcmc_")
```
The functions ending in `_data()` return the data used to make the corresponding plot. This can be useful
if you want to display that same information in a different way or
if you just want to inspect the data to make sure you understand the plot.
### Difference in proportions
If we are primarily interested in the difference between Reginald and Tony,
we can plot the difference in their theta values.
```{r ch08-bern2-post-density}
head(posterior(bern2_jags))
gf_density(~ (theta.1 - theta.2), data = posterior(bern2_jags))
```
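We can also summarize the difference numerically. For instance, the proportion of posterior samples with `theta.1 > theta.2` estimates the posterior probability that Reginald is the better shooter (a minimal sketch using the posterior data frame):
```{r ch08-bern2-diff-summary}
Bern2_post <- posterior(bern2_jags)
mean(Bern2_post$theta.1 > Bern2_post$theta.2)                     # P(theta_1 > theta_2 | data)
quantile(Bern2_post$theta.1 - Bern2_post$theta.2, c(0.05, 0.95))  # central 90% interval
```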
### Sampling from the prior
To sample from the prior, we must do the following:
* remove the response variable from our data list
* change `Nobs` to 0
* set `DIC = FALSE` in the call to `jags()`.
This will run the model without any data,
which means the posterior will be the same as the prior.
```{r ch08-bern2-jags0}
# make a copy of our data list
TargetList0 <- list(
Nobs = 0,
Nsub = 2,
subject = as.numeric(as.factor(Target$subject))
)
bern2_jags0 <-
jags(
data = TargetList0,
model.file = bern2_model,
parameters.to.save = c("theta"),
n.chains = 2, n.iter = 5000, n.burnin = 1000,
DIC = FALSE)
```
#### Note about `:` in JAGS and in R
From the JAGS documentation:
> The sequence operator `:` can only produce increasing sequences. If n < m then `m:n`
produces a vector of length zero and when this is used in a for loop index expression the
contents of the loop inside the curly brackets are skipped. Note that this behavior is different
from the sequence operator in R, where `m:n` will produce a decreasing sequence if `n < m`.
So in our JAGS model, `1:0` correctly represents no data (and no trips through the for loop).
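To see the difference on the R side:
```{r ch08-seq-compare}
1:0         # in R this is c(1, 0), a decreasing sequence -- not empty
seq_len(0)  # the usual R idiom for an empty index sequence: integer(0)
```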
#### What good is it to generate samples from the prior?
Our model set priors for $\theta_1$ and $\theta_2$, but this implies a distribution
for $\theta_1 - \theta_2$, and we might like to see what that distribution looks like.
```{r ch08-bern2-jags0-density, fig.height = 2}
gf_density( ~(theta.1 - theta.2), data = posterior(bern2_jags0))
```
## Exercises {#ch08-exercises}
<!-- Exercise 8.4. [Purpose: Explore the prior on a difference of parameters implied from the priors on the individual parameters.] -->
1. **Sampling from priors.**
You want to know who is the better free throw shooter, Alice or Bob.
You decide to have each shoot a number of shots and record their makes
and misses. You are primarily interested in the difference between their
free throw shooting proportions ($\theta_2 - \theta_1$), and you are curious
to know how your choice of priors for $\theta_1$ and $\theta_2$ affects the
prior for $\theta_2 - \theta_1$. For each situation below, use JAGS to
sample from the prior distribution for $\theta_2 - \theta_1$ and create
a density plot. (In each case, assume the priors for $\theta_1$ and
$\theta_2$ are independent.)
a. Both priors are uniform. What distribution do you think the
prior for $\theta_2 - \theta_1$ is?
b. Both priors are ${\sf Beta}(0.2, 0.2)$. Explain why the prior
for $\theta_2 - \theta_1$ looks the way it does.
Hint: Don't forget to set `DIC = FALSE` when sampling from the prior.
2. Now suppose that Alice makes 25 out of 30 shots and Bob makes 18 out of 32.
Is this enough evidence to conclude that Alice is the better shooter?
Do this two ways. In each case, use a ${\sf Beta}(4, 2)$ prior for both
$\theta_1$ and $\theta_2$.
a. Create data and use a model like the one used elsewhere in this chapter.
[Hint: `rep()` is handy for creating a bunch of values that are the same.]
b. Instead of using `dbern()`, use `dbin()` (JAGS version of the binomial
distribution). This should allow you to get by with simpler data that
consists only of the numbers 25, 30, 18, and 32. Note: the order of
arguments for `dbin()` is probability first, then number of trials. This is
reversed from the order in R.
3. In the previous problem, what does this prior say about your beliefs
about Alice and Bob's shooting before gathering data?
4. Let's think about Alice and Bob some more.
We don't know how to do this yet,
but explain why if you knew very little about basketball,
you might like to have a prior for $\theta_1$ and $\theta_2$
that was not independent.
How might the priors for $\theta_1$ and $\theta_2$ be related?
5. Consider the model below.
a. Create a plot of the **prior** distribution of
`theta1` and `theta2`. A scatter plot with an overlaid density plot
works well. How does this prior compare to the priors in problem 1?
(Hint: Don't forget to use `DIC = FALSE` when sampling from the
prior.)
b. Fit the model to the Alice and Bob data. How does this
choice of prior change things?
```{r ch08-diff-model, results = "hide"}
diff_model <- function() {
  z1 ~ dbin(theta1, n1)
  z2 ~ dbin(theta2, n2)
  theta2 <- theta1 + delta
  theta1 ~ dbeta(4, 2)
  delta ~ dnorm(0, 400)  # normal with mean 0 and precision 400, so sd = 1/sqrt(400) = 0.05
}
diff_jags <-
  jags.parallel(
    model.file = diff_model,
    data = list(n1 = 30, z1 = 25, n2 = 32, z2 = 18),
    parameters.to.save = c("theta1", "theta2", "delta"),
    n.iter = 5000,
    n.chains = 4
  )
```
6. Redo problem 5 using the grid method.