Update 2015-07-15-code.md
krishna-das-m committed Oct 1, 2024
1 parent 9bf4cd3 commit 67bb47d
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions _posts/2015-07-15-code.md
Now, consider a stronger prior, $$\beta(10,10)$$, along with a larger dataset (100 samples).
The **likelihood function** with the larger sample is also much sharper and concentrated around $$\theta = 0.2$$. This sharpness reflects the informativeness of the data: with more samples, the data has reduced uncertainty, and the peak clearly suggests that $$\theta = 0.2$$ is the most likely value.
Finally, the **posterior distribution** in this case is sharper and more concentrated around $$\theta = 0.2$$, indicating much higher certainty about the estimate of $$\theta$$. With the larger dataset, the prior has much less influence compared to the likelihood, which dominates and drives the sharper posterior.
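The conjugate Beta-Binomial update described above can be sketched in a few lines of Python. The counts used here (20 successes in 100 trials) are hypothetical, chosen so the likelihood peaks near $$\theta = 0.2$$ as in the figures:

```python
from scipy.stats import beta

# Beta(10, 10) prior on theta
a, b = 10, 10

# Hypothetical data: 20 successes out of 100 Bernoulli trials
k, n = 20, 100

# Conjugacy: Beta(a, b) prior + binomial data -> Beta(a + k, b + n - k) posterior
a_post, b_post = a + k, b + n - k
posterior = beta(a_post, b_post)

print(a_post, b_post)        # 30 90
print(posterior.mean())      # a / (a + b) = 30 / 120 = 0.25
```

Because the 100 data points contribute far more pseudo-counts than the prior's 20, the posterior is pulled strongly toward the empirical frequency, which is exactly the dominance of the likelihood described above.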

Now, let’s consider what happens when we use the posterior distribution from the previous step as the new prior. This updated prior now reflects a more informed belief about the unknown parameter $$\theta$$.
Since this new prior is derived from the posterior of the previous round, it’s much more concentrated compared to the original prior we started with. We now have a much stronger belief that $$\theta$$ lies within a narrow range (around 0.15 in this case).
The likelihood function looks almost the same as in the previous figure and still peaks around the same value of $$\theta$$, indicating that the observed data strongly suggests $$\theta$$ falls within the same range.
The updated posterior distribution is even sharper and more concentrated than before, reflecting a very strong belief about the value of $$\theta$$. Compared to the earlier posterior (based on a broader prior), this one is peaked around 0.12, showing that we now have an even greater degree of certainty about $$\theta$$.
The key takeaway here is that as we gather more data and update our beliefs (using Bayes' Theorem), our estimates become more precise. The prior becomes more informative, and the posterior narrows further, reflecting reduced uncertainty about the unknown parameter $$\theta$$.
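Treating the posterior as the next prior is just another conjugate update, and chaining two updates is equivalent to a single update on the pooled data. A minimal sketch (both batch counts are hypothetical, not the post's actual data):

```python
from scipy.stats import beta

# Round 1: Beta(10, 10) prior, first hypothetical batch of data
a, b = 10, 10
k1, n1 = 20, 100
a1, b1 = a + k1, b + n1 - k1          # posterior after round 1: Beta(30, 90)

# Round 2: yesterday's posterior is today's prior; second hypothetical batch
k2, n2 = 15, 100
a2, b2 = a1 + k2, b1 + n2 - k2        # posterior after round 2: Beta(45, 175)

# Sequential updating matches a single update on all the data at once
assert (a2, b2) == (a + k1 + k2, b + (n1 - k1) + (n2 - k2))

print(beta(a2, b2).mean())            # pulled toward the pooled frequency
```

Each round adds the new successes and failures to the Beta pseudo-counts, so the total count grows and the distribution narrows, which is the sharpening of the posterior discussed above.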

<div class="row mt-3">
<div class="col-sm mt-3 mt-md-0">
