
Commit

Deploying to gh-pages from @ a18c0e1 🚀
krishna-das-m committed Oct 30, 2024
1 parent 9a095b4 commit 06f03f2
Showing 3 changed files with 3 additions and 3 deletions.
assets/jupyter/blog.ipynb.html (2 changes: 1 addition & 1 deletion)

Large diffs are not rendered by default.

feed.xml (2 changes: 1 addition & 1 deletion)
@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.3.3">Jekyll</generator><link href="https://krishna-das-m.github.io/feed.xml" rel="self" type="application/atom+xml"/><link href="https://krishna-das-m.github.io/" rel="alternate" type="text/html" hreflang="en"/><updated>2024-10-30T13:52:38+00:00</updated><id>https://krishna-das-m.github.io/feed.xml</id><title type="html">blank</title><subtitle>A simple, whitespace theme for academics. Based on [*folio](https://github.com/bogoli/-folio) design. </subtitle><entry><title type="html">Maximum Likelihood Estimation- part II</title><link href="https://krishna-das-m.github.io/blog/2024/mle2/" rel="alternate" type="text/html" title="Maximum Likelihood Estimation- part II"/><published>2024-10-28T15:04:00+00:00</published><updated>2024-10-28T15:04:00+00:00</updated><id>https://krishna-das-m.github.io/blog/2024/mle2</id><content type="html" xml:base="https://krishna-das-m.github.io/blog/2024/mle2/"><![CDATA[<p>This page will be updated soon. Thank you for your interest.</p>]]></content><author><name></name></author><category term="ML-concepts"/><category term="statistics"/><category term="code"/><summary type="html"><![CDATA[MLE for continuous features]]></summary></entry><entry><title type="html">Maximum Likelihood Estimation- part I</title><link href="https://krishna-das-m.github.io/blog/2024/mle/" rel="alternate" type="text/html" title="Maximum Likelihood Estimation- part I"/><published>2024-10-25T15:04:00+00:00</published><updated>2024-10-25T15:04:00+00:00</updated><id>https://krishna-das-m.github.io/blog/2024/mle</id><content type="html" xml:base="https://krishna-das-m.github.io/blog/2024/mle/"><![CDATA[<p>In this post, we’ll dive into the concept of <strong>Maximum Likelihood Estimation (MLE)</strong>, particularly focusing on features that are discrete in nature. To make the discussion practical and relatable, we’ll consider the example of disease testing, building on the ideas discussed in my previous post, <a href="https://krishna-das-m.github.io/blog/2024/bayes-theorem/">Exploring Prior, Likelihood, and Posterior Distributions</a>. First, let’s get familiar with the intuition behind MLE.</p> <h3 id="what-is-maximum-likelihood-estimation-mle">What is Maximum Likelihood Estimation (MLE)?</h3> <p>Maximum Likelihood Estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution. It does so by maximizing the <strong>likelihood function</strong>, which represents the probability of the observed data given specific parameters. In simpler terms, MLE helps us find the “best fit” parameters of a model based on the data we have.</p> <ul> <li><strong>Likelihood</strong>: The likelihood refers to the probability of observing the data for a given set of parameters.</li> <li><strong>MLE</strong>: MLE finds the parameter values that make the observed data most probable by maximizing the likelihood function.</li> </ul> <h3 id="example-estimating-the-probability-that-a-person-tests-positive-for-a-disease">Example: Estimating the Probability that a Person Tests Positive for a Disease</h3> <p>In our example, let’s say we want to estimate the probability that a randomly selected person tests positive for a certain disease. 
### Example: Estimating the Probability that a Person Tests Positive for a Disease

In our example, let's say we want to estimate the probability that a randomly selected person tests positive for a certain disease. Since the test result is binary (either positive or negative), this is a classic case of a **Bernoulli** (or binomial) distribution problem.

Let's set it up mathematically. Suppose we have a random sample \(X_1, X_2, \dots, X_n\), where:

- \(X_i = 0\) if a randomly selected person tests **negative** for the disease.
- \(X_i = 1\) if a randomly selected person tests **positive** for the disease.

Assuming that each test result is independent of the others (i.e., the \(X_i\)'s are independent Bernoulli random variables), we aim to estimate the parameter \(p\), the probability that a person tests positive for the disease.

#### The Likelihood Function

For \(n\) independent observations, the **likelihood function** is the joint probability of observing the data given the parameter \(p\):

\begin{equation} L(p) = \prod_{i=1}^n P(X_i \mid p) \end{equation}

For a Bernoulli distribution, the probability mass function for each \(X_i\) is:

\begin{equation} P(X_i = 1 \mid p) = p \quad \text{and} \quad P(X_i = 0 \mid p) = 1 - p \end{equation}

Thus, the likelihood function for the entire dataset is:

\begin{equation} L(p) = \prod_{i=1}^n p^{X_i} (1 - p)^{1 - X_i} \end{equation}

This represents the probability of observing the given test results (both positive and negative) for a specific value of \(p\).
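The maximization step is not carried out in this excerpt, but the result is standard and worth recording. Since the logarithm is monotonic, maximizing \(L(p)\) is equivalent to maximizing the log-likelihood, which turns the product into a sum:

\begin{equation} \ell(p) = \log L(p) = \sum_{i=1}^n \left[ X_i \log p + (1 - X_i) \log (1 - p) \right] \end{equation}

Setting \(\ell'(p) = 0\) and solving gives

\begin{equation} \hat{p} = \frac{1}{n} \sum_{i=1}^n X_i, \end{equation}

the observed fraction of positive tests. The likelihood curves computed below should peak at this value.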
<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.3.3">Jekyll</generator><link href="https://krishna-das-m.github.io/feed.xml" rel="self" type="application/atom+xml"/><link href="https://krishna-das-m.github.io/" rel="alternate" type="text/html" hreflang="en"/><updated>2024-10-30T13:59:49+00:00</updated><id>https://krishna-das-m.github.io/feed.xml</id><title type="html">blank</title><subtitle>A simple, whitespace theme for academics. Based on [*folio](https://github.com/bogoli/-folio) design. </subtitle><entry><title type="html">Maximum Likelihood Estimation- part II</title><link href="https://krishna-das-m.github.io/blog/2024/mle2/" rel="alternate" type="text/html" title="Maximum Likelihood Estimation- part II"/><published>2024-10-28T15:04:00+00:00</published><updated>2024-10-28T15:04:00+00:00</updated><id>https://krishna-das-m.github.io/blog/2024/mle2</id><content type="html" xml:base="https://krishna-das-m.github.io/blog/2024/mle2/"><![CDATA[<p>This page will be updated soon. Thank you for your interest.</p>]]></content><author><name></name></author><category term="ML-concepts"/><category term="statistics"/><category term="code"/><summary type="html"><![CDATA[MLE for continuous features]]></summary></entry><entry><title type="html">Maximum Likelihood Estimation- part I</title><link href="https://krishna-das-m.github.io/blog/2024/mle/" rel="alternate" type="text/html" title="Maximum Likelihood Estimation- part I"/><published>2024-10-25T15:04:00+00:00</published><updated>2024-10-25T15:04:00+00:00</updated><id>https://krishna-das-m.github.io/blog/2024/mle</id><content type="html" xml:base="https://krishna-das-m.github.io/blog/2024/mle/"><![CDATA[<p>In this post, we’ll dive into the concept of <strong>Maximum Likelihood Estimation (MLE)</strong>, particularly focusing on features that are discrete in nature. To make the discussion practical and relatable, we’ll consider the example of disease testing, building on the ideas discussed in my previous post, <a href="https://krishna-das-m.github.io/blog/2024/bayes-theorem/">Exploring Prior, Likelihood, and Posterior Distributions</a>. First, let’s get familiar with the intuition behind MLE.</p> <h3 id="what-is-maximum-likelihood-estimation-mle">What is Maximum Likelihood Estimation (MLE)?</h3> <p>Maximum Likelihood Estimation (MLE) is a statistical method used to estimate the parameters of a probability distribution. It does so by maximizing the <strong>likelihood function</strong>, which represents the probability of the observed data given specific parameters. In simpler terms, MLE helps us find the “best fit” parameters of a model based on the data we have.</p> <ul> <li><strong>Likelihood</strong>: The likelihood refers to the probability of observing the data for a given set of parameters.</li> <li><strong>MLE</strong>: MLE finds the parameter values that make the observed data most probable by maximizing the likelihood function.</li> </ul> <h3 id="example-estimating-the-probability-that-a-person-tests-positive-for-a-disease">Example: Estimating the Probability that a Person Tests Positive for a Disease</h3> <p>In our example, let’s say we want to estimate the probability that a randomly selected person tests positive for a certain disease. 
Since the test result is binary (either positive or negative), this is a classic case of a <strong>Bernoulli</strong> (or binomial) distribution problem.</p> <p>Let’s set it up mathematically.</p> <p>Suppose we have a random sample \(X_1, X_2, \dots, X_n\), where:</p> <ul> <li>\(X_i = 0\) if a randomly selected person tests <strong>negative</strong> for the disease.</li> <li>\(X_i = 1\) if a randomly selected person tests <strong>positive</strong> for the disease.</li> </ul> <p>Assuming that each test result is independent of the others (i.e., the \(X_i\) ‘s are independent Bernoulli random variables), we aim to estimate the parameter \(p\) , the probability that a person tests positive for the disease.</p> <h4 id="the-likelihood-function">The Likelihood Function</h4> <p>For \(n\) independent observations, the <strong>likelihood function</strong> is the joint probability of observing the data given the parameter \(p\). This can be written as:</p> <p>\begin{equation} L(p) = \prod_{i=1}^n P(X_i | p) \end{equation}</p> <p>For a Bernoulli distribution, the probability density function for each \(X_i\) is:</p> <p>\begin{equation} P(X_i = 1 | p) = p \quad \text{and} \quad P(X_i = 0 | p) = (1 - p) \end{equation}</p> <p>Thus, the likelihood function for the entire dataset is:</p> <p>\begin{equation} L(p) = \prod_{i=1}^n p^{X_i} (1 - p)^{1 - X_i} \end{equation}</p> <p>This represents the probability of observing the given test results (both positive and negative) for a specific value of \(p\).</p> <p>Our goal here is to estimate \(p\), which is the probability that a random individual will test positive for the disease. Let’s create a random sample as described mathematically above. We’ll also look at how sample size effects the likelihood.</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="n">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">from</span> <span class="n">scipy.special</span> <span class="kn">import</span> <span class="n">comb</span>
<span class="kn">import</span> <span class="n">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="k">def</span> <span class="nf">samples</span><span class="p">(</span><span class="n">N</span><span class="p">,</span> <span class="n">percent</span><span class="p">):</span>
(The body of `samples` is truncated in the diff view.)
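Since the function body is cut off above, here is a minimal sketch of how `samples` and the sample-size comparison could look. Only the imports and the signature `samples(N, percent)` come from the post; the function body, and the use of the binomial form of the likelihood (suggested by the `comb` import), are assumptions rather than the author's actual code.

```python
# A minimal sketch, assuming `samples(N, percent)` returns N Bernoulli
# test results with roughly `percent`% positives. Only the signature
# appears in the excerpt; this body is an assumed reconstruction.
import numpy as np
from scipy.special import comb
import matplotlib.pyplot as plt

def samples(N, percent):
    """Return N test results (0/1), with `percent`% of them positive."""
    data = np.zeros(N, dtype=int)
    data[: int(N * percent / 100)] = 1   # mark the positive tests
    np.random.shuffle(data)              # randomize their order
    return data

def likelihood(p, n, k):
    """Binomial likelihood of k positives in n tests: C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_grid = np.linspace(0.001, 0.999, 999)
for N in (10, 50, 100):
    data = samples(N, percent=20)                    # ~20% positive tests
    L = likelihood(p_grid, N, data.sum())            # vectorized over the p grid
    plt.plot(p_grid, L / L.max(), label=f"N = {N}")  # normalize to compare shapes

plt.axvline(0.2, ls="--", color="gray")              # true p = 0.2
plt.xlabel("p")
plt.ylabel("normalized likelihood L(p)")
plt.legend()
plt.show()
```

With this setup, each curve peaks near \(\hat{p} = 0.2\), and the peak narrows as \(N\) grows, which is the sample-size effect the post sets out to illustrate.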
sitemap.xml (2 changes: 1 addition & 1 deletion)
@@ -1 +1 @@
<?xml version="1.0" encoding="UTF-8"?> <urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <url> <loc>https://krishna-das-m.github.io/news/announcement_1/</loc> <lastmod>2015-10-22T19:59:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/news/announcement_2/</loc> <lastmod>2015-11-07T20:11:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/news/announcement_3/</loc> <lastmod>2016-01-15T11:59:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/bayes-theorem/</loc> <lastmod>2024-09-30T15:09:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/mle/</loc> <lastmod>2024-10-25T15:04:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/mle2/</loc> <lastmod>2024-10-28T15:04:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/projects/1_project/</loc> <lastmod>2024-10-30T13:52:38+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/projects/9_project/</loc> <lastmod>2024-10-30T13:52:38+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/</loc> </url> <url> <loc>https://krishna-das-m.github.io/cv/</loc> </url> <url> <loc>https://krishna-das-m.github.io/_pages/dropdown/</loc> </url> <url> <loc>https://krishna-das-m.github.io/projects/</loc> </url> <url> <loc>https://krishna-das-m.github.io/publications/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/statistics/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/code/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/probability/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/category/ml-concepts/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/</loc> </url> </urlset>
<?xml version="1.0" encoding="UTF-8"?> <urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <url> <loc>https://krishna-das-m.github.io/news/announcement_1/</loc> <lastmod>2015-10-22T19:59:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/news/announcement_2/</loc> <lastmod>2015-11-07T20:11:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/news/announcement_3/</loc> <lastmod>2016-01-15T11:59:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/bayes-theorem/</loc> <lastmod>2024-09-30T15:09:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/mle/</loc> <lastmod>2024-10-25T15:04:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/mle2/</loc> <lastmod>2024-10-28T15:04:00+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/projects/1_project/</loc> <lastmod>2024-10-30T13:59:49+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/projects/9_project/</loc> <lastmod>2024-10-30T13:59:49+00:00</lastmod> </url> <url> <loc>https://krishna-das-m.github.io/</loc> </url> <url> <loc>https://krishna-das-m.github.io/cv/</loc> </url> <url> <loc>https://krishna-das-m.github.io/_pages/dropdown/</loc> </url> <url> <loc>https://krishna-das-m.github.io/projects/</loc> </url> <url> <loc>https://krishna-das-m.github.io/publications/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/statistics/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/code/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/tag/probability/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/category/ml-concepts/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/2024/</loc> </url> <url> <loc>https://krishna-das-m.github.io/blog/</loc> </url> </urlset>
