kb autocommit
Jemoka committed May 7, 2024
1 parent 1096ded commit d858711
Showing 9 changed files with 286 additions and 1 deletion.
32 changes: 31 additions & 1 deletion content/posts/KBhgaussian_distribution.md
@@ -4,6 +4,36 @@ author = ["Houjun Liu"]
draft = false
+++

\begin{equation}
\mathcal{N}(x|\mu, \Sigma) = \qty(2\pi)^{-\frac{n}{2}} |\Sigma|^{-\frac{1}{2}} \exp \qty(-\frac{1}{2} \qty(x-\mu)^{\top} \Sigma^{-1}(x-\mu))
\end{equation}

where \\(\Sigma\\) is positive semidefinite (and, for this density to exist, positive definite, so that \\(\Sigma^{-1}\\) and \\(|\Sigma|^{-\frac{1}{2}}\\) are defined)
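
As a quick sanity check, here is a minimal sketch (Python with numpy/scipy; the 2-D \\(\mu\\) and \\(\Sigma\\) are made-up values) that evaluates this density directly and cross-checks it against scipy:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 2-D example: a mean and a positive-definite covariance.
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x = np.array([0.5, 0.5])

# Density computed straight from the formula above.
n = len(mu)
diff = x - mu
density = (
    (2 * np.pi) ** (-n / 2)
    * np.linalg.det(Sigma) ** (-0.5)
    * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))
)

# Cross-check against scipy's implementation.
assert np.isclose(density, multivariate_normal(mu, Sigma).pdf(x))
```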


## conditioning Gaussian distributions {#conditioning-gaussian-distributions}

For a pair of variables \\(a, b\\) which jointly follow a [Gaussian distribution]({{< relref "KBhgaussian_distribution.md" >}}), we obtain:

\begin{align}
\mqty[a \\\ b] \sim \mathcal{N} \qty(\mqty[\mu\_{a}\\\ \mu\_{b}], \mqty(A & C \\\ C^{\top} & B))
\end{align}

meaning, each one can be marginalized as:

\begin{align}
a \sim \mathcal{N}(\mu\_{a}, A) \\\\
b \sim \mathcal{N}(\mu\_{b}, B) \\\\
\end{align}

Conditioning also works in terms of these blocks:

\begin{align}
\mu\_{a|b} &= \mu\_a + CB^{-1}\qty(b - \mu\_{b}) \\\\
\Sigma\_{a|b} &= A - CB^{-1}C^{\top}
\end{align}
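
A minimal numerical sketch of these conditioning formulas (Python with numpy; the block means and covariances are made-up 1-D values):

```python
import numpy as np

# Hypothetical joint Gaussian over (a, b), using the block structure above.
mu_a, mu_b = np.array([0.0]), np.array([1.0])
A = np.array([[1.0]])            # Var(a)
B = np.array([[2.0]])            # Var(b)
C = np.array([[0.5]])            # Cov(a, b)

b_observed = np.array([1.5])     # the value of b we condition on

# Conditioning formulas from above.
mu_a_given_b = mu_a + C @ np.linalg.solve(B, b_observed - mu_b)
Sigma_a_given_b = A - C @ np.linalg.solve(B, C.T)
```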


## standard normal density function {#standard-normal-density-function}

This is a function used to model many Gaussian distributions.
@@ -89,7 +119,7 @@ Z=\mathcal{N}(0,1)
mean 0, variance 1. You can transform anything into a standard normal via the following linear transform:


#### transformation into [standard normal](#standard-normal) {#transformation-into-standard-normal--org2b532dd}

\begin{equation}
X \sim \mathcal{N}(\mu, \sigma^{2})
\end{equation}
7 changes: 7 additions & 0 deletions content/posts/KBhsignal_processing_index.md
@@ -55,3 +55,10 @@ draft = false
- [SU-ENGR76 APR252024]({{< relref "KBhsu_engr76_apr252024.md" >}})
- [SU-ENGR76 APR302024]({{< relref "KBhsu_engr76_apr302024.md" >}})
- [SU-ENGR76 MAY022024]({{< relref "KBhsu_engr76_may022024.md" >}})


### Unit 2 {#unit-2}

[SU-ENGR76 Unit 2 Index]({{< relref "KBhsu_engr76_unit_2_index.md" >}})

- [SU-ENGR76 MAY072024]({{< relref "KBhsu_engr76_may072024.md" >}})
126 changes: 126 additions & 0 deletions content/posts/KBhsu_cs361_may072024.md
@@ -0,0 +1,126 @@
+++
title = "SU-CS361 MAY072024"
author = ["Houjun Liu"]
draft = false
+++

## Generalization Error {#generalization-error}

\begin{equation}
\epsilon\_{gen} = \mathbb{E}\_{x \sim \mathcal{X}} \qty[\qty(f(x) - \hat{f}(x))^{2}]
\end{equation}

In practice we cannot compute this expectation exactly, so we instead estimate it by averaging over specific points we measured.
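
A minimal sketch of this estimate (Python with numpy; the objective `f`, the surrogate `f_hat`, and the sampled points are all made up for illustration):

```python
import numpy as np

# Hypothetical true objective and surrogate model.
f = lambda x: np.sin(x)
f_hat = lambda x: x - x ** 3 / 6           # e.g. a truncated-series surrogate

# Design points we actually measured.
xs = np.random.uniform(-2, 2, size=1_000)

# Estimate the generalization error by averaging over the measured points.
eps_gen_estimate = np.mean((f(xs) - f_hat(xs)) ** 2)
```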


## Probabilistic Surrogate Models {#probabilistic-surrogate-models}


### Gaussian Process {#gaussian-process}

A [Gaussian Process](#gaussian-process) is a [Gaussian distribution]({{< relref "KBhgaussian_distribution.md" >}}) over [function]({{< relref "KBhfunction.md" >}})s!

Consider a mean function \\(m(x)\\) and a covariance ([kernel]({{< relref "KBhnull_space.md" >}})) function \\(k(x, x')\\), along with a set of objective values \\(y\_{j} \in \mathbb{R}\\) which we are trying to infer using \\(m\\) and \\(k\\):

\begin{equation}
\mqty[y\_1 \\\ \vdots \\\ y\_{m}] \sim \mathcal{N} \qty(\mqty[m(x\_1) \\\ \vdots \\\ m(x\_{m})], \mqty[k(x\_1, x\_1) & \dots & k(x\_1, x\_{m}) \\\ \vdots & \ddots & \vdots \\\ k(x\_{m}, x\_{1}) & \dots & k(x\_{m}, x\_{m})])
\end{equation}

The choice of [kernel]({{< relref "KBhnull_space.md" >}}) makes or breaks your ability to model your system. It's the way by which your input values are "smoothed" together to create a probabilistic estimate.


#### Choice of Kernels {#choice-of-kernels}

<!--list-separator-->

- squared exponential kernel

\begin{equation}
k(x,x') = \exp \qty( \frac{-(x-x')^{2}}{2 l^{2}})
\end{equation}

where, \\(l\\) is the parameter controlling the "length scale" (i.e. distance required for the function to change significantly). As \\(l\\) gets larger, there's more smoothing.
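
A minimal sketch of building a covariance matrix with this kernel (Python with numpy; the design points and length scale are assumptions):

```python
import numpy as np

def squared_exponential(x, x_prime, l=1.0):
    """Squared exponential kernel; l is the length scale."""
    return np.exp(-((x - x_prime) ** 2) / (2 * l ** 2))

# Covariance matrix over some hypothetical 1-D design points.
X = np.array([0.0, 0.5, 1.0, 2.0])
K = squared_exponential(X[:, None], X[None, :], l=1.0)
```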

<!--list-separator-->

- Matérn Kernel

This is a very common kernel. Look it up.


#### Prediction {#prediction}

Given known means and variances of the sampled points from the original system, we can compute:

\begin{equation}
P(Y^{\*}|Y)
\end{equation}

by [conditioning Gaussian distributions]({{< relref "KBhgaussian_distribution.md#conditioning-gaussian-distributions" >}}).

Specifically:

\begin{equation}
\mqty[\hat{y} \\\ y] \sim \mathcal{N} \qty(\mqty[m(X^{\*}) \\\ m(X)], \mqty[K(X^{\*}, X^{\*}) & K(X^{\*},X) \\\ K(X, X^{\*}) & K(X, X)])
\end{equation}
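
A minimal sketch of this prediction step (Python with numpy; it assumes a zero mean function, the squared exponential kernel from above, and made-up observations):

```python
import numpy as np

def k(a, b, l=1.0):
    # squared exponential kernel (an assumption; any valid kernel works)
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * l ** 2))

m = lambda x: np.zeros_like(x)         # zero mean function, for simplicity

X = np.array([0.0, 1.0, 2.0])          # sampled design points
y = np.array([1.0, 0.5, -0.3])         # observed objective values
X_star = np.linspace(0.0, 2.0, 50)     # query points we want to predict at

K_xx = k(X, X)
K_sx = k(X_star, X)
K_ss = k(X_star, X_star)

# Condition the joint Gaussian above to get the posterior at X_star.
mu_post = m(X_star) + K_sx @ np.linalg.solve(K_xx, y - m(X))
Sigma_post = K_ss - K_sx @ np.linalg.solve(K_xx, K_sx.T)
```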


#### Noisy Measurements {#noisy-measurements}

We can account for zero-mean measurement noise by adding a noise term \\(vI\\) to the covariance of the observed points:

\begin{equation}
\mqty[\hat{y} \\\ y] \sim \mathcal{N} \qty(\mqty[m(X^{\*}) \\\ m(X)], \mqty[K(X^{\*}, X^{\*}) & K(X^{\*},X) \\\ K(X, X^{\*}) & K(X, X) + v I])
\end{equation}
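
Continuing the prediction sketch above, the only change is the \\(vI\\) term on the observed block of the covariance (the noise variance `v` here is an assumed value):

```python
v = 1e-2                               # assumed measurement-noise variance
K_xx_noisy = K_xx + v * np.eye(len(X))

mu_post = m(X_star) + K_sx @ np.linalg.solve(K_xx_noisy, y - m(X))
Sigma_post = K_ss - K_sx @ np.linalg.solve(K_xx_noisy, K_sx.T)
```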


## Surrogate Optimization {#surrogate-optimization}


### Prediction Based Exploration {#prediction-based-exploration}

Given your existing points \\(D\\), evaluate \\(\mu\_{x|D}\\), and optimize for the next design point that has the smallest \\(\mu\_{x|D}\\).

This is **all exploitation, no exploration**.


### Error Based Exploration {#error-based-exploration}

Use the 95% [confidence interval]({{< relref "KBhconfidence_interval.md" >}}) from the [Gaussian Process](#gaussian-process): find the areas with the biggest gap (widest interval) and sample there to shrink them.

This is **all exploration, no exploitation** (both strategies are sketched below).
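
A minimal sketch contrasting these two strategies (reusing `mu_post`, `Sigma_post`, and the candidate points `X_star` from the Gaussian Process sketch earlier; the 95% interval uses the usual 1.96 factor):

```python
import numpy as np

# Posterior mean and standard deviation at each candidate design point.
mu = mu_post
sigma = np.sqrt(np.clip(np.diag(Sigma_post), 0.0, None))

# Prediction-based: pure exploitation, pick the smallest posterior mean.
x_next_prediction = X_star[np.argmin(mu)]

# Error-based: pure exploration, pick the widest 95% confidence interval.
width = 2 * 1.96 * sigma
x_next_error = X_star[np.argmax(width)]
```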


### Lower Confidence Bound Exploration {#lower-confidence-bound-exploration}

This trades off between exploration and exploitation. Try to minimize:

\begin{equation}
LB(x) = \hat{\mu}(x) - \alpha \hat{\Sigma}(x)
\end{equation}

Minimizing this LOWER BOUND balances pushing down the optimum (exploitation) against reducing uncertainty (exploration). This is a probabilistic generalization of the [Shubert-Piyavskill Method]({{< relref "KBhsu_cs361_apr092024.md#shubert-piyavskill-method" >}})---and no [Lipschitz Constant]({{< relref "KBhuniqueness_and_existance.md#lipschitz-condition" >}}) is needed!

Reminder, though, these are **probabilistic bounds**---unlike [Shubert-Piyavskill Method]({{< relref "KBhsu_cs361_apr092024.md#shubert-piyavskill-method" >}}).
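
A minimal sketch of this acquisition (reusing `mu`, `sigma`, and `X_star` from the sketch above; the exploration weight \\(\alpha\\) is an assumed value to tune):

```python
alpha = 2.0                            # assumed exploration weight
lb = mu - alpha * sigma                # lower confidence bound at each point
x_next_lcb = X_star[np.argmin(lb)]     # minimize the lower bound
```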


### Probability of Improvement Exploration {#probability-of-improvement-exploration}

We define "improvement" as:

\begin{equation}
I(y) = \begin{cases}
y\_{\min} - y, \text{if}\ y < y\_{\min} \\\\
0, \text{otherwise}
\end{cases}
\end{equation}

And then, you choose what to look at by:

\begin{equation}
P(y < y\_{\min}) = \int\_{-\infty}^{y\_{\min}} \mathcal{N}(y | \hat{\mu}, \hat{\Sigma}) \dd{y}
\end{equation}

(i.e. we want to find points that are very likely to improve on the current best).

You can also do this using the expected value of improvement (expected improvement), as in the sketch below.
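
A minimal sketch of both acquisitions (reusing `mu`, `sigma`, `X_star`, and the observations `y` from the sketches above; the expected-improvement line uses the standard closed form for a Gaussian posterior):

```python
import numpy as np
from scipy.stats import norm

y_min = y.min()                        # best (lowest) objective seen so far
z = (y_min - mu) / np.maximum(sigma, 1e-9)   # guard against zero variance

# Probability of improvement: P(y < y_min) under the posterior.
prob_improvement = norm.cdf(z)

# Expected improvement: E[I(y)] under the posterior.
expected_improvement = (y_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

x_next_pi = X_star[np.argmax(prob_improvement)]
x_next_ei = X_star[np.argmax(expected_improvement)]
```
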
@@ -32,3 +32,11 @@ draft = false
- [Linear Model]({{< relref "KBhsu_cs361_may022024.md#linear-model" >}})
- [Basis Functions]({{< relref "KBhsu_cs361_may022024.md#basis-functions" >}})
- [Regularization]({{< relref "KBhsu_cs224n_apr162024.md#regularization" >}})
- [Probabilistic Surrogate Models]({{< relref "KBhsu_cs361_may072024.md#probabilistic-surrogate-models" >}})
- [Gaussian Process]({{< relref "KBhsu_cs361_may072024.md#gaussian-process" >}})
- [squared exponential kernel]({{< relref "KBhsu_cs361_may072024.md#squared-exponential-kernel" >}})
- [Surrogate Optimization]({{< relref "KBhsu_cs361_may072024.md#surrogate-optimization" >}})
- [Prediction Based Exploration]({{< relref "KBhsu_cs361_may072024.md#prediction-based-exploration" >}})
- [Error Based Exploration]({{< relref "KBhsu_cs361_may072024.md#error-based-exploration" >}})
- [Lower Confidence Bound Exploration]({{< relref "KBhsu_cs361_may072024.md#lower-confidence-bound-exploration" >}})
- [Probability of Improvement Exploration]({{< relref "KBhsu_cs361_may072024.md#probability-of-improvement-exploration" >}}) and [Expected Improvement Exploration]({{< relref "KBhsu_cs361_may072024.md#probability-of-improvement-exploration" >}})
23 changes: 23 additions & 0 deletions content/posts/KBhsu_engr76_may022024.md
@@ -103,6 +103,29 @@ This gives a **smooth signal**; and if sampling was done correctly with the [nyq

#### Shannon's Nyquist Theorem {#shannon-s-nyquist-theorem}

Let \\(X\\) be a [Finite-Bandwidth Signal]({{< relref "KBhsu_engr76_apr252024.md#finite-bandwidth-signal" >}}) with bandwidth contained in \\([0, B]\\) Hz.

and define the reconstruction:

\begin{equation}
\hat{X}(t) = \sum\_{m=0}^{\infty} X(mT\_{s}) \text{sinc} \qty( \frac{t-mT\_{s}}{T\_{s}})
\end{equation}

where:

\begin{equation}
\text{sinc}(t) = \frac{\sin \qty(\pi t)}{\pi t}
\end{equation}

- if \\(T\_{s} < \frac{1}{2B}\\), that is, \\(f\_{s} > 2B\\), then \\(\hat{X}(t) = X(t)\\) (this is a STRICT inequality!)
- otherwise, if \\(T\_{s} > \frac{1}{2B}\\), then \\(\hat{X}(t) \neq X(t)\\), yet \\(\hat{X}(mT\_{s}) = X(mT\_{s})\\), and \\(\hat{X}\\) will be bandwidth limited to \\([0, \frac{f\_{s}}{2}]\\).

This second case is called "aliasing", or the "stroboscopic effect".
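
A minimal numerical sketch of this reconstruction (Python with numpy; the test signal, bandwidth, and sampling rate are made up, and the infinite sum is truncated to finitely many samples):

```python
import numpy as np

B = 5.0                                 # bandwidth: content within [0, B] Hz
x = lambda t: np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

f_s = 2.5 * B                           # sample faster than 2B, so T_s < 1/(2B)
T_s = 1 / f_s
m = np.arange(0, 200)                   # finitely many samples in practice
samples = x(m * T_s)

def x_hat(t):
    # sinc interpolation from the formula above; np.sinc(u) = sin(pi u)/(pi u)
    return np.sum(samples * np.sinc((t - m * T_s) / T_s))

t0 = 1.234
print(x(t0), x_hat(t0))                 # nearly equal when f_s > 2B
```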

---

Alternate way of presenting the same info:

\begin{equation}
\hat{X}(t) = \sum\_{m=0}^{\infty} X(mT\_{s}) \text{sinc} \qty( \frac{t-mT\_{s}}{T\_{s}})
\end{equation}
76 changes: 76 additions & 0 deletions content/posts/KBhsu_engr76_may072024.md
@@ -0,0 +1,76 @@
+++
title = "SU-ENGR76 MAY072024"
author = ["Houjun Liu"]
draft = false
+++

Welcome to Unit 2.


## Fundamental Problem of Communication {#fundamental-problem-of-communication}

"The fundamental task of communication is that of reproducing at one point either exactly or approximately a message selected at another point"

Now: all communication signals are subject to some noise, so we need to design systems that are robust to it.

Most designs center on changing the transmitter and receiver; sometimes you can change the channel, but often not.


## communication {#communication}


### analog communication {#analog-communication}

1. convert sound waves into continuous-time electrical signal \\(s(t)\\)
2. apply \\(s(t)\\) directly to the voltage of my channel
3. on the other end, decode \\(s'(t)\\), a noisy version of the original signal
4. speak \\(s'(t)\\)


### digital communication {#digital-communication}

1. convert sound waves into continuous-time electrical signal \\(s(t)\\)
2. sample \\(s(t)\\) at a known sampling rate
3. quantize the results to a fixed number of levels, turning them into discrete symbols
4. use [Huffman Coding]({{< relref "KBhhuffman_coding.md" >}}) to turn them into bits
5. generate a continuous-time signal of voltage using the bits
6. communicate this resulting signal over the cable
7. receive the noisy resulting signal and recover the bits from it
8. decode that by using interpolation + codebook
9. speak data

This format allows us to generalize all communication as having type (Bits =&gt; Bits), i.e. we only have to design Tx and Rx. This can be much more flexible than [Analog Communication]({{< relref "KBhanalog_vs_digital_signal.md#analog-communication" >}}).

Communicating bits also allows for control of distortion, because it bounds the codomain of the output signal to bits from which one could filter noise out, unlike [Analog Communication]({{< relref "KBhanalog_vs_digital_signal.md#analog-communication" >}}).

Tx and Rx map a **boolean [signal]({{< relref "KBhsu_engr76_apr162024.md#signal" >}})** into something that can be transmitted across a channel, meaning they usually convert binary symbols into a continuous-time signal to be played back.


## digital encoding {#digital-encoding}

"how do we map a sequence of bits 0100100.... and map it to a continuous time signal \\(X(t)\\)?"


### simplest digital encoding approach {#simplest-digital-encoding-approcah}

Choose some voltage \\(V\\). Assign the 1-bit voltage \\(V\\) and the 0-bit voltage \\(0\\), then simply play each bit's voltage for a set amount of time \\(T\\) and move on to the next symbol for encoding.

To decode, we sample at the midpoint of each symbol interval, i.e. shifted forward by \\(\frac{T}{2}\\) (see the sketch after the parameter list below).

Two parameters:

- \\(V\\): the voltage for the active signal
- \\(T\\): the time to communicate each symbol
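
A minimal sketch of this scheme (Python with numpy; the voltage \\(V\\), symbol time \\(T\\), and the waveform sampling rate are assumed values):

```python
import numpy as np

V, T = 1.0, 1e-3                        # assumed voltage level and symbol time
f_s = 100e3                             # assumed sampling rate of the waveform
samples_per_symbol = int(round(T * f_s))

def encode(bits):
    # Hold voltage V for a 1-bit and 0 for a 0-bit, each for T seconds.
    levels = np.array([V if b else 0.0 for b in bits])
    return np.repeat(levels, samples_per_symbol)

def decode(signal):
    # Sample at the midpoint of each symbol interval (offset by T/2).
    mids = signal[samples_per_symbol // 2 :: samples_per_symbol]
    return (mids > V / 2).astype(int).tolist()

bits = [0, 1, 0, 0, 1, 1]
assert decode(encode(bits)) == bits     # round-trips without noise
```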

If each symbol is held for a smaller amount of time \\(T\\), communicating takes a smaller amount of time overall.

However, if \\(T\\) is too small, the bandwidth of the communication will increase (the signal changes faster)---this is bad because...

1. channels can be frequency-selective (i.e. very high frequencies may not propagate the same way, e.g. through a speaker/microphone)
2. channels may have frequency restrictions (the FCC may give you a certain band, that is, you may only be licensed to communicate within that band)
3. you may not be able to sample fast enough


#### High-Passing {#high-passing}

Because you are almost never allowed to just send a [Baseband Signal]({{< relref "KBhsu_engr76_may022024.md#passband-signal" >}}), we also need to be able to communicate these bits with a [Bandwidth]({{< relref "KBhsu_engr76_apr252024.md#bandwidth" >}}) limit both below and above.
15 changes: 15 additions & 0 deletions content/posts/KBhsu_engr76_unit_2_index.md
@@ -0,0 +1,15 @@
+++
title = "SU-ENGR76 Unit 2 Index"
author = ["Houjun Liu"]
draft = false
+++

Communication System Design!

[Fundamental Problem of Communication]({{< relref "KBhsu_engr76_may072024.md#fundamental-problem-of-communication" >}})

- [communication]({{< relref "KBhsu_engr76_may072024.md#communication" >}})
- [Analog Communication]({{< relref "KBhanalog_vs_digital_signal.md#analog-communication" >}})
- [Digital Communication]({{< relref "KBhanalog_vs_digital_signal.md#digital-communication" >}})
- [digital encoding]({{< relref "KBhsu_engr76_may072024.md#digital-encoding" >}})
- [simplest digital encoding approach]({{< relref "KBhsu_engr76_may072024.md#simplest-digital-encoding-approcah" >}})
Binary file added static/ox-hugo/2024-05-07_10-02-42_screenshot.png
Binary file added static/ox-hugo/2024-05-07_10-02-53_screenshot.png
