Commit

kb autocommit
Jemoka committed Feb 21, 2024
1 parent b8c3326 commit 62d4cf0
Showing 5 changed files with 109 additions and 2 deletions.
4 changes: 2 additions & 2 deletions content/posts/KBhdispatching.md
@@ -46,7 +46,7 @@ a [dispatcher](#dispatcher) performs a [context switch](#context-switch), which

remember to push and pop the [register]({{< relref "KBhassembly.md#register" >}})s in the same order.... otherwise the registers won't be in the right order.

this makes a [context switch](#context-switch) a function that **calls on one thread** and **returns on another thread**---"we start executing from one stack, and end executing from another".

Example:

@@ -79,4 +79,4 @@ context switch

We can't `ret` to a function that never called `context_switch`, which is the case for **new threads**.

To do this, we create a fake stack frame on the stack for that new thread which looks as if the thread is just about to call the thread function, and call `context_switch` normally.
1 change: 1 addition & 0 deletions content/posts/KBhos_index.md
@@ -110,6 +110,7 @@ how do we [trust]({{< relref "KBhprivacy.md#trust" >}}) software?
- [trap]({{< relref "KBhdispatching.md#trap" >}})
- [interrupts]({{< relref "KBhdispatching.md#interrupt" >}})
- [context switch]({{< relref "KBhdispatching.md#context-switch" >}})
- [scheduling]({{< relref "KBhscheduling.md" >}})

---

12 changes: 12 additions & 0 deletions content/posts/KBhprocess_control_block.md
@@ -24,3 +24,15 @@ This is why you need to **CLOSE** all open file descriptors once every **PROCESS
## [thread]({{< relref "KBhmultithreading.md#thread" >}}) state {#thread--kbhmultithreading-dot-md--state}

Recall that [thread]({{< relref "KBhmultithreading.md#thread" >}})s are the **unit of execution**. The [process control block]({{< relref "KBhprocess_control_block.md" >}}) keeps track of the [stack pointer]({{< relref "KBhassembly.md#stack-pointer" >}}) of the thread (`%rsp`), which means that if a thread is put to sleep, its state can be stored somewhere on the stack.

1. **running**
2. **blocked** - waiting for an event like disk, network, etc.
3. **ready** - able to run, but not on CPU yet

{{< figure src="/ox-hugo/2024-02-21_13-50-23_screenshot.png" >}}


## IO vs. CPU bound {#io-vs-dot-cpu-bound}

- [I/O Bound Thread](#io-vs-dot-cpu-bound) is a thread that spends most of its time waiting for events like disk or network, and doesn't need much CPU time
- [CPU Thread](#io-vs-dot-cpu-bound) is a thread that mostly needs CPU time
94 changes: 94 additions & 0 deletions content/posts/KBhscheduling.md
@@ -0,0 +1,94 @@
+++
title = "scheduling"
author = ["Houjun Liu"]
draft = false
+++

[scheduling]({{< relref "KBhscheduling.md" >}}) is the mechanism for deciding which thread runs next. [thread]({{< relref "KBhmultithreading.md#thread" >}})s exist in different [thread state]({{< relref "KBhprocess_control_block.md#id-b4b86ccc-70f3-4d30-b437-2f5fff63b0e6-thread-state" >}})s:

1. **running**
2. **blocked** - waiting for an event like disk, network, etc.
3. **ready** - able to run, but not on CPU yet

a scheduler orders the ready threads to run, and moves a running thread back to ready once it has run for long enough. Possible pathways:

1. ready => running
2. blocked => running
3. blocked => ready => running

You can't go from **ready** to **blocked** because you have to _do something_ to know you are blocked.
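One plausible reading of these rules as a tiny state machine (a toy illustration with invented names, not kernel code; only direct transitions are modeled, so blocked => running goes through ready):

```python
from enum import Enum

class ThreadState(Enum):
    RUNNING = "running"
    BLOCKED = "blocked"
    READY = "ready"

# Direct transitions only: a running thread may block or be preempted back
# to ready; a blocked thread wakes up to ready; the scheduler moves a ready
# thread to running. ready -> blocked is deliberately absent: a thread must
# run (do something) before it can discover that it is blocked.
ALLOWED = {
    (ThreadState.RUNNING, ThreadState.BLOCKED),
    (ThreadState.RUNNING, ThreadState.READY),
    (ThreadState.BLOCKED, ThreadState.READY),
    (ThreadState.READY, ThreadState.RUNNING),
}

def transition(current, target):
    """Return the new state, refusing illegal moves like ready -> blocked."""
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```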


## [scheduling]({{< relref "KBhscheduling.md" >}}) "ready" threads {#scheduling--kbhscheduling-dot-md--ready-threads}

The following assumes one core.

Tradeoffs:

1. minimize time to a useful result (**assumption**: a "useful result" = a thread blocking or completing)
2. using resources efficiently (keeping cores/disks busy)
3. fairness (multiple users / many jobs for one user)

Typically, we focus on (1); approaches that maximize useful results quickly are unfair, because they prioritize some threads over others. We can measure this with "average completion time": the average time elapsed from the start of scheduling a queue to the moment each thread in it finishes.


### first-come first-serve {#first-come-first-serve}

- keep all threads in ready in a **queue**
- run the thread at the front until it finishes or blocks, however long that takes
- repeat

**Problem**: a thread can accidentally run away with the entire system, e.g. through an infinite loop
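A toy simulation of this policy (the helper name and inputs are invented), which also shows how "average completion time" is computed:

```python
from collections import deque

def fcfs(run_times):
    """First-come first-serve: run each thread fully to completion in
    arrival order; return each thread's completion time (from t=0)."""
    queue = deque(run_times)
    elapsed, completions = 0, []
    while queue:
        elapsed += queue.popleft()  # front thread runs until it finishes
        completions.append(elapsed)
    return completions

# A long thread at the front delays everyone behind it:
# fcfs([100, 1, 1]) -> [100, 101, 102] (average 101)
# fcfs([1, 1, 100]) -> [1, 2, 102]     (average 35)
```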


### round robin {#round-robin}

- keep all threads in a **round robin**
- each thread can run for a set amount of time called a [time slice](#round-robin) (10ms or so)
- if a thread terminates before that time, great; if it does not, we swap it off and put it at the end of the round robin

**Problem**: what's a good [time slice](#round-robin)?

- too small: the overhead of context switching outweighs the time spent doing useful work
- too large: threads can monopolize cores, can't handle user input, etc.

Linux uses 4ms.
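The round-robin procedure above can be sketched as a toy simulation (context-switch overhead is ignored; the function and parameter names are invented):

```python
from collections import deque

def round_robin(run_times, time_slice):
    """Simulate round robin; returns each thread's completion time,
    indexed like run_times. Context-switch overhead is ignored."""
    queue = deque((i, t) for i, t in enumerate(run_times))
    elapsed = 0
    completions = [0] * len(run_times)
    while queue:
        i, remaining = queue.popleft()
        ran = min(time_slice, remaining)  # run for at most one time slice
        elapsed += ran
        if remaining > ran:
            queue.append((i, remaining - ran))  # back of the line
        else:
            completions[i] = elapsed
    return completions

# round_robin([3, 1, 2], time_slice=1) -> [6, 2, 5]: the short threads
# finish early instead of waiting behind the 3-unit thread.
```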


### shortest remaining processing time {#shortest-remaining-processing-time}

Run first the thread in the queue that will finish **most quickly**, and run it **fully to completion**.

It **gives preference to those that need it the least**: a good side effect is that it runs [I/O Bound Thread]({{< relref "KBhprocess_control_block.md#io-vs-dot-cpu-bound" >}})s first, so we can wait on their disk operations while [CPU Thread]({{< relref "KBhprocess_control_block.md#io-vs-dot-cpu-bound" >}})s run after the [I/O Bound Thread]({{< relref "KBhprocess_control_block.md#io-vs-dot-cpu-bound" >}})s have run.

This is **not implementable**: we can't build it, because we would have to know in advance which thread will finish most quickly, and knowing that would require solving the halting problem.

Our goal, then, is to get as close as possible to the performance of [SRPT](#shortest-remaining-processing-time).

**Problem**:

1. we don't know which thread will finish most quickly
2. if we have many threads and one long-running thread, the long-running thread may never get to run
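As a thought experiment only: if run times were magically known in advance, SRPT could be simulated like this (invented names; new arrivals and blocking are not modeled):

```python
import heapq

def srpt(run_times):
    """Idealized SRPT with no new arrivals: always run the thread with the
    shortest remaining time, fully to completion. A thought experiment
    only; a real scheduler cannot know these run times in advance."""
    heap = list(run_times)
    heapq.heapify(heap)  # min-heap keyed on remaining run time
    elapsed, completions = 0, []
    while heap:
        elapsed += heapq.heappop(heap)  # shortest job runs next
        completions.append(elapsed)
    return completions

# srpt([6, 2, 4]) runs 2, then 4, then 6: completions [2, 6, 12]
# (average 20/3), versus [6, 8, 12] (average 26/3) in arrival order.
```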


### priority based scheduling {#priority-based-scheduling}

Key idea: **behavior tends to be consistent in a thread**. We build multiple **priority queues** to address this.

[priority based scheduling](#priority-based-scheduling) is an approximation of [SRPT](#shortest-remaining-processing-time), using the past behavior of a thread to estimate its remaining running time. Over time, [thread]({{< relref "KBhmultithreading.md#thread" >}})s move between priority queues, and we **run the topmost thread from the highest priority queue**.

1. threads that aren't using much CPU stay in the higher priority queues
2. threads that are using a lot of CPU get bumped down to lower priority queues


#### procedure {#procedure}

a [thread]({{< relref "KBhmultithreading.md#thread" >}}) always enters at the **highest** priority queue

1. if the [thread]({{< relref "KBhmultithreading.md#thread" >}}) uses all of its [time slice](#round-robin) and didn't exit, bump it down a priority queue
2. if a [thread]({{< relref "KBhmultithreading.md#thread" >}}) blocked before using all of its [time slice](#round-robin), bump it up a priority queue
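A minimal sketch of this procedure (the queue count, names, and data layout are all invented for illustration):

```python
from collections import deque

NUM_QUEUES = 3  # number of priority levels; 0 is the highest

queues = [deque() for _ in range(NUM_QUEUES)]

def enqueue_new(thread):
    # new threads always enter at the highest priority
    queues[0].append((thread, 0))

def pick_next():
    """Pop the front thread of the highest-priority non-empty queue."""
    for q in queues:
        if q:
            return q.popleft()
    return None

def requeue(thread, level, used_full_slice):
    """Used its whole time slice without exiting: demote.
    Blocked before the slice ran out: promote."""
    if used_full_slice:
        level = min(level + 1, NUM_QUEUES - 1)
    else:
        level = max(level - 1, 0)
    queues[level].append((thread, level))
    return level
```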


#### fixing neglect {#fixing-neglect}

we artificially bump a thread up if it hasn't run for a while. the priorities can perhaps be assigned as "CPU time used in the last n minutes"
