dhanna11/Custom-OS-Kernel

Kernel created for 15-410 Operating Systems class at Carnegie Mellon

/**

@mainpage 15-410 Project 3

@author Aatish Nayak (aatishn)
@author Christopher Wei (cjwei)


Design Decision Rationales:

Frame manager design - Our first naive implementation of a frame manager used a
linked list of available physical pages. This worked well for checkpoint 1, but
we later realized we needed a smarter, more efficient manager. What we ended up
with was a buddy memory allocator. The buddy allocator allows us to keep track
of a large number of pages by grouping physical pages into larger frames, each
of which holds 2^i pages. This saves a great deal of space if we choose an
appropriate number of bins. In addition, allocating space is significantly
easier: all we need to know is how many pages we will end up needing, and then
call fm_alloc once to get an appropriately sized frame. A call to fm_alloc works
by finding the log_2 ceiling of the requested number of pages, going to that
index in the frame manager's pool, and removing one frame from it, or, if that
pool is empty, recursively splitting larger blocks into smaller blocks until the
request can be satisfied. When split blocks are returned to the frame manager,
they coalesce into larger blocks via the "buddying" mechanism: each split frame
has a "buddy" with which it re-forms the parent block, which reduces the amount
of external fragmentation overall. The downside of this design is the obvious
internal fragmentation that results from allocating only in powers of two.
However, the other perks of this implementation - reduced time and space
overhead - outweigh this inefficiency.
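
As a rough illustration of this allocation path, the sketch below shows how a
power-of-two buddy allocator can satisfy a request. The names (frame_t,
fm_alloc, fm_split, NUM_BINS) and the use of malloc for frame descriptors are
assumptions for the example, not the kernel's actual code.

@code
// Hedged sketch of a power-of-two buddy allocator's allocation path.
// All identifiers and the use of malloc here are illustrative only.
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u
#define NUM_BINS  16          // bins[i] holds free frames of 2^i pages

typedef struct frame {
    uint32_t base_addr;       // physical address of the frame's first page
    int order;                // frame spans 2^order pages
    struct frame *next;       // free-list link within a bin
} frame_t;

typedef struct frame_manager {
    frame_t *bins[NUM_BINS];
} frame_manager_t;

// Smallest order such that 2^order >= num_pages (the log_2 ceiling).
static int log2_ceil(uint32_t num_pages) {
    int order = 0;
    while ((1u << order) < num_pages) order++;
    return order;
}

// Split one frame of the given order into two buddies of order-1.
static int fm_split(frame_manager_t *fm, int order) {
    frame_t *lower = fm->bins[order];
    if (lower == NULL) return -1;
    fm->bins[order] = lower->next;

    // A real kernel would draw this descriptor from a preallocated pool.
    frame_t *upper = malloc(sizeof(frame_t));
    if (upper == NULL) {
        lower->next = fm->bins[order];
        fm->bins[order] = lower;
        return -1;
    }

    lower->order = order - 1;
    upper->order = order - 1;
    upper->base_addr = lower->base_addr + (PAGE_SIZE << (order - 1));

    // Push both buddies onto the next smaller bin.
    upper->next = fm->bins[order - 1];
    lower->next = upper;
    fm->bins[order - 1] = lower;
    return 0;
}

// Allocate a frame large enough to hold num_pages pages.
frame_t *fm_alloc(frame_manager_t *fm, uint32_t num_pages) {
    int order = log2_ceil(num_pages);
    if (order >= NUM_BINS) return NULL;

    // If this bin is empty, split larger blocks down until it is not.
    if (fm->bins[order] == NULL) {
        int larger = order + 1;
        while (larger < NUM_BINS && fm->bins[larger] == NULL) larger++;
        if (larger == NUM_BINS) return NULL;        // no memory left
        while (larger > order) {
            if (fm_split(fm, larger) < 0) return NULL;
            larger--;
        }
    }

    // Pop one frame of the requested order.
    frame_t *frame = fm->bins[order];
    fm->bins[order] = frame->next;
    return frame;
}
@endcode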

New pages & remove pages - One interesting thing about remove_pages is that it
does not need to be told the length of the allocated chunk of memory; that
information is remembered from the call to new_pages. We do this by using 2 of
the unused bits in the page table entry flags: bit 9 denotes the beginning of a
new_pages allocation and bit 10 denotes the end of one. Therefore, if
remove_pages is called on a memory location that does not have bit 9 set, we
can immediately return an error code. Otherwise, we simply loop, increasing the
virtual address one page at a time, until we find a mapping that has bit 10
set.
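
The sketch below illustrates this remove_pages walk under assumed helper names
(get_pte, free_page) and assumed flag macros; it is not the kernel's actual
implementation.

@code
// Hedged sketch: remove_pages walks the region using the two available PTE bits.
// get_pte() and free_page() are assumed helpers.
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE        4096u
#define PTE_PRESENT      (1u << 0)
#define PTE_ALLOC_START  (1u << 9)    // available bit: first page of a new_pages region
#define PTE_ALLOC_END    (1u << 10)   // available bit: last page of a new_pages region

extern uint32_t *get_pte(uint32_t vaddr);   // assumed: pointer to the PTE, or NULL
extern void free_page(uint32_t vaddr);      // assumed: unmap and free one page

int remove_pages(uint32_t base) {
    uint32_t *pte = get_pte(base);

    // Only the first page of a new_pages allocation carries the start bit.
    if (pte == NULL || !(*pte & PTE_PRESENT) || !(*pte & PTE_ALLOC_START))
        return -1;

    // Walk forward one page at a time until the end-of-allocation bit is seen.
    uint32_t vaddr = base;
    while (1) {
        pte = get_pte(vaddr);
        int last = (*pte & PTE_ALLOC_END) != 0;
        free_page(vaddr);
        if (last) break;
        vaddr += PAGE_SIZE;
    }
    return 0;
}
@endcode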

Sleeping pool design - In order to maintain constant time context switching,
we decided to implement the sleeping pool as a linked list kept sorted by
wakeup time. This allows us to check whether we need to wake up a thread in
constant time: on every tick we simply look at the first element - which has
the earliest wakeup time - and compare it to the current time. Granted, a
sorted insertion into a linked list takes O(n) time, but this is very much
worth it for the constant time wakeup check.
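
A minimal sketch of such a wakeup-time-ordered list is shown below;
sleep_node_t and make_runnable_tid are assumed names, not the kernel's real
identifiers.

@code
// Hedged sketch of a wakeup-time-ordered sleep list; names are assumptions.
#include <stdint.h>
#include <stddef.h>

typedef struct sleep_node {
    int tid;
    uint32_t wakeup_tick;           // absolute tick at which to wake
    struct sleep_node *next;
} sleep_node_t;

static sleep_node_t *sleep_head = NULL;

extern void make_runnable_tid(int tid);   // assumed scheduler hook

// O(n) insertion keeps the list sorted by wakeup_tick.
void sleep_list_insert(sleep_node_t *node) {
    sleep_node_t **cur = &sleep_head;
    while (*cur != NULL && (*cur)->wakeup_tick <= node->wakeup_tick)
        cur = &(*cur)->next;
    node->next = *cur;
    *cur = node;
}

// Called on every timer tick: only the head needs inspecting, so the common
// case (nobody to wake yet) is O(1).
void sleep_list_tick(uint32_t now) {
    while (sleep_head != NULL && sleep_head->wakeup_tick <= now) {
        sleep_node_t *done = sleep_head;
        sleep_head = done->next;
        make_runnable_tid(done->tid);
    }
}
@endcode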

Global scheduler lock - The reason we have a global scheduler lock rather than
one inside the scheduler is to avoid the following situation: Consider thread 1
in fork() grabbing the scheduler's mutex. After it grabs the mutex, a timer
interrupt fires and a context switch is serviced. The context switcher also
accesses the scheduler and tries to acquire its mutex, but it cannot, because
thread 1 holds it. Execution then stops because the context switcher waits
forever. With a global scheduler lock, syscalls that modify the scheduler are
thread safe.

Scheduler TCB pool data structure - Right from the beginning we abstracted the
tcb pool data structure out. This allowed us to change the internals of the
pool as the project progressed. For ck1 and ck2, the tcb pool was a linked list
of tcbs, so every pool access was O(n), where n is the total number of threads.
At that point, the runnable and waiting pools were queues and were not
abstracted out. However, we encountered problems because our queue
implementation used malloc for every enqueue and dequeue; combined with the
linked list of threads, it was grossly inefficient. We decided to revamp the
whole data structure to minimize the number of mallocs and guarantee constant
access time.
We wrote a hashtable data structure capable of storing generic data with an
accompanying key. In the tcb_pool, we store linked list nodes, each holding a
tcb, in the hash table, keyed by the tid of the stored tcb. Essentially, it is
a hash table of linked list nodes, where each node belongs to either the
runnable, waiting, or sleeping pool. This allows each transfer of a node
between pools to be constant time, making syscalls like deschedule and
make_runnable O(1). Each tcb also has a status that indicates which pool it
belongs to. Although writing this data structure was arduous, it proved to be
a really good solution for minimizing access time and the number of
malloc/free pairs; a sketch of the idea appears below.
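
The sketch below illustrates this idea under assumed names (tcb_t, pool_node_t,
tcb_pool_move): a hash table maps a tid to a list node, and each node sits on
exactly one intrusive doubly linked pool list, so moving a thread between pools
is O(1). It is not the kernel's actual code.

@code
// Hedged sketch of the tcb pool idea; all names here are assumptions.
#include <stddef.h>

#define NUM_BUCKETS 128

typedef enum { POOL_RUNNABLE = 0, POOL_WAITING = 1, POOL_SLEEPING = 2 } pool_id_t;

typedef struct tcb {
    int tid;
    pool_id_t status;                   // which pool the thread currently sits in
} tcb_t;

typedef struct pool_node {
    tcb_t *tcb;
    struct pool_node *prev, *next;      // links within the current pool list
    struct pool_node *hash_next;        // chaining within a hash bucket
} pool_node_t;

typedef struct tcb_pool {
    pool_node_t *buckets[NUM_BUCKETS];  // tid -> node lookup
    pool_node_t *pools[3];              // heads of runnable/waiting/sleeping lists
} tcb_pool_t;

static pool_node_t *tcb_pool_lookup(tcb_pool_t *tp, int tid) {
    pool_node_t *n = tp->buckets[(unsigned)tid % NUM_BUCKETS];
    while (n != NULL && n->tcb->tid != tid) n = n->hash_next;
    return n;
}

// Unlink a node from whatever pool list it is currently on: O(1).
static void pool_unlink(tcb_pool_t *tp, pool_node_t *n) {
    if (n->prev != NULL) n->prev->next = n->next;
    else tp->pools[n->tcb->status] = n->next;
    if (n->next != NULL) n->next->prev = n->prev;
    n->prev = n->next = NULL;
}

// Move the thread with this tid into the given pool: O(1) after the hash lookup.
int tcb_pool_move(tcb_pool_t *tp, int tid, pool_id_t dest) {
    pool_node_t *n = tcb_pool_lookup(tp, tid);
    if (n == NULL) return -1;

    pool_unlink(tp, n);
    n->next = tp->pools[dest];
    if (n->next != NULL) n->next->prev = n;
    tp->pools[dest] = n;
    n->tcb->status = dest;
    return 0;
}
@endcode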

Reaper Thread Rationale - While writing wait/vanish we were contemplating how a
thread would clean up after itself. When a thread is destroyed, its kstack must
be freed. Additionally, in the case where the pcb is also destroyed, we must
free the pcb's data structures, most importantly the page directory. This
presents a problem because we cannot free the kstack while we are currently
running on it, and the situation is similar for the page directory. Thus, the
three options for freeing a thread's resources are: the thread itself, some
interrupt handler (most likely the timer), or another thread. We ruled out the
first option. The second option, although viable, would significantly delay
servicing a timer interrupt and thus the context switch. The only remaining
option is another thread. A dedicated reaper thread was the best choice since
it is separate from any other process and thus does not take up another
process's cycles. When vanish places a zombie into the zombie pool, it also
signals a zombie semaphore that the reaper is waiting on. This ensures that the
reaper thread only runs when there are zombies to reap (see the sketch below).
Additionally, after doing some research, we found that a reaper thread has
precedent: Java uses a reaper thread that wakes up at predetermined intervals
to collect dead processes.
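
A hedged sketch of the reaper loop follows; the semaphore API and cleanup
helpers shown are assumed names, not necessarily the kernel's own.

@code
// Hedged sketch of the reaper thread's main loop; all names are assumptions.
#include <stddef.h>

struct tcb;
typedef struct sem sem_t;

extern void sem_wait(sem_t *sem);

extern sem_t zombie_sem;                              // signaled by vanish(), once per zombie
extern struct tcb *zombie_pool_remove(void);          // pop one zombie tcb, or NULL
extern void free_kstack(struct tcb *zombie);
extern void free_pcb_resources(struct tcb *zombie);   // page directory, etc.

// Runs as its own kernel thread for the lifetime of the system.
void reaper_main(void) {
    while (1) {
        // Block until vanish() signals that a zombie is available, so the
        // reaper consumes no cycles when there is nothing to do.
        sem_wait(&zombie_sem);

        struct tcb *zombie = zombie_pool_remove();
        if (zombie == NULL) continue;

        // Safe to free here: the zombie is no longer running on its kstack or
        // page directory, and the reaper has its own.
        free_kstack(zombie);
        free_pcb_resources(zombie);
    }
}
@endcode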

Scheduler lock rationale - Threads modify the scheduler's data structures, and
when a context switch happens, those data structures are accessed again. Thus,
we need to prevent any context switch from happening while modifying these
data structures, particularly a context switch triggered by the timer handler.
For this reason, we disable and enable interrupts as part of the scheduler
lock mechanism. However, we only enable and disable interrupts if the
scheduler has been started. This ensures that while the kernel is adding the
idle, reaper, and init threads, before the scheduler has started, we do not
enable interrupts when they should not be enabled. A minimal sketch of this
lock appears after this paragraph.
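
The sketch assumes disable_interrupts()/enable_interrupts() wrappers and a
scheduler_started flag; a real implementation would also need to handle nested
acquisitions.

@code
// Hedged sketch of the scheduler lock built on interrupt disabling; the
// wrapper functions and the started flag are assumed names.
extern void disable_interrupts(void);   // e.g. a cli wrapper
extern void enable_interrupts(void);    // e.g. an sti wrapper

typedef struct sched_lock {
    int scheduler_started;   // set once the idle, reaper, and init threads exist
} sched_lock_t;

void sched_lock_acquire(sched_lock_t *lock) {
    // Before the scheduler starts, interrupts are already off and must stay
    // off, so the lock is a no-op.
    if (lock->scheduler_started)
        disable_interrupts();
}

void sched_lock_release(sched_lock_t *lock) {
    if (lock->scheduler_started)
        enable_interrupts();
}
@endcode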

Easter Eggs:

Run 'cat shrek.txt' or 'cat donkey.txt' for a good time.


*/
