20. FS Data Structures
This chapter reveals what's under the hood of a file system. To implement a correct file system, the most important thing is to figure out how to do indexing: using proper data structures to map logical addresses (file offsets) to physical blocks, and doing so in a time- & space-efficient way. Another thing that file systems need to take care of is concurrent access from multiple processes.
File systems vary a lot in what data structures they maintain. Beyond indexing, file system implementations also involve lots of other features and optimizations, for example:
- Advanced indexing structures & algorithms
- Journaling & logging for consistency
- Caching and other performance optimizations to reduce latency
- Improving the scalability under concurrent scenarios
- Device-oriented optimizations, e.g., log-structured FS for disks
- Distributed file systems over the network
Scan through them before going forth:
- File System Implementation chapter of the OSTEP book: the very simple file system (VSFS) design
- File system data structures of xv6
- File Systems page: "File System Theory" section
The first step towards understanding a file system is to be clear about the data structures it maintains for files. We study the data structures of our VSFS from two perspectives:
- On-disk data structures: what is actually persisted on storage media
- In-memory data structures: what are maintained in main memory during runtime
Continuing the discussion of the last chapter, we know that UNIX-flavor file systems associate an inode structure with each file, serving as its metadata. In VSFS, we follow this tradition and use inodes as the indexing structures.
Every inode records the type and the addresses of the current data blocks of a file. Definition @ src/filesys/vsfs.h:
/**
* An inode points to 16 direct blocks, 8 singly-indirect blocks, and
* 1 doubly-indirect block. With an FS block size of 1KB, the maximum
* file size = 1 * 256^2 + 8 * 256 + 16 KiB = 66MiB 16KiB.
*/
#define NUM_DIRECT 16
#define NUM_INDIRECT1 8
#define NUM_INDIRECT2 1
#define UINT32_PB (BLOCK_SIZE / 4)
#define FILE_MAX_BLOCKS (NUM_INDIRECT2 * UINT32_PB*UINT32_PB \
+ NUM_INDIRECT1 * UINT32_PB \
+ NUM_DIRECT)
/** On-disk inode structure of exactly 128 bytes in size. */
#define INODE_SIZE 128
#define INODE_TYPE_EMPTY 0
#define INODE_TYPE_FILE 1
#define INODE_TYPE_DIR 2
struct inode {
uint32_t type; /** 0 = empty, 1 = file, 2 = directory. */
uint32_t size; /** File size in bytes. */
uint32_t data0[NUM_DIRECT]; /** Direct blocks. */
uint32_t data1[NUM_INDIRECT1]; /** 1-level indirect blocks. */
uint32_t data2[NUM_INDIRECT2]; /** 2-level indirect blocks. */
/** Rest bytes are unused. */
} __attribute__((packed));
typedef struct inode inode_t;
/** Helper macros for calculating on-disk address. */
#define DISK_ADDR_INODE(i) (superblock.inode_start * BLOCK_SIZE + (i) * INODE_SIZE)
#define DISK_ADDR_DATA_BLOCK(d) ((superblock.data_start + (d)) * BLOCK_SIZE)
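The fields above occupy 4 + 4 + 16*4 + 8*4 + 1*4 = 108 bytes, so the packed struct indeed fits within its fixed 128-byte slot. A compile-time check like the following (my own illustrative addition, assuming a C11-capable compiler; it is not in the Hux source) would catch accidental overflows:
/** Illustration only: guard the on-disk inode size at compile time. */
_Static_assert(sizeof(inode_t) <= INODE_SIZE,
               "struct inode must fit in its 128-byte on-disk slot");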
If we store a flat array of one address per data block directly in the inode, it will soon bloat the space up: we need a 4-byte address for every 1KiB data block. A 128-byte inode could index no more than 30 data blocks (after subtracting the necessary type & size fields), limiting the maximum size of a file to 30KiB, which is apparently too small.
To solve this issue, Hux again adopts indirection, just as we did in paging. We add some indirect indexing slots into the array, which point to dedicated blocks holding addresses. We use a 3-level indexing array: 16 direct blocks, 8 singly-indirect blocks, and 1 doubly-indirect block. This means the maximum file size supported is:
max file size = FILE_MAX_BLOCKS * BLOCK_SIZE = (16 + 8*256 + 1*256^2) * 1KiB = 66MiB 16KiB
The reason for still reserving some direct blocks (instead of making them all doubly-indirect blocks to support larger files) is that most files in a system tend to be small. By having the first few blocks as direct blocks, small files' indexing speed will not get penalized by indirection. This is a classic trade-off of space vs. time, and the configuration should be carefully chosen to balance the two.
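To make the indexing concrete, here is a rough sketch (not the actual Hux code) of how a file's logical block number could be translated through the three levels. It assumes a hypothetical helper read_u32_at() that reads one uint32_t entry out of an on-disk indirect block:
/**
 * Sketch only: map logical block number BLK of a file to the on-disk
 * data block address recorded through its inode. read_u32_at() is a
 * hypothetical helper that reads the IDX-th uint32_t entry stored in
 * the indirect block at on-disk address ADDR.
 */
extern uint32_t read_u32_at(uint32_t addr, uint32_t idx);

static uint32_t
inode_block_addr_sketch(inode_t *ino, uint32_t blk)
{
    if (blk < NUM_DIRECT)                           /** Direct blocks. */
        return ino->data0[blk];
    blk -= NUM_DIRECT;

    if (blk < NUM_INDIRECT1 * UINT32_PB) {          /** Singly-indirect. */
        uint32_t indirect = ino->data1[blk / UINT32_PB];
        return read_u32_at(indirect, blk % UINT32_PB);
    }
    blk -= NUM_INDIRECT1 * UINT32_PB;

    /** Doubly-indirect: the first level selects a singly-indirect block. */
    uint32_t level1 = read_u32_at(ino->data2[0], blk / UINT32_PB);
    return read_u32_at(level1, blk % UINT32_PB);
}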
Since inodes are the starting point of almost any addressing inside a file (we must go through the indexes to locate a data block, given a file offset), it would be unacceptable to involve disk I/O in every lookup of an inode. VSFS maintains an in-memory cache, icache, of the inodes of currently-open files.
Definitions @ src/filesys/file.h:
/** In-memory copy of open inode, be sure struct size <= 128 bytes. */
struct mem_inode {
uint8_t ref_cnt; /** Reference count (from file handles). */
uint32_t inumber; /** Inode number identifier of `d_inode`. */
parklock_t lock; /** Parking lock held when waiting for disk I/O. */
inode_t d_inode; /** Read in on-disk inode structure. */
};
typedef struct mem_inode mem_inode_t;
/** Maximum number of in-memory cached inodes. */
#define MAX_MEM_INODES 100
/** Extern the tables to `vsfs.c`. */
extern mem_inode_t icache[];
extern spinlock_t icache_lock;
// src/filesys/file.c
/** Open inode cache - list of in-memory inode structures. */
mem_inode_t icache[MAX_MEM_INODES];
spinlock_t icache_lock;
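The typical way such a cache is used - and this is only a plausible sketch with assumed lock function names (spinlock_acquire()/spinlock_release()), not necessarily the exact Hux implementation - is to look for an already-cached copy of the inode first, and only claim an empty slot (followed by a disk read) on a miss:
/**
 * Sketch only: fetch the in-memory inode for INUMBER, bumping its
 * reference count, or claim an empty icache slot on a miss. Reading
 * the on-disk inode (via DISK_ADDR_INODE) is left out here.
 */
static mem_inode_t *
inode_get_sketch(uint32_t inumber)
{
    mem_inode_t *empty = NULL;

    spinlock_acquire(&icache_lock);
    for (size_t i = 0; i < MAX_MEM_INODES; ++i) {
        if (icache[i].ref_cnt > 0 && icache[i].inumber == inumber) {
            icache[i].ref_cnt++;                /** Cache hit. */
            spinlock_release(&icache_lock);
            return &icache[i];
        }
        if (empty == NULL && icache[i].ref_cnt == 0)
            empty = &icache[i];                 /** Remember a free slot. */
    }
    if (empty == NULL) {
        spinlock_release(&icache_lock);
        return NULL;                            /** Cache is full. */
    }
    empty->ref_cnt = 1;
    empty->inumber = inumber;
    spinlock_release(&icache_lock);
    /** ... read the on-disk inode into empty->d_inode here ... */
    return empty;
}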
Inodes map one-to-one to actual files on disk. However, multiple processes could be opening the same file at the same time, each with a different file offset to operate on. There must be a higher-level in-memory structure that provides a process-private view of an open file. This structure (file_t) holds a pointer to the in-memory inode (mem_inode_t) of the underlying file.
Definitions @ src/filesys/file.h:
/** Open file handle structure. */
struct file {
uint8_t ref_cnt; /** Reference count (from forked processes). */
bool readable; /** Open as readable? */
bool writable; /** Open as writable? */
mem_inode_t *inode; /** Inode structure of the file. */
uint32_t offset; /** Current file offset in bytes. */
};
typedef struct file file_t;
/** Maximum number of open files in the system. */
#define MAX_OPEN_FILES 200
/** Maximum number of open files per process. */
#define MAX_FILES_PER_PROC 16
/** Extern the tables to `vsfs.c`. */
extern file_t ftable[];
extern spinlock_t ftable_lock;
// src/filesys/file.c
/** Open file table - list of open file structures. */
file_t ftable[MAX_OPEN_FILES];
spinlock_t ftable_lock;
And an extra field in the PCB @ src/process/process.h:
/** Process control block (PCB). */
struct process {
...
file_t *files[MAX_FILES_PER_PROC]; /** File descriptor -> open file. */
};
typedef struct process process_t;
The open file structure also has a reference count, for two reasons: 1. fork()ed processes naturally share the same open files and their file offsets, and 2. the dup() syscall (though not supported in Hux yet).
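For example, here is a sketch of what the sharing could look like at fork() time (the helper name and locking details are my assumptions for illustration, not the Hux implementation): the child copies the parent's file pointers and bumps each reference count instead of opening the files again.
/**
 * Sketch only: during fork(), the child shares the parent's open file
 * handles, so both processes see the same file_t (and thus the same
 * file offset). Helper name and locking are assumptions.
 */
static void
share_open_files_sketch(process_t *parent, process_t *child)
{
    for (int fd = 0; fd < MAX_FILES_PER_PROC; ++fd) {
        if (parent->files[fd] != NULL) {
            spinlock_acquire(&ftable_lock);
            parent->files[fd]->ref_cnt++;       /** One more reference. */
            spinlock_release(&ftable_lock);
            child->files[fd] = parent->files[fd];
        }
    }
}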
A directory is also a file, but unlike a normal file, which can store whatever it wants in its data blocks, a directory has a well-defined data format - an array of directory entries (dentrys). A directory entry maps a string name to an inode number (inumber), which is the unique identifier the FS actually uses to name a file.
Definitions @ src/filesys/vsfs.h:
/** Root directory must be the 0-th inode. */
#define ROOT_INUMBER 0
/**
* Directory entry structure. A directory's data is simply a contiguous
* array of such 128-bytes entry structures.
*/
#define DENTRY_SIZE 128
#define MAX_FILENAME 100
struct dentry {
uint32_t valid; /** 0 = unused, 1 = valid. */
uint32_t inumber; /** Inumber of the file. */
char filename[DENTRY_SIZE - 8]; /** String name. */
} __attribute__((packed));
typedef struct dentry dentry_t;
Inumbers in VSFS are the same as the index of that inode's slot on disk. The root directory "/" has inumber 0.
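Putting the inode and dentry structures together, looking up a name inside a directory is just a linear scan over its dentry array. The sketch below assumes a hypothetical inode_read() helper that reads bytes out of a file through its inode indexing, plus strncmp() from the common string library; it is an illustration, not the exact Hux code:
/** Hypothetical helper: read LEN bytes at OFFSET of the file behind INO. */
extern bool inode_read(mem_inode_t *ino, char *dst, uint32_t offset, uint32_t len);

/**
 * Sketch only: look up FILENAME inside directory inode DIR by scanning
 * its dentry array, writing the found inumber into *INUMBER.
 */
static bool
dir_lookup_sketch(mem_inode_t *dir, const char *filename, uint32_t *inumber)
{
    dentry_t dent;

    for (uint32_t offset = 0; offset < dir->d_inode.size; offset += DENTRY_SIZE) {
        if (!inode_read(dir, (char *) &dent, offset, sizeof(dentry_t)))
            return false;                               /** Read failure. */
        if (dent.valid == 1
            && strncmp(dent.filename, filename, MAX_FILENAME) == 0) {
            *inumber = dent.inumber;                    /** Found a match. */
            return true;
        }
    }
    return false;                                       /** Not found. */
}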
In UNIX file systems which support linking, another common term on inodes is links. Be sure to clearly distinguish among "references on an open file", "references on an in-memory inode", and "links on an inode", as they mean totally different things:
- References on an open file structure: multiple forked processes, or multiple file descriptors of the same process, holding the same open file structure (file_t) in the ftable;
- References on an in-memory inode: multiple open file structures (file_t) pointing to the same in-memory inode (mem_inode_t), which happens when multiple processes open the same on-disk file at the same time;
- Links on an inode: multiple directory entries, possibly with different string names, carrying the same inode number (inumber) and hence pointing to the same on-disk file.
Hux's VSFS does not support linking, so we don't have to worry about links for now. Every unique directory entry maps to a unique inode number and hence a unique on-disk file.
The data structures mentioned above are the "micro-structures" tied to individual files. Besides these, the file system also needs "macro-structures" that describe the layout of things on disk.
Hux follows the VSFS layout described in this chapter of the OSTEP book (which has a nice layout figure), only with slightly different numbers:
- Block size 1KiB, so 1 block = 2 disk sectors
- Total size of FS image is 256MiB (262144 blocks)
- Block 0 is the superblock holding meta information of the FS
- Block 1~6 are the inode slots bitmap
- Block 7~38 are the data blocks bitmap
- Blocks 39~6143 (up to 6 MiB offset) are inode blocks
- All the rest of the blocks up to 256 MiB are data blocks
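These numbers are mutually consistent. The worked check below (my own illustration, not code in the repo) shows how each value follows from the 1 KiB block size and the 128-byte inode size:
/** Worked check of the layout numbers (illustration only, not repo code).   */
/** Image size:    256 MiB / 1 KiB per block            = 262144 blocks.     */
/** Inode region:  blocks 39..6143                      = 6105 blocks,       */
/**                6105 * (1024 / 128) slots per block  = 48840 inode slots. */
/** Inode bitmap:  48840 bits / 8192 bits per block     = 5.96 -> 6 blocks.  */
/** Data region:   blocks 6144..262143                  = 256000 blocks.     */
/** Data bitmap:   256000 bits / 8192 bits per block    = 31.25 -> 32 blocks. */
/** Layout start:  1 (superblock) + 6 + 32              = block 39, matching */
/**                where the inode blocks begin.                             */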
The first block is a superblock which stores overall layout information of the file system on that disk. Definition @ src/filesys/vsfs.h:
/**
* VSFS of Hux has the following on-disk layout:
*
* - Block 0 is the superblock holding meta information of the FS;
* - Block 1~6 are the inode slots bitmap;
* - Block 7~38 are the data blocks bitmap;
* - Blocks 39~6143 (up to 6 MiB offset) are inode blocks;
* - All the rest blocks up to 256 MiB are data blocks.
*
* * Block size is 1 KiB = 2 disk sectors
* * Inode structure is 128 bytes, so an inode block has 8 slots
* * File system size is 256 MiB = 262144 blocks
* * Inode 0 is the root path directory "/"
*
* The mkfs script builds an initial VSFS disk image which should follow
* the above description.
*/
struct superblock {
uint32_t fs_blocks; /** Should be 262144. */
uint32_t inode_bitmap_start; /** Should be 1. */
uint32_t inode_bitmap_blocks; /** Should be 6. */
uint32_t data_bitmap_start; /** Should be 7. */
uint32_t data_bitmap_blocks; /** Should be 32. */
uint32_t inode_start; /** Should be 39. */
uint32_t inode_blocks; /** Should be 6105. */
uint32_t data_start; /** Should be 6144. */
uint32_t data_blocks; /** Should be 256000. */
} __attribute__((packed));
typedef struct superblock superblock_t;
Following the superblock is the inode bitmap that marks which inode slots are in-use/free. Following the inode bitmap is the data bitmap that marks which data blocks are in-use/free. They behave in the same way as the frame bitmap we used for memory management in paging, so we make a common library for them @ src/common/bitmap.h:
/** Bitmap is simply a contiguous array of bits. */
struct bitmap {
uint8_t *bits; /** Must ensure zero'ed out at initialization. */
uint32_t slots; /** Must be a multiple of 8. */
spinlock_t lock; /** Lock protecting this bitmap. */
};
typedef struct bitmap bitmap_t;
void bitmap_set(bitmap_t *bitmap, uint32_t slot_no);
void bitmap_clear(bitmap_t *bitmap, uint32_t slot_no);
bool bitmap_check(bitmap_t *bitmap, uint32_t slot_no);
uint32_t bitmap_alloc(bitmap_t *bitmap);
void bitmap_init(bitmap_t *bitmap, uint8_t *bits, uint32_t slots);
// src/common/bitmap.c
... // Implementation details omitted...
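Since the implementation is omitted above, here is a minimal sketch of what bitmap_alloc() might look like: scan for the first clear bit, set it, and return its slot number. The MSB-first bit ordering and the spinlock_acquire()/spinlock_release() names are my assumptions (the ordering matches how the bitmaps appear in the hexdump later), and the real Hux code may differ:
/**
 * Sketch only: allocate the first clear slot in the bitmap, set its bit,
 * and return its slot number. Returns `slots` itself if the bitmap is full.
 */
uint32_t
bitmap_alloc(bitmap_t *bitmap)
{
    spinlock_acquire(&bitmap->lock);
    for (uint32_t slot = 0; slot < bitmap->slots; ++slot) {
        uint8_t byte = bitmap->bits[slot / 8];
        uint8_t mask = (uint8_t) (1 << (7 - (slot % 8)));   /** MSB-first. */
        if ((byte & mask) == 0) {
            bitmap->bits[slot / 8] = byte | mask;           /** Mark in-use. */
            spinlock_release(&bitmap->lock);
            return slot;
        }
    }
    spinlock_release(&bitmap->lock);
    return bitmap->slots;   /** No free slot available. */
}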
At FS initialization, the first step would be to read out the superblock and the two bitmaps from disk. Code @ src/filesys/vsfs.c:
/** In-memory runtime copy of the FS meta-data structures. */
superblock_t superblock;
static bitmap_t inode_bitmap;
static bitmap_t data_bitmap;
/**
* Initialize the file system by reading out the image from the
* IDE disk and parse according to VSFS layout.
*/
void
filesys_init(void)
{
/** Block 0 must be the superblock. */
if (!block_read_at_boot((char *) &superblock, 0, sizeof(superblock_t)))
error("filesys_init: failed to read superblock from disk");
/**
* Currently the VSFS layout is hard-defined, so just do asserts here
* to ensure that the mkfs script followed the expected layout. In real
* systems, mkfs should have the flexibility to generate FS image as
* long as the layout is valid, and we read out the actual layout here.
*/
assert(superblock.fs_blocks == 262144);
assert(superblock.inode_bitmap_start == 1);
assert(superblock.inode_bitmap_blocks == 6);
assert(superblock.data_bitmap_start == 7);
assert(superblock.data_bitmap_blocks == 32);
assert(superblock.inode_start == 39);
assert(superblock.inode_blocks == 6105);
assert(superblock.data_start == 6144);
assert(superblock.data_blocks == 256000);
/** Read in the two bitmaps into memory. */
uint32_t num_inodes = superblock.inode_blocks * (BLOCK_SIZE / INODE_SIZE);
uint32_t *inode_bits = (uint32_t *) kalloc(num_inodes / 8);
bitmap_init(&inode_bitmap, inode_bits, num_inodes);
if (!block_read_at_boot((char *) inode_bitmap.bits,
superblock.inode_bitmap_start * BLOCK_SIZE,
num_inodes / 8)) {
error("filesys_init: failed to read inode bitmap from disk");
}
uint32_t num_dblocks = superblock.data_blocks;
uint32_t *data_bits = (uint32_t *) kalloc(num_dblocks / 8);
bitmap_init(&data_bitmap, data_bits, num_dblocks);
if (!block_read_at_boot((char *) data_bitmap.bits,
superblock.data_bitmap_start * BLOCK_SIZE,
num_dblocks / 8)) {
error("filesys_init: failed to read data bitmap from disk");
}
/** Fill open file table and inode table with empty slots. */
for (size_t i = 0; i < MAX_OPEN_FILES; ++i) {
ftable[i].ref_cnt = 0; /** Indicates UNUSED. */
ftable[i].readable = false;
ftable[i].writable = false;
ftable[i].inode = NULL;
ftable[i].offset = 0;
}
spinlock_init(&ftable_lock, "ftable_lock");
for (size_t i = 0; i < MAX_MEM_INODES; ++i) {
icache[i].ref_cnt = 0; /** Indicates UNUSED. */
icache[i].inumber = 0;
parklock_init(&(icache[i].lock), "inode's parklock");
}
spinlock_init(&icache_lock, "icache_lock");
}
Note that the two bitmaps have different granularity: a bit in the inode bitmap marks a 128-byte inode slot, while a bit in the data bitmap marks a 1KiB block.
A disk image must be formatted into the correct initial FS layout to become loadable into the system. We previously left a dummy filesys target in the Makefile which merely produces blocks of zeros. Since we have defined the FS layout, it's time to write a real mkfs script @ scripts/mkfs.py:
... # Omitted due to lack of space...
def build_dtree(bins):
    """
    Build the directory tree out of what's under `user/`. The `dtree` is a
    dict of:
        string name -> 2-list [inumber, element]
    , where element could be:
        - Raw bytes for regular file
        - A `dict` for directory, which recurses on
    """
    def next_inumber():
        """
        Allocate the next available inumber.
        """
        global curr_inumber
        inumber = curr_inumber
        curr_inumber += 1
        return inumber

    for b in bins:
        bpath = Path(b)
        if not bpath.is_file():
            print("Error: user binary '{}' is not a regular file".format(b))
            exit(1)

        parts = PurePath(b).parts
        parents = parts[1:-1]
        binary = parts[-1]
        if parts[0] != "user":
            print("Error: user binary '{}' is not under 'user/'".format(b))
            exit(1)
        if not binary.endswith(".bin"):
            print("Error: user binary '{}' does not end with '.bin'".format(b))
            exit(1)
        binary = binary[:-4]

        curr_dir = dtree
        for d in parents:
            if d not in curr_dir:
                curr_dir[d] = [next_inumber(), dict()]
            curr_dir = curr_dir[d][1]

        with bpath.open(mode='br') as bfile:
            curr_dir[binary] = [next_inumber(), bytearray(bfile.read())]

def solidize_dtree():
    """
    Solidize the directory tree into the file system image bytearray.
    Runs a pre-order depth-first search (pre-DFS) over the tree, so that
    files inside a directory will be added after the directory.
    """
    def solidize_elem(elem, my_inumber, parent_inumber):
        """
        Recursively solidizes an element. If ELEM is a bytearray, it is
        a regular file. If ELEM is a dict, it is a directory and should
        recurse deeper if the directory contains things.
        """
        if isinstance(elem, bytearray):
            add_file(elem, my_inumber)
        elif isinstance(elem, dict):
            # First put the directory.
            add_dir(elem, my_inumber, parent_inumber)
            # Then the children so that a sub-directory would know '..'s inumber.
            for key, val in elem.items():
                solidize_elem(val[1], val[0], my_inumber)
        else:
            print("Error: unrecognized element type {}".format(type(elem)))
            exit(1)

    solidize_elem(dtree, 0, 0)  # "/"s '..' also points to "/"

def main():
    if len(sys.argv) < 2:
        print("Usage: python3 {} output_name.img [user_binaries]".format(sys.argv[0]))
        exit(1)
    output_img = sys.argv[1]
    user_binaries = sys.argv[2:]

    if os.path.isfile(output_img):
        print("WARN: output image file '{}' exists, skipping!".format(output_img))
        exit(0)

    # Build the initial file system image.
    gen_superblock()
    build_dtree(user_binaries)
    solidize_dtree()
    gen_bitmaps()

    print("Initial FS image directory tree:")
    MyPrinter().pprint(dtree)

    # Flush the image bytes to output file.
    with open(output_img, mode='bw') as output_file:
        output_file.write(img)
I chose Python for its simplicity. The script takes a list of files we want to put into the initial file system, translates them into a directory tree rooted at "/", and adds in the files through a recursive pre-order depth-first search (pre-DFS). The reason for doing pre-DFS is that, when adding a directory, it must know the inumber for the ".." entry, which is the inumber of its parent. Please refer to the script for the complete implementation.
The Makefile target should now be:
.PHONY: filesys
filesys:
	@echo
	@echo $(HUX_MSG) "Making the file system image..."
	python3 scripts/mkfs.py $(FILESYS_IMG) $(USER_LINKEDS)

.PHONY: clean
clean:
	@echo
	@echo $(HUX_MSG) "Cleaning the build..."
	rm -f $(S_OBJECTS) $(C_OBJECTS) $(ULIB_S_OBJECTS) $(ULIB_C_OBJECTS) \
	      $(INIT_OBJECT) $(INIT_LINKED) $(INIT_BINARY) \
	      $(USER_OBJECTS) $(USER_LINKEDS) $(FILESYS_IMG) \
	      $(TARGET_BIN) $(TARGET_ISO) $(TARGET_SYM)
Let's run the mkfs script over a test directory tree to see if it produces the correct VSFS image! We don't have user binaries other than init under user/ yet, so just run the script over an artificial directory tree:
$ tree user/
user
├── cat.bin
├── ls.bin
├── nano.bin
├── shell.bin
├── tests
│   ├── forktest.bin
│   └── proctest.bin
└── vim.bin
$ python3 mkfs.py vsfs.img ... # List of files...
Initial FS image directory tree:
{'cat': [1, bytearray(b'')],
'ls': [2, bytearray(b'')],
'nano': [6, bytearray(b'')],
'shell': [7, bytearray(b'')],
'tests': [3,
{'forktest': [4, # This file has content, others don't
bytearray(b'MN\x10\x9e\x07...')],
'proctest': [5, bytearray(b'')]}],
'vim': [8, bytearray(b'')]}
If we examine the produced image file with hexdump (note that Intel x86 is little-endian, so a group of four bytes like 00 e8 03 00 read as an integer is 0x0003e800):
$ hexdump -C vsfs.img # Use `-C` to get byte-by-byte dump.
It should output something like:
Addr Content
00000000 00 00 04 00 01 00 00 00 06 00 00 00 07 00 00 00 |................| # Superblock
00000010 20 00 00 00 27 00 00 00 d9 17 00 00 00 18 00 00 | ...'...........|
00000020 00 e8 03 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000400 ff 80 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode bitmap
*
00001c00 f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Data bitmap
*
00009c00 02 00 00 00 00 04 00 00 00 00 60 00 00 00 00 00 |..........`.....| # Inode 0 (/)
*
00009c80 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 1 (cat)
*
00009d00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 2 (ls)
*
00009d80 02 00 00 00 00 02 00 00 00 04 60 00 00 00 00 00 |..........`.....| # Inode 3 (tests/)
*
00009e00 01 00 00 00 00 08 00 00 00 08 60 00 00 0c 60 00 |..........`...`.| # Inode 4 (forktest)
*
00009e80 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 5 (proctest)
*
00009f00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 6 (nano)
*
00009f80 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 7 (shell)
*
0000a000 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| # Inode 8 (vim)
*
00600000 00 00 00 00 2e 00 00 00 00 00 00 00 00 00 00 00 |................| # Data / dentry 0
*
00600080 00 00 00 00 2e 2e 00 00 00 00 00 00 00 00 00 00 |................| # Data / dentry 1
*
00600100 01 00 00 00 63 61 74 00 00 00 00 00 00 00 00 00 |....cat.........| # Data / dentry 2
*
00600180 02 00 00 00 6c 73 00 00 00 00 00 00 00 00 00 00 |....ls..........| # Data / dentry 3
*
00600200 03 00 00 00 74 65 73 74 73 00 00 00 00 00 00 00 |....tests.......| # Data / dentry 4
*
00600280 06 00 00 00 6e 61 6e 6f 00 00 00 00 00 00 00 00 |....nano........| # Data / dentry 5
*
00600300 07 00 00 00 73 68 65 6c 6c 00 00 00 00 00 00 00 |....shell.......| # Data / dentry 6
*
00600380 08 00 00 00 76 69 6d 00 00 00 00 00 00 00 00 00 |....vim.........| # Data / dentry 7
*
00600400 03 00 00 00 2e 00 00 00 00 00 00 00 00 00 00 00 |................| # Data tests/ dentry 0
*
00600480 00 00 00 00 2e 2e 00 00 00 00 00 00 00 00 00 00 |................| # Data tests/ dentry 1
*
00600500 04 00 00 00 66 6f 72 6b 74 65 73 74 00 00 00 00 |....forktest....| # Data tests/ dentry 2
*
00600580 05 00 00 00 70 72 6f 63 74 65 73 74 00 00 00 00 |....proctest....| # Data tests/ dentry 3
*
00600800 4d 4e 10 9e 07 47 0f 3f 1f 54 f9 25 c3 3e ed 50 |MN...G.?.T.%.>.P| # Data of forktest
00600810 48 57 05 7a 3f 1f bb 73 68 69 2c 32 e5 9f 0d a6 |HW.z?..shi,2....|
...
Current repo structure:
hux-kernel
├── Makefile
├── scripts
│   ├── gdb_init
│   ├── grub.cfg
│   ├── kernel.ld
│   └── mkfs.py
├── src
│   ├── boot
│   │   ├── boot.s
│   │   ├── elf.h
│   │   └── multiboot.h
│   ├── common
│   │   ├── bitmap.c
│   │   ├── bitmap.h
│   │   ├── debug.c
│   │   ├── debug.h
│   │   ├── intstate.c
│   │   ├── intstate.h
│   │   ├── parklock.c
│   │   ├── parklock.h
│   │   ├── port.c
│   │   ├── port.h
│   │   ├── printf.c
│   │   ├── printf.h
│   │   ├── spinlock.c
│   │   ├── spinlock.h
│   │   ├── string.c
│   │   ├── string.h
│   │   ├── types.c
│   │   └── types.h
│   ├── device
│   │   ├── idedisk.c
│   │   ├── idedisk.h
│   │   ├── keyboard.c
│   │   ├── keyboard.h
│   │   ├── sysdev.c
│   │   ├── sysdev.h
│   │   ├── timer.c
│   │   └── timer.h
│   ├── display
│   │   ├── sysdisp.c
│   │   ├── sysdisp.h
│   │   ├── terminal.c
│   │   ├── terminal.h
│   │   └── vga.h
│   ├── filesys
│   │   ├── block.c
│   │   ├── block.h
│   │   ├── file.c
│   │   ├── file.h
│   │   ├── vsfs.c
│   │   └── vsfs.h
│   ├── interrupt
│   │   ├── idt-load.s
│   │   ├── idt.c
│   │   ├── idt.h
│   │   ├── isr-stub.s
│   │   ├── isr.c
│   │   ├── isr.h
│   │   ├── syscall.c
│   │   └── syscall.h
│   ├── memory
│   │   ├── gdt-load.s
│   │   ├── gdt.c
│   │   ├── gdt.h
│   │   ├── kheap.c
│   │   ├── kheap.h
│   │   ├── paging.c
│   │   ├── paging.h
│   │   ├── slabs.c
│   │   ├── slabs.h
│   │   ├── sysmem.c
│   │   └── sysmem.h
│   ├── process
│   │   ├── layout.h
│   │   ├── process.c
│   │   ├── process.h
│   │   ├── scheduler.c
│   │   ├── scheduler.h
│   │   ├── switch.s
│   │   ├── sysproc.c
│   │   └── sysproc.h
│   └── kernel.c
└── user
    ├── lib
    │   ├── debug.h
    │   ├── malloc.c
    │   ├── malloc.h
    │   ├── printf.c
    │   ├── printf.h
    │   ├── string.c
    │   ├── string.h
    │   ├── syscall.h
    │   ├── syscall.s
    │   ├── syslist.s
    │   ├── types.c
    │   └── types.h
    └── init.c
Guanzhou Jose Hu @ 2021