CS444 hw3 Multitasking Tiny-Unix Kernel

Due: Friday, Nov. 22

We are expanding the Tiny-Unix kernel to provide multitasking for three tiny-Unix programs.  The system as provided supports programs which use only the system calls write (to TTY0/1) and exit.  Here the console output port (COM2) is a device shared among the three processes.  As in hw1 and hw2, we are using a small output buffer (6-char capacity) accessed via the queue package, and we allow a writer to proceed once all of its string is at least in the buffer.  When the output buffer is full, we make the process block in write, giving up the CPU for other processes to use.  No more spin loops in write in the kernel!  This is a producer-consumer situation, with 3 producers filling the output buffer and one consumer, the output interrupt handler, consuming the characters.  As in hw2, the output is interrupt-driven.

Added. For single-person projects, implement a non-preemptive scheduler.  For two-person projects, implement a preemptive scheduler.  A non-preemptive scheduler never calls schedule from an interrupt handler, so a user program that loops forever hangs the system.  A preemptive scheduler counts down ticks for each process and calls schedule from the tick handler when the process's quantum is exhausted.
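
For the preemptive (two-person) option, the quantum countdown in the tick handler might look roughly like the sketch below.  This is a sketch only: p_quantum would be one of the "few fields" added to the process entry (see below), and QUANTUM_TICKS and preempt_check are assumed names, not part of the provided sources.

/* Sketch only: quantum countdown for the preemptive variant. */
#include "proc.h"    /* proctab, curproc, PEntry (assumed names) */
#include "sched.h"   /* schedule */

#define QUANTUM_TICKS 100            /* ticks per time slice (assumed) */

void preempt_check(void)             /* call this from the tick handler */
{
    if (curproc == &proctab[0])      /* never preempt process 0 */
        return;
    if (--curproc->p_quantum <= 0) { /* p_quantum: assumed added PEntry field */
        curproc->p_quantum = QUANTUM_TICKS;
        schedule();                  /* give another RUN process the CPU */
    }
}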

Let us adopt a common process entry as follows.  You can add a few fields at the end of it as desired. 

/* Process Entry in proc.h */
typedef struct {
   int p_savedregs[N_SAVED_REGS]; /* saved non-scratch registers */
   ProcStatus p_status;   /* RUN, BLOCKED, or ZOMBIE */
   WaitCode p_waitcode;   /* valid only if status=BLOCKED: TTY0_OUT, etc. */
   int p_exitval;         /* valid only if status=ZOMBIE */
} PEntry;
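
The supporting declarations that accompany this typedef in proc.h might look roughly like the following sketch.  The counts and enumerator names here are assumptions consistent with this handout, not a listing of the provided header.

/* Sketch only: plausible companions to PEntry in proc.h (names assumed). */
#define NPROC 4                      /* process 0 plus three user processes */
#define N_SAVED_REGS 4               /* goes before the typedef; must match asmswtch.s */

typedef enum { RUN, BLOCKED, ZOMBIE } ProcStatus;
typedef enum { OUTPUT, INPUT, SLEEPING } WaitCode;  /* wait channels (TTY0_OUT, etc.) */

extern PEntry proctab[NPROC];        /* defined in tunix.c */
extern PEntry *curproc;              /* the currently running process */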

We will follow UNIX/Linux in reserving process 0 for the kernel itself, and require that it never block, so that it is ready at all times to run if no user process has anything to do: the "null" process or "idle" process.  The description of the null process is below.

As in hw2, we will have the user process running in supervisor mode, and let the kernel stack grow on top of the user stack, for each process.  For hw3, we need separate stacks for each of the three processes.

We will use a non-preemptive scheduler (see the added note above for the preemptive two-person variant), one which never grabs the CPU away from a process which is runnable, i.e., never preempts a process.  Thus process switching will occur only when a process blocks (for output or input or in the system call sleep) or exits.  After a process exits, it is a "zombie" in UNIX/Linux parlance: all that is really left of it is its exit value, waiting to be picked up by its parent.  Here we don't have a parent process, so we simply print out the exit values with the shutdown message of the kernel, after all three processes have completed.

Shared between user and kernel:
tsyscall.h: syscall numbers, as in hw2
tty_public.h: TTY device numbers, as in hw2

Kernel files:
ioconf.h, io.h, tty.h: i/o headers used only by kernel, as in hw2
tsystm.h: syscall dispatch, kernel fn prototypes, add to as appropriate
timer.h, timer.c: handle ticks

startup0.s, startup1.c, sysentry.s:  assembly & C startup, syscall handler, as in hw2
tunix.c:  kernel startup, shutdown, C trap handler, much like hw2
sched.[ch]:  scheduler code: functions schedule, tsleep, twakeup.
asmswtch.s:  process switch, in assembly, provided.
ioconf.c, io.c:  device indep. i/o system, as in hw2
tty.c: terminal driver, from hw2 but changed to block, unblock

User-level files:
tunistd.h: syscall prototypes, as in hw2

crt01.s: as in hw2 crt0.s, but calls main1, starts at _ustart1
crt02.s, crt03.s: calling main2, main3, starting at _ustart2, _ustart3.
ulib.s:  same as hw2, plus another function for the new system call sleep
uprog1.c uprog2.c, uprog3.c: tiny-UNIX programs starting at main1, main2, main3,
   using only write and exit system calls
vprog1.c vprog2.c, vprog3.c: tiny-UNIX programs starting at main1, main2, main3,
   using write, exit, and sleep
wprog1.c wprog2.c, wprog3.c: tiny-UNIX programs starting at main1, main2, main3,
   using write, exit, sleep, and read

As in hw2, each user program setup (here a bundle of 3 user programs) to be run on the SAPC has to be separately built with tunix, then downloaded and run.  startup0.s and then startup1.c execute and call the kernel initialization, which sets up the system with the three processes marked as RUN, initializes their saved-pc entries (to ustart1, ustart2, or ustart3, respectively), their saved-esp entries to three different stack areas, and their saved-ebp entries to 0 (this controls backtrace during debugging).  Kernel initialization then transfers control to the scheduler, which looks for a runnable process and starts it via asmswtch.
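
That initialization might look roughly like the sketch below.  The SAVED_* slot indices and the stack constants are assumptions here; the real layout of p_savedregs (and which registers it holds) must agree with asmswtch.s.

/* Sketch only: setting up the three user processes in kernel init (tunix.c). */
#include "proc.h"

#define SAVED_PC  0                  /* assumed slots in p_savedregs */
#define SAVED_ESP 1
#define SAVED_EBP 2
#define USER_STACK_TOP  0x3f0000     /* assumed addresses: one stack area per process */
#define USER_STACK_SIZE 0x10000

extern void ustart1(void), ustart2(void), ustart3(void);  /* entry points in crt0x.s */

void init_proctab(void)
{
    void (*entry[3])(void) = { ustart1, ustart2, ustart3 };
    int i;

    curproc = &proctab[0];           /* the running startup code is process 0 */
    proctab[0].p_status = RUN;
    for (i = 1; i <= 3; i++) {
        proctab[i].p_status = RUN;
        proctab[i].p_savedregs[SAVED_PC]  = (int)entry[i - 1];
        proctab[i].p_savedregs[SAVED_ESP] = USER_STACK_TOP - (i - 1) * USER_STACK_SIZE;
        proctab[i].p_savedregs[SAVED_EBP] = 0;   /* 0 here stops debugger backtraces */
    }
}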

The kernel sets ESP for each user process.  The C user startup module calls the respective main (main1, main2, or main3).  The syscalls in the user code cause execution to enter the kernel, which eventually returns to the user code.  When ttywrite encounters a full output buffer, it blocks the process by calling tsleep(OUTPUT) in the scheduler.  The scheduler saves that process's context and chooses another process to run, using the assembly code in asmswtch.s to do the tricky stuff.  When asmswtch returns, the system should be running in the new process.
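
The producer side in tty.c might then look roughly like this sketch.  The struct tty member names, cli()/sti(), and the queue calls here stand in for whatever the provided tty.h and queue package actually supply; OUTPUT is the wait channel used in this handout.

/* Sketch only: blocking instead of spinning when the output queue is full. */
#include "tty.h"     /* struct tty and its output queue (assumed layout) */
#include "sched.h"   /* tsleep */

void ttywrite_queuechar(struct tty *tp, int ch)
{
    cli();                            /* keep the interrupt handler out of the queue */
    while (queue_is_full(&tp->outq))  /* full: give up the CPU instead of spinning */
        tsleep(OUTPUT);               /* recheck the queue after being woken */
    queue_put(&tp->outq, ch);
    sti();
}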

As the output drains at interrupt level (i.e., via the chain of runs of the interrupt handler) and a spot becomes available in the output queue, invoke twakeup(OUTPUT), which makes all of the processes blocked on OUTPUT runnable (the traditional UNIX wakeup algorithm).  Finally the user code does an exit system call.  The kernel gets control, sets the process state to ZOMBIE, and calls schedule, which finds another process to run.
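
In sched.c, the tsleep/twakeup pair can be quite small, as in this sketch (assuming the proctab, curproc, and schedule names used in this handout):

/* Sketch only: block the current process / wake all processes on a channel. */
#include "proc.h"

void tsleep(WaitCode waitcode)        /* call with interrupts off */
{
    curproc->p_status = BLOCKED;
    curproc->p_waitcode = waitcode;
    schedule();                       /* returns only after twakeup marks us RUN
                                         and the scheduler picks us again */
}

void twakeup(WaitCode waitcode)       /* classic UNIX wakeup: wake them all */
{
    int i;

    for (i = 1; i < NPROC; i++)
        if (proctab[i].p_status == BLOCKED && proctab[i].p_waitcode == waitcode)
            proctab[i].p_status = RUN;
}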

Process 0: this is the first process to run; that is, the startup code turns itself into process 0 by filling in its own PEntry.  Once the initialization phase is complete, process 0 settles down to a loop calling the scheduler, trying to find and schedule a runnable user process; if there are none, it keeps looping.  The loop ends when the number of ZOMBIE processes equals the number of user processes.  In addition to calling the scheduler, it turns interrupts on in the loop and then off again (if interrupts are never turned on, the output queue can't be drained and blocked processes will remain blocked).  The scheduler should be called with interrupts off, as it is clearly critical-section code.  Once the main loop ends, process 0 should go into a wait loop to make sure the output queue drains, and then do some finishing up, printing the exit value of each user process.
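
A sketch of that main loop (zombies(), cli(), and sti() here are stand-ins for whatever the provided sources supply):

/* Sketch only: process 0's loop in tunix.c. */
#include "proc.h"
#include "sched.h"

static int zombies(void)              /* count finished user processes */
{
    int i, n = 0;

    for (i = 1; i <= 3; i++)
        if (proctab[i].p_status == ZOMBIE)
            n++;
    return n;
}

void process0_loop(void)
{
    while (zombies() < 3) {
        cli();                        /* scheduler is critical-section code */
        schedule();                   /* run a user process if any is RUN */
        sti();                        /* let output interrupts drain the queue */
    }
    /* then: wait for the output queue to drain, print each p_exitval with
       the shutdown message, and shut down */
}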

As provided, the scheduler runs one user process at a time, along with process 0, which runs when the current user process is blocked. You will recode the scheduler to allow it to interleave user process execution as is normal for timesharing. 
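
A non-preemptive schedule along the lines described in step 4 of the suggested steps might look like this sketch.  The C-level prototype for asmswtch is an assumption, and a preemptive version would want a fairer (round-robin) search order.

/* Sketch only: pick the next RUN process and switch to it. */
#include "proc.h"

void asmswtch(PEntry *oldp, PEntry *newp);   /* process switch, in asmswtch.s */

void schedule(void)                          /* call with interrupts off */
{
    PEntry *oldproc = curproc;
    int i;

    for (i = 1; i <= 3; i++)                 /* scan proc 1, then 2, then 3 */
        if (proctab[i].p_status == RUN) {
            if (&proctab[i] == oldproc)
                return;                      /* already running the only RUN process */
            curproc = &proctab[i];
            asmswtch(oldproc, curproc);      /* returns when we are next chosen */
            return;
        }
    curproc = &proctab[0];                   /* nothing runnable: back to process 0 */
    if (curproc != oldproc)
        asmswtch(oldproc, curproc);
}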

Once you have the scheduler working properly, provide a sleep system call, sleep(n), that blocks the user process for n milliseconds and then unblocks it.  Also, make read block instead of busy-looping.
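
One possible shape for the kernel side of sleep, to go with step 5 of the suggested steps below.  This is a sketch: p_wakeuptime is the assumed name of the new process-entry field, SLEEPING an assumed wait code, and systime_ms an assumed name for the system-time counter kept by the tick handler.

/* Sketch only: sleep(ms) support in the kernel. */
#include "proc.h"
#include "sched.h"

extern volatile int systime_ms;       /* milliseconds since boot (assumed name) */

void ksleep(int ms)                   /* reached via the syscall dispatcher */
{
    cli();
    curproc->p_wakeuptime = systime_ms + ms;
    tsleep(SLEEPING);                 /* block until the tick handler wakes us */
    sti();
}

void wake_sleepers(void)              /* call from the tick handler after it bumps systime_ms */
{
    int i;

    for (i = 1; i <= 3; i++)
        if (proctab[i].p_status == BLOCKED &&
            proctab[i].p_waitcode == SLEEPING &&
            proctab[i].p_wakeuptime <= systime_ms)
            proctab[i].p_status = RUN;
}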

Suggested steps

1. Start from the provided sources. Note that the provided system has a "debuglog" service that writes notes to memory, and prints out the log when the kernel shuts down.  Note that you can add your own entries in the log to figure out what's happening. You can build an executable from the provided sources with "make U=uprog".  Study how this works--but note it only runs one user process at a time, and only supports the write and exit system calls, used in uprog1.c, uprog2.c, and uprog3.c, the first set of user programs.

2. Note that the provided sources compile, but with warnings. Fix the warnings about ticks by adding an include directive.  The warning about schedule_one will disappear when you replace schedule_one with schedule.

3. In tty.c, add calls to block the current process when the output queue is full and to unblock all blocked processes when space becomes available in the output queue.

4. To make the scheduler work with multiple user processes, make the function schedule (the replacement for the temporary schedule_one function) loop through the proctab until it finds a RUN user process (1, then 2, then 3), and then call asmswtch with old-process = proc 0, new-process = proc 1 to switch from proc 0 to proc 1 the first time.  Then, when proc 1 blocks, proc 2 runs and blocks, proc 3 runs and loops, then enqueues some output and blocks, so proc 1 runs again, and so on.  Eventually the scheduler calls asmswtch with old-process = proc x, new-process = proc 0 to switch back to proc 0 when no more RUN user processes exist.  Then the system shuts down using process 0.

5. Note that the tick handler is working, so it is easy to support the idea of system time by incrementing a global variable, already added to tunix.c.  Make it count up in milliseconds by editing the tick handler.  Use this notion of system time to implement a sleep(int ms) system call for your processes.  The process that calls sleep should block for the requested number of milliseconds (or as close as possible).  You will need to add a data member to the process entry for this.  Use the vprogx.c (x = 1,2,3) series to test this.

6. Make read block instead of busy-waiting for input.  Use the wprogx.c series to test this.

For collection:
README--authorship, late days if needed
tsystm.h--possibly new prototypes here
tunix.c--kernel initialization, shutdown, process 0
sched.[ch]--scheduler, C part
timer.c--edits to the tick handler
tty.c--calls to tsleep and twakeup added