Class 15

 

Summary for Midterm Wednesday: it’s open book, open notes; you can bring the posted solutions and your own hw.

Note that practice midterm solutions are linked to the class web page.

 

 

Is there a theme connecting all the things we’ve covered so far?  I think so, as follows.  We have concentrated on the transitions, or handoffs, between the three major players: user code, kernel code, and hardware.  A major concern is user-kernel separation, needed to keep the system well understood and secure.  This separation is implemented by being very careful about transitions.  The user code is kept “bottled up” in its virtual machine.  System calls communicate a small amount of information (the syscall # and arguments, and the return value) across the user-kernel boundary and back.
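
To make that small amount of boundary-crossing information concrete, here is a minimal sketch assuming the 32-bit x86 Linux convention (syscall number in EAX, where 4 = write; arguments in EBX/ECX/EDX; int $0x80 traps into the kernel; return value back in EAX). The specific trap number and registers are kernel-dependent, so treat this as an illustration, not as the mechanism of any particular homework kernel.

    /* Sketch only: assumes the 32-bit x86 Linux system call convention. */
    #include <stddef.h>

    static long raw_write(int fd, const void *buf, size_t nbytes)
    {
        long ret;
        __asm__ volatile("int $0x80"                          /* trap into the kernel */
                         : "=a"(ret)                          /* return value comes back in EAX */
                         : "a"(4), "b"(fd), "c"(buf), "d"(nbytes)  /* 4 = write; args in EBX/ECX/EDX */
                         : "memory");
        return ret;
    }

    int main(void)
    {
        const char msg[] = "hello from a raw system call\n";
        raw_write(1, msg, sizeof msg - 1);    /* fd 1 = standard output */
        return 0;
    }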

Note that at each point in time the CPU (or each CPU, on a multiprocessor) is running in either kernel mode or user mode.  It continues to run that way until a particular event takes place: an interrupt, execution of a system call, execution of iret (or its equivalent), or some exception (including page faults, which we will study later).

 

Hardware-user code relationship

 

Hardware-kernel code relationship

 

 

User-Kernel relationship

 

Hardware/Software terms that sometimes cause confusion on exams

 

instruction:  like mov, push, in, out, etc.; these belong to the CPU’s “instruction set” and are used in assembly language

register:  an array of hardware bits, part of the CPU, memory, or an i/o device; holds a value in its bits

CPU register:  like eax, ecx, esp, etc. for x86.

device register:  like the UART’s transmit register, receiver register, or LSR (line status register)

port or i/o port: a 16-bit number providing an address for a device register, in the x86 architecture.  Ex: 0x2f8 for COM2’s transmit register (see the sketch after this list).

memory address: a 32-bit address of a byte of memory, in a separate address space from the i/o ports.

interrupt vector: the address of the interrupt handler, held in IDT[nn]; it specifies the entry point of the assembly-language interrupt handler

command: string used to tell a program what to do, ex: “ls” is a shell command

system call: has two meanings: a system API call such as write(fd, buf, nbytes), or execution of a trap instruction (int, ta)

user stack: one for each thread, holds execution state of user code.

kernel stack: one for each thread, holds execution state of current system call execution, or is empty

interrupt stack: we assume this is built on top of the kernel stack of the thread executing at the time of the interrupt.
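
To tie the instruction, i/o port, and device register terms together, here is a rough user-level sketch of polled output on COM2. It assumes Linux’s <sys/io.h> inb()/outb() wrappers and ioperm() and needs root; hw1’s own wrappers may differ in name and argument order. Port 0x2f8 is COM2’s base; the transmit register is at offset 0 and the LSR at offset 5.

    /* Sketch only: Linux port-i/o helpers assumed; run as root. */
    #include <stdio.h>
    #include <sys/io.h>

    #define COM2_BASE 0x2f8               /* i/o port base for COM2 */
    #define COM2_THR  (COM2_BASE + 0)     /* transmit (holding) register */
    #define COM2_LSR  (COM2_BASE + 5)     /* line status register */
    #define LSR_THRE  0x20                /* "transmit holding register empty" bit */

    int main(void)
    {
        if (ioperm(COM2_BASE, 8, 1) < 0) {    /* ask for access to ports 0x2f8-0x2ff */
            perror("ioperm");
            return 1;
        }
        while ((inb(COM2_LSR) & LSR_THRE) == 0)   /* poll the LSR with the in instruction */
            ;                                     /* busy-wait until the transmitter is ready */
        outb('A', COM2_THR);                      /* send one character out COM2 */
        return 0;
    }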

 

 

 

 

 

 

Midterm Reading: the midterm is open book, open notes, open handouts, and open solutions (yours or mine or both).

 

Tanenbaum:

From class 1: Chap 1, specifically, Sec. 1.1, [1.2 optional history], 1.4, 1.5, 1.6, 1.8

From class 10: Chap 2, Sections 2.1 and 2.2: all to pg. 106, skip 2.2.4, read pp. 109-110, skip 2.2.6, 2.2.7, 2.2.8, read pg. 114 to the end of 2.2.

 

From class 13: Sec 2.3 to Sec. 2.3.4, pg. 125

Sec. 2.3.5 Semaphores, to pg. 131 (stop at user level threads)

Sec. 2.3.7 Monitors, to pg. 137 (stop before condition variables)

Sec. 2.3.8 Message Passing, to pg. 144

Note we are skipping user-level threads, condition variables, and monitors that explicitly sleep. We are covering monitors that provide mutex. We use semaphores for long-term blocking, such as blocking the producer on a full buffer or the consumer on an empty buffer.

 

Also: Skip 2.3.9 Barriers

 

From class 14: Sec 2.4 Scheduling to just before Categories of Scheduling Algorithms, pg 149.

Producer-Consumer Implementations

The basic program is on pg. 130, but it assumes semaphore initialization and thread creation. Still, it is meaningful, and in fact pthreads has a mutex type for C programming that does self-initialize (see the added note in class 13). Be sure you know how this program runs.
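
For reference, the self-initializing pthreads mutex mentioned in the class 13 note looks roughly like this; the variable and function names here are just for illustration.

    #include <pthread.h>

    /* Statically initialized: no pthread_mutex_init() call is needed. */
    static pthread_mutex_t buffer_lock = PTHREAD_MUTEX_INITIALIZER;

    static int count;                     /* example shared variable */

    void add_one(void)                    /* callable from any thread */
    {
        pthread_mutex_lock(&buffer_lock);
        count++;                          /* critical section */
        pthread_mutex_unlock(&buffer_lock);
    }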

Tanenbaum has two Java programs for producer-consumer: one on pg. 135 that uses condition variables, so we’re skipping it, and one on pg. 141 that uses a monitor that explicitly sleeps, so we’re skipping that too.

Instead, we looked at a straightforward Java program using Semaphores, in handout CS444 Producer Consumer Program using Java Semaphores. It uses two semaphores for blocking and a third semaphore for the needed mutex to protect the integrity of the shared buffer. The shared buffer is implemented by a Queue<Integer> based on a LinkedList<Integer>.
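
For comparison, here is a rough C analogue of the handout’s approach, using POSIX semaphores (sem_t) and a ring buffer of ints in place of Java Semaphores and a Queue<Integer>. The buffer size, item count, and names are illustrative, not taken from the handout; the structure is the same: two semaphores block the producer/consumer on full/empty, and a third provides the mutex protecting the shared buffer.

    /* Sketch only; build with: gcc -pthread prodcons.c */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                        /* buffer capacity (illustrative) */

    static int buf[N];
    static int in, out;                /* next slot to fill / to empty */

    static sem_t empty_slots;          /* counts free slots, starts at N  */
    static sem_t full_slots;           /* counts filled slots, starts at 0 */
    static sem_t mutex;                /* binary semaphore guarding buf/in/out */

    static void *producer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&empty_slots);    /* block if the buffer is full */
            sem_wait(&mutex);
            buf[in] = i;
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full_slots);     /* wake a blocked consumer, if any */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full_slots);     /* block if the buffer is empty */
            sem_wait(&mutex);
            int item = buf[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty_slots);    /* report a newly freed slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty_slots, 0, N);
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }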

Also Read

Chap. 5 to pg. 336. We are using i/o instructions, not memory-mapped i/o.

Skip Sec. 5.1.4, DMA 

Read Sec. 5.1.5. Correct the typo on pg. 339 referring back to Chap. 1: should be 1.3.5, not 1.4.5.

Read Sec. 5.2 except the part on DMA. Note that the code uses memory-mapped i/o, so where you see “*printer_status_register != READY”, replace it with an inb to the printer status register, followed by testing the resulting value in EAX.
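
As a sketch of that replacement (the port number and status bit below are illustrative, not taken from the book or the hw code), the memory-mapped test becomes an inb plus a bit test; the compiled inb is an in instruction whose result lands in AL, the low byte of EAX, which the test then examines.

    /* Sketch only: assumes Linux-style inb() from <sys/io.h>. */
    #include <sys/io.h>

    #define PRINTER_STATUS_PORT 0x379     /* illustrative port number */
    #define PRINTER_READY       0x80      /* illustrative "ready" status bit */

    void wait_for_printer(void)
    {
        /* was: while (*printer_status_register != READY) ; */
        while ((inb(PRINTER_STATUS_PORT) & PRINTER_READY) == 0)
            ;    /* busy-wait until the status register shows ready */
    }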

Read 5.3.1, Interrupt handlers

Read 5.3.2, Device drivers. We have a tty device driver in hw2.  Don’t worry about block devices yet. Our tty driver is a character device.

Read 5.3.3, Device-independent I/O Software: our setup for hw1-hw2 is a device-independent i/o package, usable for both serial and parallel ports as shown in hw1.

 

Chap. 10 Linux, Sec. 10.3.2, Process Management Calls in Linux.

Lecture notes through today.

 

hw1 ideas:

 

hw2 ideas: ideas only, not code details

 

 

Scheduling

We’ve already discussed CPU-bound (= compute-bound) processes, and similarly i/o-bound processes, as well as preemption and preemptive schedulers.

Scheduling is about sharing resources appropriately among processes.  There are 3 major resources to focus on:

Each process wants some fraction of the total resource in each direction, and if these fractions (in some direction) add up to more than 100%, then things slow down, even with good scheduling.

Tanenbaum concentrates on CPU scheduling here.  Let’s downplay batch and real-time systems and concentrate on Interactive scheduling.

Interactive systems have users inputting commands and expecting fast response. See Response Time, pg. 151.

Round-robin is the simplest scheme for interactive systems. We’ll use this in hw3.

Quantum = the CPU time a process/thread gets before it is preempted, about 50 ms (pg. 155: 20-50 ms).

Often a thread blocks before it reaches this point.

The quantum is set at about 50 ms as a compromise between throughput and response delay.
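
To make the round-robin mechanics concrete, here is a toy simulation (this is not hw3 code; the process count and CPU demands are made up): each ready process runs for at most one quantum, then goes to the back of the queue until its demand is used up.

    /* Toy round-robin simulation, for illustration only. */
    #include <stdio.h>

    #define QUANTUM 50                  /* ms, the compromise value from the text */
    #define NPROC   3

    int main(void)
    {
        int remaining[NPROC] = {120, 60, 180};   /* illustrative CPU demands, in ms */
        int queue[NPROC];                        /* ring buffer of ready process ids */
        int head = 0, tail = 0, unfinished = NPROC, clock = 0;

        for (int i = 0; i < NPROC; i++)          /* all processes start on the ready queue */
            queue[i] = i;

        while (unfinished > 0) {
            int pid = queue[head];               /* dispatch the process at the head */
            head = (head + 1) % NPROC;
            int run = remaining[pid] < QUANTUM ? remaining[pid] : QUANTUM;
            clock += run;
            remaining[pid] -= run;
            printf("t=%3d ms: ran P%d for %2d ms\n", clock, pid, run);
            if (remaining[pid] > 0) {            /* quantum expired: back of the queue */
                queue[tail] = pid;
                tail = (tail + 1) % NPROC;
            } else {                             /* done: leaves the ready queue */
                printf("t=%3d ms: P%d finished\n", clock, pid);
                unfinished--;
            }
        }
        return 0;
    }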

A preemption causes a “process switch” or “context switch”.  The second term is too vague, since context just means state, and it is sometimes used for just a mode switch, which happens in a system call.  A system call causes execution to go from user mode to kernel mode in the same process.  So it’s not a process switch, and in fact it is a much lighter-weight action than a process switch.  That’s a good thing, since system calls (and the mode switches in interrupts) happen much more often.

A process switch is a mysterious operation. Recall that the CPU is handed over to a process for its 50 ms, then an interrupt (or a system call) occurs and the kernel decides to hand the CPU over to another process.  But the kernel is using the CPU to make this decision!

Priority scheduling.  Note the UNIX “nice” command, which allows you to run a user process at a lowered priority.

Real-time systems—just know what they are.

Thread scheduling: In both Solaris and Win2K, threads are the primary scheduling unit.  For each thread, the process it belongs to is known, of course.