Dr. Mark Humphrys

School of Computing. Dublin City University.



Computer and OS Structure



Device speed

What do we mean when we say "I/O devices are slower than the CPU"? e.g. A single disk access takes milliseconds, during which a modern CPU could execute millions of instructions.




Interrupts

Modern OS driven by interrupts.

Originally - keep the CPU busy: use asynchronous I/O, don't have the CPU waiting on devices.

Now - also keep the CPU free (responsive to the user): again asynchronous I/O, no CPU waiting.

Interrupts are to do with the massive asymmetry between CPU speeds and device speeds. They are how the OS "program" runs on hardware with very slow devices. Instead of the hardware just running the OS stream of instructions, it runs it in fits and starts:

repeat at random intervals:
 hardware interrupt
 OS saves state
 OS runs interrupt handler

In this way the OS can run at a different speed to the hardware devices.
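
To make the "fits and starts" concrete, here is a minimal sketch in C of interrupt dispatch. The device numbers and handler names are invented for illustration; real hardware does the save/dispatch steps in circuitry, not in C:

#include <stdio.h>

typedef void (*handler_t)(void);

void timer_handler(void)    { printf("timer tick: maybe switch process\n"); }
void keyboard_handler(void) { printf("key pressed: read the character\n"); }

/* the interrupt vector: device number -> OS handler */
handler_t vector[] = { timer_handler, keyboard_handler };

void interrupt(int device)
{
 /* 1. hardware saves CPU state (program counter, registers) */
 /* 2. hardware jumps through the vector to the OS handler   */
 vector[device]();
 /* 3. OS restores state and resumes the interrupted program */
}

int main(void)
{
 interrupt(0); /* simulate a timer interrupt    */
 interrupt(1); /* simulate a keyboard interrupt */
 return 0;
}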




Examples of Interrupts




Interrupts sidebar - Infinite loops

Question - Does an infinite loop cause an interrupt?

The Halting Problem (Turing, 1936).

It might just be a long loop - in general we have no way of knowing (this is the Halting Problem). But either way, infinite loop or long loop, control must switch away occasionally - time-slicing. It is a timer interrupt, not the loop itself, that switches control. The loop runs forever, time-sliced.


Unsolved problems in mathematics.
If we could detect infinite loops, then we could solve all problems of the form:

 Does there exist a solution to f(n) for some n > t?

by asking the OS whether the following:

repeat
 n := n+1
 test if n is a solution
until solution found

is an infinite loop or just a long loop. Our OS could then solve many of the world's great mathematical problems.
Many mathematical problems can be phrased as infinite-loop problems.
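
For example, Goldbach's conjecture (still unproven) says that every even number >= 4 is the sum of two primes. Here is a minimal sketch in C of the loop above for that problem - this program halts if and only if the conjecture is false, so an OS that could detect infinite loops could settle the conjecture:

#include <stdio.h>

/* crude primality test by trial division */
int is_prime(long n)
{
 if (n < 2) return 0;
 for (long d = 2; d * d <= n; d++)
  if (n % d == 0) return 0;
 return 1;
}

/* 1 if n is the sum of two primes, else 0 */
int is_goldbach(long n)
{
 for (long p = 2; p <= n / 2; p++)
  if (is_prime(p) && is_prime(n - p)) return 1;
 return 0;
}

int main(void)
{
 /* loop over even numbers forever - unless a counterexample exists
    (conceptually n is unbounded; this sketch ignores integer overflow) */
 for (long n = 4; ; n += 2)
  if (!is_goldbach(n)) {
   printf("counterexample: %ld\n", n);
   return 0;
  }
}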




Note that many "infinite loops" actually terminate with a crash, because they are using up some resource each time round the loop. e.g. This program:

void f(int x)
{
 f(x); /* each call allocates a new stack frame that is never freed */
}

f(1);

will eventually crash with a stack overflow.
This program however:
while (1) { }
will run forever, time-sliced.




Interrupts - Keeping the OS in charge

The interrupt idea is a way of periodically keeping the OS in charge, so that nothing runs forever without reference to the OS. e.g. In time-slicing, a periodic timer interrupt gives the OS a chance to do something else with the CPU.
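
A user-level analogy in C, assuming a UNIX-like system: a periodic timer whose "interrupt" (here a UNIX signal, not a true hardware interrupt) forcibly diverts control out of a busy loop. The kernel's time-slicing works the same way, with a hardware timer interrupt instead of a signal:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

volatile sig_atomic_t ticks = 0;

/* the "interrupt handler" */
void on_tick(int sig) { ticks++; }

int main(void)
{
 struct itimerval tv;
 tv.it_interval.tv_sec = 0;
 tv.it_interval.tv_usec = 10000; /* fire every 10 ms */
 tv.it_value = tv.it_interval;

 signal(SIGALRM, on_tick);          /* install handler (sigaction is more robust) */
 setitimer(ITIMER_REAL, &tv, NULL); /* start the periodic timer */

 while (ticks < 100)  /* a "busy loop", but control still leaves it */
  ;                   /* 100 times before it finishes */
 printf("loop was interrupted %d times\n", (int) ticks);
 return 0;
}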

Remember, the CPU is just looking for a stream of instructions to process; whether they come from the "OS" or from "user programs" doesn't much matter to it. When the OS "runs" a program, it points the CPU at the stream of instructions in the user program and basically hands over control. How do we know the program cannot now control the CPU for ever?

Note: An interrupt does not necessarily mean that the OS immediately attends to the process that sent it (in the sense of giving it CPU time). It just means the OS is now aware that that process wants attention. The OS will note that the process's "state" has changed.

Single-step mode - trap after every instruction (this is how debuggers step through a program).




Dual mode

"Keeping the OS in charge" means that there is a concept of an "OS-type program" (which can do anything, including scheduling ordinary programs) and an "ordinary program" (which is restricted in what it can do on the machine).

The restrictions on the "ordinary program" are normally not a problem - they are basically just: "Other programs have to be able to run at the same time as you".

This obviously makes sense in a multi-user system, but in fact it makes sense on a PC as well. When you're writing a program and you make an error, you don't want your crashed program to be able to crash the whole OS. You still want to be able to turn to the OS (which should still be responsive) and say "terminate that program".

Also, the user might run a malicious program by mistake (a virus, or a client-side program from the Internet).

A mode bit is added to the hardware to indicate the current mode: user mode or system/monitor mode.

boot in system mode, load OS
when running a user program, switch to user mode

when an interrupt occurs, switch to system mode
 and jump to OS code

when resuming the program, switch back to user mode
 and return to the next instruction in the user code

Privileged instructions can only be executed in system mode.
e.g. Perhaps any I/O at all - the user program can't do I/O directly; it has to ask the OS to do I/O for it (the OS will co-ordinate it with the I/O of other processes).
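
e.g. On a UNIX-like system, even printing to the screen goes through the OS. A minimal sketch:

#include <unistd.h>

int main(void)
{
 const char msg[] = "hello via the OS\n";
 /* write() is a system call: the C library executes a trap instruction,
    the CPU switches to system mode, the kernel performs the I/O, then
    switches back to user mode and returns to the next instruction here */
 write(1, msg, sizeof msg - 1); /* file descriptor 1 = standard output */
 return 0;
}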




Kernel

Consider from low-level to high-level:

  1. Parts of OS that are not scheduled themselves. Are always in memory. Operate in system mode. This part of the OS is called the kernel.
    Includes:
    • memory management
    • process scheduling
    • device I/O

    
    
  2. Parts of OS that can be scheduled. Come in and out of memory. Operate in user mode.
    Includes:
    • command-line
    • GUI
    • OS utilities

    Parts of OS that could be in either of the above:

    • file system
    • network interface

    
    
  3. Applications. All scheduled. Come in and out of memory. Operate in user mode.




Dual mode - Security

Need hardware support:




Memory protection

The basic idea is that there is such a thing as a "bad" memory access. A user process runs in a different memory space to the OS process.

On a multi-process system (whether single-user or multi-user), each process has its own memory space in which its (read-only) code and (read-write) data live. i.e. Each process has defined limits to its memory space:

Even if there are no malicious users, process memory areas need to be protected from each other - Why?

In this scheme, each process has 2 registers - a base register and a limit register.
e.g. The base register for process 2 contains 5200000, and its limit register contains 3200000. So process 2 may access addresses from 5200000 up to (but not including) 5200000 + 3200000 = 8400000.
Every memory reference is then checked against these limits:


When a reference is made to memory location x:

if ( x >= base )
{
 if ( x < base + limit )
  return pointer to memory location
 else
  OS error interrupt
}
else
 OS error interrupt

(Note it is x >= base, not x > base: the base address itself is a legal address.)
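
As a runnable model of the check - in software, purely to illustrate what the hardware does, using the example register values for process 2 above:

#include <stdio.h>

unsigned long base  = 5200000; /* base register  */
unsigned long limit = 3200000; /* limit register */

/* returns 1 if the access is legal, else "raises" the error interrupt */
int check(unsigned long x)
{
 if (x >= base && x < base + limit)
  return 1; /* legal: let the access proceed */
 printf("OS error interrupt: illegal access to %lu\n", x);
 return 0;
}

int main(void)
{
 check(5200000); /* legal: first address of the process's space */
 check(8400000); /* illegal: base + limit, just past the last legal address */
 return 0;
}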


This check is not in software but is hard-coded in hardware. (Why?)

This check is not done in system mode - the OS has unrestricted access. Why?
Loading values into the base or limit registers is a privileged instruction. Why?

As we shall see later, memory protection has evolved quite far beyond this simple model.




Command Interpreter

Two approaches:



Virtual Machine (VM)





Java Virtual Machines

The idea of implementing a virtual piece of "hardware" has returned with the idea of a Java Virtual Machine. Java is a language designed for a machine that does not exist, but that can be simulated on top of almost any machine. Promises:

  1. Portability. Write an application once, run everywhere.
  2. Run Internet programs on the client side. The idea is that you can get programs from a remote machine (a website), but run them on your own machine. i.e. Client-side processing, as opposed to server-side processing (CGI).

Java is a HLL. It is "compiled" to "virtual assembly" (instructions/operands - bytecodes that can be transmitted over a network); these run on the Java VM, where they are mapped to native code. The Java VM exists everywhere. In fact, the compiler also exists everywhere, so Java can even be used as an interpreted language - the Java HLL source itself can be transmitted over the network.
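
To see what "a machine that does not exist, simulated in software" means, here is a minimal sketch in C of a bytecode interpreter. The opcodes are invented for illustration and are far simpler than real JVM bytecodes:

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

/* the "virtual CPU": fetch each virtual instruction,
   execute it in native code */
void run(const int *code)
{
 int stack[64], sp = 0, pc = 0;
 for (;;) {
  switch (code[pc++]) {
  case OP_PUSH:  stack[sp++] = code[pc++];       break;
  case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
  case OP_PRINT: printf("%d\n", stack[--sp]);    break;
  case OP_HALT:  return;
  }
 }
}

int main(void)
{
 /* "bytecode" for: print (2 + 3) - this array is what would be
    transmitted over the network */
 int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
 run(program);
 return 0;
}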

Common client-side programs:



Write OS in Assembly or HLL?

HLL more portable, and easier to write/debug/change:

"Assembly is faster" - But nothing is as fast as a good algorithm. Improved algorithms and OS data structures have historically been far more important than what language written in. (This is true for other large systems).

Also, an expert Assembly programmer may produce faster code by hand - but will you really? The knowledge of many Assembly experts is built in to your compiler. Running the compiler on your HLL code with optimisation enabled (e.g. the -O2 switch in gcc) may well generate faster Assembly code than anything you could have written yourself.


