Operating Systems



In the early days of computers, they were considered advanced card machines, and the jobs they performed were of the kind: "find all females in this bunch of cards (or records)". Utilisation was high, since one delivered a job to the computing department, which prepared and executed the job on the computer and delivered the final result back. Advances in electronic engineering increased processing power several times over, leaving input/output devices (card readers, line printers) far behind. This meant that the CPU had to wait for the data it required to perform a given task. Soon engineers thought: "what if we could prepare, process and output data at the same time?", and multitasking was born. Now one could read data for the next job while executing the current job and outputting the results of a previous job, thereby increasing the utilisation of the very expensive computer.

Cheap terminals allowed the users themselves to input data to the computer, execute jobs (having the department do it often took days) and see the results immediately on screen, which introduced what were called interactive tasks. These required the console to be updated whenever a key was pressed on the keyboard (again, a task with slow input). The same thing happens today, where your computer actually does no work most of the time - it just waits for your input. Therefore, multitasking, where several tasks run on the same computer, improves performance.

Multitasking is the process of letting the operating system perform multiple tasks in a way that seems, to the user, simultaneous. In SMP (Symmetric Multi-Processor) systems this actually is the case, since there are several CPUs to execute programs on - in systems with only a single CPU the same effect is achieved by switching execution very rapidly between programs, thus giving the impression of simultaneous execution. This process is also known as task switching or timesharing. Practically all modern OSes have this ability.

Multitasking is, on single-processor machines, implemented by letting the running process own the CPU for a while (a timeslice) and, when required, having it replaced by another process, which then owns the CPU. The two most common methods for sharing CPU time are cooperative multitasking and preemptive multitasking.

Cooperative Multitasking

The simplest form of multitasking is cooperative multitasking. It lets the programs themselves decide when they wish to let other tasks run. This method is not good, since it lets one process monopolise the CPU and never let other processes run; a program may even be reluctant to give away processing power for fear of another process hogging all CPU time. Early versions of the MacOS (up to MacOS 8) and versions of Windows earlier than Win95/WinNT used cooperative multitasking (Win95 when running old apps).
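To illustrate, here is a minimal sketch of cooperative multitasking, using Python generators as tasks: each task runs until it voluntarily yields control back to the scheduler loop (the task names and step counts are invented for the example).

```python
# A minimal sketch of cooperative multitasking: each task is a generator
# that runs until it voluntarily yields control back to the scheduler.

def make_task(name, steps):
    """A toy task that needs `steps` timeslices of work (names invented)."""
    def gen():
        for _ in range(steps):
            yield name  # voluntarily hand the CPU back to the scheduler
    return gen()

def run_cooperative(tasks):
    """Run tasks until done; returns the order in which they executed."""
    order = []
    ready = list(tasks)
    while ready:
        task = ready.pop(0)
        try:
            order.append(next(task))  # let the task run one step
            ready.append(task)        # it yielded, so requeue it
        except StopIteration:
            pass                      # task finished, drop it
    return order

print(run_cooperative([make_task("A", 2), make_task("B", 1)]))  # ['A', 'B', 'A']
```

A task that never yields would keep the scheduler loop (the "CPU") to itself forever - exactly the weakness described above.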

Preemptive Multitasking

Preemptive multitasking moves control of the CPU to the OS, letting each process run for a given amount of time (a timeslice) and then switching to another task. This method prevents any one process from taking complete control of the system and thereby making it seem to have crashed. It is the most common method today, implemented by, among others, OS/2, Win95/98, WinNT, Unix, Linux, BeOS, QNX, OS9 and most mainframe OSes. The assignment of CPU time is taken care of by the scheduler.


A task can be in several possible states:

Running - currently executing on the CPU.
Ready - runnable, waiting to be assigned the CPU.
Blocked (waiting) - suspended until some event occurs, such as I/O completing or a semaphore being signalled.

These states are necessary to understand some of the other concepts.


When using preemptive multitasking or SMP, new problems suddenly arise - you can never be sure when your task is executed, which leads to problems if two tasks depend on each other. The solution is some sort of interface between the two tasks that synchronizes their execution, so one task can tell another what point in its execution it has reached. Another problem lies in sharing data between two tasks - if one task is halfway through writing something to memory when the other task starts to read that data, what happens? This is a so-called race condition. The solution to both of these problems is a mechanism known as a semaphore.
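The lost-update form of a race condition can be shown deterministically by writing out one unlucky interleaving of two read-modify-write sequences (a sketch; real races depend on timing and are not reproducible like this):

```python
# Deterministic sketch of the lost-update race described above: two tasks
# each read a shared counter and write back counter + 1. If task B reads
# before task A has written, one of the two increments is lost.

counter = 0

read_a = counter       # task A reads 0
read_b = counter       # task B reads 0 - A has not written yet
counter = read_a + 1   # task A writes 1
counter = read_b + 1   # task B also writes 1, overwriting A's update

print(counter)  # 1, although two increments were performed
```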


A semaphore is a simple counter S, which has the property that its two elementary operations, WAIT(S) and SIGNAL(S), complete entirely before another task is allowed to access the same semaphore S (so-called atomic access). The two operations are:

WAIT(S): wait until S > 0, then decrement S.
SIGNAL(S): increment S, waking a task suspended in WAIT(S) if there is one.

This waiting for S > 0 is also called suspending a task on a semaphore.
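As a sketch of how such a semaphore could be built (assuming the classic WAIT/SIGNAL semantics above - this is not how a real kernel implements it), using Python's threading primitives:

```python
# A sketch of a counting semaphore with WAIT/SIGNAL semantics, built on
# Python's threading.Condition (a user-level model, not a kernel one).
import threading

class Semaphore:
    def __init__(self, count=0):
        self._count = count
        self._cond = threading.Condition()

    def wait(self):
        """WAIT(S): suspend until S > 0, then decrement S atomically."""
        with self._cond:
            while self._count == 0:
                self._cond.wait()   # suspend the calling task
            self._count -= 1

    def signal(self):
        """SIGNAL(S): increment S and wake one suspended task, if any."""
        with self._cond:
            self._count += 1
            self._cond.notify()
```

Python's standard library already provides this as threading.Semaphore, with acquire() and release() playing the roles of WAIT and SIGNAL.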

In a complex system where there are many tasks running, synchronizing with semaphores may be very difficult.


The scheduler is the component of the OS responsible for the assignment of CPU time to tasks. It usually implements a priority scheme which lets it assign more CPU time to high-priority tasks.

Many different schemes are used to implement the multitasking architecture. One of the most common is round-robin.

Round robin task scheduling

Quite simple, in fact: the scheduler assigns the same amount of time to each ready task in a ring - all tasks have the same priority. This is slightly problematic, since one often wishes certain tasks to be put in the background (e.g. printing) while others get more CPU time (like games). Therefore a priority system must be implemented.
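The ring can be sketched in a few lines: each ready task gets one timeslice per pass and is re-queued at the back until its workload (the numbers here are invented) is done.

```python
# Round-robin sketch: every ready task gets one timeslice per pass through
# the ring and is put back at the end until its workload is done.
from collections import deque

def round_robin(workloads):
    """workloads: task name -> timeslices of work needed (values invented)."""
    ring = deque(workloads.items())
    schedule = []
    while ring:
        name, remaining = ring.popleft()
        schedule.append(name)                   # run for one timeslice
        if remaining > 1:
            ring.append((name, remaining - 1))  # not done: back of the ring
    return schedule

print(round_robin({"A": 2, "B": 1, "C": 2}))  # ['A', 'B', 'C', 'A', 'C']
```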

Highest runnable task

The simplest form of scheduling with priorities is to always select the ready process with the highest priority. This method is very predictable and yields OK results, but a low-priority task may starve if higher-priority tasks are always ready.
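A sketch of this selection rule, assuming lower numbers mean higher priority (the task names are invented):

```python
# Highest-runnable-task sketch: among the ready tasks, always pick the one
# with the highest priority (here 0 is the highest priority).

def pick_next(ready):
    """ready: list of (task name, priority) pairs."""
    return min(ready, key=lambda task: task[1])[0]

print(pick_next([("printer", 3), ("game", 1), ("audio", 0)]))  # audio
```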

Priority and round robin task scheduling

If you assign each task a priority class, give each class CPU time corresponding to its priority, and use round robin within each class, you obtain a very efficient scheduling architecture. Imagine that a total of 100 timeslices are available and that 2 is the lowest class. Now, if we say that a class-0 task gets 3 times as much time as a class-2 task and twice as much as a class-1 task, the division of slices is 3:2:1 per task. With three class-0 tasks, two class-1 tasks and one class-2 task, a class-0 task runs 3 times in a round and a class-1 task twice (thus ensuring the priority), so a round takes 3*3+2*2+1=14 slices. The total slice count was 100, leaving 3*100/14 slices to each class-0 task, 2*100/14 to each class-1 task and 100/14 to the class-2 task.
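The share arithmetic can be checked directly (assuming three class-0 tasks, two class-1 tasks and one class-2 task, which is what makes the 3*3+2*2+1 sum work out):

```python
# Checking the share arithmetic: three class-0 tasks at 3 slices each,
# two class-1 tasks at 2 each and one class-2 task at 1 per round.

slices_per_round = 3 * 3 + 2 * 2 + 1 * 1
print(slices_per_round)  # 14 slices per round

total = 100
print(round(3 * total / slices_per_round, 1))  # 21.4 slices per class-0 task
print(round(2 * total / slices_per_round, 1))  # 14.3 slices per class-1 task
print(round(1 * total / slices_per_round, 1))  # 7.1 slices for the class-2 task
```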

If we want to know the order in which they are executed, it would be something like (priority levels): 0-1-0-1-0-2-.

One implementation of this would be that each class has a counter telling how many timeslices the class must be given before the counters are reset to their defaults (in our example c0:3, c1:2, c2:1). One then starts at the highest level, gives one timeslice to c0, finds the first class below that still earns a slice and gives it one, and then goes back to the top. Repeating this gives, in our example, the sharing 0-1-0-1-0-2-, a quite even assignment between priority classes. An alternative method is to start at c0, run down the class list, and restart at the top when reaching the bottom, repeating until all class counters are 0 (in our example this gives 0-1-2-0-1-0-). This algorithm has one problem - a high-priority task may have to wait quite a while if many tasks are being multitasked.
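The first scheme above can be sketched directly (per-class counters, highest class first; the quotas 3, 2, 1 are from the example):

```python
# Sketch of the first scheme: per-class counters (c0:3, c1:2, c2:1) say how
# many slices each class gets before the counters reset. Give one slice to
# the highest class with slices left, then one to the first lower class
# that still earns one, then return to the top.

def priority_round(quota):
    """quota[i] = timeslices class i receives per round (class 0 first)."""
    left = list(quota)
    order = []
    while any(left):
        top = next(i for i, n in enumerate(left) if n > 0)
        order.append(top)        # highest class with slices left runs
        left[top] -= 1
        for i in range(top + 1, len(left)):
            if left[i] > 0:
                order.append(i)  # first lower class that earns a slice
                left[i] -= 1
                break
    return order

print(priority_round([3, 2, 1]))  # [0, 1, 0, 1, 0, 2]
```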

Note that these methods must be modified a bit to fit scheduling with smaller differences between classes (the examples are quite rough - imagine a system with 64 classes built like this; hardly any time would ever be assigned to class 63).


On a level higher than tasks, one may wish to organise tasks into processes (or teams of tasks), possibly with only a single task per process. The process organisation allows the OS to protect different tasks from each other, thereby increasing the stability of the system. A good OS has a strict division and protection mechanism between processes.


The differences between a task and a thread are difficult to define, and most often the words are used interchangeably. I wrote this page while studying embedded systems, where a task is a single thing that must be done - not a collection of things. Therefore a task, in my world, is a sequence of steps taken to obtain a given result from a given input. This is, in essence, the same as a thread of execution. In other, more OS-like circles, a task may be the collection of threads. I prefer considering a task the simplest thing you can do. The problem lies in which approach to take - the abstract approach prefers a task as a simple sequence, whereas the machine-oriented (or OS) approach prefers the term thread for this. This subject is open for discussion.

Last updated 2001-05-21 FlushedSector