Multitasking

On single-processor machines, multitasking is implemented by letting the running process own the CPU for a while (a timeslice) and then, when required, replacing it with another process, which then owns the CPU. The two most common methods for sharing CPU time are cooperative multitasking and preemptive multitasking.

Cooperative Multitasking

The simplest form of multitasking is cooperative multitasking. It lets each program decide when it wishes to let other tasks run. This method is problematic, since it lets one process monopolise the CPU and never let other processes run. As a result, a program may be reluctant to give up processing time for fear of another process hogging all CPU time. Early versions of MacOS (up to MacOS 8) and versions of Windows earlier than Win95/WinNT used cooperative multitasking (Win95 as well, when running old applications).
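As a minimal sketch (task names and the scheduler function are illustrative, not from any real OS API), cooperative multitasking can be modelled with Python generators: each task runs until it voluntarily yields, and a task that never yields would starve all the others, which is exactly the weakness described above.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it yields voluntarily after each unit of work."""
    for _ in range(steps):
        yield name  # hand the CPU back to the scheduler

def run_cooperative(tasks):
    """Run each task until its next yield, in ring order."""
    ring = deque(tasks)
    trace = []
    while ring:
        current = ring.popleft()
        try:
            trace.append(next(current))  # run until the task yields
            ring.append(current)         # re-queue it for another turn
        except StopIteration:
            pass                         # task has finished
    return trace

print(run_cooperative([task("A", 2), task("B", 2)]))  # tasks interleave: A, B, A, B
```

Note that the interleaving only happens because every task cooperates; delete the `yield` and task A runs to completion before B ever starts.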

Preemptive Multitasking


Preemptive multitasking moves control of the CPU to the OS, letting each process run for a given amount of time (a timeslice) and then switching to another task. This method prevents any one process from taking complete control of the system and thereby making it seem to have crashed. It is the most common method today, implemented by, among others, OS/2, Win95/98, WinNT, Unix, Linux, BeOS, QNX, OS9 and most mainframe OSes. The assignment of CPU time is handled by the scheduler.

States

A task can be in several possible states, typically:

Running - the task currently owns the CPU.
Ready - the task is runnable and waiting for the scheduler to pick it.
Waiting - the task is blocked, e.g. suspended on a semaphore.

These states are necessary to understand some of the concepts below.
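These states and the transitions between them can be sketched as a small state machine (this simplified transition table is an illustration, not taken from any particular OS):

```python
from enum import Enum

class TaskState(Enum):
    READY = "ready"       # runnable, waiting to be picked by the scheduler
    RUNNING = "running"   # currently owns the CPU
    WAITING = "waiting"   # blocked, e.g. suspended on a semaphore

# Legal transitions in this simplified model:
TRANSITIONS = {
    TaskState.READY:   {TaskState.RUNNING},                   # dispatched
    TaskState.RUNNING: {TaskState.READY, TaskState.WAITING},  # preempted / blocked
    TaskState.WAITING: {TaskState.READY},                     # semaphore signalled
}

def can_transition(src, dst):
    """Check whether a state change is legal in this model."""
    return dst in TRANSITIONS[src]
```

For example, a waiting task never goes straight to running: when it is signalled it first becomes ready, and only the scheduler moves it to running.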

Synchronization

When using preemptive multitasking or SMP, new problems suddenly arise: you can never be sure when your task is executed, which causes trouble if two tasks depend on each other. The solution is some sort of interface between the two tasks that synchronizes their execution, so that each can tell the other what point in execution it has reached. Another problem is sharing data between the two tasks: if one task is halfway through writing something to memory and the other task starts to read that data, what happens? This is a so-called race condition. The solution to both of these problems is a mechanism known as a semaphore.

Semaphores

A semaphore is a simple counter with the property that the two elementary operations WAIT(S) and SIGNAL(S) complete entirely before another task is allowed to access the same semaphore S (so-called atomic access). The two operations are:

WAIT(S): wait until S > 0, then decrement S.
SIGNAL(S): increment S.

Waiting for S > 0 is also called suspending a task on the semaphore.
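A minimal sketch of such a counting semaphore, built on a lock and condition variable so that WAIT and SIGNAL are atomic (the class and the worker example are illustrative; real programs would normally use the library's own `threading.Semaphore`):

```python
import threading

class Semaphore:
    """Counting semaphore with atomic WAIT(S) / SIGNAL(S) operations."""
    def __init__(self, count=1):
        self._count = count
        self._cond = threading.Condition()  # lock + wait queue

    def wait(self):                  # WAIT(S): suspend until S > 0, then S -= 1
        with self._cond:
            while self._count == 0:
                self._cond.wait()    # task is suspended on the semaphore
            self._count -= 1

    def signal(self):                # SIGNAL(S): S += 1, wake one waiting task
        with self._cond:
            self._count += 1
            self._cond.notify()

# Used as a mutex (initial count 1), it protects shared data against
# the race condition described above:
sem = Semaphore(1)
total = 0

def worker():
    global total
    for _ in range(10000):
        sem.wait()
        total += 1       # critical section: only one task at a time
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # all 4 * 10000 increments survive
```

With the WAIT/SIGNAL pair removed, increments from different threads could interleave mid-update and some would be lost.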

In a complex system where there are many tasks running, synchronizing with semaphores may be very difficult.

Scheduler

The scheduler is the part of the OS that decides which ready task is given the next timeslice. Several strategies exist:
Round robin task scheduling

Quite simple, in fact: the scheduler assigns the same amount of time to each ready task in a ring - all tasks have the same priority. This is slightly problematic, since one often wishes certain tasks to be put in the background (e.g. printing) while others get more CPU time (like games). Therefore a priority system must be implemented.
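The ring described above can be sketched with a double-ended queue (task names are illustrative):

```python
from collections import deque

def round_robin(tasks, slices):
    """Give one timeslice at a time to each ready task, in ring order."""
    ring = deque(tasks)
    schedule = []
    for _ in range(slices):
        current = ring.popleft()
        schedule.append(current)  # current runs for one timeslice
        ring.append(current)      # then goes to the back of the ring
    return schedule

print(round_robin(["A", "B", "C"], 6))  # every task gets equal time
```

Over 6 slices, each of the three tasks runs exactly twice - equal shares, regardless of how urgent any one of them is.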

Highest Runnable task

The simplest form of scheduling with priorities is to always select the ready process with the highest priority. This method is very predictable and yields acceptable results.
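Selecting the highest-priority ready task is a one-liner (the task names here are made up; class 0 is taken as the highest priority, matching the class numbering used in this text):

```python
def pick_highest(ready_tasks):
    """Pick the ready task with the highest priority (lowest class number)."""
    return min(ready_tasks, key=lambda t: t[1])

ready = [("editor", 2), ("audio", 0), ("printer", 1)]
print(pick_highest(ready))  # the class-0 task always wins
```

The predictability is also the weakness: as long as any class-0 task is ready, lower classes never run at all.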

Priority and round robin task scheduling

If you assign each task a priority class, assign CPU time to each class corresponding to its priority, and then use round robin within each class, you obtain a very efficient scheduling architecture. Imagine that a total of 100 timeslices are available, that class 2 is the lowest class, and that three class-0 tasks, two class-1 tasks and one class-2 task are running. Now, if we say that a class-0 task gets 3 times as much time as a class-2 task and twice as much as a class-1 task, the division of slices is 3:2:1 per task. Measured in class-2 turns, a class-1 task runs twice per round and a class-0 task three times per round (thus ensuring the priority). The total number of timeslices per round is therefore 3*3 + 2*2 + 1 = 14 slices/round. With 100 slices in total, that leaves 3*100/14 slices for each class-0 task, 2*100/14 for each class-1 task and 100/14 for the class-2 task.

If we want to know the order in which the classes are to be executed within a round (taking one task per class, with weights 3:2:1), it must be something like (priority levels): 0-1-0-1-0-2-.

One implementation of this is to keep, for each class, a counter that tells how many timeslices the class must be given before the counters are reset to their defaults (in our example c0:3, c1:2, c2:1). One then starts at the highest level, gives one timeslice to c0, finds the first class below it that still earns a slice, gives it one, and goes back to the top. Repeating this gives, in our example, the sequence 0-1-0-1-0-2-, a quite even distribution between the priority classes. An alternate method is to start at c0, run down the class list, and restart at the top on reaching the bottom, repeating until all class counters are 0 (in our example this gives 0-1-2-0-1-0-). This algorithm has one problem: a high-priority task may have to wait quite a while if many tasks are being multitasked.
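Both counter methods above can be sketched directly, using the example counters c0:3, c1:2, c2:1 (function names are illustrative):

```python
def schedule_round(counters):
    """Counter-based round: top class, then the first lower class that
    still earns a slice, then back to the top. counters[i] = slices
    class i gets per round; class 0 is highest."""
    c = list(counters)
    seq = []
    while any(c):
        top = next(i for i, n in enumerate(c) if n > 0)
        seq.append(top)          # one slice for the highest earning class
        c[top] -= 1
        for i in range(top + 1, len(c)):
            if c[i] > 0:
                seq.append(i)    # one slice for the first class below it
                c[i] -= 1
                break
    return seq

def schedule_round_alt(counters):
    """Alternate method: sweep from c0 down the class list, restarting
    at the top, until all class counters are 0."""
    c = list(counters)
    seq = []
    while any(c):
        for i in range(len(c)):
            if c[i] > 0:
                seq.append(i)
                c[i] -= 1
    return seq

print(schedule_round([3, 2, 1]))      # the 0-1-0-1-0-2 pattern
print(schedule_round_alt([3, 2, 1]))  # the 0-1-2-0-1-0 pattern
```

The first method spreads the class-2 slice late but keeps class 0 running often; the second is simpler but, as noted, can make a high-priority task wait longer when many classes hold unspent slices.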

Processes

On a level above tasks, one may wish to organize tasks into processes (or teams of tasks), possibly with only a single task per process. The process organisation allows the OS to protect different tasks from each other, thereby increasing the stability of the system. A good OS has a strict division and protection mechanism between processes.

Thread/task

Thread : a single flow of execution
Task : a collection of threads
Multiprocessor environment