How It Works

Multitasking


Introduction

The multitasking feature is the heart of this operating system. It makes it possible to run multiple processes at the same time. On systems with just one CPU it only appears that way, because each process gets a certain amount of processing power within a given time slot. On multi-processor systems several CPUs are executing at the same time, which increases overall system performance.

On systems which are heavily loaded there is also the option of distributing processes. This is also a kind of multitasking, in which an entire process is replicated on another available host system, which then runs the process. Distribution is explained in more detail below.

Multitasking makes it possible to utilize the processing capabilities to the greatest extent. So when a process requests data from a harddrive, the request is registered, the process is put on hold, and another process begins executing.
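
A minimal sketch of that sequence in C might look as follows; every helper here is a hypothetical stub standing in for the real driver and scheduler entry points, which this document does not specify:

    /* Hypothetical kernel-side view of a blocking disk read; every helper
       is a stub standing in for the real driver and scheduler code. */

    enum wait_reason { WAIT_DISK };

    static void register_disk_request(int drive, unsigned block, void *buf)
    { (void)drive; (void)block; (void)buf; /* hand the request to the driver */ }

    static void suspend_current(enum wait_reason r)
    { (void)r; /* move the caller to a blocked queue */ }

    static void dispatch_next(void)
    { /* let the dispatcher pick another process */ }

    void sys_read_block(int drive, unsigned block, void *buf)
    {
        register_disk_request(drive, block, buf); /* 1. the request is registered  */
        suspend_current(WAIT_DISK);               /* 2. the process is put on hold */
        dispatch_next();                          /* 3. another process executes   */
        /* control returns here once the disk interrupt unblocks the process */
    }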


Primary Goal

The primary goal of multitasking is to allow multiple processes to appear to run at the same time. But not all processes are alike. Some will do heavy computing almost all the time, while others may only need to execute a few instructions, e.g. to update a timer. Other processes may require a certain response time or they will lose data, and they are therefore more important to the system.


Changing Algorithms

The Maverick OS allows a privileged process to change the multitasking algorithm on the fly. This can be used when testing and/or tuning algorithms; a sketch of such a call is shown below. The currently available algorithms are described in the following subsections.
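
The actual interface for this is not specified here, but a call from a privileged process might look like this hypothetical sketch:

    /* Hypothetical interface for switching the scheduler on the fly.
       Names and constants are illustrative, not the real Maverick OS API. */

    enum sched_algorithm {
        SCHED_ROUND_ROBIN,
        SCHED_PRIORITY_QUEUES   /* multiple priority queues with pre-emption */
    };

    /* stub: the real call would trap into the kernel and return -1
       if the caller lacks the required privilege */
    static int set_sched_algorithm(enum sched_algorithm alg)
    { (void)alg; return 0; }

    void tune_scheduler(void)
    {
        if (set_sched_algorithm(SCHED_PRIORITY_QUEUES) != 0) {
            /* not privileged, or the algorithm is unavailable */
        }
    }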


Round-Robin

A very simple algorithm which uses only one queue for all processes.
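
A minimal sketch of the single-queue idea, assuming a simple array of runnable process IDs rather than the real Process List indices:

    /* Round-robin: one queue of runnable process IDs, serviced in turn. */
    #include <stddef.h>

    #define MAX_PROCS 64

    static int    run_queue[MAX_PROCS]; /* IDs of all runnable processes */
    static size_t nprocs;               /* how many there currently are  */
    static size_t cursor;               /* whose turn it is next         */

    /* called on every timer tick to pick the next process */
    int rr_pick_next(void)
    {
        if (nprocs == 0)
            return -1;                  /* idle: nothing to run */
        int pid = run_queue[cursor];
        cursor = (cursor + 1) % nprocs; /* everyone gets an equal turn */
        return pid;
    }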


Multiple Priority Queues with Lower-Priority Pre-emption

This algorithm uses a number of different queues to decide which process gets execution time next. Each queue has a different priority, and if an event occurs that activates a process with a higher priority than the one currently running, the current process is immediately interrupted and the higher-priority process gets control.
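
A sketch of this pre-emption rule, with hypothetical helper routines and the assumption that a higher number means a higher priority:

    /* Pre-emption of a lower-priority process, sketched with hypothetical
       helpers; a higher number means a higher priority. */
    #include <stddef.h>

    struct process { int pid; int priority; };

    static struct process *current;     /* the process now on the CPU */

    static void enqueue_suspended(struct process *p) { (void)p; /* stub */ }
    static void preempt(struct process *p)           { (void)p; /* stub */ }

    /* called when an event activates a waiting process */
    void on_process_activated(struct process *woken)
    {
        if (current == NULL || woken->priority > current->priority) {
            if (current != NULL)
                enqueue_suspended(current); /* interrupted, not finished */
            preempt(woken);                 /* higher priority runs now  */
            current = woken;
        } else {
            enqueue_suspended(woken);       /* it must wait for its turn */
        }
    }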

Running Queues

The number of queues in this group depends on the number of CPUs in the system. Each CPU has a maximum of one queue, and each queue can contain a maximum of only one process.

When no other processes are in the suspended queues, the process currently executing may continue to do so.

Suspended Queues

These queues contain processes which didn't finish executing before they were suspended. The number of queues depends on the configuration of the algorithm.

Blocked Queues

These queues contain the processes which are waiting for some event or time-out to occur. An example is the high-level part of the Keyboard ISR. This routine gets activated when the Keyboard ISR has serviced an interrupt that resulted in some input data.
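
A sketch of that flow, with illustrative names: the low-level ISR stores the input data and unblocks the high-level handler:

    /* Hypothetical wake-up path: the low-level ISR stores the input data
       and unblocks the high-level handler. PID and names are illustrative. */

    #define KEYBOARD_HANDLER_PID 7

    static volatile int scan_code;      /* the input data from the keyboard */

    static void wake(int pid)
    { (void)pid; /* stub: move pid from a blocked queue to a runnable one */ }

    /* low-level part: runs when the keyboard interrupt is serviced */
    void keyboard_isr(int code)
    {
        scan_code = code;               /* there is input data...             */
        wake(KEYBOARD_HANDLER_PID);     /* ...so activate the high-level part */
    }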

Death Row

This queue contains processes which are to be terminated. These processes are not executed anymore, and are just waiting to be shut down by the system.

Distributed Queues

These queues contain processes which have been distributed to other systems, but still belong to this machine. To preserve coherency, the processes will not get processing time on this system until they have been returned to us. The processes may even be adopted by the other system.

The queues also contain processes which are running on our system on behalf of other systems.

Queue Interaction

As stated above, when a process is executing it is in a running queue. Which running queue depends on which processor is actually executing the code.

When the currently running process is interrupted before it has finished whatever it was doing, it is moved to a suspended queue. Which suspended queue depends on the priority that has been given to the process.
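
A sketch of these two rules, assuming one running slot per CPU and one suspended queue per priority level; all names are illustrative:

    /* Moving an interrupted process from its running queue (one per CPU)
       to the suspended queue matching its priority. */

    #define NCPUS       4
    #define NPRIORITIES 8
    #define QLEN       64

    struct queue { int pids[QLEN]; int n; };

    static int          running[NCPUS];         /* at most one process per CPU */
    static struct queue suspended[NPRIORITIES]; /* one queue per priority      */

    static int priority_of(int pid) { (void)pid; return 0; /* stub */ }

    /* the process on `cpu` was interrupted before it finished */
    void suspend_running(int cpu)
    {
        int pid = running[cpu];
        if (pid < 0)
            return;                              /* the CPU was idle            */
        struct queue *q = &suspended[priority_of(pid)];
        if (q->n < QLEN)
            q->pids[q->n++] = pid;               /* joins the back of the queue */
        running[cpu] = -1;                       /* the running slot is free    */
    }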


Distributed System

The idea is to have a "Load Balancer" process running in the system. When a machine gets overloaded, it sends some of the work to other machines.

This balancing can be done on both a macro and a micro level. On the macro level, computation-heavy applications such as generating 3D animation can be distributed, so that many machines crunch different parts of the data at the same time.

It could also be an HTTP server which, when overloaded, forwards HTTP requests to another machine that then does the processing.

On the micro level, one thread may be distributed to another machine, along with some of the data it is working on. On the other machine, a skeleton of the thread's address space is created. When the thread then needs data that wasn't transferred, a page fault happens, the OS transfers the missing data, and the thread is restarted.
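
A sketch of that page-fault path on the receiving machine; the helpers are hypothetical, as the actual transfer protocol is not specified in this document:

    /* Hypothetical page-fault path on the machine that received a thread
       with only a skeleton address space; all helpers are stubs. */

    #define PAGE_SIZE 4096

    static int fetch_page_from_origin(unsigned long vaddr, void *buf)
    { (void)vaddr; (void)buf; return 0; /* stub: transfer over the network */ }

    static void map_page(unsigned long vaddr, void *page)
    { (void)vaddr; (void)page; /* stub: install the page in the skeleton */ }

    static void resume_thread(int tid) { (void)tid; /* stub */ }

    void on_page_fault(int tid, unsigned long fault_addr)
    {
        static char page[PAGE_SIZE];
        unsigned long vaddr = fault_addr & ~(unsigned long)(PAGE_SIZE - 1);

        if (fetch_page_from_origin(vaddr, page) == 0) { /* fetch missing data */
            map_page(vaddr, page);                      /* fill the skeleton  */
            resume_thread(tid);                         /* restart the thread */
        }
    }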


The Control Processes

A couple of processes are required to make this multitasking miracle happen. They are:

Process Dispatcher

This is the one that determines which queue to fetch the next process from. The Process Dispatcher runs in the address space of the currently running task (i.e. not in its own address space), and it is run each time the IRQ 0 signal is received.
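
A sketch of that entry point; since the dispatcher has no address space of its own, it is modelled here as a plain routine invoked from the timer interrupt, with hypothetical helpers:

    /* Hypothetical IRQ 0 entry: the Process Dispatcher is just a routine
       invoked from the timer interrupt. Helper names are illustrative. */

    static int  highest_nonempty_suspended_queue(void) { return -1; /* stub */ }
    static int  dequeue_from(int q) { (void)q; return -1; /* stub */ }
    static void switch_to(int pid)  { (void)pid;          /* stub */ }

    void irq0_handler(void)
    {
        int q = highest_nonempty_suspended_queue();
        if (q < 0)
            return;                  /* no waiting process: the current one
                                        simply keeps running               */
        switch_to(dequeue_from(q));  /* fetch the next process from there  */
    }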

Scheduler

This is the process which moves processes between the individual suspended queues, the blocked queues, and the death row. This process also runs in the address space of the currently running task, but it is called as a result of message passing.

LoadBalancer

This is a process with its own separate address space. The task has an overall view of the load on the system, and even of the load on the machines in the neighborhood. The LoadBalancer can open and close the running queues for the extra processors in a multi-processor system, and/or direct load to other machines. Flags in the Process List determine whether a process may be distributed.

When distributing processes between host systems, the amount of data to be transferred may seriously affect the general performance of the system.
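
A sketch of the distribution check, assuming a hypothetical flag bit in each Process List entry and a simple data-size threshold to account for the transfer cost:

    /* Hypothetical LoadBalancer check: may this process be distributed,
       and is the transfer likely to pay off? Names and the limit are
       illustrative, not taken from the real Process List layout. */

    #define PF_DISTRIBUTABLE 0x0001u  /* flag bit in the Process List entry */

    struct proc_entry {
        unsigned      flags;
        unsigned long data_bytes;     /* data that would have to be moved   */
    };

    #define MAX_TRANSFER_BYTES (4ul * 1024 * 1024)  /* illustrative limit */

    int may_distribute(const struct proc_entry *p)
    {
        if (!(p->flags & PF_DISTRIBUTABLE))
            return 0;                 /* the flags say no */
        /* a large transfer may hurt overall performance more than it helps */
        return p->data_bytes <= MAX_TRANSFER_BYTES;
    }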

The Queue Syntax

The queues consist of 32-bit pointers into the Process List. These pointers are multiplied by the size of an entry in the Process List to get the actual address where the information is stored in memory. When a queue is full, some more memory is allocated and the queue link field of the first queue is set to point to the beginning of the second queue.

Each queue also has a few header fields, among them the queue link field mentioned above and a queue_size field.

Queues are filled from the top down. No empty spaces are allowed, except below queue_size.
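
A sketch of that layout in C; the capacity and the exact field order are assumptions:

    /* Illustrative layout of a queue of 32-bit Process List indices;
       the capacity and field order are assumptions. */
    #include <stddef.h>
    #include <stdint.h>

    #define QUEUE_CAPACITY 32

    struct queue {
        uint32_t      queue_size;              /* entries currently in use   */
        struct queue *queue_link;              /* next chunk once this fills */
        uint32_t      entries[QUEUE_CAPACITY]; /* filled from the top down;
                                                  no holes below queue_size  */
    };

    /* an entry is an index: multiply by the entry size to get the address */
    void *process_entry_addr(void *process_list, uint32_t index,
                             uint32_t entry_size)
    {
        return (char *)process_list + (size_t)index * entry_size;
    }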