The Kernel (computer science) reference article from the English Wikipedia on 24-Jul-2004
(provided by Fixed Reference: snapshots of Wikipedia from wikipedia.org)

Kernel (computer science)

In computer science, the kernel is the fundamental part of an operating system. It is the piece of software responsible for providing secure access to the machine's hardware on behalf of various computer programs. Since there are many programs, and access to the hardware is limited, the kernel is also responsible for deciding when, and for how long, a program should be able to make use of a piece of hardware; this is called multiplexing. Accessing the hardware directly can also be very complex, so kernels usually implement a set of hardware abstractions. These abstractions hide the complexity and provide a clean, uniform interface to the underlying hardware, which makes life easier for application programmers.
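
The distinction is easiest to see from a program's point of view. The following minimal sketch (in C, assuming a POSIX userspace such as Linux) never drives any device hardware itself; it asks the kernel, through the write() system call, to send bytes to standard output, and the kernel multiplexes that request onto whatever the descriptor actually refers to:

    /* Minimal sketch (POSIX userspace assumed): the program never touches the
     * terminal or disk hardware directly; write() asks the kernel to do so. */
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello from userspace\n";
        /* File descriptor 1 (standard output) is a kernel abstraction: the
         * same call works whether it is backed by a terminal, a pipe, or a
         * disk file. */
        if (write(1, msg, strlen(msg)) < 0)
            return 1;
        return 0;
    }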

An operating system kernel is not strictly needed to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to do without any hardware abstraction or operating system support. This was the normal operating method of many early computers, which were reset and reloaded between the running of different programs. Eventually, small ancillary programs such as program loaders and debuggers were typically left in-core between runs, or loaded from read-only memory. As these were developed, they formed the basis of what became early operating system kernels.

There are four broad categories of kernels:

Table of contents
1 Monolithic kernels
2 Microkernels
3 Monolithic kernels vs. microkernels
4 Hybrid kernels (modified microkernels)
5 Exokernels
6 See also
7 External links

Monolithic kernels

The monolithic approach defines a high-level virtual interface over the hardware, with a set of primitives or system calls to implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode.
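
As a rough illustration of this structure, the sketch below shows the kind of dispatch that sits at the heart of a monolithic kernel: a trap from userspace selects one of the in-kernel service routines, all of which share one address space and run in supervisor mode. The names (sys_fork, sys_brk, sys_futex, syscall_dispatch) are purely illustrative and not taken from any real kernel, and the bodies are stubs so the sketch is self-contained:

    #include <stdio.h>

    /* Each service routine runs in supervisor mode, in the kernel's single
     * shared address space. Stub bodies keep the sketch self-contained. */
    typedef long (*syscall_fn)(long a0, long a1, long a2);

    static long sys_fork(long a0, long a1, long a2)  { (void)a0; (void)a1; (void)a2; return 0; }  /* process management */
    static long sys_brk(long a0, long a1, long a2)   { (void)a0; (void)a1; (void)a2; return 0; }  /* memory management  */
    static long sys_futex(long a0, long a1, long a2) { (void)a0; (void)a1; (void)a2; return 0; }  /* concurrency        */

    /* The system call table: one entry per service the kernel exports. */
    static syscall_fn syscall_table[] = { sys_fork, sys_brk, sys_futex };

    /* Called from the architecture-specific trap handler with the number and
     * arguments the user program supplied. */
    static long syscall_dispatch(unsigned nr, long a0, long a1, long a2)
    {
        if (nr >= sizeof(syscall_table) / sizeof(syscall_table[0]))
            return -1;                          /* unknown system call */
        return syscall_table[nr](a0, a1, a2);
    }

    int main(void)
    {
        /* Simulate a trap: userspace requested system call number 1. */
        printf("syscall 1 returned %ld\n", syscall_dispatch(1, 0, 0, 0));
        return 0;
    }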

Even though every module servicing these operations is separate from the whole, the code integration is very tight and difficult to do correctly, and, since all the modules run in the same address space, a bug in one of them can bring down the whole system. However, when the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively exploited, making a good monolithic kernel highly efficient. Proponents of the monolithic kernel approach make the case that if code is not correct, it does not belong in a kernel, and if it is, there is little advantage in the microkernel approach.

More modern monolithic kernels such as Linux and the FreeBSD kernel can load executable modules at runtime, allowing easy extension of the kernel's capabilities as required, while helping to keep the amount of code running in kernelspace to a minimum.
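
For example, a minimal Linux loadable module looks roughly like the following sketch. It assumes the kernel build headers and a standard out-of-tree kbuild Makefile are available; once built, it is loaded with insmod and removed with rmmod, at which point its init and exit functions run inside kernelspace:

    /* Minimal Linux loadable module sketch. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: loaded into kernelspace\n");
        return 0;               /* a nonzero value would abort the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");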

Examples of monolithic kernels include the Linux kernel and the FreeBSD kernel.

Microkernels

The microkernel approach consists of defining a very simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as thread management, address spaces and interprocess communication.

The main objective is the separation of basic service implementations from the operating policy of the system. For example, process I/O locking could be implemented by a user-space server running on top of the microkernel. These user servers, which implement the high-level parts of the system, are very modular and simplify the structure and design of the kernel. A server that fails does not bring the entire system down; that module can simply be restarted independently of the rest.
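
The following sketch illustrates the idea of such a user server: an ordinary userspace program that receives requests by message passing and sends replies back, with only the IPC itself crossing into the microkernel. The primitives ipc_receive() and ipc_reply() are stand-ins for whatever the microkernel actually exports (for example Mach ports or L4 IPC) and are stubbed out here so the sketch compiles on its own:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t sender;        /* client thread that sent the request    */
        uint32_t op;            /* requested operation (OPEN, READ, ...)  */
        uint32_t arg;
    } message_t;

    enum { OP_OPEN = 1, OP_READ = 2 };

    /* Stubs standing in for the microkernel's real message-passing calls;
     * a real server would block in ipc_receive() until a client sends it
     * a message. */
    static int ipc_receive(message_t *msg)
    {
        msg->sender = 7; msg->op = OP_READ; msg->arg = 42;
        return 0;
    }

    static int ipc_reply(uint32_t sender, int status)
    {
        printf("reply to client %u: status %d\n", (unsigned)sender, status);
        return 0;
    }

    int main(void)
    {
        message_t msg;
        /* Classic server loop (a single iteration here): receive a request
         * by IPC, handle it entirely in userspace, send the result back. */
        if (ipc_receive(&msg) == 0) {
            int status;
            switch (msg.op) {
            case OP_OPEN: status = 0;  break;
            case OP_READ: status = 0;  break;
            default:      status = -1; break;
            }
            ipc_reply(msg.sender, status);
        }
        return 0;
    }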

Examples of microkernels include Mach, L4 and the QNX kernel.

Monolithic kernels vs. microkernels

Monolithic kernels are often preferred over microkernels because of the lower complexity of dealing with all of the system's control code in a single address space. For example, XNU, the Mac OS X kernel, is based on Mach 3.0 plus BSD code in the same address space in order to cut down on the latency incurred by the traditional microkernel design.

In the early 1990s, monolithic kernels were considered obsolete. The design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous flame war (or what then passed for flaming) between Linus Torvalds and Andrew Tanenbaum; a summary is available online.

There is merit in both sides of the arguments presented in the Tanenbaum/Torvalds debate.

Monolithic kernels tend to be easier to design correctly, and therefore may grow more quickly than a microkernel-based system. There are success stories in both camps. Microkernels are often used in embedded robotic or medical computers because most of the OS components reside in their own private, protected memory space. This is not possible with monolithic kernels, even with modern module-loading ones.

Although Mach is the best-known general-purpose microkernel, several other microkernels have been developed with more specific aims. L3 was created to demonstrate that microkernels are not necessarily slow. L4 is a successor to L3, and a popular implementation called Fiasco is able to run Linux alongside other L4 processes in separate address spaces. There are screenshots available on freshmeat.net showing this feat.

QNX is an operating system that has been around since the early 1980s and has a very minimalistic microkernel design. This system has been far more successful than Mach in achieving the goals of the microkernel paradigm. It is used in situations where software is not allowed to fail, ranging from the robotic arms on the space shuttle to machines that grind glass, where a tiny mistake may cost hundreds of thousands of dollars.

Many believe that, since Mach basically failed to address the full set of issues that microkernels were meant to solve, all microkernel technology is useless. Mach enthusiasts argue that this is a closed-minded attitude which has become popular enough that many people simply accept it as truth.

Hybrid kernels (modified microkernels)

Hybrid kernels are essentially microkernels that have some "non-essential" code in kernelspace so that this code runs more quickly than it would in userspace. This was a compromise struck early on in the adoption of microkernel-based architectures by various operating system developers, before it was shown that pure microkernels could indeed be high performers. Most modern operating systems fall into this category, with Microsoft Windows being the most popular example. XNU, the Mac OS X kernel, is also a modified microkernel, due to the inclusion of BSD kernel code in the Mach-based kernel. DragonFly BSD is the first non-Mach-based BSD OS to adopt a hybrid kernel architecture.

Examples of hybrid kernels include the Microsoft Windows NT-family kernel and XNU, the Mac OS X kernel.

Some people confuse the "hybrid kernel" category with monolithic kernels that can load modules after boot, but this is not correct. "Hybrid" implies that the kernel in question shares architectural concepts or mechanisms with both monolithic and microkernel designs - specifically message passing and migration of "non-essential" code into userspace, while retaining some "non-essential" code in the kernel proper for performance reasons.

Exokernels

Exokernels, also known as vertically structured operating systems, are a relatively new and rather radical approach to OS design.

The idea behind this design is to enable the developer to make all the decisions about how the hardware is used, and hence about performance. Exokernels are extremely small, since they deliberately limit their functionality to the protection and multiplexing of resources.

Classic kernel designs (both monolithic and microkernels) abstract the hardware, hiding resources under a hardware abstraction layer or behind device drivers. In these classic systems, if physical memory is allocated, one cannot be sure of its actual placement, for example.

The goal of an exokernel is to allow an application to request a specific piece of memory, a specific disk block, and so on, and merely to ensure that the requested resource is free and that the application is allowed to access it.

Since an exokernel therefore only provides a very low-level interface to the hardware, lacking any of the higher-level functionalities of other operating systems, it is augmented by a "library operating system". Such a library OS interfaces to the exokernel below, and provides application writers with the familiar functionalities of a complete OS.
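
The sketch below gives a flavour of the kind of interface an exokernel might export. The function names (exo_alloc_phys_page, exo_alloc_disk_block) are hypothetical and are not taken from any real exokernel; each simply stands for "grant this exact physical resource if it is free and the caller is permitted to use it". A library OS would build familiar abstractions such as heaps and files on top of such grants:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical exokernel calls: grant one specific physical resource if
     * it is free and the caller is permitted to use it. A real exokernel
     * would check a resource table and the caller's capabilities here. */
    static int exo_alloc_phys_page(uint64_t phys_addr)
    {
        printf("granted physical page at 0x%llx\n", (unsigned long long)phys_addr);
        return 0;
    }

    static int exo_alloc_disk_block(uint64_t block_no)
    {
        printf("granted disk block %llu\n", (unsigned long long)block_no);
        return 0;
    }

    int main(void)
    {
        /* A library OS would issue requests like these and then build its
         * own heaps, page tables, and file systems on top of the grants. */
        if (exo_alloc_phys_page(0x200000) == 0 && exo_alloc_disk_block(4096) == 0)
            printf("resources secured\n");
        return 0;
    }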

One theoretical implication of an exokernel system is that it becomes possible to have several kinds of operating systems (Windows, Unix) running under a single exokernel, and that a developer may choose to override or increase functionality for performance reasons.

Currently, exokernel design is still very much a research effort and is not used in any major commercial operating system. One concept operating system is Nemesis, written by the University of Cambridge, the University of Glasgow, Citrix Systems, and the Swedish Institute of Computer Science. MIT has also built several exokernel-based systems.

See also

External links