Windows NT vs Unix As An Operating System
In the late 1960s a combined project between researchers at MIT, Bell Labs and
General Electric led to the design of a third-generation computer operating
system known as MULTICS (MULTiplexed Information and Computing Service). It was
envisaged as a computer utility: one huge machine that would support hundreds of
simultaneous timesharing users and provide computing power for everyone in
Boston. The idea that machines as powerful as
their GE-645 would be sold as personal computers costing a few thousand dollars
only 20 years later would have seemed like science fiction to them.
However, MULTICS proved far more difficult to implement than imagined, and Bell
Labs withdrew from the project in 1969. General Electric followed, dropping out
of the computer business altogether.
One of the Bell Labs researchers, Ken Thompson, then decided to write a stripped
down, single user version of MULTICS, initially as a hobby. He used a PDP-7
minicomputer that no one was using and wrote the code in assembly language.
Thompson actually got the system to work, and one of his colleagues jokingly
called it UNICS (UNiplexed Information and Computing Service). The name stuck,
but the spelling was later changed to UNIX. Soon Thompson was joined on the
project by Dennis Ritchie and later by his entire department.
UNIX was moved from the now obsolete PDP-7 to the much more modern PDP-11/20 and
then later to the PDP-11/45 and PDP-11/70. These two latter computers had large
memories as well as memory protection hardware, making it possible to support
multiple users at the same time. Thompson then decided to rewrite UNIX in a
high-level language called B. Unfortunately this attempt was not successful, and
Ritchie designed a successor to B called C. Together, Thompson and Ritchie
rewrote UNIX in C, and C has dominated system programming ever since. In 1974,
Thompson and Ritchie published a paper about UNIX, and this publication
stimulated many universities to ask Bell Labs for a copy. As it happened, the
PDP-11 was the computer of choice at nearly all university computer science
departments, and the operating systems that came with it were widely regarded
as dreadful, so UNIX quickly came to replace them. The
version that first became the standard in universities was Version 6 and within
a few years this was replaced by Version 7. By the mid 1980s, UNIX was in
widespread use on minicomputers and engineering workstations from a variety of
vendors.
In 1984, AT&T released the first commercial version of UNIX, System III, based
on Version 7. Over a number of years this was improved and upgraded to System V.
Meanwhile the University of California at Berkeley modified the original Version
6 substantially. They called their version 1BSD (First Berkeley Software
Distribution). This was modified over time to 4BSD and improvements were made
such as the use of paging, file names longer than 14 characters and a new
networking protocol, TCP/IP. Some computer vendors like DEC and Sun Microsystems
based their version of UNIX on Berkeley’s rather than AT&T’s. There were a few
attempts to standardise UNIX in the late 1980s, but only the POSIX committee had
any real success, and even that was limited.
During the 1980s, most computing environments became much more heterogeneous,
and customers began to ask for greater application portability and
interoperability from systems and software vendors. Many customers turned to
UNIX to help address those concerns and systems vendors gradually began to offer
commercial UNIX-based systems. UNIX was a portable operating system whose source
could easily be licensed, and it had already established a reputation and a
small but loyal customer base among R&D organisations and universities. Most
vendors licensed source bases from either the University of California at
Berkeley or AT&T (two completely different source bases). Licensees extensively
modified these source bases and tightly coupled them to their own system
architectures to produce as many as 100 proprietary UNIX variants. Most of these
systems were (and still are) neither source nor binary compatible with one
another, and most are hardware specific.
With the emergence of RISC technology and the breakup of AT&T, the UNIX systems
category began to grow significantly during the 1980s. The term “open systems”
was coined. Customers began demanding better portability and interoperability
between the many incompatible UNIX variants. Over the years, a variety of
coalitions (e.g. UNIX International) were formed to try to gain control over and
consolidate the UNIX systems category, but their success was always limited.
Gradually, the industry turned to standards as a way of achieving the
portability and interoperability benefits that customers wanted. However, UNIX
standards and standards organisations proliferated (just as vendor coalitions
had), resulting in more confusion and aggravation for UNIX customers.
The UNIX systems category is primarily an application-driven systems category,
not an operating systems category. Customers choose an application first (for
example, a high-end CAD package), then find out which systems it runs on,
and select one. The final selection involves a variety of criteria, such as
price/performance, service, and support. Customers generally don’t choose UNIX
itself, or which UNIX variant they want. UNIX just comes with the package when
they buy a system to run their chosen applications.
The UNIX category can be divided into technical and business markets: 87% of
technical UNIX systems purchased are RISC workstations bought to run specific
technical applications; 74% of business UNIX systems sold are
multiuser/server/midrange systems, primarily for running line-of-business or
vertical market applications.
The UNIX systems category is extremely fragmented. Only two vendors have more
than a 10% share of UNIX variant license shipments (Sun and SCO); 12 of the
top 15 vendors have shares of 5% or less (based on actual 1991 unit shipments,
source: IDC). This fragmentation reflects the fact that most customers who end
up buying UNIX are not actually choosing UNIX itself, so most UNIX variants have
small and not very committed customer bases.
Operating System Architecture
Windows NT was designed with the goal of maintaining compatibility with
applications written for MS-DOS, Windows for MS-DOS, OS/2, and POSIX. This was
an ambitious goal, because it meant that Windows NT would have to provide the
applications with the application programming interfaces (APIs) and the execution
environments that their native operating systems would normally provide. The
Windows NT developers accomplished their compatibility goal by implementing a
suite of operating system environment emulators, called environment subsystems.
The emulators form an intermediate layer between user applications and the
underlying NT operating system core.
User applications and environment subsystems work together in a client/server
relationship. Each environment subsystem acts as a server that supports the
application programming interfaces of a different operating system. Each user
application acts as the client of an environment subsystem because it uses the
application programming interface provided by the subsystem. Client applications
and environment subsystem servers communicate with each other using a message-
based protocol.
At the core of the Windows NT operating system is a collection of operating
system components called the NT Executive. The executive’s components work
together to form a highly sophisticated, general purpose operating system. They
provide mechanisms for:
Interprocess communication.
Pre-emptive multitasking.
Symmetric multiprocessing.
Virtual memory management.
Device Input/Output.
Security.
Each component of the executive provides a set of functions, commonly referred
to as native services or executive services. Collectively, these services form
the application programming interface (API) of the NT executive.
Environment subsystems are applications that call NT executive services. Each
one emulates a different operating system environment. For example, the OS/2
environment subsystem supports all of the application programming interface
functions used by OS/2 character mode applications. It provides these
applications with an execution environment that looks and acts like a native
OS/2 system. Internally, environment subsystems call NT executive services to do
most of their work. The NT executive services provide general-purpose mechanisms
for doing most operating system tasks. However, the subsystems must implement
any features that are unique to their own operating system environments.
User applications, like environment subsystems, are run on the NT Executive.
Unlike environment subsystems, user applications do not directly call executive
services. Instead, they call application programming interfaces provided by the
environment subsystems. The subsystems then call executive services as needed to
implement their application programming interface functions.
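To make this layering concrete, the following is a minimal sketch of an ordinary
Win32 client program. It calls only documented Win32 APIs (CreateFileA,
WriteFile, CloseHandle); behind those calls, the Win32 subsystem and the NT
executive's object, I/O and security services do the actual work. The file name
and message are arbitrary, chosen only for illustration.

    /* Illustrative Win32 client: the application calls the subsystem's API,
     * never the NT executive directly.  The Win32 subsystem implements these
     * calls on top of executive services (object, I/O, security management). */
    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileA("example.txt",        /* arbitrary file name */
                                  GENERIC_WRITE,        /* desired access      */
                                  0,                    /* no sharing          */
                                  NULL,                 /* default security    */
                                  CREATE_ALWAYS,        /* create or overwrite */
                                  FILE_ATTRIBUTE_NORMAL,
                                  NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        DWORD written;
        WriteFile(file, "hello, subsystem\r\n", 18, &written, NULL);
        CloseHandle(file);        /* the executive tears down the file object */
        return 0;
    }

An MS-DOS, OS/2 or POSIX application running on Windows NT follows the same
pattern, but against the API of its own environment subsystem instead.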
Windows NT presents users with an interface that looks like that of Windows 3.1.
This user interface is provided by Windows NT’s 32-bit Windows subsystem (Win32).
The Win32 subsystem has exclusive responsibility for displaying output on the
system’s monitor and managing user input. Architecturally, this means that the
other environment subsystems must call Win32 subsystem functions to produce
output on the display. It also means that the Win32 subsystem must pass user
input actions to the other environment subsystems when the user interacts with
their windows.
Windows NT does not maintain compatibility with device drivers written for MS-
DOS or Windows for MS-DOS. Instead, it adopts a new layered device-driver
architecture that provides many advantages in terms of flexibility,
maintainability, and portability. Windows NT’s device driver architecture
requires that new drivers be written before Windows NT can be compatible with
existing hardware. While writing new drivers involves a lot of development
effort on the part of Microsoft and independent hardware vendors (IHVs), most of
the hardware devices supported by Windows for MS-DOS will be supported by new
drivers shipped with the final Windows NT product.
The device driver architecture is modular in design. It allows big (monolithic)
device drivers to be broken up into layers of smaller independent device drivers.
A driver that provides common functionality need only be written once. Drivers
in adjacent layers can then simply call the common device driver to get their
work done (see the sketch below). Adding support for new devices is easier under
Windows NT than under most operating systems because only the hardware-specific
drivers need to be written.
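As a hedged illustration of that layering, the fragment below sketches the
pass-through dispatch routine a small layered driver might use to hand an I/O
request packet (IRP) to the driver beneath it. The FILTER_EXTENSION structure,
the routine name and the LowerDeviceObject field are invented for illustration;
IoSkipCurrentIrpStackLocation() and IoCallDriver() are the kernel-mode calls
used to forward a request, and building such a driver requires the Windows NT
DDK rather than the ordinary Win32 SDK.

    /* Hypothetical pass-through dispatch routine for a layered NT driver.
     * Rather than duplicating common functionality, the driver forwards the
     * I/O request packet (IRP) to the next lower driver in the stack. */
    #include <ntddk.h>

    typedef struct _FILTER_EXTENSION {
        PDEVICE_OBJECT LowerDeviceObject;  /* driver layered beneath this one */
    } FILTER_EXTENSION, *PFILTER_EXTENSION;

    NTSTATUS FilterDispatchPassThrough(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PFILTER_EXTENSION ext = (PFILTER_EXTENSION)DeviceObject->DeviceExtension;

        /* Leave the request untouched and let the lower driver handle it. */
        IoSkipCurrentIrpStackLocation(Irp);
        return IoCallDriver(ext->LowerDeviceObject, Irp);
    }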
Windows NT’s new device driver architecture provides a structure on top of which
compatibility with existing installable file systems (for example, FAT and HPFS)
and existing networks (for example, Novell and Banyan Vines) was relatively easy
to achieve. File systems and network redirectors are implemented as layered
drivers that plug easily into the new Windows NT device driver architecture.
In any Windows NT multiprocessor platform, the following conditions must hold:
All CPUs are identical, and either all have identical coprocessors or none has a
coprocessor.
All CPUs share memory and have uniform access to memory.
In a symmetric platform, every CPU can access memory, take an interrupt, and
access I/O control registers. In an asymmetric platform, one CPU takes all
interrupts for a set of slave CPUs.
Windows NT is designed to run unchanged on uniprocessor and symmetric
multiprocessor platforms.
A UNIX system can be regarded as hierarchical in nature. At the highest level is
the physical hardware, consisting of the CPU or CPUs, memory and disk storage,
terminals and other devices.
On the next layer is the UNIX operating system itself. The function of the
operating system is to control the hardware and to provide an interface of
system calls that other software can use to access the machine's resources
without needing complete knowledge of what the machine contains. These system
calls allow user programs to create and manage processes, files and other
resources. Programs make system calls by loading arguments into registers and
then issuing a trap instruction to switch from user mode to kernel mode and
enter the UNIX kernel. Since there is no way to issue a trap instruction
directly from C, a standard library is provided on top of the operating system,
with one procedure per system call.
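For example, the short fragment below (a minimal sketch assuming a POSIX-style
C library) uses the library procedure write(). The procedure is little more
than a wrapper: it places the file descriptor, buffer address and length where
the kernel expects them and then issues the trap that switches the processor
into kernel mode.

    /* Minimal sketch: invoking a UNIX system call through its C library
     * wrapper.  write() loads the arguments and issues the trap into the
     * kernel. */
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *msg = "hello from user mode\n";

        /* File descriptor 1 is standard output.  The wrapper returns the
         * number of bytes written, or -1 if the kernel reports an error. */
        if (write(1, msg, strlen(msg)) < 0)
            return 1;
        return 0;
    }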
The next layer consists of the standard utility programs, such as the shell,
editors, compilers, etc., and it is these programs that a user at a terminal
invokes. They use the operating system to access the hardware to perform their
functions and generally are able to run on different hardware configurations
without specific knowledge of them.
There are two main parts to the UNIX kernel which are more or less
distinguishable. At the lowest level is the machine dependent kernel. This is
the code that consists of the interrupt handlers, the low-level I/O system
(device drivers) and some of the memory management software. As with most of the
UNIX operating system, it is mostly written in C, but since it interacts directly
with machine and processor specific hardware, it has to be rewritten from
scratch whenever UNIX is ported to a new machine. This part of the kernel uses
the lowest level machine instructions for the processor, which is why it must be
changed for each different processor.
In contrast, the machine independent kernel runs the same on all machine types
because it is not closely tied to any specific piece of hardware. The machine
independent code includes system call handling, process management, scheduling,
pipes, signals, memory paging and swapping, the file system and the higher level
part of the I/O system. The machine independent part of the kernel is by far the
larger of the two sections, which is why UNIX can be ported to new hardware with
relative ease.
UNIX does not use the DOS and Windows idea of independently loaded device
drivers for each additional hardware item in the machine that is not under BIOS
control. Instead, the kernel must be updated with the new information and
recompiled whenever hardware is added or removed. This is the equivalent of
adding a device driver to a configuration file in DOS or Windows and then
rebooting the machine, although it is a longer process to undertake.
Memory Management
Windows NT provides a flat 32-bit address space, half of which is reserved for
the OS, and half available to the process. This provides a separate 2 gigabytes
of demand-paged virtual memory per process. This memory is accessible to the
software developer through the usual malloc() and free() memory allocation and
deallocation routines, as well as some advanced Windows NT-specific mechanisms.
For a programmer desiring finer control over memory, Windows NT also provides
the Virtual and Heap memory management APIs.
The advantage of using the virtual memory programming interface (VirtualAlloc(),
VirtualLock(), VirtualQuery(), etc.) is that the developer has much more control
over when backing store (memory committed in the paging (swap) file to handle
physical memory overcommitment) is explicitly set aside and removed from the
available pool of free blocks. With malloc(), every call is assumed to require
memory that is available for use as soon as the function returns. With
VirtualAlloc() and related functions, the memory can be reserved but not
committed until the page on which an access occurs is touched. By allowing the
application to control the commitment policy through access, fewer system
resources are used. The trade-off is that the application must also be able to
handle the condition (presumably with structured exception handling) of an
actual memory access forcing commitment.
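The following is a minimal sketch of that reserve-then-commit pattern, assuming
the documented Win32 calls VirtualAlloc(), VirtualFree() and GetSystemInfo();
the region size and the way the exception is reported are arbitrary choices for
illustration.

    /* Sketch of the reserve/commit pattern.  A large region is reserved
     * without charging backing store; pages are committed only when they are
     * about to be used, and structured exception handling catches a touch of
     * a page that was never committed. */
    #include <windows.h>
    #include <stdio.h>

    #define REGION_SIZE (1024 * 1024)       /* 1 MB, chosen arbitrarily */

    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);                 /* gives the system page size */

        /* Reserve address space only: no backing store is committed yet. */
        char *region = (char *)VirtualAlloc(NULL, REGION_SIZE,
                                            MEM_RESERVE, PAGE_NOACCESS);
        if (region == NULL)
            return 1;

        /* Commit the first page just before it is needed, then use it. */
        VirtualAlloc(region, si.dwPageSize, MEM_COMMIT, PAGE_READWRITE);
        region[0] = 'A';

        /* Touching a page that is reserved but not committed raises an access
         * violation, which the application must be prepared to handle. */
        __try {
            region[REGION_SIZE - 1] = 'Z';
        }
        __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                      ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
            printf("page not committed yet; commit it before touching it\n");
        }

        VirtualFree(region, 0, MEM_RELEASE);   /* release the reservation */
        return 0;
    }

A real application would typically commit the faulting page inside the exception
filter and resume the access, which is the commitment-on-access policy described
above.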
Heap APIs are provided to make life easier for applications with memory-using
stack discipline. Multiple heaps can be initialised, each growing/shrinking with