SUPPLEMENT ON INFORMATION AND COMMUNICATION TECHNOLOGY

Chapter 2

Operating Systems Evolution

History of Operating Systems

Earliest Computers

The first computers were analog and digital computers made with intricate gear systems by the Greeks. These computers turned out to be too delicate for the technological capabilities of the time and were abandoned as impractical.

The first practical computers were made by the Inca using ropes and pulleys. Knots in the ropes served the purpose of binary digits. The Inca had several of these computers and used them for tax and government records. In addition to keeping track of taxes, the Inca computers held data bases on all of the resources of the Inca empire, allowing for efficient allocation of resources in response to local disasters (storms, drought, earthquakes, etc.). Spanish soldiers acting on orders of Roman Catholic priests destroyed all but one of the Inca computers in the mistaken belief that any device that could give accurate information about distant conditions must be a divination device powered by the Christian “Devil” (and many modern Luddites continue to view computers as Satanically possessed devices).

In the 1800s, the first programmable devices were built to control the weaving machines in the factories of the Industrial Revolution. Joseph Marie Jacquard’s looms used punched cards as data storage (the cards contained the control codes for the various patterns), and Charles Babbage later adopted punched cards for his proposed computing engines. The first computer programmer was Lady Ada Lovelace, for whom the Ada programming language is named.

In the 1900s, researchers started experimenting with both analog and digital computers using vacuum tubes. Some of the most successful early computers were analog computers, capable of solving advanced calculus problems rather quickly. But the real future of computing was digital rather than analog. Building on the technology and mathematics used for telephone and telegraph switching networks, researchers started building the first electronic digital computers.

Bare Hardware

In the earliest days of electronic digital computing, everything was done on the bare hardware. Very few computers existed and those that did exist were experimental in nature. The researchers who were making the first computers were also the programmers and the users. They worked directly on the “bare hardware”. There was no operating system. The experimenters wrote their programs in assembly language and a running program had complete control of the entire computer. Debugging consisted of a combination of fixing both the software and hardware, rewriting the object code and changing the actual computer itself.

The lack of any operating system meant that only one person could use a computer at a time. Even in the research lab, there were many researchers competing for limited computing time. The first solution was a reservation system, with researchers signing up for specific time slots.

The high cost of early computers meant that it was essential that these rare machines be used as efficiently as possible. The reservation system was not particularly efficient. If a researcher finished work early, the computer sat idle until the next time slot. If the researcher’s time ran out, the researcher might have to pack up his or her work in an incomplete state at an awkward moment to make room for the next researcher. Even when things were going well, much of the time the computer actually sat idle while the researcher studied the results (or studied the memory of a crashed program to figure out what went wrong).

Computer Operators

The solution to this problem was to have programmers prepare their work off-line on some input medium (often punched cards, paper tape, or magnetic tape) and then hand the work to a computer operator. The computer operator would load up jobs in the order received (with priority overrides based on politics and other factors). Each job still ran one at a time with complete control of the computer, but as soon as a job finished, the operator would transfer the results to some output medium (punched cards, paper tape, magnetic tape, or printed paper) and deliver the results to the appropriate programmer. If the program ran to completion, the result would be output data. If the program crashed, memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as “core dumps”).

Device Drivers And Library Functions

Soon after the first successes with digital computer experiments, computers moved out of the lab and into practical use. The first practical application of these experimental digital computers was the generation of artillery tables for the British and American armies. Much of the early research in computers was paid for by the British and American militaries. Business and scientific applications followed.

As computer use increased, programmers noticed that they were duplicating the same efforts. Every programmer was writing his or her own routines for I/O, such as reading input from a magnetic tape or writing output to a line printer. It made sense to write a common device driver for each input or output device and then have every programmer share the same device drivers rather than each programmer writing his or her own. Some programmers resisted the use of common device drivers in the belief that they could write “more efficient”, faster, or “better” device drivers of their own.

Additionally each programmer was writing his or her own routines for fairly common and repeated functionality, such as mathematics or string functions. Again, it made sense to share the work instead of everyone repeatedly “reinventing the wheel”. These shared functions would be organized into libraries and could be inserted into programs as needed. In the spirit of cooperation among early researchers, these library functions were published and distributed for free, an early example of the power of the open source approach to software development.
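The same idea survives in every modern operating system: common routines live in a shared library that every program links against instead of rewriting them. Here is a minimal sketch in C; the routine and the test string are purely illustrative, not drawn from any historical library.

```c
#include <stdio.h>
#include <string.h>

/* A routine that would live in a shared library rather than being
 * rewritten by every programmer: reverse a string in place. */
void str_reverse(char *s)
{
    size_t left = 0;
    size_t right = strlen(s);
    while (right > left + 1) {
        --right;
        char tmp = s[left];
        s[left] = s[right];
        s[right] = tmp;
        ++left;
    }
}

int main(void)
{
    char text[] = "shared code";
    str_reverse(text);   /* every program reuses the same tested routine */
    printf("%s\n", text);
    return 0;
}
```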

UNIX Takes Over Mainframes

UNIX was originally developed in a laboratory at AT&T’s Bell Labs (now an independent corporation known as Lucent Technologies). At the time, AT&T was prohibited from selling computers or software, but was allowed to develop its own software and computers for internal use. A few newly hired engineers were unable to get valuable mainframe computer time because of lack of seniority and resorted to writing their own operating system (UNIX) and programming language (C) to run on an unused mainframe computer still in the original box (the manufacturer had gone out of business before shipping an operating system).

AT&T’s consent decree with the U.S. Justice Department on monopoly charges was interpreted as allowing AT&T to release UNIX as an open source operating system for academic use. Ken Thompson, one of the originators of UNIX, took UNIX to the University of California, Berkeley, where students quickly started making improvements and modifications, leading to the world famous Berkeley Software Distribution (BSD) form of UNIX.

UNIX quickly spread throughout the academic world, as it solved the problem of keeping track of many (sometimes dozens of) proprietary operating systems on university computers. With UNIX, all of the computers from many different manufacturers could run the same operating system and share the same programs (recompiled on each processor).

When AT&T settled yet another monopoly case, the company was broken up into “Baby Bells” (the regional companies operating local phone service) and the central company (which had the long distance business and Bell Labs). AT&T (as well as the Baby Bells) was allowed to enter the computer business. AT&T gave academia a specific deadline to stop using “encumbered code” (that is, any of AT&T’s source code anywhere in their versions of UNIX). This led to the development of free open source projects such as FreeBSD, NetBSD, and OpenBSD, as well as commercial operating systems based on the BSD code.

Meanwhile, AT&T developed its own version of UNIX, called System V. Although AT&T eventually sold off UNIX, this also spawned a group of commercial operating systems known as Sys V UNIXes. UNIX quickly swept through the commercial world, pushing aside almost all proprietary mainframe operating systems. Only IBM’s MVS and DEC’s OpenVMS survived the UNIX onslaught.

Vendors such as Sun, IBM, DEC, SCO, and HP modified Unix to differentiate their products. This splintered Unix to a degree, though not quite as much as is usually perceived. Necessity being the mother of invention, programmers have created development tools that help them work around the differences between Unix flavors. As a result, there is a large body of software based on source code that will automatically configure itself to compile on most Unix platforms, including Intel-based Unix. Regardless, Microsoft would leverage the perception that Unix is splintered beyond hope, and present Windows NT as a more consistent multi-platform alternative.

UNIX To The Desktop

Among the early commercial attempts to deploy UNIX on desktop computers was AT&T selling UNIX in an Olivetti box running on 680x0 hardware. Microsoft partnered with SCO to sell Xenix, its own version of UNIX. Apple offered its A/UX version of UNIX running on Macintoshes. None of these early commercial UNIXs was successful. “Unix started out too big and unfriendly for the PC. … It sold like ice cubes in the Arctic. … Wintel emerged as the only ‘safe’ business choice”.

Unix had a limited PC market, almost entirely server-centric. SCO made money on Unix, some of it even from Microsoft. (Microsoft owns 11 percent of SCO, but Microsoft got the better deal in the long run, as it collected money on each unit of SCO Unix sold, due to a bit of code in SCO Unix that made SCO somewhat compatible with Xenix. The arrangement ended in 1997.)

 

Why Operating Systems are Needed?

System Software

System software directs the computer in performing tasks that are basic to proper functioning of the system or commonly needed by system users. System software serves as the “middleman” between the computer hardware and application software. The operating system is the set of software routines that sits between the application program and the hardware. All systems have system software, some more than others and some with more capabilities than others.

Three categories of system programs:

  1. System operation - software that manages the resources of a computer system.

  2. System utilization - software that manages or assists users in managing the system operation.

  3. System implementation - software that assists users in preparing programs for execution.

An operating system is an integrated set of systems programs whose major functions are to (a short sketch after this list shows a program relying on these services):

  • Manage resources (CPU, disk, tape, printer, memory, etc.)

  • Schedule resources

  • Control I/O

  • Handle error recovery

  • Manage memory

  • Manage processor

  • Schedule tasks and jobs

  • Provide security

  • Supply user commands
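To make the list concrete, here is a minimal POSIX C sketch of a program that simply asks the operating system to control I/O, recover from errors, and create and schedule a second process, rather than touching the hardware itself. It assumes a Unix-like system; the file name and the ls command are only illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    /* I/O control and resource management: the OS locates the device,
     * drives it through its driver, and hands back a file descriptor. */
    int fd = open("report.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {                    /* error recovery: report, don't crash */
        perror("open");
        return EXIT_FAILURE;
    }
    const char msg[] = "written through the operating system\n";
    if (write(fd, msg, sizeof msg - 1) < 0)
        perror("write");
    close(fd);

    /* Processor management and scheduling: ask the OS for a second
     * process and let its scheduler decide when each one runs. */
    pid_t child = fork();
    if (child < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (child == 0) {
        execlp("ls", "ls", "-l", "report.txt", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }
    waitpid(child, NULL, 0);         /* wait for the child the OS scheduled */
    return 0;
}
```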

At the hardware level

  • Computers from different manufacturers are incompatible.

  • They communicate with peripherals differently.

  • They handle interrupts differently.

  • A program written for one computer will probably not run on another computer.

  • If both computers support the same operating system, however, the same program can be run on both.

  • The communication with the hardware will still be different on each machine.

  • The operating system’s interface with the application program represents a consistent platform (see the sketch below).
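As one plausible illustration of that consistent platform, the short C sketch below uses only calls that the operating system and its standard library promise to provide. Nothing in it names a particular processor, interrupt controller, or peripheral, so the same source can be recompiled and run on any machine supporting the same interface (the file name is hypothetical).

```c
#include <stdio.h>

int main(void)
{
    /* The program talks only to the interface the operating system and
     * its C library provide; no manufacturer-specific hardware appears. */
    FILE *out = fopen("portable.txt", "w");
    if (out == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(out, "same source, any machine with the same OS interface\n");
    fclose(out);
    return 0;
}
```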

A Computer Program

A typical computer program is built from several layers (a skeletal sketch follows this list):

  • The application logic

  • The user interface (screens, dialogues, etc.)

  • The operating system interface (read, write, and other I/O operations, plus calls supplied by the programming language).

  • The database interface (the logic to access the database management system).

  • The network interface (the logic to access the data communication software).
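Here is a skeletal sketch of how those layers might be separated in code; all function names and the stubbed-out behavior are hypothetical placeholders, not a real API.

```c
#include <stdio.h>

/* User interface layer: screens, dialogues, prompts. */
static void show_prompt(void)
{
    printf("Enter a customer id: ");
}

/* Operating system interface layer: read, write, and other I/O requests
 * are handed to the OS rather than to the hardware. */
static int read_line(char *buf, int size)
{
    return fgets(buf, size, stdin) != NULL;
}

/* Database interface layer: in a real program this would call the
 * database management system; here it is only a stub. */
static const char *look_up_customer(const char *id)
{
    (void)id;
    return "example record";
}

/* A network interface layer would sit alongside these (omitted here). */

/* Application logic layer: ties the other layers together. */
int main(void)
{
    char id[64];
    show_prompt();
    if (read_line(id, sizeof id))
        printf("Found: %s\n", look_up_customer(id));
    return 0;
}
```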

Kinds of Operating Systems

Functionality

Operating systems can be grouped according to functionality: operating systems for supercomputing, render farms, mainframes, servers, workstations, desktops, handheld devices, real time systems, or embedded systems.

  • Supercomputing is primarily scientific computing, usually modeling real systems in nature. Render farms are collections of computers that work together to render animations and special effects. Work that previously required supercomputers can be done with the equivalent of a render farm.

  • Mainframes used to be the primary form of computer. Mainframes are large centralized computers. At one time, they provided the bulk of business computing through time-sharing. Mainframes and mainframe replacements (powerful computers or clusters of computers) are still useful for some large scale tasks, such as centralized billing systems, inventory systems, database operations, etc. When mainframes were in widespread use, there was also a class of computers known as minicomputers which were smaller, less expensive versions of mainframes for businesses that couldn’t afford true mainframes.

  • Servers are computers or groups of computers used for internet serving, intranet serving, print serving, file serving, and/or application serving. Servers are also sometimes used as mainframe replacements.

  • Desktop operating systems are used for personal computers.

  • Workstations are more powerful versions of personal computers. Like a desktop machine, a workstation is usually used by only one person, and it often runs a more capable version of a desktop operating system, but it runs on more powerful hardware and often carries software associated with larger computer systems.

  • Handheld operating systems are much smaller and less capable than desktop operating systems, so that they can fit into the limited memory of handheld devices.

  • Real time operating systems (RTOS) are specifically designed to respond to events that happen in real time. This can include computer systems that run factory floors, computer systems for emergency room or intensive care unit equipment (or even the entire ICU), computer systems for air traffic control, or embedded systems. RTOSs are grouped according to the response time that is acceptable (seconds, milliseconds, microseconds) and according to whether or not they involve systems where failure can result in loss of life.

  • Embedded systems are combinations of processors and special software that are inside of another device, such as the electronic ignition system on cars.

Proprietary vs. UNIX

In the early days of computing, each manufacturer created their own custom operating system(s). There was competition in features of both the operating system and the underlying hardware.

After AT&T was forced to abandon commercial computing as part of an antitrust settlement, AT&T’s UNIX was made available for free to the academic community. Because UNIX had been designed in a way that made it easy to “port” (move) to new hardware, colleges and universities that switched to UNIX were able to run a single operating system on all of their computers, even if their computers came from multiple manufacturers.

Eventually UNIX spread into the business community, and pushed aside almost all proprietary mainframe and minicomputer operating systems. Only IBM’s MVS and DEC’s OpenVMS survived in common use (MVS because of the sheer number of installations using it and OpenVMS in the banking and financial community because of its high reliability, security, and preservation of data). Even IBM and DEC ended up offering their own versions of UNIX as well as their proprietary operating systems.

In a reintroduction of the “Tower of Babel”, manufacturers once again competed in features, offering platform-specific enhancements to their versions of UNIX. MIS managers were faced with the choice of using these custom features and being locked into a specific manufacturer’s version of UNIX or foregoing the advanced features and limiting themselves to generic UNIX facilities.

With the introduction of microprocessors and personal computers, once again manufacturers each produced their own custom proprietary operating systems for their hardware, often changing operating systems with each new generation of hardware. Commodore and Apple introduced semi-graphical operating systems for the Commodore PET and C64 and the Apple ][. Digital Research introduced CP/M, a simple business-oriented operating system that ran on multiple manufacturers’ computers.

Moving beyond the early hobbyist days, Commodore (Amiga), Atari (GEM), and Apple (Lisa and Macintosh) introduced fully graphic user interfaces. Microsoft introduced a bad copy of CP/M, known as MS-DOS or PC-DOS, and then later introduced a bad copy of the Macintosh known as Windows.

The strong point of these desktop operating systems was the graphic user interface, which opened up the computer to the masses by eliminating the text command line and no longer demanding that computer users be mathematically competent. While the Amiga and Atari’s GEM had very solid underpinnings, the Macintosh and Windows have always had weak underpinnings, which typically manifest as system crashes and various mysterious events. The Amiga slowly dwindled in popularity due to gross mismanagement by Commodore executives, while Atari’s GEM was a victim of Atari’s financial troubles.

Microsoft has repeatedly tried to fix the underpinnings of Windows, with Windows 95, Windows 98, Windows NT, and Windows 2000, but never with success. Apple also tried to fix the underpinnings of the Macintosh, first with Copland (never released, although parts of it appeared in Mac OS 8), and now with Mac OS X. With Mac OS X, Apple took an already working workstation UNIX (NeXT) and has been attempting to place the Macintosh user interface on top. So far it looks as if Apple will provide a high quality UNIX, but at the sacrifice of basic user interface capability, which may make Mac OS X too difficult for the non-engineer to use. With OS/2, IBM succeeded in creating a personal computer operating system that had both a sophisticated graphic user interface and high quality underpinnings, but Microsoft used what were later declared illegal tactics to prevent OS/2 from becoming popular.

For almost as long as there have been microprocessors, there have been variations of UNIX available for them (Apple even provided its own version of UNIX for the Macintosh hardware), including the BSD projects (FreeBSD, NetBSD, OpenBSD). With LINUX, a UNIX-like operating system took off in popularity.

LINUX started as an alternative operating system to Windows, coordinated by Linus Torvalds, at the time an engineering student. With the cooperation of literally tens of thousands of volunteer programmers, Linux grew into a powerful server and workstation operating system. Two groups (KDE and GNOME) are in the process of building modern graphic user interfaces for Linux. Already, their work has progressed to the point that after some initial set-up hassles, many non-technical people can use Linux. It is reasonable to expect that soon Linux will match or surpass the graphic user interface sophistication of Windows. And because of the way that KDE and GNOME are being written (as open source projects using standard UNIX interfaces), both graphic shells can be (and already are being) used on just about any UNIX system, including the free BSDs. Once again, UNIX sweeps aside most proprietary operating systems.

Free UNIXs

There are four major free, open source UNIX projects: LINUX, FreeBSD, NetBSD, and OpenBSD. The three different BSD projects started because the original design and programming teams had personality conflicts and couldn’t all work together. LINUX was started by a college student who didn’t know of the existence of the BSD projects.

The basic difference between the BSD projects and LINUX is that each BSD project has a tightly controlled design, while LINUX is very free-form. In most cases, the four operating systems are interchangeable. The three BSDs share a great deal of source code with each other. Most software written for one of the four operating systems will run on the other three with little or no modification. One of the three BSDs is the operating system of choice where reliability is critical (because of the tightly controlled design). LINUX is the operating system of choice for hobbyists who want to experiment with and tweak their personal copy.

Where computers are headed

After two decades of supplying boring beige boxes, PC makers have begun to add a bit of color and style to their lines, following the runaway success of Apple’s iMac line, a candy-colored machine designed for consumers that was not simply a repackaged business box. Industrial design isn’t the only selling point.

A fundamental shift in computing has occurred. For business users and consumers alike, what matters is being connected to the Web, not the raw processing power of the desktop computer. The most intriguing new technologies aren’t spreadsheets or word-processing programs, or the latest updates to Windows. Digital photography, digital music, desktop video editing, and high speed internet access are where the action is. A top-flight desktop computer or notebook is nice to have, but what makes that technology really rock is all the gear that goes with it.

Computer manufacturers have altered their product lines in recognition of that trend. Apple’s top-end consumer model, the iMac DV Special Edition, comes with a stellar sound system, high-speed FireWire ports for transferring video, and the company’s iMovie software for editing movies. Sony has a similar strategy with VAIO desktop models configured for video editing that sport a huge hard drive, high-speed i.LINK [FireWire] ports, and dual CD/DVD drives. The most expensive notebook models now rival desktop machines for speed and versatility. Except for Apple’s eye-catching iBook, however, most notebooks are designed for business users.