A mainframe computer is defined by Webster's dictionary as a computer with its cabinet and internal circuits; also: a large fast computer that can handle multiple tasks concurrently.1 The second definition is probably the more accurate, because in the 1940s there were only a half-dozen computers, designed in clumsy ways, built from expensive vacuum-tube components, consuming vast amounts of power, and taking up large amounts of floor space. No one in those days thought that more than a few dozen such machines would ever be required in the world. This was the era of the mainframe: a large, expensive, central processing unit designed to process one job at a time. The computers of the mainframe era cost a million dollars or more each. Because they were so expensive, they were kept in a central location: one such computer would serve the needs of an entire government agency, and a large corporation would have a single computer in its main office. Mainframes were managed by data-processing professionals, individuals who had grown up with the tab-product generation of business machines in widespread use in the first half of this century.2
Mainframes dominated the computing world from 1950 through the early 1970s.3 One of the first mainframe computers, and probably the most famous, was ENIAC. ENIAC was produced by a partnership between the U.S. government and the University of Pennsylvania. It consisted of 18,000 vacuum tubes, 70,000 resistors, and 5 million soldered joints. The machine was so massive that it consumed 160 kilowatts of electrical power, enough energy to dim the lights in an entire section of Philadelphia.4 It was developed by John Presper Eckert (1919-1995) and John W. Mauchly (1907-1980). The mainframe computer was first used by the military during World War II to break codes, calculate trajectories, and design new weapons.5 These tasks would have been too time-consuming and tedious for humans, and the machines were also more accurate than any human.
More current mainframes are computers designed for data processing with heavy use of I/O units such as large-capacity disks and printers; they do not necessarily employ the most advanced hardware and software technology. They have one to four processors, although some have a few more. Mainframes, such as the ES/9000 family of computers of the International Business Machines Corp. (IBM), are used for such applications as payroll computations, accounting, business transactions, information retrieval, and airline seat reservations.6
By 1948, the invention of the transistor had greatly changed the computer's development. The first large-scale machines to take advantage of transistor technology were the early supercomputers: the Stretch by IBM and the LARC by Sperry-Rand.7 These computers, both developed for atomic energy laboratories, could handle an enormous amount of data, a capability much in demand by atomic scientists. The machines were costly, however, and tended to be too powerful for the business sector's computing needs, which limited their appeal. Only two LARCs were ever installed: one at the Lawrence Radiation Labs in Livermore, California, for which the computer was named (Livermore Atomic Research Computer), and the other at the U.S. Navy Research and Development Center in Washington, D.C.8 By the 1960s, there were a number of commercially successful supercomputers used in business, universities, and government, from companies such as Burroughs, Control Data, Honeywell, IBM, Sperry-Rand, and others. These supercomputers were also of solid-state design, with transistors in place of vacuum tubes. They contained all the components we associate with the modern-day computer: printers, tape storage, disk storage, memory, operating systems, and stored programs. One important example was the IBM 1401, which was universally accepted throughout industry and is considered by many to be the Model T of the computer industry.9
Supercomputers have certain distinguishing features. Unlike conventional computers, they usually have more than one CPU (central processing unit), which contains circuits for interpreting program instructions and executing arithmetic and logic operations in proper sequence. The use of several CPUs to achieve high computational rates is required by the physical limits of circuit technology. Electronic signals cannot travel faster than the speed of light, which thus constitutes a fundamental speed limit for signal transmission and circuit switching. Rapid retrieval of stored data and instructions is required to support the extremely high computational speed of CPUs. Therefore, most supercomputers have a very large storage capacity, as well as a very fast input/output capability.
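The speed-of-light constraint can be made concrete with a quick back-of-the-envelope calculation; the 1 GHz clock rate below is an illustrative assumption, not a figure from the text:

```python
# How far can an electrical signal travel in one clock cycle? Once a
# machine is physically larger than this distance, signals cannot cross
# it within a cycle, which is one reason designers turn to multiple
# CPUs working in parallel rather than ever-faster single processors.
SPEED_OF_LIGHT_M_S = 299_792_458   # metres per second
clock_hz = 1_000_000_000           # hypothetical 1 GHz clock

metres_per_cycle = SPEED_OF_LIGHT_M_S / clock_hz
print(f"{metres_per_cycle * 100:.1f} cm per clock cycle")  # about 30 cm
```

Doubling the clock rate halves this distance, so beyond a point the only way to raise computational rates is to add processors.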
Still another distinguishing characteristic of supercomputers is their use of vector arithmetic: they are able to operate on pairs of lists of numbers rather than on mere pairs of numbers.10 For instance, a typical supercomputer can multiply a list of hourly wage rates for a group of factory workers by a list of hours worked by members of that group to produce a list of dollars earned by each worker in roughly the same time that it takes a regular computer to calculate the amount earned by just one worker. Supercomputers were originally used in applications related to national security, including nuclear-weapons design and cryptography. Today, the aerospace, petroleum, and automotive industries also routinely employ them. In addition, supercomputers have found wide application in areas involving engineering or scientific research, as, for example, in studies of the structure of subatomic particles and of the origin and nature of the universe. Supercomputers have also become an indispensable tool in weather forecasting: predictions are now based on numerical models.
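The payroll example can be sketched in a few lines. On a vector machine the elementwise multiply below would be issued as a single vector instruction rather than a loop; the wage and hour figures are made up for illustration:

```python
# Elementwise ("vector") arithmetic: one multiply per worker, expressed
# conceptually as a single operation on whole lists.
rates = [12.50, 14.00, 11.75]   # hourly wage rate per worker
hours = [38, 40, 35]            # hours worked per worker

earned = [r * h for r, h in zip(rates, hours)]
print(earned)  # [475.0, 560.0, 411.25]
```

A conventional scalar processor performs these multiplications one at a time; vector hardware applies the same operation across the whole list at once.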
In general, supercomputers outperform mainframes because they use multiple processors and the most advanced hardware and software. As a result, mainframes are now used mostly for business, while supercomputers are used for scientific and engineering work.
During the 1970s, large-scale integration (LSI) made it possible to fit hundreds of components onto one chip.11 While early mainframes and their peripheral devices often took up the floor space of a house, minicomputers were about the size of a refrigerator or stove. Minicomputers were smaller, cheaper, and simpler, and their operating systems were often designed to be special purpose rather than general purpose. One important area in which the minicomputer began to play an important role was in support of laboratory systems: input, storage, and analysis of data generated by laboratory instruments. One of the first of these systems was the LINC, standing for Laboratory Instrument Computer. Originally the LINC was designed and built at MIT. It evolved into a commercial product marketed by a fledgling corporation named Digital Equipment Corporation, which grew to become the second largest computer manufacturer in the world (after IBM) based on its success in the minicomputer field.
The minicomputer made computers available to groups who could not have afforded mainframes. Although minicomputers were relatively limited in power and fairly expensive (the cheapest probably cost around $40,000 in their early days), they broke the mainframe's dominance of computing applications in government, industry, and especially universities.12 Prior to the availability of minicomputers, researchers had no way of using computers to analyze research data: mainframes could not be linked directly to laboratory instruments, there was too much data to keypunch manually, and mainframes cost too much to be affordable by departments or small companies. Minicomputers, such as the AS/400 family of IBM and the VAX family of the Digital Equipment Corp. (DEC), were less expensive and offered lower performance than mainframes. Their capability for handling I/O units is weaker than that of mainframes but stronger than that of personal computers. Minicomputers are used for scientific and engineering computations, business-transaction processing, file handling, and database management.
Microcomputers are synonymous with PCs, or personal computers. Personal computers are designed for individual use: the entire machine is dedicated to the exclusive use of a single person, whereas a supercomputer, mainframe, or minicomputer is shared by many users. Personal computers, such as the PS/2 family of IBM, clones of IBM's PC family produced by a variety of manufacturers, and the Macintosh family of Apple Computer, Inc., are less expensive and smaller than mainframes or minicomputers but have lower performance. They are suitable for the average individual's needs, such as document preparation with graphics and simple business calculations.
Computers small and inexpensive enough to be purchased by individuals for use in their homes first became feasible in the 1970s, when large-scale integration made it possible to construct a sufficiently powerful microprocessor on a single semiconductor chip. A small firm named MITS made the first personal computer, the Altair. This computer, which used the Intel Corporation's 8080 microprocessor, was developed in 1974. Though the Altair was popular among computer hobbyists, its commercial appeal was limited, since purchasers had to assemble the machine from a kit.
The personal computer industry truly began in 1977, when Apple Computer, Inc., founded by Steven P. Jobs and Stephen G. Wozniak, introduced the Apple II, one of the first pre-assembled, mass-produced personal computers. Radio Shack and Commodore Business Machines also introduced personal computers that year.13 These machines used 8-bit microprocessors (which process information in groups of 8 bits, or binary digits, at a time) and possessed rather limited memory capacity, i.e., the ability to address a given quantity of data held in memory storage. But because personal computers were much less expensive than mainframes, they could be purchased by individuals, small and medium-sized businesses, and primary and secondary schools. The Apple II received a great boost in popularity when it became the host machine for VisiCalc, the first electronic spreadsheet (computerized accounting program). Other types of application software were soon developed for personal computers.
Personal computers are classified on the basis of size and portability. Personal computers that can be placed on top of a desk but are not very portable are called desktop computers. These were the first type of personal computer. A typical desktop system includes a motherboard, mouse, keyboard, monitor, and other attached peripherals.
Portable computers are personal computers light enough to be easily transported. They are classified as laptops or notebooks: laptops are small enough to fit on your lap, and notebooks are about the size of a book. One of the main reasons computers have been able to become portable is flat-screen technology. These displays are advanced versions of the familiar liquid-crystal display used in digital watch faces. They are essentially two parallel sheets of thin glass whose facing sides are coated with a transparent yet electrically conducting film such as indium tin oxide.14 The film layer nearer the viewer is patterned, while the other layer is not. The space between the films is filled with a fluid with unusual electrical and optical properties, so that, if an electrical field is established between the two thin films, the molecules of the fluid line up in such a way that the light-reflecting or light-transmitting properties of the assembly are radically changed. All flat-panel displays have these characteristics in common, but the many different varieties exploit these electro-optical properties in different ways.15 Another advancement that made laptops and notebooks possible is the miniaturization of integrated circuitry. Today the main remaining limit on size is heat.
Another type of PC is the workstation: a high-performance computer system designed for a single user, with advanced graphics capabilities, large storage capacity, and a powerful microprocessor (central processing unit). Workstations are more capable than the average personal computer. (The term workstation is also sometimes applied to dumb terminals, terminals without any processing capacity, that are connected to mainframe computers.) Most workstation microprocessors employ reduced instruction set computing (RISC) architecture, as opposed to the complex instruction set computing (CISC) used in most PCs. Because it reduces the number of instructions permanently stored in the microprocessor, RISC architecture streamlines and accelerates data processing. Workstation microprocessors typically offer 32-bit addressing, compared to the 16-bit addressing found in many PCs. Some advanced workstations employ 64-bit processors, which can address about four billion times as much memory as 32-bit machines. Their raw processing power allows high-end workstations to accommodate high-resolution or three-dimensional graphic interfaces, sophisticated multitasking software, and advanced abilities to communicate with other computers. Workstations are used primarily to perform computationally intensive scientific and engineering tasks. They have also found favor in some complex financial and business applications. Workstations are often tied in with server computers, machines with massive storage capacity that are always found in networks; a server stores data and shares it with other nodes, or workstations.
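The jump in addressing capacity is simple powers-of-two arithmetic, sketched here to show where the "four billion times" figure comes from:

```python
# n address bits give 2**n distinct addresses, so going from 32-bit to
# 64-bit multiplies the addressable range by 2**32 (about 4.29 billion).
space_16 = 2 ** 16    # 65,536 addresses
space_32 = 2 ** 32    # ~4.29 billion addresses
space_64 = 2 ** 64

ratio = space_64 // space_32
print(ratio)  # 4294967296
```

The same arithmetic explains the earlier leap: 32-bit systems address 65,536 times as much memory as 16-bit ones.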
Another type of PC is the PDA, or personal digital assistant. PDAs are pocket-size computers with a write-on screen instead of a keyboard. This type of PC uses LCD technology, which enables the user to write on the screen with a stylus or navigate with a touch of a finger. This is the newest technology, pioneered by the Palm Pilot, made by 3Com.16
The last type of computer is the industrial computer, used for automation. For example, one of the most significant developments in automation has been computer-aided design/computer-aided manufacturing (CAD/CAM). This technology makes use of computer systems to assist in the creation and optimization of a design as well as to control and monitor the processes that are involved in manufacturing a product from that design. CAD/CAM technology has been adopted by a number of industries, particularly by those engaged in the manufacture of electronic equipment and machine components.17
Over the last 50 years, tremendous achievements have been made since the first computer. PCs have become smaller, cheaper, and definitely faster. Due to rapid advances in technology, computational ability no longer differentiates many smaller PCs; a workstation of ten years ago would be considered slower than today's fastest laptop. The boundaries keep shifting toward machines that are faster and smaller. PDAs are currently too expensive for the average American, but ten years from now they will be as common as a desktop PC. Technology is moving at the most rapid pace it has ever gone, and I'm glad I'm living through it.
Bitter, Gary; Macmillan Encyclopedia of Computers; Macmillan Publishing Company; 1992
Byte; September 1995
Gassee, Jean-Louis; The Third Apple; Harcourt, Brace, Jovanovich; 1985
Kinkoph, Sherry; Computers: A Visual Encyclopedia; Alpha Books; 1994
Machine of the Year; Time; January 3, 1983
Mandell, Steven; Dr. Mandell’s Ultimate Personal Computer Desk Reference; Rawhide Press; 1993
Moreau, Rene; The Computer Comes of Age; The MIT Press; 1986
Polsson, Ken; Chronology of Events in the History of Microcomputers
Ralston, Anthony and Edwin Reilly; Encyclopedia of Computer Science; Third Edition; Van Nostrand Reinhold; 1993