Computers are electronic machines that process information. They are capable of communicating with the user, of doing five kinds of arithmetic operations, and of making three kinds of decisions. However, they are incapable of thinking. They accept data and instructions as input, and after processing the information, they output the results.
When talking about computers, both hardware and software need to be considered. The former refers to the actual machinery, whereas the latter refers to the programs that control and coordinate the activities of the hardware.
The first computer was built in 1930, but since then computer technology has evolved a great deal. There are three different kinds of computers in use today: the mainframe, the minicomputer, and the microcomputer. However, the dividing line between these has become rather blurred; a modern micro is often as powerful as a mainframe was ten years ago. All three have one thing in common - they operate quickly and accurately in solving problems.
What is a computer?
When you read the following text, you will probably meet words and expressions that are new to you. First try to understand their meaning from the context - read the same passage a few times. When you have read the whole text, check new words in a dictionary. Most of the words in bold typeface are explained in the Glossary at the end of this book.
 A computer is a machine with an intricate network of electronic circuits that operate switches or magnetize tiny metal cores. The switches, like the cores, are capable of being in one of two possible states, that is, on or off, magnetized or demagnetized. The machine is capable of storing and manipulating numbers, letters, and characters. The basic idea of a computer is that we can make the machine do what we want by inputting signals that turn certain switches on and turn others off, or that magnetize or do not magnetize the cores.
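The two-state idea described above can be sketched in a few lines of code. This is an illustrative example, not from the text: a row of eight on/off "switches" is read as a binary number, which can stand for a character.

```python
# Hypothetical sketch: each "switch" or "core" holds one of two states,
# so a row of eight can encode one character.
switches = [0, 1, 0, 0, 0, 0, 0, 1]  # on/off states, read left to right

# Interpreting the pattern as a binary number gives a character code.
value = 0
for state in switches:
    value = value * 2 + state

print(value)       # 65
print(chr(value))  # 'A' in the ASCII character code
```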
 The basic job of computers is the processing of information. For this reason, computers can be defined as devices which accept information in the form of instructions, called a program, and characters, called data, perform mathematical and/or logical operations on the information, and then supply the results of these operations. The program, or part of it, which tells the computer what to do, and the data, which provide the information needed to solve the problem, are kept inside the computer in a place called memory.
 Computers are thought to have many remarkable powers. However, most computers, whether large or small, have three basic capabilities. First, computers have circuits for performing arithmetic operations such as addition, subtraction, division, multiplication, and exponentiation. Second, computers have a means of communicating with the user. After all, if we couldn't feed information in and get results back, these machines wouldn't be of much use. However, certain computers (commonly minicomputers and microcomputers) are used to control directly things such as robots, aircraft navigation systems, medical instruments, etc.
 Some of the most common methods of inputting information are to use diskettes, magnetic tape, disks, and terminals. The computer's input device (which might be a card reader, a tape drive or disk drive, depending on the medium used in inputting information) reads the information into the computer.
For outputting information, two common devices are used: a printer, which prints the new information on paper, or a CRT display screen, which shows the results on a TV-like screen.
 Third, computers have circuits which can make decisions. The kinds of decisions which computer circuits can make are not of the type: 'Who would win a war between two countries?' or 'Who is the richest person in the world?' Unfortunately, the computer can only decide three things, namely: is one number less than another? Are two numbers equal? And is one number greater than another?
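The three decisions the text names can be shown in a short sketch. The function below is purely illustrative, not from the text; it returns which of the three comparisons holds for a pair of numbers.

```python
# Illustrative sketch: the only decisions a computer circuit makes are the
# three numeric comparisons - less than, equal, greater than.
def decide(a, b):
    if a < b:
        return "less than"
    elif a == b:
        return "equal"
    else:
        return "greater than"

print(decide(3, 7))  # less than
print(decide(5, 5))  # equal
print(decide(9, 2))  # greater than
```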
 A computer can solve a series of problems and make hundreds, even thousands, of logical decisions without becoming tired or bored. It can find the solution to a problem in a fraction of the time it takes a human being to do the job. A computer can replace people in dull, routine tasks, but it has no originality; it works according to the instructions given to it and cannot exercise any value judgments. There are times when a computer seems to operate like a mechanical 'brain', but its achievements are limited by the minds of human beings. A computer cannot do anything unless a person tells it what to do and gives it the appropriate information; but because electric pulses can move at the speed of light, a computer can carry out vast numbers of arithmetic-logical operations almost instantaneously. A person can do everything a computer can do, but in many cases that person would be dead long before the job was finished.
 Let us take a look at the history of the computers that we know today. The very first calculating device used was the ten fingers of a man's hands. This, in fact, is why today we still count in tens and multiples of tens. Then the abacus was invented, a bead frame in which the beads are moved from left to right. People went on using some form of abacus well into the 16th century, and it is still being used in some parts of the world because it can be understood without knowing how to read.
 During the 17th and 18th centuries many people tried to find easy ways of calculating. J. Napier, a Scotsman, devised a mechanical way of multiplying and dividing, which is how the modern slide rule works. Henry Briggs used Napier's ideas to produce logarithm tables which all mathematicians use today. Calculus, another branch of mathematics, was independently invented by both Sir Isaac Newton, an Englishman, and Leibniz, a German mathematician.
 The first real calculating machine appeared in 1820 as the result of several people's experiments. This type of machine, which saved a great deal of time and reduced the possibility of making mistakes, depended on a series of ten-toothed gear wheels. In 1830 Charles Babbage, an Englishman, designed a machine that he called 'The Analytical Engine'. This machine, which Babbage showed at the Paris Exhibition in 1855, was an attempt to cut out the human being altogether, except for providing the machine with the necessary facts about the problem to be solved. He never finished this work, but many of his ideas were the basis for building today's computers.
 In 1930, the first analog computer was built by an American named Vannevar Bush. This device was used in World War II to help aim guns. Mark I, the name given to the first digital computer, was completed in 1944. The men responsible for this invention were Professor Howard Aiken and some people from IBM. This was the first machine that could figure out long lists of mathematical problems, all at a very fast rate. In 1946 two engineers at the University of Pennsylvania, J. Eckert and J. Mauchly, built the first digital computer using parts called vacuum tubes. They named their new invention ENIAC. Another important advancement in computers came in 1947, when John von Neumann developed the idea of keeping instructions for the computer inside the computer's memory.
 The first generation of computers, which used vacuum tubes, came out in 1950. Univac I is an example of these computers, which could perform thousands of calculations per second. In 1960, the second generation of computers was developed, and these could perform work ten times faster than their predecessors. The reason for this extra speed was the use of transistors instead of vacuum tubes. Second-generation computers were smaller, faster and more dependable than first-generation computers. The third-generation computers appeared on the market in 1965. These computers could do a million calculations a second, which is 1000 times as many as first-generation computers. Unlike second-generation computers, these are controlled by tiny integrated circuits and are consequently smaller and more dependable. Fourth-generation computers have now arrived, and the integrated circuits that are being developed have been greatly reduced in size. This is due to microminiaturization, which means that the circuits are much smaller than before; as many as 1000 tiny circuits now fit onto a single chip. A chip is a square or rectangular piece of silicon, usually from 1/10 to 1/4 inch, upon which several layers of an integrated circuit are etched or imprinted, after which the circuit is encapsulated in plastic, ceramic or metal. Fourth-generation computers are 50 times faster than third-generation computers and can complete approximately 1,000,000 instructions per second.
 At the rate computer technology is growing, today's computers might be obsolete by 1988 and most certainly by 1990. It has been said that if transport technology had developed as rapidly as computer technology, a trip across the Atlantic Ocean today would take a few minutes.
When you read the following text, remember to try and understand the meaning of new words and expressions from the context. Don't check new words in the dictionary until you have read the whole text. Most of the words in bold typeface are explained in the Glossary at the end of the book.
 Computers are machines designed to process, electronically, specially prepared pieces of information, which are termed data. Handling or manipulating the information that has been given to the computer, in such ways as doing calculations, adding information, or making comparisons, is called processing. Computers are made up of millions of electronic devices capable of storing data or moving them, at enormous speeds, through complex circuits with different functions.
 All computers have several characteristics in common, regardless of make or design. Information, in the form of instructions and data, is given to the machine, after which the machine acts on it, and a result is then returned. The information presented to the machine is the input; the internal manipulative operations, the processing; and the result, the output. These three basic concepts of input, processing, and output occur in almost every aspect of human life whether at work or at play. For example, in clothing manufacturing, the input is the pieces of cut cloth, the processing is the sewing together of these pieces, and the output is the finished garment.
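The garment example above can be mirrored in a minimal sketch of the input-processing-output pattern. The names and data here are invented for illustration, not taken from the text.

```python
# Minimal input -> processing -> output sketch, using the clothing example.
def process(pieces):
    # "Processing": sew the cut pieces together into one garment.
    return " + ".join(pieces)

cut_cloth = ["sleeve", "sleeve", "front", "back"]  # input
garment = process(cut_cloth)                       # processing
print(garment)                                     # output
```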
 Figure 3.1 shows schematically the fundamental hardware components in a computer system. The centerpiece is called either the computer, the processor, or usually, the central processing unit (CPU). The term 'computer' includes those parts of hardware in which calculations and other data manipulations are performed, and the high-speed internal memory in which data and calculations are stored during actual execution of programs. Attached to the CPU are the various peripheral devices such as card readers and keyboards (two common examples of input devices). When data or programs need to be saved for long periods of time, they are stored on various secondary memory devices or storage devices such as magnetic tapes or magnetic disks.
 Computers have often been thought of as extremely large adding machines, but this is a very narrow view of their function. Although a computer can only respond to a certain number of instructions, it is not a single-purpose machine since these instructions can be combined in an infinite number of sequences. Therefore, a computer has no known limit on the kinds of things it can do; its versatility is limited only by the imagination of those using it.
 In the late 1950s and early 1960s when electronic computers of the kind in use today were being developed, they were very expensive to own and run. Moreover, their size and reliability were such that a large number of support personnel were needed to keep the equipment operating. This has all changed now that computing power has become portable, more compact, and cheaper.
 In only a very short period of time, computers have greatly changed the way in which many kinds of work are performed. Computers can remove many of the routine and boring tasks from our lives, thereby leaving us with more time for interesting, creative work. It goes without saying that computers have created whole new areas of work that did not exist before their development.
Computer capabilities and limitations
 Like all machines, a computer needs to be directed and controlled in order to perform a task successfully. Until such time as a program is prepared and stored in the computer's memory, the computer "knows" absolutely nothing, not even how to accept or reject data. Even the most sophisticated computer, no matter how capable it is, must be told what to do. Until the capabilities and the limitations of a computer are recognized, its usefulness cannot be thoroughly understood.
 In the first place, it should be recognized that computers are capable of doing repetitive operations. A computer can perform similar operations thousands of times without becoming bored, tired, or even careless.
 Secondly, computers can process information at extremely rapid rates. For example, modern computers can solve certain classes of arithmetic problems millions of times faster than a skilled mathematician. Speeds for performing decision-making operations are comparable to those for arithmetic operations, but input-output operations involve mechanical motion and hence require more time. On a typical computer system, cards are read at an average speed of 1000 cards per minute and as many as 1000 lines can be printed at the same rate.
 Thirdly, computers may be programmed to calculate answers to whatever level of accuracy is specified by the programmer. In spite of newspaper headlines such as 'Computer Fails', these machines are very accurate and reliable, especially when the number of operations they can perform every second is considered. Because they are man-made machines, they sometimes malfunction or break down and have to be repaired. However, in most instances when the computer fails, it is due to human error and is not the fault of the computer at all.
 In the fourth place, general-purpose computers can be programmed to solve various types of problems because of their flexibility. One of the most important reasons why computers are so widely used today is that almost every big problem can be solved by solving a number of little problems - one after another.
 Finally, a computer, unlike a human being, has no intuition. A person may suddenly find the answer to a problem without working out too many of the details, but a computer can only proceed as it has been programmed to.
 Using the very limited capabilities possessed by all computers, the task of producing a university payroll, for instance, can be done quite easily. The following kinds of things need to be done for each employee on the payroll. First: input information about the employee such as wage rate, hours worked, tax rate, unemployment insurance, and pension deductions. Second: do some simple arithmetic and decision-making operations. Third: output a few printed lines on a cheque. By repeating this process over and over again, the payroll will eventually be completed.
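The payroll loop described above can be sketched in a few lines. The field names, rates, and deduction rule below are invented for illustration; a real payroll would carry many more deductions.

```python
# Hypothetical payroll sketch: input employee data, do simple arithmetic,
# output one printed line per cheque - repeated for each employee.
employees = [
    {"name": "A. Smith", "rate": 20.0, "hours": 40, "tax": 0.25},
    {"name": "B. Jones", "rate": 15.0, "hours": 35, "tax": 0.20},
]

cheques = []
for emp in employees:                    # repeat the process for each employee
    gross = emp["rate"] * emp["hours"]   # simple arithmetic
    net = gross - gross * emp["tax"]     # deduction (illustrative rule)
    cheques.append(f"Pay {emp['name']}: ${net:.2f}")  # output a printed line

for line in cheques:
    print(line)
```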
Hardware and software
 In order for computers to solve problems in our environment effectively, computer systems are devised. A 'system' implies a good mixture of integrated parts working together to form a useful whole. Computer systems may be discussed in two parts.
 The first part is hardware - the physical, electronic, and electromechanical devices that are thought of and recognized as 'computers'. The second part is software - the programs that control and coordinate the activities of the computer hardware and that direct the processing of data.
Figure 5.1 Hardware components of a basic computer system
 Figure 5.1 shows diagrammatically the basic components of computer hardware joined together in a computer system. The centerpiece is called either the computer, the processor, or usually the central processing unit (CPU). The term 'computer' usually refers to those parts of the hardware in which calculations and other data manipulations are performed, and to the internal memory in which data and instructions are stored during the actual execution of programs. The various peripherals, which include input and/or output devices, various secondary memory devices, and so on, are attached to the CPU.
 Computer software can be divided into two very broad categories: systems software and applications software. The former is often simply referred to as 'systems'. These, when brought into internal memory, direct the computer to perform tasks. The latter may be provided along with the hardware by a systems supplier as part of a computer product designed to answer a specific need in certain areas. These complete hardware/software products are called turnkey systems.
 The success or failure of any computer system depends on the skill with which the hardware and software components are selected and blended. A poorly chosen system can be a monstrosity incapable of performing the tasks for which it was originally acquired.
 Large computer systems, or mainframes, as they are referred to in the field of computer science, are those computer systems found in computer installations processing immense amounts of data. These powerful computers make use of very high-speed main memories into which data and programs to be dealt with are transferred for rapid access. These powerful machines have a larger repertoire of more complex instructions which can be executed more quickly. Whereas smaller computers may take several steps to perform a particular operation, a larger machine may accomplish the same thing with one instruction.
 These computers can be of two types: digital or analog. The digital computer, or general-purpose computer as it is often known, makes up about 90 per cent of the large computers now in use. It gets its name because the data that are presented to it are made up of a code consisting of digits - single-character numbers. The digital computer is like a gigantic cash register in that it can do calculations in steps, one after another, at tremendous speed and with great accuracy. Digital computer programming is by far the most commonly used in electronic data processing for business or statistical purposes. The analog computer works something like a car speedometer, in that it continuously works out calculations. It is used essentially for problems involving measurements. It can simulate, or imitate, different measurements by electronic means. Both of these computer types - the digital and the analog - are made up of electronic components that may require a large room to accommodate them. At present, the digital computer is capable of doing anything the analog once did. Moreover, it is easier to program and cheaper to operate. A new type of scientific computer system called the hybrid computer has now been produced that combines the two types into one.
 Really powerful computers continue to be bulky and require special provision for their housing, refrigeration systems, air filtration and power supplies. This is because much more space is taken up by the input/output devices - the magnetic tape and disk units and other peripheral equipment - than by the electronic components that make up the bulk of the machine in a powerful installation. The power consumption of these machines is also quite high, not to mention the price, which runs into hundreds of thousands of dollars. The future will bring great developments in the mechanical devices associated with computer systems. For a long time these have been the weak link, from the point of view of both efficiency and reliability.
 Until the mid-1960s, digital computers were powerful, physically large and expensive. What was really needed, though, were computers of less power, a smaller memory capacity and without such a large array of peripheral equipment. This need was partially satisfied by the rapid improvement in performance of the semiconductor devices (transistors), and their incredible reduction in size, cost and power, all of which led to the development of the minicomputer, or mini for short. Although there is no exact definition of a minicomputer, it is generally understood to refer to a computer whose mainframe is physically small, has a fixed word length between 8 and 32 bits, and costs less than U.S. $100,000 for the central processor. The amount of primary storage available optionally in minicomputer systems ranges from 32 to 512K bytes; however, some systems allow this memory to be expanded even further.
 A large number of peripherals have been developed especially for use in systems built around minicomputers; they are sometimes referred to as miniperipherals. These include magnetic tape cartridges and cassettes, small disk units and a large variety of printers and consoles.
 Many minicomputers are used merely for a fixed application and run only a single program. This is changed only when necessary, either to correct errors or when a change in the design of the system is introduced. Since the operating environment for most minis is far less varied and complex than that of large mainframes, it goes without saying that the software and peripheral requirements differ greatly from those of a computer which runs several hundred ever-changing jobs a day. The operating systems of minis also usually provide system access to either a single user or to a limited number of users at a time.
 Since many minis are employed in real-time processing, they are usually provided with operating systems that are specialized for this purpose. For example, most minis have an interrupt feature which allows a program to be interrupted when the computer receives a special signal indicating that any one of a number of external events, to which it is preprogrammed to respond, has occurred. When the interrupt occurs, the computer stores enough information about the job in process to resume operation after it has responded to the interruption. Because minicomputer systems have been used so often in real-time applications, other aspects of their design have changed; that is, they usually possess the hardware capability to be connected directly to a large variety of measurement instruments, to analog and digital converters, to microprocessors, and ultimately, to an even larger mainframe in order to analyze the collected data.
The early 1970s saw the birth of the microcomputer, or micro for short. The central processor of the micro, called the microprocessor, is built as a single semiconductor device; that is, the thousands of individual circuit elements necessary to perform all the logical and arithmetic functions of a computer are manufactured as a single chip. A complete microcomputer system is composed of a microprocessor, a memory and peripheral equipment. The processor, memory and electronic controls for the peripheral equipment are usually put together on a single or on a few printed circuit boards. Systems using microprocessors can be hooked up together to do the work that until recently only minicomputer systems were capable of doing. Micros generally have somewhat simpler and less flexible instruction sets than minis, and are typically much slower. Different micros are available with 4-, 8-, 12-, and 16-bit word lengths, and some new ones use 32-bit chips. Similarly, minis are available with word lengths up to 32 bits. Although minis can be equipped with much larger primary memory sizes, micros are becoming more powerful and converging with minicomputer technology.
The extremely low price of micros has opened up entirely new areas of application for computers. Only 20 years or so ago, a central processing unit of medium capability sold for a few hundred thousand dollars (U.S.), and now some microprocessors sell for as little as $10. Of course, by the time you have a usable microcomputer system, the price will be somewhere between $200 and $5000, depending on the display unit, secondary storage, and whatever other peripherals are needed.
The available range of microcomputer systems is evolving more rapidly than that of minicomputers. Because of their incredibly low price, it is now possible to use only a small fraction of the computer's capability in a particular system application and still be far ahead financially of any other way of getting the job done. For example, thousands of industrial robots are in use today, and the number is growing very rapidly as this relatively new industry improves the price and performance of its products by using the latest microcomputers.
Microcomputer software is developing rapidly and it now covers a tremendous range of applications. As well as data processing, software can also be written for specialized tasks even as complex as navigating rockets. Some modern micros are even capable of multitasking. In addition to their extensive use in control systems of all types, they are destined for many new uses from more complex calculators to automobile engine operation and medical diagnostics. They are already used in automobile emission control systems and are the basis of many TV game attachments. There is also a rapidly growing market for personal computers whose application potential in education is only just beginning to be exploited.
It would seem that the limits for microcomputer applications have by no means been reached. There are those who predict that the home and hobby computer markets, and the education market, will grow into multi-billion dollar enterprises within a decade or so. It would also appear that performance of microprocessors could well increase ten-fold before 1990 while prices for micros could decrease by as much.
MEMORY OR MAIN STORAGE
1. It is common practice in computer science for the words 'computer' and 'processor' to be used interchangeably. More precisely, 'computer' refers to the central processing unit (CPU) together with an internal memory. The internal memory or main storage, control and processing components make up the heart of the computer system. Manufacturers design the CPU to control and carry out basic instructions for their particular computer.
2. The CPU coordinates all the activities of the various components of the computer. It determines which operations should be carried out and in what order. The CPU can also retrieve information from memory and can store the results of manipulations back into the memory unit for later reference.
3. In digital computers the CPU can be divided into two functional units called the control unit (CU) and the arithmetic-logical unit (ALU). These two units are made up of electronic circuits with millions of switches that can be in one of two states, either on or off.
4. The function of the control unit within the central processor is to transmit coordinating control signals and commands. The control unit is that portion of the computer that directs the sequence or step-by-step operations of the system, selects instructions and data from memory, interprets the program instructions, and controls the flow between main storage and the arithmetic-logical unit.
5. The arithmetic-logical unit, on the other hand, is that portion of the computer in which the actual arithmetic operations, namely, addition, subtraction, multiplication, division, and exponentiation, called for in the instructions are performed. It also performs some kinds of logical operations such as comparing or selecting information. All the operations of the ALU are under the direction of the control unit.
6. Programs, and the data on which the control unit and the ALU operate, must be in internal memory in order to be processed. Thus, if located on secondary memory devices such as disks or tapes, programs and data are first loaded into internal memory.
7. Main storage and the CPU are connected to a console, where an operator can perform manual control operations. The console is an important, but special-purpose, piece of equipment. It is used mainly when the computer is being started up, or during maintenance and repair. Many mini and micro systems do not have a console.
The Control Unit and the Arithmetic Logical Unit
 The basic components of a computer system, the input, the output, the memory, and the processor operate only in response to commands from the control unit. The control unit operates by reading one instruction at a time from memory and taking the action called for by each instruction. In this way it controls the flow between main storage and the arithmetic-logical unit.
 A control unit has the following components:
a. A counter that selects the instructions, one at a time, from memory.
b. A register that temporarily holds the instruction read from memory while it is
being executed.
c. A decoder that takes the coded instruction and breaks it down into the
individual commands necessary to carry it out.
d. A clock, which, while not a clock in the sense of a time-keeping device, does
produce marks at regular intervals. These timing marks are electronic and very
rapid.
 Binary arithmetic (the kind of arithmetic the computer uses), the logical operations and some special functions are performed by the arithmetic-logical unit. The primary components of the ALU are banks of bi-stable devices, which are called registers. Their purpose is to hold the numbers involved in the calculation and to hold the results temporarily until they can be transferred to memory. At the core of the arithmetic-logical unit is a very high-speed binary adder, which is used to carry out at least the four basic arithmetic functions (addition, subtraction, multiplication, and division). Typical modern computers can perform as many as one hundred thousand additions of pairs of thirty-two-bit binary numbers within a second. The logical unit consists of electronic circuitry, which compares information and makes decisions based upon the results of the comparison. The decisions that can be made are whether a number is greater than (>), equal to (=), or less than (<) another number.
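The interplay of control unit and ALU described above can be sketched as a toy program. This is an invented model, not the text's machine: a counter selects instructions one at a time, and an ALU performs arithmetic and the three possible comparisons.

```python
# Toy sketch of a control unit stepping through a program and an ALU
# carrying out arithmetic and comparison operations (all names invented).
def alu(op, a, b):
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "CMP":                  # the three possible decisions
        return "<" if a < b else ("=" if a == b else ">")
    raise ValueError("unknown operation")

program = [("ADD", 2, 3), ("SUB", 10, 4), ("CMP", 5, 5)]

counter = 0                          # the counter selects instructions
results = []
while counter < len(program):
    op, a, b = program[counter]      # a register holds the instruction
    results.append(alu(op, a, b))    # the decoded command is executed
    counter += 1                     # advance to the next instruction

print(results)  # [5, 6, '=']
```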
PRIMARY AND SECONDARY MEMORY
 The term "memory" is usually used to refer to the internal storage locations of a computer. It is also called real storage or primary memory, and is expressed as quantities of K. For example, computers are advertised as having memories of 16K or 512K, depending on their storage capacity. Each K is equal to 1024 bytes, and each byte is equal to 8 bits. Some modern computers measure their memory in megabytes (Mb) - a megabyte is equal to 1048576 bytes.
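The units above are easy to check with a little arithmetic; the short sketch below works through the figures the text gives.

```python
# Checking the text's figures: 1K = 1024 bytes, 1 byte = 8 bits,
# and 1 megabyte = 1024 * 1024 bytes.
K = 1024
memory_16k_bytes = 16 * K
memory_16k_bits = memory_16k_bytes * 8

print(memory_16k_bytes)  # 16384 bytes in a 16K memory
print(memory_16k_bits)   # 131072 bits
print(K * K)             # 1048576 bytes in a megabyte
```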
 Primary memory is closely associated with the CPU because it stores programs and data temporarily, thus making them immediately available for processing by the CPU. To facilitate processing, two things are needed: random access and speed. The former means that any part of the memory may be read or accessed equally quickly. This is made possible by the system of addresses in primary memory, where the storage locations are like a series of tiny compartments, each having its own address. These addresses are like the addresses of houses, in that they do not change. Because they are always fixed, the control unit can find them at a very high speed. When it finds them, it puts in the compartments whatever must go there and wipes out whatever was stored there. The information present in these compartments is called the contents of the memory.
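The addressed-compartment picture can be sketched with a small table standing in for memory. This is purely illustrative: the addresses stay fixed, and writing to a compartment wipes out its old contents.

```python
# Sketch of addressed storage compartments; a dict stands in for memory.
memory = {0: None, 1: None, 2: None}  # three fixed addresses

memory[1] = "old value"
memory[1] = "new value"               # the old contents are overwritten

print(memory[1])      # new value
print(sorted(memory)) # the addresses themselves never change: [0, 1, 2]
```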
 Most primary memory is costly, and therefore it is used transiently, which means that a program, or parts of it, is kept in internal storage only while the program is being executed. This, however, is not true for mini and micro applications where the computer performs the same function, referred to as a dedicated function, all the time. But since computers must process vast quantities of data and programs, a lot of storage space is required. For this reason various secondary memory technologies have been developed.
 Secondary memory devices fall into two categories: sequential devices and random-access devices. Sequential devices permit information to be written on to or read off some storage medium in a fixed sequence only; in order to get at a particular data item, it is necessary to pass over all the data preceding it. An example of such a device is the magnetic tape. Its cost is low, but access to specified data may take a considerable length of time. On the other hand, random-access devices are designed to permit direct, or almost direct, access to specified data. These devices bypass large quantities of irrelevant data and therefore reduce access time considerably. An example of this technology is the magnetic disk, which is faster than the magnetic tape and also more expensive. When disks are hooked up to the computer and used as an extension of internal storage in order to increase the capacity of primary memory, this is called virtual storage. For example, a computer with 256K bytes of real storage may seem to have 512K bytes of virtual storage by using disks to provide additional storage. The memory size of computers is increasing as memory chips become cheaper.
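The contrast between the two categories can be sketched as follows. The "tape" must pass over every preceding record to reach item *i*; the "disk" jumps straight to the address in one step. (The 1000-record medium is an arbitrary illustration.)

```python
tape = list(range(1000))         # a pretend storage medium of 1000 records

def read_sequential(i):
    """Tape-style access: pass over all preceding data to reach record i."""
    passed = 0
    for j, item in enumerate(tape):
        passed += 1
        if j == i:
            return item, passed  # record found after touching `passed` records

def read_random(i):
    """Disk-style access: go (almost) directly to the record."""
    return tape[i], 1            # one step, regardless of position

print(read_sequential(900))      # (900, 901) - 901 records touched
print(read_random(900))          # (900, 1)
```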
Types of memory
1. As mentioned previously, one of the most important characteristics of a computer is its capability of storing information in its memory long enough to process it. Not all computers have the same type of memory. In this section, three types of memory will be discussed: core memory, semiconductor memory (or chip), and bubble memory.
2. The memory of the first computers was made up of a kind of grid of fine vertical and horizontal wires. At each intersection where the wires crossed, there was a small ferrite ring called a core (hence the name 'core memory') which was capable of being either magnetized or demagnetized. Every intersection had its unique address; consequently, when an electrical current was passed through the wires, the magnetized as well as the unmagnetized cores were identified by their respective addresses. Each core represented a binary digit of either 0 or 1, depending on its state. Early computers had a capacity of around 80,000 bits; whereas now, it is not surprising to hear about computers with a memory capacity of millions of bits. This has been made possible by the advent of transistors and by the advances in the manufacture of miniaturized circuitry. As a result, mainframes have been reduced in both size and cost. Throughout the 1950s, 1960s and up to the mid-1970s, core memory dominated the market, but it is now obsolete.
3. In the 1970s there was a further development, which revolutionized the computer field. This was the ability to etch thousands of integrated circuits onto a tiny piece (chip) of silicon, which is a non-metallic element with semiconductor characteristics. Chips have thousands of integrated circuits, each one capable of storing one bit. Because of the very small size of the chip, and consequently of the circuits etched on it, electrical signals do not have to travel far; hence they are transmitted faster. Moreover, the size of the components containing the circuitry can be considerably reduced, a step which has led to the introduction of both minis and micros. As a result computers have become smaller, faster, and cheaper. There is one problem with semiconductor memory, however: when power is removed, information in the memory is lost, unlike core memory, which is capable of retaining information during a power failure.
4. Another development in the field of computer memories is bubble memory. The concept consists of creating a thin film of metallic alloys over the memory board. When this film is magnetized, it produces magnetic bubbles, the presence or absence of which represents one bit of information. These bubbles are extremely tiny, about 0.1 micrometers in diameter. Therefore, a magnetic bubble memory can store information at a greater density than existing memories, which makes it suitable for micros. Bubble memories are not expensive, consume little power, are small in size, and are highly reliable. There is probably a lot more to learn about them, and research in this field continues.
Cards, readers and keyboards
1. An essential requirement for making good use of computers is the ability to put information into the machine. Until the early 1960s, one of the most frequently used devices for providing input data to a computer was the punched card, a major storage medium for computer programs and data. Most people are very surprised to find that punched cards were used as long ago as 1780 on textile machinery. However, the first application of punched cards for the representation of large quantities of data was made by Dr Herman Hollerith in 1890. Working for the US Census Bureau, he realized that unless some means of speeding up the analysis of census data were found, it would take more than ten years to complete the job. He recognized the value of the punched cards for this purpose, devised a code for representing data on the cards, and invented the necessary machines to meet his needs. Dr Hollerith went on to found a company to produce these machines, which in 1924 became International Business Machines, or IBM for short. Nowadays punched cards are rarely used.
2. The use of punched cards actually required two separate pieces of equipment. The first was a keypunch, which looked like a large typewriter and was not physically connected to the computer; hence, it was an off-line device. The second was a card reader, which, as its name implies, read information from the cards. Unlike the keypunch, it was connected to the computer and was, therefore, said to be on-line.
3. In order to speed up input, some installations used magnetic tape or disk as an intermediate input medium. Information was punched on cards and then transferred from cards to magnetic tape or disk. It was then transferred to the computer and stored in memory, one word at a time, by mounting the tape (or disk) on a tape drive (or disk drive) connected to the computer. This process is known as spooling.
4. Today, most programming and data entry is done directly onto magnetic tape or disk, eliminating cards and card readers - although cards are still sometimes used as a backup system in case of loss of data. The instructions, or data, are typed on a keyboard, which records the characters magnetically, and a screen shows what has been typed. When a data set or program is complete, the disk or tape can then be read into the computer at high speed.
5. The most common cards used rectangular holes to store data, although circular holes were also used (see Figure 13.1). The reason for one corner of a card being cut off was so that the user had a reference point when placing the cards in the card reader. Each time a keypunch operator pressed a key, the machine punched a number of holes in one column of the card. There would be 1, 2 or 3 holes in any of 12 rows. For this reason, each character was changed into a 12-bit word, which was represented by writing a hole as a 1 and an unpunched area as a 0. The cards usually had 80 columns; therefore 80 characters could be punched on one card.
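The 12-row, 80-column layout described above can be modelled in a few lines: a hole is written as a 1, an unpunched area as a 0, so each column becomes a 12-bit word. The sample punch positions are made up for illustration and do not follow any real card code.

```python
def column_to_word(punched_rows, rows=12):
    """Encode a set of punched row numbers (0..11) as a 12-bit string."""
    return "".join("1" if r in punched_rows else "0" for r in range(rows))

card = [column_to_word(set()) for _ in range(80)]   # a blank 80-column card
card[0] = column_to_word({0, 3})                    # hypothetical punches in rows 0 and 3
print(card[0])        # 100100000000
print(len(card))      # 80 columns, one character each
```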
6. Each of the letters, digits and characters was represented by the particular pattern of punches in a column. Computer users could learn to read the patterns of the holes, but this was unnecessary since the characters were usually printed at the top of the cards at the same time as the holes were punched. Because of this feature, both computer and user could easily read a punched card.
7. Once the information had been converted into holes in the cards, it was ready to be fed into a card reader. This peripheral device was actually attached to the computer by wires; hence it was on-line. The reader examined a deck of cards one at a time by means of a light source with photosensitive elements, which sensed the presence or absence of a hole. The printed characters were not read; these were only there to help the users interpret the cards. A modern card reader can read about 2,000 cards per minute.
8. Obviously, cards could be used for storing binary information, with a hole representing 1, and no hole, 0. If a card was used in this way, it was said to be a 'straight binary' card. More often, information was entered in the Hollerith code or some other code. Each column had to be read and interpreted individually, with a combination of punches in that column representing a specific character.
9. Modern key-to-disk machines use similar principles for storing data, but in this case the data is stored as tiny magnets, one direction of magnetization representing 1 and the other 0. A common code uses 8 bits to store the code for each character; this is known as the American Standard Code for Information Interchange (ASCII).
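Python's built-in `ord` and `format` functions show the ASCII scheme directly: each character maps to a numeric code that fits in 8 bits.

```python
# Print each character of a word with its ASCII code and 8-bit pattern.
for ch in "ASCII":
    print(ch, ord(ch), format(ord(ch), "08b"))
# e.g. 'A' is code 65, stored as the bit pattern 01000001
```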
Tapes and tape drives
Tapes are obviously a faster medium than punched cards for accessing information; moreover, they require less space in the library. Because they can be mailed, they are a convenient way to transfer data from one computer to another, or even from one city to another.
DISKS AND DISK DRIVES
1. Tapes are an example of sequential-access memory technology; an example of random-access or direct-access secondary memory devices is the magnetic disk. It provides a large amount of storage and rapid retrieval of any stored information. All disks are made of a substance coated with metal oxide, and can therefore be magnetized.
2. Magnetic disks are of two kinds, namely floppy and hard. The hard disks, in turn, are subdivided into fixed-head and moving-head disks which are either cartridge or pack. Floppy disks, or diskettes as they are called, are made from plastic, which makes them very light, flexible, and quite inexpensive, whereas hard disks are made from a rigid material.
A disk cartridge is made of a circular disk called a platter, about the same size as a long-playing record, which can be magnetized on both sides. When a number of these circular platters are stacked one on top of the other, they are called a disk pack. How many platters there are in a disk pack varies depending on the manufacturer and the model.
3. The recording surface of a disk has concentric circles called tracks, which are similar to the grooves in a record. Information is stored on a track in magnetized spots called bits. These bits are similar to the bits in internal memory and are situated on the track such that usually every eight of them make up one byte.
4. To access information from a cartridge, it is mounted on a disk drive which is equipped with two recording heads, one for each side of the disk. The heads move radially along a line from the center to the outside from track to track. To access information from a disk pack, the recording heads are moved back and forth in the space between the platters by the access arms to which they are attached.
A stack of tracks is called a cylinder, and all the recording heads, acting at once, access it. The recording capacity of a disk pack is measured in terms of the number of cylinders, the number of tracks, and the amount of data in each track.
5. Information on a disk is organized in terms of blocks, each having its own address, which consists of a cylinder number, a track number, and a record number. To access directly the necessary information, the recording heads first seek the required cylinder, then search to find the beginning of the required record, and then transfer the information to the memory of the computer or to another form of storage, all of which is done in a few milliseconds.
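The (cylinder, track, record) addressing scheme above can be sketched as a small calculation. The pack geometry here (400 cylinders, 19 tracks, 12 records per track) is a made-up example, not a real device's figures.

```python
CYLINDERS, TRACKS, RECORDS = 400, 19, 12   # hypothetical pack geometry

def block_number(cylinder, track, record):
    """Flatten a (cylinder, track, record) address into one block index."""
    assert cylinder < CYLINDERS and track < TRACKS and record < RECORDS
    return (cylinder * TRACKS + track) * RECORDS + record

print(block_number(0, 0, 0))    # 0 - the first block on the pack
print(block_number(1, 0, 0))    # 228 - one full cylinder (19 * 12 records) later
```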
6. Dust and dirt cause the recording condition of disks to deteriorate. As a result, data packs, which are disks with the recording heads sealed inside, were developed. They are more expensive than the normal disk packs but the drives on which they are mounted are cheaper than the normal disk drives.
7. Disk drives are of two kinds: drives with a single non-removable platter, and drives in which disks can be changed. The latter kind is further subdivided into top-loading single platter, front-loading single platter, and top-loading multiple platter. Some disk drives open from the top, where single platter disks, either hard disks or diskettes, are inserted. For very large storage, the top-loading multiple platter drives are used. After being mounted on a disk drive, disks are kept spinning at a very high and constant speed, thus allowing the recording heads to have direct access to the required information. For example, the pack on the IBM 3330 spins at 60 revolutions per second.
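The constant spin rate quoted above implies an average rotational delay, since on average the required data is half a revolution away when the head arrives. Using the text's figure of 60 revolutions per second:

```python
revs_per_second = 60                        # spin rate from the text
time_per_rev_ms = 1000 / revs_per_second    # about 16.67 ms per revolution
average_latency_ms = time_per_rev_ms / 2    # on average, wait half a turn
print(round(average_latency_ms, 2))         # 8.33 ms average rotational delay
```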
11. Microfilm is often used as an alternative to the printer. The output is 'printed' on microfilm rather than paper, which, in addition to being faster, also condenses large stacks of paper down into small amounts of microfilm with no special programming. The drawback of computer-output microfilm (COM) is that it takes a special device to print the microfilm and a special viewer to read it.
STEPS IN PROBLEM SOLVING
1). Can a computer solve problems? Definitely not. It is a machine that carries out the procedures which the programmer gives it. It is the programmer then who solves the problems. There are a few steps that one has to follow in problem solving:
2). Step 1. The programmer must define the problem clearly. This means that he or she has to determine, in a general way, how to solve the problem. Some problems are easy, while others take months of study. The programmer should always start by asking: "Do I understand the problem?"
3). Step 2. The programmer must formulate an algorithm, which is a straightforward sequence of steps of instructions used to solve the problem. Constructing an algorithm is the most important part of problem solving and is usually time-consuming. An algorithm can be described by a flowchart, which may be stated as a sequence of precise sentences, or by a block diagram. The latter is a diagrammatic representation of the sequence of events to be followed in solving the problem. The relationship between the events is shown by means of a connecting arrow —>. A block diagram can show if a process has to be repeated or if there are alternative routes to be taken.
4). Step 3. The programmer must translate the algorithm or flowchart into a computer program. To do so, he or she writes detailed instructions for the computer, using one of the many computer languages available following the exact sequence of the flowchart algorithm. The program is usually written on coding sheets which have a specific format drawn on them.
5). Step 4. The programmer must then keypunch the program, or give the coding sheets to the keypunch operator to do it. The program is either punched on cards or, more usually, entered into the computer at a terminal with a visual display unit.
6). Step 5. The program must then be tested. To do so, the computer operator puts the deck of cards in the card reader and presses the "read" button. This transfers the information to the memory of the computer. Alternatively, the program must be read from disk into the memory. Next, a printout shows if the program works or if it has errors (called bugs). If the programmer is using a terminal instead of cards to enter the instructions it is possible, with the aid of a few commands, to store the program in the memory of the computer and get a printout.
7). Step 6. The last step is to add the data to the program and run the job completely. The computer will then perform the calculations necessary to solve the problem. It will follow the instructions in the program to the minutest detail. Therefore, one can say that the computer is a robot. It doesn't think, but simply does what it is told.
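The algorithm of Step 2 can be made concrete with a toy example: a precise sequence of steps containing both a repeated process and alternative routes, exactly the features a flowchart or block diagram would show with arrows.

```python
def largest(numbers):
    """Algorithm: find the largest number in a list."""
    biggest = numbers[0]          # Step 1: assume the first number is largest
    for n in numbers[1:]:         # Step 2: repeat for each remaining number
        if n > biggest:           # Step 3: decision point - two alternative routes
            biggest = n           #   route A: remember the new largest
                                  #   route B: keep the old one and continue
    return biggest                # Step 4: output the result

print(largest([3, 17, 9, 4]))    # 17
```

Writing the steps out this precisely, before any code is typed, is exactly the algorithm-construction work the text calls the most important part of problem solving.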