INFORMATION PROCESSING USING COMPUTERS


MODULE I









                                
1
COMPUTER APPRECIATION

1.1. Introduction
Computers play a key role in how individuals work and how they live. Even the smallest organizations have computers to help them operate more efficiently, and many individuals use computers at home for educational, entertainment, and business purposes. The ease with which computers can process, store, and retrieve data has made them indispensable in office and business environments. In fact, any task that can be carried out systematically can be performed by a computer. It is therefore essential for every educated person today to know about the computer: its strengths, its weaknesses, and its internal structure.
1.2. What is a computer?
A computer is an electronic device that stores and manipulates information. Computers can access and process data millions of times faster than humans can. A computer can store data and information in its memory, process them and produce the desired results. It operates under the control of a set of instructions that is stored in its memory unit. The computer is often compared with the human brain. Like the brain, a computer can take in data and process it. It can store the data either in raw form or as processed results and can deliver the raw or processed data on demand. A computer is used essentially as a data processor. The terms data and information are very commonly used. You must clearly understand the difference between the two.
Data: Data in computer terminology mean raw facts and figures. For example, ‘Mohan’, 1977, ‘A’, -162.19, and 75.2 are data. Data are processed to form information.
Information: Information is what we get after processing data (meaningful data). Data are aggregated and summarized in various meaningful ways to form information. For example, “Mohan, whose roll number is 1977, has got grade A” is information, as it conveys some meaning.
Data is entered into the computer through an input device like the keyboard and is stored in the computer’s memory. It is then processed according to the given set of instructions, and the result is displayed through an output device like the monitor. A computer can store large amounts of information, and you can retrieve the stored information whenever needed. Computers can understand only two electric signals, ON and OFF, where ON means the circuit is on and OFF means the circuit is off.
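To make this two-symbol coding concrete, the short sketch below (written in Python, chosen here purely for illustration) shows how each character of text reduces to a pattern of 1s (ON) and 0s (OFF):

    # Each character is stored as a pattern of ON (1) and OFF (0) signals.
    # ord() gives the character's numeric code; format() renders it in binary.
    for ch in "CPU":
        code = ord(ch)               # e.g. 'C' -> 67
        bits = format(code, "08b")   # 67 -> '01000011' (8 bits = one byte)
        print(ch, code, bits)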
Computers can do a lot of different tasks such as playing music and games, typing documents, drawing pictures, storing data, etc. These days, computers are also used in banks, hospitals, offices, supermarkets, and schools, and for railway and online reservations, weather forecasting, error detection, controlling the flight of a spacecraft, etc.
Points to Remember
F  A computer is an electronic device that processes the input data according to a given set of instructions to give meaningful output or information.
F  Computers can understand only two electric signals, ON and OFF.
F  Typing data into the computer through the keyboard is called entering or inputting data.
F  Results produced by the computer are called the output.
F  Doing calculations or comparing data is called Processing.
1.3. Functions of a computer
A computer mainly performs the following four functions, as illustrated by the short sketch after this list:
F  Receive input —Accept data/information from outside through various input devices like the keyboard, mouse, scanner, etc.
F  Process information—Perform arithmetic or logical operations on data/information.
F  Produce output—Communicate information to the outside world through output devices like monitor, printer, etc.
F  Store information—Store the information in storage devices like hard disk, floppy disks, CD, etc.
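A minimal sketch of this cycle in Python (the prompts and the addition are illustrative choices, not part of any particular machine):

    # A minimal Input-Process-Output cycle.
    a = float(input("First number: "))    # Input stage: data arrives via the keyboard
    b = float(input("Second number: "))
    total = a + b                         # Process stage: the CPU does the arithmetic
    print("Sum:", total)                  # Output stage: the result goes to the monitor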
1.4. Components of a computer
Since a computer follows the Input-Process-Output (I-P-O) cycle, the first stage (input) is performed by the input unit, the second stage (processing) by the central processing unit (CPU), and the third stage (output) by the output unit. The basic structure of a computer thus consists of these three units, linked by data signals and control signals.
Input Unit
The input unit is formed by the input devices attached to the computer. Examples of input devices and media are: keyboard, mouse, Magnetic Ink Character Reader (MICR), Optical Mark Reader (OMR), Optical Character Reader (OCR), joystick etc. The input unit is responsible for taking input and converting it into computer understandable form (the binary code). Since a computer operates on electricity, it can understand only the language of electricity i.e. either ON or OFF, or high voltage or low voltage. Thus a computer uses binary language which has only two symbols: 1 for ON and 0 for OFF. The input unit takes the input and converts it into binary form so that it can be understood by the computer.
Output Unit
The output unit is formed by the output devices attached to the computer. The output coming from the CPU is in the form of electronic binary signals, which needs conversion into a form which can be easily understood by human beings i.e. characters, graphical or audio visual form. This conversion is performed by the output units. Some popular Output devices are VDU (Visual Display Unit), printer, plotter, speech synthesizer etc.
Central Processing Unit (CPU)
The CPU is the brain or the control centre for a computer. It guides, directs and governs its performance. The CPU consists of Control Unit (CU), Arithmetic and Logic Unit (ALU) and Main memory or Primary memory.
Control Unit (CU)
The CU controls and guides the flow and manipulation of data and information. It also controls the flow of data from input devices to memory and from memory to output devices. Another important function of the CU is program execution, i.e., carrying out all the instructions stored in the program. The CU fetches program instructions from memory and executes them one after the other: each instruction is decoded, interpreted, and executed. After processing the current instruction, the control unit signals memory to send the next instruction in sequence. This goes on until the processing is complete.
Arithmetic & Logic Unit (ALU)
The ALU performs all four arithmetic operations (+, -, *, /) and the logical operations (<, >, <=, >=, =, <>). When two numbers are to be added, they are sent from memory to the ALU, where the addition takes place, and the result is put back in memory. Other arithmetic operations are performed the same way.
For logical operations also, the numbers to be compared are sent from memory to ALU where the comparison takes place and the result is returned to the memory. The result of a logical operation is either TRUE or FALSE. These operations provide the capability of decision-making to the computer.
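The following Python sketch mirrors the ALU's work: the arithmetic operations yield numbers, while the logical (comparison) operations yield TRUE or FALSE:

    a, b = 15, 4

    # Arithmetic operations
    print(a + b, a - b, a * b, a / b)   # 19 11 60 3.75

    # Logical (comparison) operations: each one yields True or False
    print(a < b, a > b)     # False True
    print(a <= b, a >= b)   # False True
    print(a != b)           # True (written as <> in the text's notation)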
The Memory
Well, if a computer has a brain (the CPU), it must also have a memory. Indeed it does; however, the memory of a computer is unlike human memory. A human being can remember stored information for a long time, whereas a computer cannot. Its memory is temporary (volatile): it cannot remember anything after it is switched off. The memory of a computer is more like a predefined working place, where it temporarily keeps information and data to facilitate its performance. When a task is finished, it clears its memory, and the memory space is then available for the next task. When the power is switched off, everything stored in the memory is erased and cannot be recalled.
The memory of a computer is often called main memory or primary memory. It is generally considered the third component of the CPU. It has the following functions:
F  Data are fed into the input storage area where they are held until ready to be processed.
F  It functions as a working storage place used to hold the data that is being processed and the intermediate results of such processing.
F  It also acts as an output storage area to hold the final result of the processing.
F  Another role is of a program storage area to hold the processing instructions.
The memory of a computer can be thought of as ‘cells’. A memory cell may be defined as a device which can store a symbol selected from a set of symbols. Each of these cells is further broken down into smaller parts known as bits. A bit means a binary digit, i.e., either 0 or 1; it is the elementary unit of memory. Combinations of bits are used to store data and instructions. A group of 8 bits is called a byte, and a group of 4 bits is called a nibble. One byte is the smallest unit which can represent a data item or a character. Other units of memory are KB, MB, and GB.
One KB (kilobyte) means 2^10 bytes, i.e., 1024 bytes.
One MB (megabyte) means 2^10 KB, i.e., 1024 x 1024 bytes.
One GB (gigabyte) means 2^10 MB, i.e., 1024 x 1024 x 1024 bytes.
One TB (terabyte) means 2^10 GB, i.e., 1024 x 1024 x 1024 x 1024 bytes.
One PB (petabyte) means 2^10 TB, i.e., 1024 x 1024 x 1024 x 1024 x 1024 bytes.
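These unit definitions can be checked directly; a short Python sketch:

    # Each memory unit is 2**10 = 1024 times the previous one.
    KB = 2 ** 10        # 1,024 bytes
    MB = KB * 2 ** 10   # 1,048,576 bytes
    GB = MB * 2 ** 10
    TB = GB * 2 ** 10
    PB = TB * 2 ** 10
    for name, size in [("KB", KB), ("MB", MB), ("GB", GB), ("TB", TB), ("PB", PB)]:
        print(f"1 {name} = {size:,} bytes")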
Since computer’s main memory (primary memory) is temporary, secondary memory space is needed to store data and information permanently for later use. The two most common secondary storage media are the floppy diskette and the hard disk.
Points to Remember
F  Information is entered into a computer with the help of an input device.
F  Results are provided by the output devices.
F  CPU consists of Arithmetic Logic Unit (ALU) and Control Unit (CU).
F  ALU is capable of doing arithmetic and logical operations.
F  CU controls the activities of the computer
F  CPU is the brain of the computer.
F  The main memory holds data, information and intermediate results.
F  A bit (Binary Digit) is an elementary unit of memory.
F  A group of 8 bits is called a Byte; a group of 4 bits is called a Nibble.
F  1KB = 1024 bytes; 1MB = 1024KB; 1GB = 1024MB; 1TB = 1024GB; 1PB = 1024TB.
1.5. Hardware & Software
Modern computers are made of high-speed electronic components that enable the computer to perform thousands of operations each second. A computer system consists of both hardware and software. The Hardware is the physical equipment: the computer itself and the peripherals connected to it. The Peripherals are any devices attached to the computer for purposes of input, output, and storage of data (such as a keyboard, monitor display, or external hard disk).
The Software consists of the programs and associated data (information) stored in the computer. A Program is a set of instructions that the computer follows to manipulate data. Being able to run different programs is the source of a computer’s versatility.
Like hardware and software, Firmware is another commonly used term. Firmware is a prewritten program that is permanently stored in read-only memory. It configures the computer and is not easily modifiable by the user. The BIOS (Basic Input/Output System) instructions are an example of firmware. Another term is Liveware, which generally refers to the people associated with, and benefiting from, the computer system.
1.6. Characteristics of a Computer
All computers have certain strengths and weaknesses irrespective of their size and type. Computers are not just adding machines; they are capable of doing complex activities and operations. They can be programmed to do complex, tedious, and monotonous tasks. Some of the important strengths of computer are:
F  Speed: Computers can calculate at very high speeds. A microcomputer, for example, can execute millions of instructions per second over and over again without any mistake. As the power of the computer increases, the speed also increases. A powerful computer is capable of performing about 3 to 4 million simple instructions per second.
While referring to the speed of computers, we do not talk in terms of seconds or even milliseconds. The units for measuring computer speed are microseconds (10^-6 s), nanoseconds (10^-9 s), and even picoseconds (10^-12 s). For example, supercomputers can operate at speeds measured in nanoseconds and even in picoseconds, one thousand to one million times faster than microcomputers.
F  Word Length: A digital computer operates on binary digits, 0 and 1. It can understand information only in terms of 0s and 1s. A binary digit is called a bit. A group of 8 bits is called a byte. The number of bits that a computer can process at a time in parallel is called its word length. Commonly used word lengths are 8, 16, 32 or 64 bits. Word length is a measure of the computing power of a computer: the longer the word length, the more powerful the computer. When we talk of a 32-bit computer, it means that its word length is 32 bits (a small probe sketch follows this list).
F  Storage: Computers have main memory and auxiliary memory systems. Computers can store a large amount of information in a very small space. A CD-ROM, 4.7 inches in diameter, can store all 33 volumes of the Encyclopedia Britannica and still have room for more. Bubble memories can store 6,250,000 bits per square centimeter of space.
F  Accuracy: The accuracy of a computer system is very high. Errors in hardware can occur, but error-detecting and error-correcting techniques will prevent false results. In most cases, errors are due to the human factor rather than a technological fault. For example, if a program is wrongly coded, if the data is corrupted, or if the program logic is flawed, then you will always get wrong results. Another area where mistakes can occur is data entry: if a wrong input is given, the output will also be wrong. This characteristic is called GIGO (Garbage In, Garbage Out).
F  Versatility: Computers are very versatile machines. They can perform activities ranging from simple calculations to complex operations such as CAD modeling and navigating missiles and satellites. In other words, they are capable of performing almost any task, provided the task can be reduced to a series of logical steps. Computers can communicate with other computers and can receive and send data in various forms like text, sound, video, graphics, etc. This ability of computers to communicate with one another has led to the development of computer networks, the Internet, the WWW, and so on. Today, we can send e-mail to people all around the world. We now live in a connected world, and all this is possible because of computers and other related technologies.
F  Automation: The level of automation achieved in a computer is phenomenal. It is not a simple calculator where you have to punch in the numbers and press the ‘equal to’ sign to get the result. Once a task is initiated, a computer can proceed on its own till its completion. Computers can be programmed to perform a series of complex tasks involving multiple programs, and they will perform these tasks flawlessly: they will execute the programs in the correct sequence, switch machines on and off at the appropriate times, monitor the operational parameters, and send warning signals or take corrective actions if the parameters exceed the control level, and so on. Computers are capable of these levels of automation, provided they are programmed correctly.
F  Diligence: Diligence means being constant and earnest in effort and application. Human beings suffer from weaknesses like tiredness and lack of concentration. Humans have feelings; they become sad, depressed, bored, and negligent, and it reflects on the work they do. Moreover, human beings cannot perform the same or similar tasks over and over again with the same precision, accuracy, and enthusiasm as the first time; after some time they become bored, and this affects their performance. Being a machine, a computer does not have any of these human weaknesses. It won’t get tired or bored, go into depression, or lose concentration. It will perform the tasks given to it, whether they are interesting, creative, monotonous, or boring, and whether it is the first time or the millionth time, with exactly the same accuracy and speed.
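The probe sketch promised above gives a rough way to check a machine's word length from Python, assuming a typical CPython build in which a pointer occupies exactly one machine word:

    import struct
    import sys

    # struct.calcsize("P") is the size of a pointer in bytes; on a typical
    # CPython build a pointer occupies one machine word, so 8 bytes
    # implies a 64-bit word length.
    word_bytes = struct.calcsize("P")
    print("Word length:", word_bytes * 8, "bits")

    # The largest value a signed machine word can index (2**(n-1) - 1):
    print("sys.maxsize =", format(sys.maxsize, ","))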
In spite of having all the strengths given above, the computer does have some limitations, which happen to be strengths of human beings. These are:
F   Lack of Decision Making Power: Computers cannot take decisions themselves. They do not possess this power which is a great asset of human beings.  Computers are to be instructed at every step. If an unanticipated situation arises, computers will either produce erroneous results or abandon the task altogether. They do not have the potential to work out an alternate solution.
F   IQ Zero: Computers are dumb machines with zero IQ. They need to be told each and every step, however minute it may be. 
These limitations of computers are characteristics of human beings. Thus, computers and human beings work in collaboration to make a perfect pair.
1.7. Evolution & history of computers
The development of computers from the early calculating device to current generation can be broadly classified into the following categories:
F  Mechanical Calculating Devices
F  Electromechanical Calculating Devices
F  Electronic Computers
1.7.1. Mechanical Calculating Devices
Mechanical calculating devices can be further classified as:
F  Manual Calculating Devices
F  Semi-automatic Calculating Devices
Manual Calculating Devices
The Abacus: The first manual calculating device, developed around 3500 BC, was the abacus. It consists of a rectangular frame carrying a number of rods or wires. A centre bar divides each of these rods into two unequal portions. On the upper, smaller portion of each rod are two beads, and on the lower portion are five beads. The position of the beads on a particular rod represents a digit in that particular decimal position. It was used to do simple calculations: addition, subtraction, multiplication, and division. China played an essential part in the development and evolution of the abacus. A skilled abacus operator can work on addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower).

The Napier Bones: In 1617 an eccentric (some say mad) Scotsman named John Napier (1550–1617) invented logarithms. He noted that multiplication and division of numbers can be performed by addition and subtraction, respectively, of the logarithms of those numbers. Using this principle, he designed Napier's bones, an abacus-like device used for multiplication and division in the early 17th century; even square roots and powers could be calculated. The Napier Bones were rectangular strips of wood or bone with figures marked on one side. Each rod was divided into ten squares; the top square held a digit from 0 to 9, and the squares below held the multiples of that digit. By placing the rods in line with one another in the right order, one can do long multiplications with great speed.
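Napier's principle, log(a x b) = log(a) + log(b), is easy to verify numerically; a short Python sketch:

    import math

    # Napier's principle: a multiplication can be carried out as an
    # addition of logarithms, followed by undoing the logarithm.
    a, b = 37.0, 54.0
    log_sum = math.log10(a) + math.log10(b)   # add the logarithms
    product = 10 ** log_sum                   # undo the logarithm
    print(product)   # ~1998.0 (subject to rounding), the same as...
    print(a * b)     # 1998.0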
In 1623, Wilhelm Schickard built the first digital mechanical calculator called a calculating clock. It was the first gear driven calculating device. This device got little publicity because Schickard died soon afterward. It was put to practical use by his friend Johannes Kepler, who revolutionized astronomy.
Semi-automatic Calculating Devices
In 1642 the first semi-automatic mechanical device was developed by Blaise Pascal. This device was known as the Pascaline. Blaise Pascal, the 18-year-old son of a French tax collector, invented what he called a numerical wheel calculator to help his father with his duties. The drawback of the Pascaline was that it could do only addition.
The German mathematician and philosopher Gottfried Wilhelm von Leibnitz improved the Pascaline in 1673 by creating a machine that could add, subtract, multiply, and divide. It is known as the Leibnitz Machine.
In 1822 the English mathematician Charles Babbage proposed a steam driven calculating machine, which he called the Difference Engine. The device was never finished.
In 1834, Charles Babbage moved on from his difference engine to a more complete design, the analytical engine, which he described in detail in 1835. This device, large as a house and powered by six steam engines, was more general purpose in nature because it was programmable. Babbage called the two main parts of his Analytical Engine the "Store" and the "Mill", as both terms are used in the weaving industry: the Store was where numbers were held, and the Mill was where they were "woven" into new results. In a modern computer these same parts are called the memory unit and the central processing unit (CPU). Charles Babbage is considered to be the Father of Computers.
The next breakthrough occurred in America. The U.S. Constitution states that a census should be taken of all U.S. citizens every 10 years in order to determine the representation of the states in Congress. While the very first census of 1790 had required only 9 months, by 1880 the U.S. population had grown so much that the count for the 1880 census took 7.5 years. Automation was clearly needed for the next census. The census bureau offered a prize for an inventor to help with the 1890 census, and this prize was won by Herman Hollerith, who proposed and then successfully adopted Jacquard's punched cards for the purpose of computation. He developed a device which could automatically read census information which had been punched onto cards. As a result of his invention, reading errors were greatly reduced, workflow was increased, and, more importantly, stacks of punched cards could be used as an accessible memory store of almost unlimited capacity; furthermore, different problems could be stored on different batches of cards and worked on as needed. Hollerith's tabulator became so successful that he started his own firm to market the device; this company eventually became International Business Machines (IBM).
1.7.2. Electromechanical Calculating Devices
Mark I (1944): By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer based on standard IBM electromechanical parts. Aiken's machine, called the Harvard Mark I, handled 23-decimal-place numbers (words) and could perform all four arithmetic operations; moreover, it had special built-in programs, or subroutines, to handle logarithms and trigonometric functions. The machine weighed 5 tons, incorporated 500 miles of wire, was 8 feet tall and 51 feet long, and had a 50 ft rotating shaft running its length, turned by a 5 horsepower electric motor. The Mark I ran non-stop for 15 years.
One of the primary programmers for the Mark I was a woman, Grace Hopper. Hopper found the first computer "bug": a dead moth that had gotten into the Mark I and whose wings were blocking the reading of the holes in the paper tape. The word "bug" had been used to describe a defect since at least 1889 but Hopper is credited with coining the word "debugging" to describe the work to eliminate program faults.
1.7.3. Electronic Computers
The first fully electronic computer was built by John Vincent Atanasoff and his assistant Clifford Berry at Iowa State University, between 1937 and 1942. The Atanasoff Berry Computer (ABC) used punched cards for input and output, vacuum tube electronics to process data in binary format, and rotating drums of capacitors to store data. The ABC, however, only performed one task: it was built to solve large systems of simultaneous equations (up to 29 equations with 29 unknowns), an onerous computing task commonly found in science and engineering. So, the ABC was not a general-purpose computer; it was a special-purpose computer.
In 1941, Konrad Zuse, a German who had developed a number of calculating machines, released the first programmable computer designed to solve complex engineering equations. The machine was named Z3.
Similarly, another special-purpose electronic computer named Colossus was built in England starting in 1943 for the purpose of breaking German codes. The project was worked on by Alan Turing and Max Newman. The existence of this computer was kept secret until the 1970s.
Back in America, with the success of Aiken's Harvard Mark-I as the first major American development in the computing race, work was proceeding on the next great breakthrough. The second American contribution was the development of the giant ENIAC machine between 1943 and 1945 by two professors, John W. Mauchly and J. Presper Eckert, at the University of Pennsylvania. ENIAC (Electrical Numerical Integrator and Computer) used words of 10 decimal digits instead of binary ones like previous automated calculators/computers. ENIAC was also the first machine to use more than 2,000 vacuum tubes; in fact, it used nearly 18,000 of them. ENIAC is generally acknowledged to be the first successful high-speed electronic digital computer (EDC) and was productively used from 1946 to 1955.
In 1945, Von Neumann designed the Electronic Discrete Variable Automatic Computer (EDVAC) with a memory to hold both a stored program as well as data. This "stored memory" technique as well as the "conditional control transfer," that allowed the computer to be stopped at any point and then resumed, allowed for greater versatility in computer programming. The key element to the von Neumann architecture was the central processing unit, which allowed all computer functions to be coordinated through a single source.
In 1949, a Cambridge University professor named Maurice Wilkes designed the Electronic Delay Storage Automatic Computer (EDSAC). Here the program was fed into the storage unit by means of paper tape. It also used vacuum tubes and was slightly faster than ENIAC.
In 1951, the UNIVAC I (Universal Automatic Computer), built by Remington Rand, became one of the first commercially available computers to take advantage of these advances. The first computers were characterized by the fact that operating instructions were made to order for the specific task for which the computer was to be used. Each computer had a different binary-coded program, called a machine language, that told it how to operate. This made the computers difficult to program and limited their versatility and speed. Other distinguishing features of the first computers were the use of vacuum tubes and of magnetic drums for data storage.
By 1965, most large businesses routinely processed financial information using computers. It was the stored program and the programming language that gave computers the flexibility to finally be cost-effective and productive for business use. Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small disc of semiconductor material. Scientists later managed to fit even more components on a single chip.
As IC technology progressed, chip manufacturers could fit more and more circuitry onto the tiny silicon chips. By 1971, a company named Intel developed the first microprocessor (also called an MPU) that fit a whole CPU onto one microchip. The Intel 4004 processor contained 2300 transistors on a chip of silicon 1/8" x 1/16" in size.
By 1974, Intel had introduced its 8080 chip, a general-purpose microprocessor offering ten times the performance of the earlier MPU. It was not long before electronics hobbyists began building small computer systems based on the rapidly improving microprocessor chips. These computers came complete with user-friendly software packages that offered even non-technical users an array of applications, most popularly word processing and spreadsheet programs.
The first commercially available microcomputer of note was the Altair 8800 computer sold by MITS (Micro Instrumentation & Telemetry Systems), a company founded by Dr. Ed Roberts that was based in Albuquerque, New Mexico.
Remember that a computer can’t do anything without software. A small company was formed in Albuquerque to provide software (a BASIC language) for the Altair computer. The founder’s name was Bill Gates, and the company he formed (along with his partner Paul Allen) was Microsoft. Another popular company named Apple was founded by Steve Jobs and Steve Wozniak on April 1, 1976. Their Apple II computer was a hit, especially in the home and education markets.
In 1981, IBM introduced its personal computer (PC) for use in the home, office and schools. It used a 4.77 MHz Intel 8088 processor. Within two years IBM released the PC XT (1983) and PC AT (1984) using the Intel 80286 processor.
The 1980s saw an expansion in computer use. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were in use. As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software, and information, and to communicate with each other. Computers continue to grow smaller and more powerful.
Computers were traditionally very difficult to use, requiring the user to memorize and type in the necessary commands (this is called a Command Line Interface). To make computers more accessible, the Graphical User Interface (GUI) was developed. In a GUI, the user interacts with a graphical display on the screen containing icons and windows and controls. Commands are chosen from menus rather than typed in.
The GUI was developed at the Xerox Palo Alto Research Center, but the management at Xerox failed to see the usefulness of it. When Steve Jobs of Apple saw the GUI, however, he recognized its value. Apple licensed the concepts from Xerox, developed them further, and released the first successful GUI computer, the Macintosh, in 1984. Macintosh computers used the Motorola 68000 series of microprocessors (and later the PowerPC series of microprocessors).
The evolution of computers is summarized in the table below.
3000 BC - Abacus. Developed in China; used as a counting device and later for mathematical calculations.
1620 AD - Slide Rule. Normally used for engineering calculations.
1642 - Pascaline, Pascal's Calculating Machine (Blaise Pascal, French mathematician). A device with 8 counter-wheels linked by ratchets for carry-over, made for tedious mathematical calculations. It was not very successful due to difficult operation and very high cost.
1834 - Babbage's Analytical Engine (Charles Babbage, Professor of Mathematics at Cambridge). Today's computer organization corresponds very closely to the analytical engine.
1842 - First computer programmer (Lady Augusta Ada Byron). She translated a paper on Babbage's Analytical Engine, describing the steps to use it. A programming language, Ada, is named after her.
1854 - Boolean Logic (Algebra) (George Boole, British mathematician). Published the principles of Boolean logic, based on variables whose values are either True or False. It was an important development in the field of computers, as it made it easy to build reliable electronic circuits representing binary digits: 1 for ON and 0 for OFF.
1884 - Punched Card Tabulating Machine (Herman Hollerith, instructor at MIT, US). It was used for the US census of 1890. The work of approximately eight years was performed by this machine in three years.
1944 - Harvard Mark-I (Howard A. Aiken, Harvard University, US). The first successful general-purpose digital computer.
1946 - Concept of program vs. data (Dr. John von Neumann, US). He gave the design principle of digital computers, suggesting the concept of stored programs to make computers fully automatic.
1946 - ENIAC (John W. Mauchly & J. Presper Eckert, US). The first general-purpose electronic digital computer.
1951 - UNIVAC-1, Universal Automatic Computer (Remington Rand). One of the first commercially available computers, taking advantage of the von Neumann architecture.
1.8. Generations of Computers
The history of computer development is often described in terms of the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices.
First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.
Third Generation - 1964-1971: Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors. As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.
Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.
The characteristics of each generation of computers can be summarized as shown below:
First Generation Computers
F  Time Period: 1951 – 1959
F  Technology Used: Vacuum tubes
F  Memory Capacity: 10,000 – 20,000 characters
F  Execution Speed: A few thousand instructions per second
F  Languages: Machine code and electrically wired boards
F  Important Computers: ENIAC, EDVAC, EDSAC, UNIVAC-I & II, IBM 701 & 650
F  General Remarks: Computers were extremely huge and bulky, relatively slow, unreliable, and generated a lot of heat. Beginning of electronic data processing.
Second Generation Computers
F  Time Period: 1959 – 1963
F  Technology Used: Transistors and diodes
F  Memory Capacity: 4,000 – 64,000 characters
F  Execution Speed: Up to 1 million instructions per second
F  Central Memory: Magnetic core memory
F  Languages: Assembly languages and high-level languages like COBOL, FORTRAN, ALGOL
F  Important Computers: CDC-60, UNIVAC 1004, IBM 1620, 7090, 7094, and Burroughs 200
F  General Remarks: Use of transistors and diodes, reduced size and weight, faster operation but costly, increase in reliability, rapid growth in data processing applications, and introduction of time-sharing and real-time processing.
Third Generation Computers
F  Time Period: 1963 – 1975
F  Technology Used: Integrated circuits
F  Memory Capacity: 32,000 – 4 million characters
F  Execution Speed: Up to 10 million instructions per second
F  Languages: High-level, e.g., FORTRAN, PL/1, COBOL, ALGOL 68, and BASIC
F  Important Computers: UNIVAC 100, IBM 360, and Burroughs 7700
F  General Remarks: Smaller, faster, reliable, and required less power; reduced computing costs, improved software support, and software development methodologies and tools.
Fourth Generation Computers
F  Time Period: 1975 – Today
F  Technology Used: Microprocessors using Large Scale Integration (LSI)
F  Memory Capacity: 512,000 – 32 million characters
F  Execution Speed: Up to 100 million instructions per second
F  Languages: All high-level and fourth-generation languages, and artificial intelligence
F  Important Computers: CDC Cyber 170, Apple, Macintosh, IBM PC, PC-XT, PC-AT, and AT-386
F  General Remarks: More powerful and versatile computers, much faster, much smaller, and less expensive; minicomputers and microcomputers came on the market.
Fifth Generation Computers
F  Japan initiated the fifth generation computer project in 1982, aiming to become the leader in the computer field in the 1990s.
F  It was conceived as a ‘Knowledge / Inference Processing System’.
F  Uses Very Large Scale Integration (VLSI) and parallel processing.
F  Incorporates Artificial Intelligence (AI). Artificial Intelligence refers to the use of computers in such a way that they perform various operations and at the same time take decisions similar to human beings.
F  Processes non-numeric information such as pictures and graphs.
F  Natural language processing systems.


1.9. Classification of Computers
Computers can be classified in many different ways: by working principle (mode of data representation), and by size and speed.
1.9.1. Classification by working principle
Computers can be classified into three types according to the mode of data representation: analog, digital, and hybrid.
An analog computer represents data as physical quantities (such as pressure, voltage, or temperature) and operates on the data by manipulating those quantities. The analog system is set up according to initial conditions and then allowed to change freely. The output of an analog computer is usually in the form of dial gauge readings or graphs. In other words, analog computers are electronic systems used to manipulate physical quantities that are represented in analog form. A thermometer is a simple analog computer: as the temperature varies, the mercury moves correspondingly. Another example of an analog computer is the processor attached to a petrol pump, which converts fuel flow measurements into displays of quantity and price.
The word ‘Digital’ stands for discrete (step-by-step) and hence digital computers can take only discrete values. A digital computer represents data in terms of discrete numbers and processes data using standard arithmetic operations. Hence accuracy obtained in a digital computer is very high. They are high speed, programmable electronic devices that perform mathematical calculations, compare values and store the results. They recognize data by counting discrete signals, representing either a high voltage electrical state (on) or low voltage electrical state (off). Numbers and special characters are reduced to representation by 1s (on) and 0s (off).
A computer which performs operations based on both analog and digital principles is called a hybrid computer. In other words, a computer system that has capabilities, behavior, functions, and principles of operation of both analog and digital computer is called a hybrid computer. Many scientific, business and medical applications rely on the combination of analog and digital services. The ultrasonic digital scanner is an example of hybrid computer.
The characteristics of these computers can be summarized as follows:
Analog Computers
F  Operate by measuring rather than counting         
F  Use continuous signals as input
Digital computers
F  Operate both on digits and alphabets  
F  Use discrete signals as input
F  Computers used for business and scientific applications, and pulse/heart-beat counters are examples.
Hybrid computers
F  Use both types of signals – analog as well as digital – as input
F  Mostly used with process control equipment in continuous production plants, for example, oil refineries.
F  Areas of application are nuclear power plants, mines, intensive care units (ICUs) of hospitals, and chemical process plants.

1.9.2. Classification by size and speed
The size of a computer often determines its function and processing capacity. The size of computers varies widely from tiny to huge and is usually dictated by computing requirements. According to size and speed, the computers can be classified as Micro, Mini, Mainframe and super computers.
Supercomputers
A supercomputer is a mainframe computer that has been optimized for speed and processing power. Supercomputers are used for extremely calculation-intensive tasks such as simulating nuclear bomb detonations, aerodynamic flows, and global weather patterns. A supercomputer typically costs several million dollars. Supercomputers are the biggest in size and the most expensive of any class of computer, and can process trillions of instructions in seconds. This kind of computer is not used as a PC at home or by a student in a college. Governments especially use this type of computer for their various calculations and heavy jobs, and different industries use it for designing their products. In Hollywood, supercomputers are used for animation purposes. They are also helpful for forecasting weather worldwide.
Mainframe Computers
Another giant among computers, after the supercomputer, is the mainframe, which can also process millions of instructions per second and is capable of accessing billions of data items. Users connect to the mainframe using terminals and submit their tasks for processing by the mainframe. A terminal is a device that has a screen and keyboard for input and output, but does not do its own processing (terminals are also called dumb terminals, since they cannot process data on their own). The processing power of the mainframe is time-shared between all of the users. Mainframes typically cost several hundred thousand dollars. They are used in situations where a company wants the processing power and information storage in a centralized location. Mainframes are also now being used as high-capacity server computers for networks with many client workstations. This type of computer is commonly used in big hospitals and airline reservation companies, and many other huge companies prefer the mainframe because of its capability of retrieving data on a huge scale.
Mini Computers
A minicomputer is a multi-user computer that is less powerful than a mainframe. This class of computers became available in the 1960s, when large-scale integrated circuits made it possible to build a computer much cheaper than the then-existing mainframes (minicomputers cost around $100,000 instead of the $1,000,000 cost of a mainframe). These computers are mostly preferred by small businesses, colleges, etc.
Personal / Micro Computer
A microcomputer is a computer that has a microprocessor chip as its CPU. Microcomputers are often called personal computers (PCs for short) because they are designed to be used by one person at a time. They cost less than the computers described above and are also small in size. They are typically used at home, at school, or at a business. Popular uses for microcomputers include word processing, surfing the web, sending and receiving e-mail, spreadsheet calculations, database management, editing photographs, creating graphics, and playing music or games. Personal computers come in two major varieties, desktop computers and laptop computers:
Desktop computers are larger and not meant to be portable. They usually sit in one place on a desk or table and are plugged into a wall outlet for power. The case of the computer holds the motherboard, drives, power supply, and expansion cards. This case may lie flat on the desk, or it may be a tower that stands vertically (on the desk or under it). The computer usually has a separate monitor (either a CRT or an LCD), although some designs have a display built into the case. A separate keyboard and mouse allow the user to input data and commands.
Laptop or notebook computers are small and lightweight enough to be carried around with the user. They run on battery power, but can also be plugged into a wall outlet. They typically have a built-in LCD display that folds down to protect the display when the computer is carried around. They also feature a built-in keyboard and some kind of built-in pointing device (such as a touch pad). Laptops cost more than desktop units of equivalent processing power, because the smaller components needed to build laptops are more expensive.
1.10. Basic Applications of computers
Computers have affected the lives of people in one way or the other. Computers are being used in each and every field—at home, airline and railway reservations, telephone and electricity bills, banking, medical diagnosis, weather forecasting, etc.
Home - Computers are used at homes for playing games, telling stories, writing letters, making greeting cards, etc. They can be used as an educational aid at home. You can test your general knowledge, and improve your grammar and mathematics. You can get information on any topic using the Internet. Computers are also used in home management. You can keep track of your monthly expenditure and budget, store addresses, phone numbers, etc., on your computer.
Education - Computers are widely used in the field of education. Computers can assist in actual teaching and learning processes. Computers are used by teachers to prepare lessons, report cards, and for teaching different topics related to various subjects. They are also used in schools for helping students with their writings, spellings, grammar, etc.
Cartoons and Animations - In earlier days, cartoons were created with great difficulty. The artist had to draw every picture by hand. These days, cartoons can be created easily on the computer. In cartoon films, the characters appear to be moving. The technique of making cartoon films is called animation. Earlier, it took a lot of time and effort to make animated films. But now, cartoon films are created very easily through computer animation.
Cinema - Computers are also used in cinema to create special effects through computer graphics. Special effects like that of a fire, battle, earthquake, etc., can be created using computers. These days, computers are used to simulate these special effects and combine them with the real characters or scenes using special devices and then the pictures produced look realistic. Some films have been produced in which cartoon characters interact with real characters.
Desk Top Publishing (DTP) - These days, magazines, newspapers, books, comics, etc., are produced using computers. The text is typed using word-processing software and the illustrations are drawn using a graphics package. Modern DTP software makes it easy to apply styles and to lay out text and graphics.
Business - Every company requires a lot of information to carry out its day-to-day activities. This information must be constantly updated. Computers are being used in several office jobs like preparing salary, sales record, stock control, and maintenance of staff records. Computers are also used for sales forecasting, production planning, etc.
Medicine - A large number of computerized equipment is used for medical tests in hospitals and clinics. Computers are used for storing medical records for future references. Complete records of patients can be stored. Doctors can search through these records to examine various case histories. In hospitals, special computers are built inside different equipments. These help to monitor the condition of patients and record all the necessary information. Computers are also used in surgical cases, especially surgery involving the heart.
Defense - Computers are very useful in defense services. Modern weapons and missiles are totally computer-controlled.
Space Technology - A number of satellites have been put into orbit. These space satellites are linked with computers that provide enormous information. Computers are also used to monitor and control the proper functioning of space equipment, to determine and control routes, etc.
Library - Computers are used in libraries for many purposes. Computers are used to record issuing and returning of books. Computers are also used for maintaining a list of borrowed books. Computer indexing helps in selecting information on a particular subject from a library.
Airline/Railway Reservations - Computers are being used for airline/railway reservation. All the information required for booking is fed into the computer by the booking clerk, who checks the availability of tickets. The tickets so booked are printed on pre-printed stationery and issued. The computer updates all the information immediately and gives the latest status. The booking counters' computers are connected through a common network. This enables people to book tickets from any town or city for any other place.
Banks - Computers are being used in banks for various tasks — online enquiry of customer’s balance, cheque verification and updating the balance, calculating interests, printing customer statements, etc. All such transactions in banks are carried out through computers. Many leading banks have installed Automated Teller Machines (ATM). These enable the customers to draw money from accounts, transfer money, obtain bank statements, etc. All these can be done using a special plastic card which is inserted into the input device of a computer. This also eliminates the need for a clerk.
Weather Forecasting - Computers are used for weather forecasting. Data is collected from weather stations and satellites all over the world. They provide information about the changes in weather and direction of winds. This data is fed to the computer and analyzed. The computer predicts the changes in the weather conditions. Timely predictions may avoid some of the worst mishaps. The people concerned with air travel, shipping, rescue operations, and farmers, etc., are dependent on correct weather forecasting.
Points to Remember

F  Hardware and Software are the two terms associated with computers.
F  Hardware is the physical parts; and software is the set of programs and data.
F  Speed, versatility, diligence, accuracy, etc., are characteristics of computers.
F  Computers are used in most of our day-to-day activities.
F  Computers are used in offices, houses, hospitals, etc.
F  Computers are used to make cartoon films, or to create special effects in cinema.
F  Computers are used to produce books, magazines, newspapers, comics, etc.
F  Computers assist in teaching and learning, in medicine and in space technology.
F  Computers are used in airline and railway reservations.
F  Computers are used for weather forecasting.
F  Using computers, customer transactions are verified and updated in banks.
F  Computers are very useful in defense services and in hospitals.
F  In libraries, computers are used for issuing books and storing the list of books available in the library.
F  All leading banks have Automated Teller Machine (ATM) services that facilitate speedy transactions.

<< End of Chapter >>
2
HARDWARE CONCEPTS

2.1. Introduction
Computer hardware represents all the physical components of a computer system that can be seen and located. Thus it includes input devices, output devices, central processing unit, and storage devices.
A computer is not a single machine but a combination of several working units. To accomplish a task it requires input, which is taken from the input unit. The processing part is handled by the Central Processing Unit (CPU). The output that is generated is sent to the output unit or saved on secondary storage devices. The input unit converts the input to a form recognized by the machine and transfers the input data, in the form of digital signals, for processing. These digital signals are interpreted by the CPU and processed. The output unit, on the other hand, converts the output digital signals generated as a result of processing into an understandable form. All the units communicate with each other through internal sets of wires called buses.
2.2. Input & output Units
The input unit facilitates man to machine communication. Input of any form is converted into binary electronic signals which can be understood by the CPU. This process is called digitizing. Input data may be graphical, audio, visual, linguistic, and mechanical. Some of the input devices used for this purpose are keyboard, mouse, joystick, light pen, Voice Data Entry (VDE), punched cards, Optical Mark Reader (OMR), Optical Character Reader (OCR), Magnetic Ink Character Reader (MICR), bar code reader, and magnetic tapes and disks.
Output unit is just the opposite of input unit, that is, it is an interface for machine to man communication.  The output that comes from the CPU is in the form of binary signals which get converted into a form that can be understood by humans, that is, graphical, audio visual or language form. Some of the popular output devices are visual display unit (VDU), plotter, printer, speech synthesizer, magnetic disks and magnetic tapes.
2.3. input/output devices
Input-output (I/O) devices attached to a computer are also called peripherals. Among the most common peripherals are keyboards, display units, and printers. Peripherals that provide auxiliary storage for the system are magnetic disks and tapes. Peripherals are electromechanical and electromagnetic devices. These devices are called peripherals because they are attached in the surroundings (periphery) of the computer systems. Peripheral devices are classified mainly into two types: input devices and output devices. Some devices such as magnetic disks and tapes serve the purpose of input as well as output devices.
2.3.1. Input devices
Input devices accept different forms of input from the user and forward it to the computer system in understandable form, that is, by converting the inputs to the binary form. Some of the popular input devices are:

F  Keyboard
F  Mouse
F  Joystick
F  Scanner
F  Light Pen
F  Track Ball
F  Touch Pad
F  Bar Code Reader
F  Optical Mark Reader (OMR)
F  Optical Character Reader (OCR)
F  Magnetic Ink Character Reader (MICR)


Keyboard
The most widely used input device today is the keyboard. It is similar to a typewriter in that the keys are arranged like those on a typewriter, but there are some extra keys as well. Every key and key combination passes a unique signal to the computer. It is generally used for typing text-based information. The traditional format of a computer keyboard is called the QWERTY keyboard, after the sequence of the first six letters in the upper row of letter keys.
During typing, a flashing line called the cursor appears on the screen. When a key on the keyboard is pressed, that character is displayed at the cursor position. The Control (Ctrl) and Alt (Alternate) keys, in combination with other keys, have special functions.
Mouse
A mouse is a pointing device that rolls on a small rubber ball and has two or three buttons on top. The movement of the ball is sensed by two sensors and resolved into horizontal and vertical components. The movement of the mouse thus controls the movement of the cursor on the screen. A menu option is selected by clicking a mouse button while the cursor, or pointer, points to the option to be selected.
Joystick
It provides fast, controlled movement on the screen and allows objects to be moved around the screen with ease. The movement is sensed through a vertical stick attached to a solid base. Joysticks are now available in different shapes, and they are commonly used for playing games on computers.
Scanner
Scanners look and work somewhat like photocopiers. One simply lays an image or a page of text face down on the flatbed scanner and issues a command to scan the page. The page stays stationary while a mechanism inside the flatbed scanner moves over the image to scan it. The scanned image is then transferred to the system and saved in a graphic format, generally as a paint-package or CorelDraw file.
Light Pen
Light pen is also a pointing device that can be used to select an option by simply pointing at it or for drawing figures directly on the screen. The light pen functions on the concept of photocell. It is used in application areas like designing and engineering.
Trackball
A trackball is similar to a mouse, but the roller ball is mounted in a fixed position and the user spins the ball in various directions to move the cursor on the screen. This type of pointing device is normally used in laptop personal computers.
Touch Pad
This is also used on portable computers. It is a small, touch-sensitive pad. By moving a finger or other object along the pad, you can move the pointer on the display screen.
Bar Code Reader
Bar code readers are widely used in supermarkets, bookshops, etc. They are photoelectric scanners that read bar codes, the vertical zebra-striped marks printed on product containers. Supermarkets use a bar code system called the Universal Product Code (UPC). The bar code identifies the product to the supermarket's computer, which holds a description and the latest price of the product.
Optical Mark Reader (OMR)
It works on the concepts of mark sensing and the reflectance of light, by which data can be transferred directly to the computer. It is used for evaluating multiple-choice answer sheets and works at speeds of around 200 documents per minute. OMR is used in tests such as aptitude tests.
Optical Character Reader (OCR)
OCR permits direct reading of printed characters. It can also read bar codes to enter data directly into a computer. Using OCR, each character is scanned photo-electrically and converted into a pattern of electronic signals, which is then compared with stored patterns to identify the character. OCR recognizes characters printed in a special format. Use of OCR saves a lot of time that would otherwise be spent in data transcription, and it increases data accuracy and the timeliness of the information produced. OCR is widely used in the legal profession. Examples of OCR formats are the American National Standards OCR and the European OCR.
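The pattern-comparison step can be pictured with a toy sketch. Everything below (the 5x5 glyph grids and the nearest-match rule) is invented for illustration and is not taken from any real OCR standard; it simply shows a scanned character being matched against stored reference patterns:

# Toy illustration of OCR pattern matching (hypothetical 5x5 glyphs).
# A scanned character is a grid of 0/1 pixels; the stored reference
# pattern with the fewest mismatching pixels is taken as the match.
STORED_PATTERNS = {
    "I": ["01110", "00100", "00100", "00100", "01110"],
    "L": ["10000", "10000", "10000", "10000", "11111"],
}

def mismatches(a, b):
    # Count the pixels that differ between two patterns.
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def recognize(scanned):
    # Choose the stored character whose pattern is closest to the scan.
    return min(STORED_PATTERNS, key=lambda ch: mismatches(STORED_PATTERNS[ch], scanned))

scan = ["10000", "10000", "10100", "10000", "11111"]   # a noisy 'L' (one pixel flipped)
print(recognize(scan))                                 # -> L

A real OCR reader works on much finer grids with far more sophisticated matching, but the compare-with-stored-patterns idea is the same.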
Magnetic Ink Character Reader (MICR)
MICR is generally used by banks to process large volumes of cheques. The information coded on the cheque is printed with a special ink that contains magnetized particles of iron oxide. The characters are read or recognized by the reader based on patterns of magnetization of particles in the ink. Magnetic ink characters can be read by humans also. It eliminates the document encoding process. Apart from being used in banks, MICR helps in reading bills and customer payment coupons.
2.3.2. Output devices
Output devices take the binary output of the computer system and produce it in the desired form. Output can be typed, printed, or graphical, or may be of video or audio type. Output devices can be classified as soft copy devices and hard copy devices.
Hard Copy Devices
Hard copy means that the output is in an immediately usable form, that is, printed or plotted. Hard copy devices produce a permanent record on media such as paper or microfilm. They are very slow in operation compared to soft copy devices because they often involve mechanical movement. Commonly used hard copy devices are printers and plotters.
Printers
Printers can be divided into two distinct categories on the basis of how they produce an impression on the paper: impact printers and non-impact printers.
Impact Printers
In an impact printer, a character is printed on the paper through physical contact between the print head and the paper: either a needle or a solid character is struck against the paper through the ribbon. These printers therefore make a lot of noise while working. Impact printers may be further categorized into two types on the basis of the impression pattern produced.
§ Solid Font - In a solid font printer, a complete character strikes a carbon ribbon or other inked surface against the paper to produce an image of the character.
§ Dot Matrix - A dot matrix printer has a set of printing needles or pins; selected print needles strike the inked ribbon against the paper to produce an image of the character.
Impact printers can further be categorized into four categories:
1.      Character Printer - A character printer prints character by character. It may use either technology: dot matrix or solid font.
2.      Line Printer - A line printer prints one complete line at a time. It too may use either technology: dot matrix or solid font. Dot matrix line printers are relatively slower than solid font impact line printers. Speed may be 300 lines per minute or more.
3.      Dot Matrix Printer - In a dot matrix printer, the character is formed from closely packed dots. The printing head contains a vertical array of pins, and the character is formed as the head moves across the paper; selected print needles strike the inked ribbon against the paper to produce the image of the character. Dot matrix printers also support the printing of graphics. They are faster than daisy wheel printers, with printing speeds between 30 and 600 cps (characters per second), and come in two print head specifications, 9-pin and 24-pin. Examples include the EPSON EX 1000 and EPSON LQ 1050.
4.      Daisy Wheel Printer - This is a solid-font character printer, named after the print head, which resembles a daisy flower with the printing arms appearing like petals. Speed lies between 30 and 90 cps. Print quality is better than dot matrix. It is a bi-directional printer, that is, the head prints while moving in the forward as well as the backward direction. It also supports graphics, such as curves.
Non-Impact printers
In non-impact printers, the head does not come directly in contact with the paper. There is no impact or hitting of needles so non-impact printers do not make any noise while printing. They come in many varieties:
1.      Thermal Printer - In a thermal printer the characters are formed by pressing an array of electrically heated needles against heat-sensitive paper. Such paper has a special heat-sensitive coating that turns dark where a spot is heated, so the character is printed as a matrix of dots heated by the needles.
It is not possible to produce multiple copies simultaneously with this type of printer, and the special paper it requires is costly, which reduces the popularity of thermal printers.
2.      Laser Printer - Laser printer works on the concept of using laser beams to create an image on a photosensitive surface. Initially the desired output image is written on a copier drum with a laser beam that operates under the control of the computer. The laser exposed drum areas attract a toner that attaches itself to the laser generated charges on the drum. The toner is permanently fused on paper with heat and/or pressure by rolling the drum over a blank paper. Laser printers are quiet and produce very high quality of output. They are capable of printing 4-40 pages per minute.
3.      Ink-jet Printer - Ink-jet printers use the dot matrix approach to print text and graphics. Nozzles in the print head produce tiny ink droplets, which are electrically charged, deflected, and directed to the desired spots on the paper to form the impression of a character. Speeds are 40-300 cps, with software control over the size and style of characters. These printers support colour printing, are very quiet in operation, and produce print quality that is very near letter quality.
Comparative View of Printers
Printer Type     Advantages                                   Disadvantages
Dot Matrix       Inexpensive, fast, prints graphics           Poor quality printing
Daisy Wheel      High quality printing                        Slow, noisy, expensive
Thermal          Lightweight, battery powered                 Slow, poor quality printing, requires special paper
Laser            Excellent print quality, prints graphics     Expensive
Plotters
Plotters are output devices used to produce precise, good-quality graphics and drawings under computer control. They use ink pens or ink-jets to draw; either single-colour or multicolour pens can be employed, and the pens are driven by a motor.
The graphics and drawings produced by plotters are uniform, precise, and of very high quality. Plotters are used for complex engineering drawings and for the drawing of maps that require a high degree of accuracy. Flatbed plotters use a horizontal flat surface on which the paper is fixed; the pen moves in the X and Y directions under computer control.
Soft Copy Devices
Soft copy output is in electronic or audible form and cannot be handled physically; soft copy devices do not produce a permanent record. The following soft copy devices are normally used: the VDU and the ARU.
Video Display Unit (VDU)
It is the most commonly used output device. VDU works on the concept of a Cathode Ray Tube (CRT) and no media, cards or paper outputs are involved. VDU can be used for character or graphic display. The input errors can be corrected instantly. It can be used as an on-line terminal as well as an off-line terminal. Finally, VDUs are quiet, clean and fairly reliable in operation.
There are many types of VDUs based on different characteristics: text and graphics, monochrome and colored.
§  Text and Graphics - Certain VDUs are capable of displaying only a character set such as that provided by the ASCII code. Computer output, however, is often best presented in graphical form, which requires a graphics monitor with a high degree of resolution; its screen is divided into rows and columns of dots called pixels.
§  Monochrome and Colour Monitors - Monitors capable of displaying only a single-colour image are called monochrome monitors; they have only one electron gun. A colour monitor can display up to about 17 million colours using combinations of basic colours and has more than one electron gun. Generally, two types of colour monitor are used.
RGB (RED, GREEN, BLUE)
CMYK (CYAN, MAGENTA, YELLOW and ‘K’ for BLACK)
RGB colour monitor has three electron guns and the screen is coated with three types of phosphors: red, green and blue.
Audio Response Unit (ARU)
Audio Response Unit or ARU permits computers to talk to people. All the sounds needed are provided on a storage medium. Each sound is given a code. When enquiries are received, the processor follows a set of rules to create a reply message in a coded form. This coded message is then transmitted to an audio response device. The sounds are assembled in a proper sequence. The audio message is then transmitted back to the station requesting the information.
A common example of an ARU is the way messages and train schedules are announced at railway stations with automatic enquiry systems.
2.4. Input-output (I/O) interface
To communicate with the various types of devices available, the computer system requires an input-output interface. An I/O interface provides a means of transferring information between internal storage and external I/O devices. Peripherals need special communication links to interface with the CPU; the purpose of the interface units is to resolve the differences between the computer system and each peripheral device, and to supervise and synchronize all I/O transfers. An interface is needed mainly because:
  1. The manner of operation of peripherals is quite different from that of the CPU and memory, which are purely electronic devices.
  2. The data transfer rate of peripheral devices is slower than that of the CPU.
  3. Data codes and formats in peripherals differ from the word format in the CPU and memory.
  4. The operating characteristics of peripherals differ from each other, and each must be controlled so as not to disturb the operation of other peripheral devices.
2.5. Central Processing Unit
Central Processing Unit (CPU) is the brain of the computer system. All the actions performed by computer system are initiated, performed and controlled by the CPU. The CPU works with binary signals only. Every instruction that is executed first gets stored in the memory unit, and then it gets processed by the CPU. Thus, CPU has three parts:
1. Arithmetic Logic Unit
2. Control Unit
3. Register Set
The components of the CPU communicate among themselves with the help of internal sets of wires called buses. Just as buses carry people from one place to another, these wires carry data from one unit to another, hence the name. There are different kinds of buses for different purposes.
Data Bus - The data bus carries the data that is transferred from one unit to another. A data bus is generally bi-directional, meaning that data can travel in both directions. The width of the bus determines how much data can be transferred at one time: if the width of the data bus is 16 bits, then 2 bytes of data can be transferred at a time.
The need for data transfer may arise from interaction between the memory and the CPU, or between the input/output units and the processor.
Address Bus - Information stored in the memory is identified by a unique number called an address. This address must be supplied to the memory to access the data, and the address bus carries it. The number of memory locations that a CPU can address is determined by the number of address lines: if the CPU has n address lines, it can address 2^n different locations in memory and other I/O equipment. The address bus is uni-directional, from the CPU to the memory or from the CPU to an I/O unit.
Control Bus - This is the most important bus in the system; it controls nearly all the operations in the CPU. The most common control bus signals are the read and write signals. To read from the memory unit, the CPU places on the address bus the address of the location from which data is to be read, and initiates the read control signal. The control bus is also uni-directional, because control signals are initiated only by the CPU.
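As a quick worked example of these bus-width relationships (the 20-line and 16-bit figures below are chosen purely for illustration), the following sketch computes how much memory n address lines can reach and how many bytes a 16-bit data bus moves per transfer:

# n address lines can address 2^n locations; a w-bit data bus moves w/8 bytes at a time.
address_lines = 20                       # assumed width of the address bus
data_bus_bits = 16                       # assumed width of the data bus

addressable_locations = 2 ** address_lines
bytes_per_transfer = data_bus_bits // 8

print(addressable_locations)             # 1048576 locations (a 1 MB address space)
print(bytes_per_transfer)                # 2 bytes moved per transfer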
Components of CPU
The basic structure of the CPU consists of these three parts connected by internal buses, as described below.

Control Unit
It is the most critical part of CPU. It does not perform the actual processing of the data but manages and coordinates the entire computer system including the input and the output devices. It is responsible for generating control signals to streamline the functioning of the CPU and other units. The control signals generated by the CPU are placed on the control bus. The control unit determines the sequence in which program instructions are interpreted and executed. It also controls the flow of data to and from secondary storage devices.
The control unit makes use of some special purpose registers and a decoder for accomplishing its tasks. The special purpose register called the Instruction register, holds the current instruction to be executed, and the Program control register holds the next instruction to be executed. The decoder interprets the meaning of each instruction supported by the CPU. Each instruction is also accompanied by a Microcode, i.e., basic directions to tell the CPU how to execute the instruction.
Arithmetic and Logic Unit (ALU)
The ALU provides arithmetic and logic operations and has the necessary circuitry to carry them out. The arithmetic unit performs calculations and computations. The logic unit applies logic: it compares data, carries out tests, and takes decisions. All such logical operations are done in this unit. The ALU has a number of registers and accumulators for the short-term storage of data while calculating and comparing.


Register Set
The CPU consists of a set of registers which are used for storing instructions as well as intermediate results. Some of them include Memory Address Registers (MAR), Memory Buffer Register (MBR), Accumulator, Instruction Register, Program counter etc.
Memory Address Register (MAR) specifies the address of the memory location from which data is to be accessed (in case of read operation) or to which data is to be stored (in case of write operation).
Memory Buffer Register (MBR) receives data from the memory (in case of read operation) or contains the data to be written in the memory (in case of write operation).
Accumulator (AC) interacts with the ALU and stores the input or output operand. This register therefore, holds the initial data to be operated upon, the intermediate results and the final results of the processing operations.
Instruction Register (IR) holds the current instruction that is being executed.
The basic requirements relating to a CPU can be expressed as:
F  It should be as fast as possible.
F  The main memory capacity it needs is very large.
Two terms associated with CPU are the CPU cycle time and Memory cycle time.
The CPU cycle time is the time taken by the CPU to execute a well-defined shortest micro-operation. The memory cycle time is the speed at which the memory can be accessed by the CPU.
It has been found that the memory cycle time is approximately 1-10 times longer than the CPU cycle time. That is why temporary storage is provided within the CPU in the form of CPU registers. CPU registers are also called fast memories and can be accessed almost instantaneously.
Further, the number of bits a register can store at a time is called the length of the register. Most CPUs sold today have 32-bit or 64-bit registers. The size of the register is also called the word size and indicates the amount of data a CPU can process at a time; thus the bigger the word size, the faster the computer can process data.
How a CPU works
The basic task performed by the CPU is instruction execution. Each instruction is executed using several small operations called micro-operations. The simplest form of instruction processing can be defined as a two-step process:
  1. The CPU reads (fetches) instructions (codes) from the memory one at a time.
  2. It executes or performs the operation specified by this instruction.
The fetching of the instruction is done using the program counter (PC), which keeps track of the next instruction to be fetched. Normally the next instruction in sequence is fetched, as programs are executed in sequence. The fetched instruction is in the form of a binary code and is loaded into the instruction register (IR) in the CPU. The CPU then interprets the instruction and performs the required action. In general, these actions can be divided into the following categories:
F  Data Transfer: From CPU to memory, memory to CPU, from CPU to I/O, or I/O to CPU.
F  Data Processing: An arithmetic or logic operation may be performed on the data by the CPU.
F  Sequence Control: This action is typically required for altering the sequence of execution. For example, if an instruction from location 50 specifies that the next instruction to be fetched should be from location 100, then the program counter will need to be modified to contain the location 100 (which otherwise would have contained 51).
Execution of an instruction may involve any combination of these actions.
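A minimal sketch of this fetch-execute cycle follows. The three-instruction machine below (LOAD, ADD, JUMP, HALT) is invented purely for illustration, not any real CPU's instruction set, but the pc and ir variables play exactly the roles of the program counter and instruction register described above:

# Toy fetch-execute loop for a hypothetical machine.
memory = {
    0: ("LOAD", 100),    # AC <- memory[100]
    1: ("ADD", 101),     # AC <- AC + memory[101]
    2: ("JUMP", 50),     # sequence control: PC <- 50
    50: ("HALT", None),
    100: 7,              # data
    101: 5,              # data
}

pc, ac = 0, 0                    # program counter and accumulator
while True:
    ir = memory[pc]              # fetch: instruction register <- memory[PC]
    pc += 1                      # normally the next instruction in sequence
    op, addr = ir                # decode
    if op == "LOAD":             # data transfer: memory -> CPU
        ac = memory[addr]
    elif op == "ADD":            # data processing
        ac = ac + memory[addr]
    elif op == "JUMP":           # sequence control: modify the program counter
        pc = addr
    elif op == "HALT":
        break
print(ac)                        # -> 12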
2.6. Memory/storage devices
Memory is an essential component of a digital computer. The CPU contains several registers for storing data and instructions, but these can store only a few bytes. If all the instructions and data being executed by the CPU had to reside in secondary storage (like magnetic tape or disk) and be loaded into the CPU's registers as program execution proceeded, the CPU would be idle most of the time, because the speed at which the CPU processes data is much higher than the speed at which data can be transferred from disk to registers. Every computer therefore requires storage space where the instructions and data of a program can reside temporarily while the program is being executed. This temporary storage area is built into the computer hardware and is known as primary storage or main memory. Devices that provide backup storage (like magnetic tapes and disks) are called secondary storage or auxiliary memory.
At present, two kinds of memory are commonly used: semiconductor memory and magnetic memory.
Semiconductor memory is faster, more compact, and lighter. It consumes less power and is a static device, that is, it has no rotating components. Magnetic memory, in the form of magnetic disks or tapes, is cheaper than semiconductor memory. Semiconductor memory is employed as the main or primary memory of the computer, storing the programs and data currently needed by the CPU, while magnetic memory is used as secondary or auxiliary memory.
The total memory capacity of the computer can therefore be visualized as a hierarchy of components, consisting of all the storage devices employed in the computer system, from the slow but high-capacity auxiliary memory, to a relatively faster main memory, to an even smaller and faster cache memory accessible to the high-speed processing logic.

Capacity of Memory
In computers, the capacity of memory is measured in bytes and its multiples. The byte is the basic unit and means a set of 8 bits; higher units are the kilobyte, megabyte, and gigabyte.
1 character = 1 byte = 8 bits
1 Kilobyte (KB) = 1024 bytes = 2^10 bytes
1 Megabyte (MB) = 1024 KB = 2^20 bytes
1 Gigabyte (GB) = 1024 MB = 1024 x 1024 x 1024 bytes = 2^30 bytes
Thus if we say that the capacity of a primary memory is 16 MB, it means it contains 16 x 2^20 bytes, i.e., 2^24 bytes. Similarly, a 1.44 MB floppy can store 1.44 x 2^20 bytes of information.
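These unit conversions can be checked mechanically, as in this small sketch (the 16 MB and 1.44 MB figures are the ones used in the text):

# Verifying the memory-unit arithmetic.
KB = 2 ** 10                     # 1024 bytes
MB = 2 ** 20                     # 1024 KB
GB = 2 ** 30                     # 1024 MB

print(16 * MB == 2 ** 24)        # True: 16 MB is 2^24 bytes
print(int(1.44 * MB))            # 1509949: bytes on a 1.44 MB floppy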
Storage Evaluation Criteria
The most common properties used for characterizing and evaluating the storage unit of the computer system are discussed below:
1.    Storage capacity: Represents the size of the memory, that is, the amount of data that can be stored in the storage unit. Primary storage units have less storage capacity than secondary storage units. While the capacity of the main and internal memory is expressed in numbers of bytes or words, the capacity of external or secondary storage is measured in bytes.
2.    Storage cost: In a memory system, cost is another key factor of primary concern; it is normally expressed per bit. Lower costs are obviously desirable, and it is worth noting that as the access time of a memory increases, its cost decreases.
3.    Access time: The time required to locate and retrieve data from the storage unit. It depends on the access mode used and on the physical characteristics of the particular device. Primary storage units have faster access time than secondary storage units.
4.    Access mode: Memory consists of various memory locations, and access mode refers to the manner in which information in the memory is accessed. Memory devices can be accessed in any of the following ways:
(a) Random access memory (RAM): It is the mode in which any memory location can be accessed in any order in the same amount of time. Semiconductor and Ferrite memories, which generally constitute the primary storage or main memory, are of this nature.
(b) Sequential access: Memories that can be accessed only in a pre-defined sequence are sequential access memories. Since sequencing through other locations precedes arrival at the desired location, the access time varies with the location. Information on a sequential device can be retrieved only in the same sequence in which it is stored. Songs stored on a cassette, which can be accessed only one by one, are an example of sequential access. Magnetic tapes are the typical sequential access memory.
(c) Direct access: In certain cases the information is accessed neither purely randomly nor purely in sequence but in something in between: a separate read/write head exists for each track, and within a track the information is accessed serially. This semi-random mode of access is used in magnetic disks. (A short sketch contrasting these access modes follows this list.)
5.    Permanence of Storage: If the storage unit can retain the data even after the power is turned off or interrupted, it is termed non-volatile storage. And if the data is lost once power is turned off or interrupted, it is called Volatile storage. It is obvious from these properties that the primary storage units of the computer systems are volatile, while the secondary storage units are non-volatile. A non-volatile storage is definitely more desirable and feasible for storage of large volumes of data.
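The sketch promised under the access-mode point is given below. The timing numbers are invented for illustration only: random access costs the same for every location, while sequential access must pass over every intervening location first, so its cost grows with the position of the data:

# Contrasting random and sequential access (illustrative unit timings).
def random_access_time(location):
    return 1                         # same cost for any location (e.g., main memory)

def sequential_access_time(location):
    return location + 1              # must pass all earlier locations (e.g., tape)

for loc in (0, 99, 9999):
    print(loc, random_access_time(loc), sequential_access_time(loc))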
Primary/Main Memory
In a computer system the main memory is the central storage unit. It is relatively fast and large memory and is used to store data and programs during the computer operations. The size of the main memory is comparatively much smaller than that of the secondary memory. CPU communicates directly with the main memory. The speed of the main memory must match the fast speed of the CPU so a semiconductor (chip) technology is used in the main memory. Random Access Memory (RAM) and Read Only Memory (ROM) ICs are used for main memory. RAMs are volatile in nature, that is, their contents get erased when power goes off.
RAM - RAM stands for Random Access Memory and is a read-write memory of a computer. In a Random Access Memory, any location can be accessed in a random manner and the access time is same for each memory location. This memory is volatile in nature.
There are two important types of RAM: static RAM and dynamic RAM. The two types differ in the technology used to hold data. Static RAM (SRAM) stores binary information using clocked sequential circuits, while dynamic RAM (DRAM) stores binary information as electric charges on capacitors inside the chip. The stored charge on the capacitors tends to leak away with time, so dynamic RAM must be refreshed thousands of times per second. Static RAM needs no such refreshing, which makes it faster, but it is also more expensive than dynamic RAM. Hence dynamic RAM is more commonly used than static RAM. Dynamic RAM also offers larger storage capacity and reduced power consumption. Usually large memories use dynamic RAM, while static RAM is mainly used for specialized applications.
ROM - ROM stands for Read Only Memory, that is, nothing can be written on it. ROM is a non-volatile memory; the information stored on it is not lost when power goes off. It is used for storing the programs that are permanently resident in the computer. The contents of ROM are decided by the hardware manufacturer. The necessary programs are hardwired during the manufacture of computer. It also possesses random access property and stores information which is not subject to change.
ROM is mainly used for storing an initial program called a “Bootstrap loader”. This is a program whose function is to start the computer when power is turned on. Since ROM is not volatile, its contents remain unchanged even if the power is turned off. When power is turned on, the hardware of the computer sets the program counter to the first address of the bootstrap loader. The bootstrap program loads a portion of the operating system from disk to main memory and control is then transferred to the operating system.
PROM - It is a Programmable ROM. Its contents are decided by the user. The user can store permanent programs and data in a PROM. The difference between a PROM and a ROM (read-only memory) is that a PROM is manufactured as blank memory, whereas a ROM is programmed during the manufacturing process. To write data onto a PROM chip, you need a special device called a PROM programmer or a PROM burner. The process of programming a PROM is called burning the PROM.
EPROM - Acronym for Erasable Programmable Read-Only Memory, and pronounced ee-prom, EPROM is a special type of memory that retains its contents until it is exposed to ultraviolet light. The ultraviolet light clears its contents, making it possible to reprogram the memory. An EPROM differs from a PROM in that a PROM can be written to only once and cannot be erased. EPROMs are used to store programs that are permanent but may need updating.
EEPROM or E2PROM - Acronym for Electrically Erasable Programmable Read-Only Memory, pronounced double-ee-PROM, this is a special type of PROM that can be erased by exposing it to an electrical charge. It is also known as EAPROM (Electrically Alterable PROM). Like other types of PROM, EEPROM retains its contents even when the power is turned off. Also like all other types of ROM, EEPROM is not as fast as RAM.
Flash Memory - It is a special type of EEPROM that can be erased and reprogrammed. The difference between an EEPROM and flash memory is that the flash memory can be written and erased in blocks whereas EEPROM can be written and erased one byte at a time. Many modern PCs have their BIOS (Basic Input Output System) stored on a flash memory chip so that it can easily be updated if necessary. Such a BIOS is sometimes called a flash BIOS.
Secondary/Auxiliary Memory
Since a computer’s main memory is temporary, the secondary memory is used for bulk storage of programs, data, and other information. The secondary storage is of permanent nature, that is, it stores the information permanently. It has a much larger capacity than the main memory. The secondary memory is non-volatile. The two most common secondary storage devices are the floppy disk and the hard disk.
Floppy disks
A floppy disk is a data storage medium that is composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell. The recording medium on floppies is a Mylar or vinyl plastic material with magnetic coating on one or both sides. Floppy disks are read and written by a floppy disk drive or FDD. Floppies are available in the following sizes:
§  5¼” diameter - This floppy has a capacity of 1.2 MB.
§  3½” diameter - This floppy has a capacity of 1.44 MB.
Hard Disk/Winchester Disk
The hard disk is made up of a collection of disks known as platters. These platters are coated with a material that allows data to be magnetically recorded. The disks rotate at a very high speed. A typical speed is 3600 revolutions per minute. The read/write head moves across the disk surface. Hard disks can store more data than floppy disks. Hard disks are installed inside the computer and can access the data more quickly than floppy disks.
Hard disks and floppy disks are random access storage devices, i.e., information may be retrieved from them in any order you want. Sequential access storage devices like magnetic tapes are similar to audio or video tapes, and information on them can be accessed only sequentially, i.e., one item after the other.
CD-ROM (Compact Disc-Read-Only Memory)
CD-ROM stands for Compact Disc-Read-Only Memory. CD-ROMs are used to distribute a wide variety of information, from multimedia encyclopedias to books, games, image and video libraries, product and sales presentations, and more. The advantage is that a CD is a portable medium that can hold a large amount of data. To read a CD-ROM, a device called a CD-ROM drive is needed. Data can be written onto recordable or rewritable discs with the help of a special device called a CD-Recorder.
CD-ROMs are available in two forms:
1.        CD-R - Also called the recordable CD, it is written once and can be read again and again; data once written cannot be erased.
2.        CD-RW - Also called the erasable CD, it is a recording system that allows the user to erase previously recorded information and then record new information onto the same physical location on the disc.
DVD (Digital Versatile Discs)
DVD stands for Digital Versatile Disc and is the next generation of the CD-ROM (hence DVD-ROM). A DVD is the same size as a compact disc but holds up to 25 times more data and is much faster. This increased capacity allows a DVD to store high-quality video as well as higher-than-CD-quality audio, and gives computer applications access to much more data than a standard CD-ROM. Physically, a CD-ROM and a DVD-ROM disc look similar.
Cache Memory
Cache memory is placed between the CPU and the main memory. It is a high-speed memory, faster and more expensive than main memory. Cache memory stores the most frequently accessed instructions and data from main memory, and is used to reduce the average access time for addresses, instructions, and data that would normally be fetched from main memory. The cache thus increases the operating speed of the system, but because it is much costlier than main memory, economic considerations keep its capacity much smaller than that of main memory.
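The benefit of a cache can be quantified with the usual effective-access-time calculation; the formula and the nanosecond figures below are supplied for illustration, not taken from the text. If a fraction h of memory references are found in the cache, the average access time is h x t_cache + (1 - h) x t_main:

# Effective access time with a cache (assumed illustrative timings).
t_cache = 5                          # cache access time in ns (assumed)
t_main = 50                          # main memory access time in ns (assumed)

for hit_ratio in (0.5, 0.9, 0.99):
    t_avg = hit_ratio * t_cache + (1 - hit_ratio) * t_main
    print(hit_ratio, t_avg)          # higher hit ratios approach cache speed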
2.7. Data storage and retrieval
A data storage and retrieval system stores personal and public data and makes it available to users at any time. The stored data can easily be edited and saved for various applications, and disk-based storage provides a cost-effective solution. Storage and retrieval is a valuable and useful process for any running business, whatever its data size. In the planning phase, data is normally collected, stored, and then manipulated for further processing. How data is controlled depends on the amount of data stored, whether in a small or a big manufacturing organization. In a small organization the data is handled according to the following process:
Design a sampling plan → Run the material → Take the measurements → Fill in the run sheet → Analyze the results.
Data Storage Equipment
Data storage equipment can store and archive large amounts of data. A high-quality, high-performance data storage system is connected to all system resources by a high-throughput fibre channel, and the storage system is long-lasting, as per the needs of consumers and users. Computer data storage and online data storage help all types of corporate players, small as well as big, and since the field of consumption is very large, data storage suppliers keep increasing their market areas. The introduction of this electronic equipment has made information transfer and retrieval very easy, and data processing systems help industries speed up their operations. The data storage solutions made available to consumers include data backup storage, offsite data storage, media data storage, and other techniques; such data storing and retrieval systems are reliable, efficient, error-free, and fast. The uses of data storage are as follows:
F  Portable methods allow the storage medium, and hence the data, to be replaced easily.
F  Semi-portable methods require mechanical disassembly tools.
F  Inseparable methods lose the stored data if the storage material is disconnected from the unit.
Network Data Storage (NDS) and Auto Back Server
NDS centralizes the backups of all the computers distributed across an organization in one easy-to-use system. It consolidates storage, with pick-up and drop-off at a central location, and it ensures business continuity if a data disaster occurs or data becomes corrupt. Data is generally processed in a long chain of automatic acquisition, storage, reformatting, and retrieval before you see it. In the planning phase of processed data, the following steps are followed:
F  Collect and catalog the data based on request.
F  Store entire information in the form of documents.
F  Retrieve the documents later by keywords associated with them.
If information is stored as text, data retrieval techniques permit full-text searching on the basis of the words in a document. If data is stored as a database, the information is held in a series of discrete records that are, in turn, divided into discrete fields, for example the name, address, and phone number of each user. The records can then be searched and retrieved on the basis of the content of those fields, e.g., all users who have a particular email address such as xyz@yahoo.com or xyz@hotmail.com. The data is stored within the computer, either in main storage or in auxiliary storage, so that it can be retrieved easily and quickly. Reference-retrieval systems store references to documents rather than the documents themselves; in response to a search request, such systems provide the titles and physical locations of the relevant documents. Correct information at the right time aids development. To address the problem of data storage, the data storage solutions made available to consumers include data backup storage, offsite data storage, and media data storage. Data backups are taken in different modes, as follows:
F  Disk-to-Tape Backup Mode: Tape is a chronically plagued medium requiring continual testing for data availability, and tape storage requires in-depth procedures, with a vaulting service, for successful recovery.
F  Disk-to-Disk-to-Tape Backup: Reliance on a tape solution does not reduce the possibility of human error, and off-site vaulting is still required for a complete backup solution.
F  Online Backup Mode: Limited capacity hinders bare-metal server recovery and the RTO (Recovery Time Objective), WAN (Wide Area Network) congestion is added, and the backup window may not be large enough for a complete server backup.
F  Combination Online and Tape Backup Mode: An expensive recurring monthly fee is required in addition to the cost of hardware, media, time, etc., and an offsite tape vaulting strategy is required to limit the rapid growth of backup storage; the cost of the online data must also be determined.
Data storage providers offer integration, customization, and deployment of data backup and recovery, data storage management, and communication solutions. The solutions include:
F  Data backup and recovery at every step.
F  Complete solutions are given from standalone drives to enterprise level automated storage media.
Data Retrieval Software
Data of all file types and sizes can be recovered and retrieved if a data disaster takes place in the system. Data disasters are caused by the following: hardware failure, human error, power-related problems, flood or water damage, software failure, virus damage, heat damage, and vandalism or sabotage. A number of software packages are available to retrieve data. They include:
F  Windows Data Retrieval Software
F  Best Data Retrieval Software
F  NTFS Data Retrieval Software
F  Digital Camera Data Retrieval Software
F  Digital Pictures Retrieval Software
F  Removable Media Data Retrieval Software
F  Memory Card Data Retrieval Software
F  Pen Drive Data Retrieval Software
F  SIM Card Data Retrieval Software
F  iPod Data Retrieval Software
F  Outlook and Outlook Express Password Retrieval Software
F  Internet Explorer Password Retrieval Software

<< End of Section >>


3
SOFTWARE CONCEPTS

3.1. Introduction
Software is the backbone of the computer industry. It provides the facility to manipulate data and maintains integrity within the system's components and network. Computer systems consist of hardware that carries out the overall activity of the computer, but in order for the hardware to function, it must have the necessary instructions, and these instructions are supplied by software. Hardware without software is like a human without a brain. Software is the set of programs and associated data used for the operation of the computer. In early systems, computer software was very task-oriented and not interactive, but today we have very sophisticated interactive software that can be used for a wide variety of tasks. Software is broadly classified into system software and application software, each with several types, as described below.
3.2. System Software
System software is a set of one or more programs basically designed to control the operation of a computer system. These are general programs written to assist users in the use of the computer system by performing tasks such as controlling operations, moving data into and out of the computer, and the other steps in executing an application program. System software manages and controls the computer hardware so that application software can perform its tasks, and it makes the operation of the computer system more effective and efficient. Examples are operating systems, debuggers, and translators. The objectives of system software are to:
  • Support the running of other software
  • Communicate with peripheral devices
  • Support the development of other types of software
  • Monitor the use of various hardware resources.
3.2.1. Operating systems
The operating system is the most important program that runs on a computer. It is an integrated set of specialized programs used to control and manage the resources and overall operations of the computer, and it controls the execution of other computer programs. It recognizes input from the keyboard, sends output to the display screen, keeps track of files and directories on the disk, and controls peripheral devices. Widely used operating systems include Microsoft Windows, DOS, Linux, UNIX, OS/2, Mac OS, MVS, etc.

3.2.2. Language Translation Software / Translators
Programs that translate a program written in any computer language into machine language (the code recognized by the computer) are known as translators. Translators are divided into three categories:
Assemblers - An assembly language program cannot be directly executed by a computer; it has to be converted into machine language before the computer can interpret and execute it. An assembler is a program that translates a program written in assembly language into machine-executable code. The input to the assembler is an assembly language program, known as the source program, and its output is a machine language program, known as the object program. Once an object program is created, it is transferred to the computer's primary memory using the system's loader. Another program, known as the link editor or linker, then passes control to the first instruction in the object program, and execution starts and proceeds to the end of the program.
Compilers - A compiler is a program that translates a high-level language program into a machine language program. A compiler goes through the entire program and then translates the entire program into machine codes. It reports all the errors of the program along with the line numbers. Some of the most widely used compiled languages are COBOL, C, C++, FORTRAN, etc.
Interpreters - An interpreter is a program that translates one statement of a high-level language program into machine code and executes it; it then proceeds to the next statement, and so on, until all the statements of the program have been translated and executed. An interpreter is a smaller program than a compiler. A compiler is faster: the object program produced by a compiler is saved permanently for future use, whereas the object code of a statement produced by an interpreter is not saved, so if the statement is used again it must be interpreted once more and translated into machine code. The most frequently used interpreted language is BASIC.
Compiled languages are better than interpreted languages in that they can be executed faster and more efficiently once the object code has been obtained. On the other hand, interpreted languages do not need to create object code and so are usually easier to develop, that is, to code and test.
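The sketch below makes the operational difference concrete on a deliberately tiny one-statement language (PRINT followed by a number), invented for this example: the "compiler" translates every statement before anything runs and keeps the object code, while the "interpreter" translates and executes one statement at a time and keeps nothing:

# Compiler vs. interpreter on a toy PRINT-only language.
program = ["PRINT 1", "PRINT 2", "PRINT 3"]

def translate(stmt):
    # 'Translate' one statement into an executable form (here, a function).
    _, number = stmt.split()
    return lambda: print(int(number))

# Compiler: translate the whole program first, then run the saved object code.
object_code = [translate(s) for s in program]    # reusable without retranslation
for instruction in object_code:
    instruction()

# Interpreter: translate and execute statement by statement; nothing is saved,
# so running the program again means translating every statement again.
for stmt in program:
    translate(stmt)()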
3.2.3. Debuggers
A debugger is a computer program that is used to test and debug other programs. It helps a programmer to debug a program by stopping at certain breakpoints and displaying various programming elements. The programmer can step through source code statements one at a time while the corresponding machine instructions are being executed.
3.2.4. Linkers & Loaders
A linker or link editor is a program that takes one or more objects generated by compilers and assembles them into a single executable program. The objects are program modules containing machine code and information for the linker.
A loader is the part of an operating system that is responsible for loading programs from executable files into memory, preparing them for execution and then executing them. The loader is usually a part of the operating system's kernel and usually is loaded at system boot time and stays in memory until the system is rebooted, shut down, or powered off.
3.2.5. Drivers
A driver is special software created by a peripheral device manufacturer to enable the computer to communicate with that peripheral device; printer and mouse drivers are examples. According to the device's settings, a driver converts the data supplied by the computer and transfers it to the device to work on.
3.3. Application Software
Application software comprises the programs used by the user to perform specific functions. Such software allows the user to employ the computer for the tasks the software provides, such as data manipulation and image and multimedia development and use. It is divided into two broad categories:
1.    Customized Application Software
Customized application software consists of programs written by the user or a programmer to perform specific jobs for the user. They are written in a variety of programming languages depending on the task at hand. Normally these are sets of programs used in conjunction with one another, such as a payroll system or a customized accounting package for a company.
2.    Standard Application Software
These are generalized sets of programs used to deal with a particular application. They are normally developed by specialist software developers to solve common problems faced by many users, for example, MS-Office, WordStar, Lotus, EX, and TALLY.
3.3.1. Word Processors
A Word processor is a program that enables you to perform word processing functions. Word processors use a computer to create, edit, and print documents. To perform word processing, you need a computer, the word processing software (word processor), and a printer. A word processor enables you to create a document, store it electronically on a disk, display it on a screen, modify it by entering commands and characters from the keyboard, and print it on a printer. Some of the commonly used word processors are Microsoft Word, WordStar, WordPerfect, AmiPro, etc.
3.3.2. Spread sheets
A spreadsheet is a table of values arranged in rows and columns. Each value can have a predefined relationship to the other values. Spreadsheet applications (often referred to simply as spreadsheets) are computer programs that let you create and manipulate spreadsheets electronically.
There are a number of spreadsheet applications in the market, Lotus 1-2-3 and Microsoft Excel being the most famous. These applications support graphic features that enable you to produce charts and graphs from the data.
3.3.3. Image Processors
Image processors are graphics programs that enable you to create, edit, manipulate, add special effects to, view, print, and save images.
3.3.4. Presentation Graphics
Presentation graphics enable users to create highly stylized images for slide shows and reports. The software includes functions for creating various types of charts and graphs and for inserting text in a variety of fonts. Most systems enable you to import data from a spreadsheet application to create the charts and graphs. Presentation graphics is often called business graphics. Some popular presentation graphics packages are Microsoft PowerPoint, Lotus Freelance Graphics, Harvard Presentation Graphics, etc.
3.3.5. Database Managers
A database manager is a special data processing system, or part of a data processing system, which aids in the storage, manipulation, reporting, management, and control of data. It is also called a Database Management System (DBMS). It accepts requests from the application and instructs the operating system to transfer the appropriate data. DBMSs may work with traditional programming languages (COBOL, C, etc.) or may include their own programming language for application development. Typical examples of DBMSs include Oracle, DB2, Microsoft Access, Microsoft SQL Server, FoxPro, dBase, etc.
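A minimal sketch of this request flow, using Python's built-in sqlite3 module as a stand-in DBMS (the table and data are invented for the example): the application states what data it wants, and the DBMS locates and returns the matching records:

# The application issues requests; the DBMS handles the storage details.
import sqlite3

db = sqlite3.connect(":memory:")     # throwaway in-memory database
db.execute("CREATE TABLE student (roll INTEGER, name TEXT, grade TEXT)")
db.execute("INSERT INTO student VALUES (42, 'Asha', 'B')")

for row in db.execute("SELECT name, grade FROM student WHERE roll = 42"):
    print(row)                       # -> ('Asha', 'B')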
3.4. UTILITIES
A utility is a program that performs a very specific task, usually related to managing system resources. Utilities are a wide variety of general-purpose programs that greatly speed up and simplify the use of a computer and improve programming efficiency. Some of the main utility programs are the disk defragmenter, virus scanners, and disk cleanup.
Disk cleanup frees space on our disks by removing useless files, whereas the disk defragmenter relocates the blocks of each file as close together as possible to give us faster access to our files.
Virus scanners and vaccines are programs used to detect and remove viruses using boot monitors, file monitors, and disk scanners. They remove the detected virus using a virus vaccine program such as Smart Dog or McAfee.

<< End of Section >>

4
PROGRAMMING LANGUAGES

4.1. Introduction
Language is the main tool of communication among people. Languages like English, Hindi, and Marathi, which we use to communicate with each other, are known as natural languages. Each language uses its own constructs and rules for word and sentence formation, known as its syntactic and semantic rules. Similarly, in order to communicate with the computer, we use programming languages. These are used to communicate the instructions and commands of a user-written program to the computer so that it accomplishes the tasks assigned by the program. Learning a programming language means learning the syntactic and semantic rules and the various other constructs and structures of the language.
4.2. Computer Programming Languages
A computer, being an electronic device, cannot understand instructions given in a natural human language. Therefore, a special language is used to instruct a computer system. This language is known as a computer programming language. It consists of a set of symbols, characters, words, and grammar rules that permit the user to construct instructions in a format the computer system can understand. A major goal of computer scientists is to develop computer systems that can accept instructions in normal human language, an area known as natural language processing.
4.3. Classification of Programming languages
Computer languages can be classified into two major categories:
§  Low-Level Languages (LLL), and
§  High-Level Languages (HLL)
They can also be classified into five generations, which show the step-by-step evolution of programming languages. Each generation indicates significant progress towards making computers easier to use.
First Generation / Machine Languages - Low-Level Languages (LLL)
Second Generation / Assembly Languages - Low-Level Languages (LLL)
Third Generation / Procedure-Oriented Languages - High-Level Languages (HLL)
Fourth Generation / Problem-Oriented Languages - High-Level Languages (HLL)
Fifth Generation / Natural Languages - High-Level Languages (HLL)
4.4. Low-Level & high-level languages
In low-level languages, programs are written in terms of the memory and registers available on the computer; they are close to the computer architecture. A program written in a low-level language can be extremely efficient, making optimum use of both computer memory and processing time. However, writing a low-level program takes a substantial amount of time, as well as a clear understanding of the inner workings of the processor itself. Since the internal architecture differs from one computer to another, each computer requires its own low-level programming language; because of this, low-level languages are called machine-dependent languages. The first and second generations, i.e., machine and assembly languages, come under this category. Assembly languages must be translated into machine code using an assembler.
Unlike low-level languages, high-level programming languages are closer to human languages. They permit faster development of large programs. The main advantage of high-level languages over low-level languages is that they are easier to read, write, and maintain. Ultimately, programs written in a high-level language must be translated into machine language by a compiler or an interpreter. High-level languages are further classified into:
§  Procedural-Oriented or Third Generation
§  Problem-Oriented or Fourth Generation, and
§  Natural or Fifth Generation
Examples of high-level languages include Ada, Algol, BASIC, COBOL, C, C++, FORTRAN, LISP, Pascal, and Prolog.
4.5. First Generation / Machine language
Machine language is a collection of binary digits or bits that the computer reads and interprets; it is also called binary language. Machine language is the only language a computer is capable of understanding directly. An instruction prepared in any machine language has at least two parts. The first part is the command or operation code, which tells the computer what function is to be performed; every computer has an operation code for each of its functions. The second part of the instruction is the operand, which tells the computer where to find or store the data to be manipulated. Programs are thus written in terms of the memory and registers available on the computer.
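The two-part structure of a machine instruction can be illustrated with a made-up 16-bit format: a 4-bit operation code followed by a 12-bit operand address. This layout is hypothetical, not any real machine's, but it shows the operation and operand parts packed into one binary word:

# Hypothetical 16-bit instruction: 4-bit opcode + 12-bit operand address.
OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3}

def encode(op, address):
    return (OPCODES[op] << 12) | (address & 0xFFF)

word = encode("ADD", 100)
print(format(word, "016b"))          # 0010000001100100 - the bits the CPU sees
print(word >> 12, word & 0xFFF)      # decoded back: operation 2, operand address 100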
Advantages
§  Faster Execution since the computer directly starts executing it.
§  Resource Utilization – makes optimum use of both computer memory and processing time.
Disadvantages
§  Machine dependent - Because the internal design of each computer differs from others, a separate machine language is needed for every computer.
§  Difficult to program - The programmer must either remember dozens of code numbers for the commands or constantly refer to a reference card.
§  Error prone - Since the programmer must concentrate on the codes, it is very difficult to concentrate fully on the logic of the problem, so it is easy to make errors in machine code.
§  Difficult to modify - It is very difficult to locate errors in machine instructions.
4.6. Second generation / Assembly languages
Sometimes referred to simply as assembly, an assembly language is also a low-level programming language, designed for specific processors. Assembly languages have the same structure and set of commands as machine languages, but they enable a programmer to use names instead of numbers, which makes them easier for humans to understand than machine (binary) language. Below is an example of assembly language code (a DOS program-exit sequence):
mov     ax,4C00h        ; place the DOS "terminate program" function code in AX
int     21h             ; call DOS interrupt 21h, which ends the program
The instructions written in assembly language need to be translated into machine code. This is done by a translator program called an assembler. The assembler recognizes the character strings that make up the symbolic names of the various machine operations and substitutes the required machine code for each instruction. At the same time, it calculates the required address in memory for each symbolic name of a memory location and substitutes those addresses for the names. The final result is a machine-language program that can run on its own at any time; the assembler and the assembly-language program are no longer needed. To help distinguish between the "before" and "after" versions of the program, the original assembly-language program is known as the source code, while the final machine-language program is called the object code.
If an assembly-language program needs to be changed or corrected, it is necessary to make the changes to the source code and then re-assemble it to create a new object program.
Advantages
§  Easier to understand and use than machine language
§  Easy to locate and correct errors
§  Easier to modify
Disadvantages
§  Machine dependent
§  Knowledge of hardware required
§  Programs are usually very long
4.7. Third generation / procedural-oriented languages
High-level languages are often classified according to whether they solve general problems or specific problems. General-purpose programming languages are called procedural languages or third generation languages. They are languages such as Pascal, BASIC, COBOL, and FORTRAN, which are designed to express the logic, the procedure, of a problem. Because of their flexibility, procedural languages are able to solve a variety of problems.
Procedural languages have many advantages over machine and assembly languages:
§  The program statements resemble English and hence are easier to work with.
§  Because of their English-like nature, less time is required to program a problem.
§  Once coded, programs are easier to understand and to modify.
§  The programming languages are machine-independent.
However, procedure-oriented languages still have some disadvantages compared to machine and assembly languages:
§  Programs execute more slowly.
§  The languages use computer resources less efficiently.
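As a brief, hedged illustration of this step-by-step procedural style, here is a minimal C program that spells out how to compute an average:

#include <stdio.h>

/* A minimal third-generation (procedural) example: the programmer
   states how to compute the result, one step at a time. */
int main(void) {
    int marks[] = {75, 62, 88, 91, 54};
    int count = sizeof marks / sizeof marks[0];
    int total = 0;
    for (int i = 0; i < count; i++)    /* step through every value */
        total += marks[i];
    printf("Average = %.2f\n", (double)total / count);
    return 0;
}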
4.8. FOURTH GENERATION / Problem-oriented Languages
Third-generation languages, such as BASIC or Pascal, require you to instruct the computer in step-by-step fashion. Fourth-generation languages, also known as problem-oriented languages, are high-level languages designed to solve specific problems or develop specific applications by enabling you to describe what you want rather than step-by-step procedures for getting there. All 4GLs are designed to reduce programming effort, the time it takes to develop software, and the cost of software development.
Fourth-generation languages may be categorized into several kinds of application development tools, which include:
§  Personal computer applications software
§  Query languages and report generators
§  Decision support systems and financial planning languages
§  Application generators
Personal computer applications software - These include word processors, spreadsheets, database managers, business graphics and integrated packages. Learning to use Lotus 1-2-3, dBase or PowerPoint can help you develop your own applications.
Query languages and report generators - Query languages allow people who are not programmers to search a database using certain selection commands. Query languages, for example, are used by airline or railway reservations personnel needing ticket information. Report generators are designed for people needing to prepare reports easily. Examples of query languages and report generators include QUEL, QBE, SQL, FOCUS, QUEST, Progress 4GL, Oracle reports, Report Builder, RPG II, etc.
Decision support systems - Decision support systems are interactive software designed to help managers make decisions. Financial planning languages are particular kinds of decision support systems that are employed for mathematical, statistical and forecasting procedures among other uses. Some examples of decision support systems and financial planning languages include Application System, Command Center, EXPRESS, FCS, IFPS, etc.
Application generators - An application generator consists of a software system with a number of program modules preprogrammed for various functions, so that the programmer or user can simply state which function is needed for a particular application, and the system will select the appropriate modules and run a program to meet the user’s needs. Some examples of application generators are FOCUS, INGRESS, SAS, IDEAL, RAPID/3000, TELON, UFO, etc.
The following table summarizes some of the major differences between third-generation languages (3GLs) and fourth-generation languages (4GLs).
Third-generation languages (3GLs)                         | Fourth-generation languages (4GLs)
----------------------------------------------------------|----------------------------------------------------------
Intended for use by professional programmers.             | May be used by a non-programming end user as well as a professional programmer.
Require specification of how to perform tasks.            | Require specification of what task is to be performed (the system determines how to perform it).
All alternatives must be specified.                       | Default alternatives are built in; an end user need not specify them.
Require large numbers of procedural instructions.         | Require far fewer instructions.
Code may be difficult to read, understand and maintain.   | Code is easy to understand and maintain because of English-like commands.
Developed for batch operation.                            | Developed primarily for on-line use.
Can be difficult to learn.                                | Easy to learn.
Difficult to debug.                                       | Easy to debug.
Typically file-oriented.                                  | Typically database-oriented.
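To make the "what versus how" distinction concrete: a 4GL query such as SQL's SELECT NAME FROM EMPLOYEES WHERE SALARY > 50000 states only what is wanted. A 3GL must spell out how to find it, as in this hedged C sketch (the employee records are invented for illustration):

#include <stdio.h>

/* Hypothetical employee records, invented for illustration. */
struct employee { char name[20]; double salary; };

int main(void) {
    struct employee emp[] = {
        {"Mohan", 62000.0}, {"Asha", 48000.0}, {"Ravi", 55000.0}
    };
    int count = sizeof emp / sizeof emp[0];
    /* The 3GL version states HOW: examine every record, test the
       condition, and print each match. */
    for (int i = 0; i < count; i++)
        if (emp[i].salary > 50000.0)
            printf("%s\n", emp[i].name);
    return 0;
}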
4.9. FIFTH GENERATION / Natural languages
Natural languages are still in the developmental stage, but they promise to have a profound effect, particularly in the areas of artificial intelligence and expert systems. They are designed to make the interaction between people and computers more natural.
Two languages popularly associated with this generation are LISP and PROLOG.
4.10. COMPILERS AND INTERPRETERS
For a high-level language to work on the computer, it must be translated into machine language. There are two kinds of translators, compilers and interpreters, and high-level languages are accordingly called either compiled or interpreted languages.
In a compiled language, a translation program is run to convert the programmer’s entire high-level program, called the source code, into machine language code. This translation process is called compilation. The machine language code, called the object code, can be saved and run (executed) either immediately or later. Some of the most widely used compiled languages are COBOL, C, C++, and FORTRAN.
In an interpreted language, a translation program converts each program statement into machine code just before that statement is executed. Translation and execution occur immediately, one after another, one statement at a time. Unlike compiled languages, no object code is stored and there is no separate compilation step. This means that in a program where one statement is executed several times (such as reading an employee’s payroll record), that statement is converted to machine language each time it is executed. The most frequently used interpreted language is BASIC.
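As a hedged sketch of this statement-at-a-time idea (the two-command toy language here is invented, and far simpler than BASIC), the following C fragment translates and executes each statement as it is reached, retranslating on every pass:

#include <stdio.h>

/* Toy interpreter, invented for illustration. Each statement is decoded
   just before it is executed; a statement inside a loop would be decoded
   again on every pass, unlike compiled code, which is translated once. */
int main(void) {
    const char *program[] = { "LET 10", "PRINT 5", "PRINT 5" };
    int accumulator = 0;
    for (int pc = 0; pc < 3; pc++) {                  /* pc = statement counter   */
        int n;
        if (sscanf(program[pc], "LET %d", &n) == 1)
            accumulator = n;                          /* decode and execute LET   */
        else if (sscanf(program[pc], "PRINT %d", &n) == 1)
            printf("%d\n", accumulator + n);          /* decode and execute PRINT */
    }
    return 0;
}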
Compiled languages have the advantage that, once the object code has been obtained, programs execute faster and use the machine more efficiently. Interpreted languages, on the other hand, need no separate step to create object code and so are usually easier to develop, that is, to code and test.
THE COMPILATION PROCESS
The objective of the compiler is to transform a program written in a high-level programming language from source code into object code. The source code must go through several steps before it becomes an executable program.
The first step is to pass the source code through a compiler, which translates the high-level language instructions into object code. The final step in producing an executable program, after the compiler has produced object code, is to pass the object code through a linker. The linker combines modules and gives real values to all symbolic addresses.
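As a usage sketch with a typical Unix C compiler (the exact command name and file names vary by system; prog.c is a placeholder):

cc -c prog.c          # compile: translate the source code into object code (prog.o)
cc prog.o -o prog     # link: combine object modules and resolve addresses into an executable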
Every high-level programming language comes with a compiler. In effect, the compiler is the language, because it defines which instructions are acceptable.
Because compilers translate source code into object code, which is unique for each type of computer, many compilers are available for the same language. For example, there is a FORTRAN compiler for PCs and another for Apple Macintosh computers. In addition, the compiler industry is quite competitive, so there are actually many compilers for each language on each type of computer. More than a dozen companies develop and sell C compilers for the PC.
<< End of Section >>
5
OPERATING SYSTEMS

5.1. Introduction
The operating system is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs. An Operating System, or OS, is a software program that enables the computer hardware to communicate and operate with the computer software; without an operating system, a computer would be useless. Operating systems perform basic tasks such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers. The operating system is also responsible for security, ensuring that unauthorized users do not access the system. The two main objectives of an operating system are:
  • To manage the hardware resources of a computer
  • To provide applications with an easy way to use the hardware resources of the computer, without the applications having to know all of those details.
5.2. Functions of OS
The main functions of an operating system can be summarized as follows:
1.    Input/Output or Device Management refers to the coordination and control of the various input-output devices, an important function of the operating system. It involves receiving I/O requests, servicing the resulting interrupts, and communicating the results back to the requesting process.
2.    Memory Management: The operating system manages the sharing of internal memory among multiple applications. It allocates memory to itself and its resident system programs, sets aside areas for application programs and user partitions, arranges the I/O buffers, and reserves storage for specialized purposes.
3.    Process management: In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run, in what order, and how much time each application should be allowed before another application gets a turn. On computers that provide parallel processing, an operating system can also manage how to divide a program so that it runs on more than one processor at a time. A toy sketch of this turn-taking idea appears after this list.
4.    File Management: Computers use a great deal of data and many programs, which are stored on secondary storage devices. The file management function of an OS involves keeping track of all these files and maintaining the integrity of the data stored in them, including the file directory structure.
5.    Job Control: When the user wants to run an application program, he must communicate with the OS, telling it what to do. He does this using the OS’s job control language, or JCL. JCL consists of a number of OS commands, called system commands, that control the functioning of the operating system.
6.    Housekeeping includes functions such as creating a file system; copying, deleting, and moving files; multitasking programs; starting the computer; interfacing with the hardware; program intercommunication; and networking. It also includes security, protection, resource accounting, and backup.
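As promised under process management above, here is a hedged toy sketch in C of the turn-taking idea (a cooperative round-robin over function pointers; a real OS preempts tasks with hardware timer interrupts rather than trusting them to return):

#include <stdio.h>

void task_a(void) { printf("task A runs for its time slice\n"); }
void task_b(void) { printf("task B runs for its time slice\n"); }

int main(void) {
    void (*tasks[])(void) = { task_a, task_b };
    int ntasks = 2;
    for (int slice = 0; slice < 4; slice++)
        tasks[slice % ntasks]();    /* give each task a turn, round-robin */
    return 0;
}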
5.3. Classification of OS
Operating systems can be classified as follows:
Single User OS: A single-user operating system provides access to the computer system by one user at a time. If another user needs access to the computer system, they must wait until the current user finishes and leaves. Operating systems such as MS-DOS and Windows 95 are essentially single-user operating systems.
Multi-user OS: A multi-user operating system lets more than one user access the computer system at one time. Access to the computer system is normally provided via a network, so that users access the computer remotely using a terminal or other computer. Examples are UNIX, Linux, Solaris, MVS (Multiple Virtual Storage), XENIX and Windows NT.
Multitasking OS: Multitasking is the ability to execute more than one task/program at the same time. A multitasking OS allows more than one program to run concurrently. Examples are OS/2, Windows NT, UNIX, and Amiga OS.
Multiprocessing OS: It is similar to a multitasking OS. The difference is that a multitasking OS involves only one CPU, whereas a multiprocessing OS involves more than one. Thus a multiprocessing operating system is capable of supporting and utilizing more than one computer processor. Linux, Windows 2000, MVS and UNIX are among the most widely used multiprocessing operating systems.
Network Operating System (NOS)
Network Operating System (NOS) is an operating system that includes special functions for connecting computers and devices into a network. Examples are Novell NetWare, Windows NT and 2000, Sun Solaris and IBM OS/2.  The Cisco IOS (Internet Operating System) is also a network operating system.
Multithreading OS
Multithreading Operating systems allow different parts of a software program to run concurrently. Operating systems that would fall into this category are Linux, Unix, and Windows 2000.
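As a hedged sketch using the POSIX threads library (available on Linux and Unix, two of the systems named above; compile with -lpthread), this C fragment runs two parts of one program concurrently:

#include <stdio.h>
#include <pthread.h>

/* Each thread is an independently scheduled part of the same program. */
void *worker(void *arg) {
    printf("thread %s is running concurrently\n", (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "one");    /* start the first thread  */
    pthread_create(&t2, NULL, worker, "two");    /* start the second thread */
    pthread_join(t1, NULL);                      /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}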
<< End of Section >>


