Text 1. Computer Science
Computer science is a field of study that deals with the structure, operation, and application of computers and computer systems.
Computer science includes engineering activities, such as the design of computers and of the hardware and software of computer systems, and theoretical, mathematical activities, such as the analysis of algorithms and performance studies of systems. It also involves experimentation with new computer systems and their potential applications.
Computer science was established as a discipline in the early 1960s. Its roots lie mainly in the fields of mathematics (e.g., Boolean algebra) and electrical engineering (e.g., circuit design). The major subdisciplines of computer science are (1) architecture (the design and study of computer systems), an area that overlaps extensively with computer engineering; (2) software, including such topics as software engineering, programming languages, operating systems, information systems and databases, artificial intelligence, and computer graphics; and (3) theory, including computational methods and numerical analysis as well as data structures and algorithms.
Text 2. Computer
A computer is any of various automatic electronic devices that solve problems by processing data according to a prescribed sequence of instructions. Such devices are of three general types: analog, digital, and hybrid. They differ from one another in operating principle, equipment design, and application.
The analog computer operates on data represented by continuously variable quantities, such as angular positions or voltages, and provides a physical analogy of the mathematical problem to be solved. Capable of solving ordinary differential equations, it is well suited for use in systems engineering, particularly for implementing real-time simulated models of processes and equipment. Another common application is the analysis of networks, such as those for electric-power distribution.
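Where an analog machine wires up the physical analogy directly and integrates continuously, a digital program must integrate the same equation step by step. A minimal Python sketch of this idea, using the charging of an RC circuit as an assumed example (the equation, component values, and function name are all illustrative, not taken from the text):

```python
# Numerically integrate dV/dt = (V_in - V) / (R * C), the charging of an
# RC circuit -- the kind of ordinary differential equation an analog
# computer solves continuously with integrator circuits.

def simulate_rc(v_in=5.0, r=1_000.0, c=1e-6, dt=1e-5, steps=1_000):
    """Euler integration of the RC charging equation."""
    v = 0.0  # capacitor voltage starts at zero
    trace = []
    for _ in range(steps):
        dv_dt = (v_in - v) / (r * c)  # rate of change at this instant
        v += dv_dt * dt               # step the state forward in time
        trace.append(v)
    return trace

trace = simulate_rc()
print(f"voltage after {len(trace)} steps: {trace[-1]:.3f} V")  # approaches 5 V
```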
Unlike the analog computer, which operates on continuous variables, the digital computer works with data in discrete form — i.e., expressed directly as the digits of the binary code. It counts, lists, compares, and rearranges these binary digits, or bits, of data in accordance with very detailed program instructions stored within its memory. The results of these arithmetic and logic operations are translated into characters, numbers, and symbols that can be readily understood by the human operator or into signals intelligible to a machine controlled by the computer. Digital computers can be programmed to perform a host of varied tasks. As a consequence, more than 90 percent of the computers in use today are of this type. Government and business make extensive use of the digital computer's ability to organize, store, and retrieve information by setting up huge data files. Its capacity to adjust the performance of systems or devices without human intervention also lends itself to many applications. For example, the digital computer is used to control various manufacturing operations, machine tools, and complex laboratory and hospital instruments. The same capability has been exploited to automate the operational systems of high-performance aircraft and spacecraft. Among the multitude of other significant applications of the digital computer are its use as a teaching aid (e.g., in the remedial instruction of basic language and mathematics skills) and its employment in scientific research to analyze data and generate mathematical models.
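The counting, comparing, and rearranging of bits described above can be made concrete in a few lines. A minimal Python sketch (the particular numbers are arbitrary, chosen only to show each kind of operation):

```python
# A digital computer stores every value as binary digits (bits) and
# manipulates them with arithmetic and logic operations.

x = 0b101101          # the decimal number 45 written as binary digits
y = 0b011010          # decimal 26

print(bin(x + y))     # arithmetic: addition  -> 0b1000111 (71)
print(x > y)          # comparison            -> True
print(bin(x & y))     # logic: bitwise AND    -> 0b1000 (8)
print(bin(x << 1))    # rearrangement: shift left one bit -> 0b1011010 (90)
```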
The hybrid computer combines the characteristics and advantages of analog and digital systems; it offers greater precision than the former and more control capability than the latter. Equipped with special conversion devices, it utilizes both analog and discrete representation of data. In recent years hybrid systems have been used in simulation studies of nuclear-power plants, guided-missile systems, and spacecraft, in which a close representation of a dynamic system is essential.
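The "special conversion devices" mentioned above are analog-to-digital and digital-to-analog converters. A minimal sketch of the analog-to-digital step, quantizing a continuous signal onto a fixed number of bits (the signal, voltage range, and resolution are assumed for illustration):

```python
import math

def quantize(value, bits=8, v_min=-1.0, v_max=1.0):
    """Map a continuous voltage onto one of 2**bits discrete levels,
    as an analog-to-digital converter in a hybrid system would."""
    levels = 2 ** bits - 1
    clipped = max(v_min, min(v_max, value))
    return round((clipped - v_min) / (v_max - v_min) * levels)

# Sample a continuous sine wave at a few points and digitize each sample.
samples = [math.sin(2 * math.pi * t / 10) for t in range(10)]
codes = [quantize(s) for s in samples]
print(codes)  # the discrete representation handed to the digital side
```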
Mechanical analog and digital computing devices date back to the 17th century. A logarithmic calculating device, which was the precursor of the slide rule and is often regarded as the first successful analog device, was developed in 1620 by Edmund Gunter, an English mathematician. The first mechanical digital calculating machine was built in 1642 by the French scientist-philosopher Blaise Pascal. During the ensuing centuries, the ideas and inventions of many mathematicians, scientists, and engineers paved the way for the development of the modern computer.
The direct forerunners of present-day analog and digital systems emerged about 1940. John V. Atanasoff built the first electronic digital computer in 1939. Howard Aiken's fully automatic large-scale calculator using standard machine components was completed in 1944. J. Presper Eckert and John W. Mauchly completed the first programmed general-purpose electronic digital computer in 1946. The first stored-program computers were introduced in the late 1940s, and subsequent machines have become steadily faster and more powerful.
Text 3. Software
Computer programs, the software that is becoming an ever-larger part of the computer system, are growing more and more complicated, requiring teams of programmers and years of effort to develop. As a consequence, a new subdiscipline, software engineering, has arisen. The development of a large piece of software is perceived as an engineering task, to be approached with the same care as the construction of a skyscraper, for example, and with the same attention to cost, reliability, and maintainability of the final product. The software-engineering process is usually described as consisting of several phases, variously defined but in general comprising: (1) identification and analysis of user requirements, (2) development of system specifications (both hardware and software), (3) software design (perhaps at several successively more detailed levels), (4) implementation (actual coding), (5) testing, and (6) maintenance.
Even with such an engineering discipline in place, the software-development process is expensive and time-consuming. Since the early 1980s, increasingly sophisticated tools have been built to aid the software developer and to automate as much as possible the development process. Such computer-aided software engineering (CASE) tools span a wide range of types, from those that carry out the task of routine coding when given an appropriately detailed design in some specification language to those that incorporate an expert system to enforce design rules and eliminate software defects prior to the coding phase.
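The first kind of CASE tool described above, one that carries out routine coding from a sufficiently detailed design, can be suggested in miniature. A hedged sketch, in which the specification format and the generator function are invented for illustration rather than drawn from any real tool:

```python
# A toy code generator: given a record specification, emit the routine
# boilerplate a programmer would otherwise write by hand -- a miniature
# version of what a CASE tool does from a specification language.

spec = {"name": "Employee", "fields": [("name", "str"), ("salary", "float")]}

def generate_class(spec):
    lines = [f"class {spec['name']}:"]
    args = ", ".join(f + ": " + t for f, t in spec["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for field, _ in spec["fields"]:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

print(generate_class(spec))  # emits a ready-to-use class definition
```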
As the size and complexity of software have grown, the concept of reuse has become increasingly important in software engineering, since it is clear that extensive new software cannot be created cheaply and rapidly without incorporating existing program modules (subroutines, or pieces of computer code). One of the attractive aspects of object-oriented programming is that code written in terms of objects is readily reused, as the first sketch below illustrates.

As with other aspects of computer systems, reliability (usually defined, rather vaguely, as the likelihood that a system will operate correctly over a reasonably long period of time) is a key goal of the finished software product. Sophisticated techniques for testing software have therefore been designed. For example, a large software product might be deliberately "seeded" with artificial faults, or "bugs"; if they are all discovered through testing, there is a high probability that most actual faults likely to cause computational errors have been discovered as well. The second sketch below makes the estimate underlying this technique explicit.

The need for better-trained software engineers has led to the development of educational programs in which software engineering is either a specialization within computer science or a separate program. The recommendation that software engineers, like other engineers, be licensed or certified is gaining increasing support, as is the momentum toward the accreditation of software-engineering degree programs.
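Why object-oriented code is readily reused can be shown briefly. A minimal Python sketch (the classes are illustrative, not drawn from any particular library): new kinds of objects plug into existing code without that code being modified.

```python
# Code written in terms of objects is reusable: a new object type
# works with existing code that was never written with it in mind.

class Shape:
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):          # existing module: reused, not rewritten
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))  # works with both types
```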
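The reasoning behind fault seeding can be made explicit with a capture-recapture style estimate: if S artificial faults are seeded and testing uncovers s of them along with n real faults, the fraction s/S estimates how thorough the testing was, so the total number of real faults is estimated as n * S / s. A short sketch of the arithmetic (all counts here are invented for illustration):

```python
# Fault-seeding estimate: the fraction of seeded bugs found during
# testing estimates the fraction of real bugs found as well.

seeded = 100        # artificial faults planted in the product
seeded_found = 90   # seeded faults uncovered by testing
real_found = 45     # genuine faults uncovered by the same testing

detection_rate = seeded_found / seeded           # 0.9 of bugs get caught
estimated_real_total = real_found / detection_rate
remaining = estimated_real_total - real_found

print(f"estimated real faults in total: {estimated_real_total:.0f}")  # ~50
print(f"estimated still undiscovered:   {remaining:.0f}")             # ~5
```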