Influence of Element Base Development on Data Processing
Computer architecture, in a wide sense, is the aggregate of a computer's properties and characteristics considered from the user's viewpoint: performance for the class of problems the computer is acquired to solve; the programming system; the operating system; the structure of external devices and the methods of managing them. The concept of macro-architecture can be interpreted as the picture of a digital computer from the programmer's and user's point of view, for which the system of operations and commands, the types of processed information, data and command formats, addressing modes, and other attributes are most essential.
When describing computer architectures, block diagrams illustrating their structure are usually given. Computer structure is a description of the composition, links, and interaction of the principal functional devices (units) of a computer, accompanied by graphical block diagrams. A computer description may either be restricted to a description of its structure or be extended to a description of its architecture, with all characteristics of the computer resources accessible to the user [2], [3], [4], [6], [13], [38], [42], [45], [49], [55], [56], [59], [61], [63], [65].
The ALU of mechanical computers (the so-called zero generation) could already execute addition, subtraction, multiplication, and division. First-generation computers used vacuum tubes as an element base. The Electronic Numerical Integrator And Computer (ENIAC), which used decimal arithmetic for calculations, is counted as the first such computer. Later, John von Neumann came to the conclusion that binary arithmetic must be used instead of decimal. The basic von Neumann principle of program control is the representation of an algorithm in the form of an operator scheme, which specifies the rule of calculation as a composition of conversion and information-analysis operations. The principle of program control may be realized in a computer by many methods. The von Neumann architecture has been used for more than 60 years; according to it, programs and data are placed in the same memory area. That is why only the data area or the program area can be accessed in the same memory access cycle.
At the same time, the Small Electronic Calculating Machine (SECM), which performed simultaneous processing of words, was developed under the direction of S. A. Lebedev. The functional-structural organization of the SECM allowed it to work in the binary system with a three-address instruction set, the program of calculations being kept in RAM. The Fast-acting Electronic Calculating Machine (FECM) was created later, using the work experience gained on the SECM.
The first computers realized computational processes on the basis of collective electronic mechanisms, in which the simultaneous concerted action of elements was provided regardless of their spatial arrangement in the structure of the machine. However, the technology of designing and manufacturing first-generation computers from the bottom up, from elements to the system, limited the density and number of elements in the system and thus did not allow realizing the performance potential inherent in electronic systems.
In second-generation computers, transistors replaced vacuum tubes, which allowed multiplying the performance and logical capabilities of the computer. Transistors realized the structural principle of technology at the level of the separate amplifier. Within the framework of the second generation, the computers Promin' and Mir-1 were created under the direction of V. M. Glushkov; in them, multilevel asynchronous microprogram control was first applied. In the Mir-1 computer a tabular ALU was used, built on the basis of an arithmetic matrix of serial-parallel action. The subsequent realization in the Mir-2 computer of an ALU for alphabetic-analytical transformations and of a conversational mode of operation, using a display with a light pen, allows counting the Mir-2 as the first personal computer.
The FECM-6, with a throughput of 1 million operations per second, combined address accesses to RAM with the work of the ALU and control units, and provided five levels of instruction lookahead.
The pipelining of commands, in which an ALU command was executed simultaneously with the fetch of the next command from memory, was also realized in the Stretch computer (IBM 7030).
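The gain from this overlap of fetch and execution can be illustrated with a minimal sketch (not from the source; stage timings and function names are assumptions chosen for the example):

```python
# Illustrative model of two-stage instruction pipelining, as in the
# FECM-6 and IBM 7030 (Stretch): the fetch of the next command proceeds
# during the execution of the current one.

def sequential_time(n_instructions, fetch=1, execute=1):
    """Without pipelining, fetch and execute strictly alternate."""
    return n_instructions * (fetch + execute)

def pipelined_time(n_instructions, fetch=1, execute=1):
    """With a two-stage pipeline only the first fetch is exposed;
    afterwards each instruction completes every max(fetch, execute)."""
    return fetch + n_instructions * max(fetch, execute)

print(sequential_time(8))  # -> 16
print(pipelined_time(8))   # -> 9
```

With equal stage times the pipeline roughly halves the total time for long instruction streams, which is the effect the FECM-6 exploited with its multiple lookahead levels.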
The third generation is characterized by the fact that computer units began to be built on integrated circuits instead of transistors; by degree of integration these are divided into small-scale (SSI), medium-scale (MSI), and large-scale (LSI) circuits, and semiconductor memory also appeared. Integrated circuits realize the structural principle of technology at the level of the logic element and higher. An LSI circuit is formed from top to bottom, from the general structure of the circuit to its separate details. The productivity of the technology began to increase in direct proportion to the degree of integration; however, traditional design principles were still retained.
V. M. Glushkov was one of the first scientists to subject the von Neumann principles of computer organization to revision. His idea of a macro-conveyor computer, in which every separate processor at the next step of calculations is assigned a task that allows it to work autonomously for a long time, was realized in the multiprocessor computer systems ES-2701 and ES-1766.
Most computers of the third generation, by their features, entered the System/360 and Cyber series. Besides pipeline processing, data processing by an array processor began to be applied. For example, in the ILLIAC-IV computer, an array of processor cells under the management of a master processor simultaneously executed the distributed processing of one task. The individual processing elements are grouped into four arrays, each containing 64 elements and a control unit. The four arrays may be connected together under program control to permit multi-processing or single-processing operation. Each ILLIAC-IV array possesses a common control unit, which decodes the instructions and generates control signals for the processing elements in the array.
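The essence of this array-processing scheme, a single decoded instruction broadcast to all processing elements, can be sketched as follows (an illustrative model, not ILLIAC-IV code; the two-entry instruction table is a hypothetical mini instruction set chosen for the example):

```python
# Illustrative SIMD sketch: one control unit decodes an instruction once
# and broadcasts it; every processing element applies it to its own
# local data in lockstep.

import operator

OPS = {"ADD": operator.add, "MUL": operator.mul}  # hypothetical mini ISA

def broadcast(instruction, operand, local_data):
    """Apply one decoded instruction across all processing elements."""
    op = OPS[instruction]        # single decode in the control unit
    return [op(x, operand) for x in local_data]  # all PEs act at once

print(broadcast("ADD", 10, [1, 2, 3, 4]))  # -> [11, 12, 13, 14]
```

One instruction stream thus drives many data streams, which is exactly the distributed processing of one task described above.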
Fourth-generation microprocessors and computers are realized with VLSI circuits. Structured design "from top to bottom", from the devices to the elements, accelerated the process of creating new digital computers. The Harvard architecture separates the program and data areas so that each has its own access bus. This provides access to data and program in the same processor cycle and thus increases the overall speed of response. Modern processors often use a modified Harvard architecture that employs a single bus to access external memory, while inside the chip there are separate buses to increase the speed of response. Such an approach minimizes the total cost of the system while conserving the advantages of the Harvard architecture. In addition, various caching and pipelining schemes are used.
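The cycle-count difference between the two memory organizations can be sketched with a toy model (not from the source; the trace format and one-cycle-per-access timing are assumptions made for the illustration):

```python
# Illustrative comparison: a single shared memory bus (von Neumann)
# must serialize instruction fetch and data access, while separate
# program and data buses (Harvard) let them overlap in one cycle.

def memory_cycles(instr_trace, shared_bus):
    """instr_trace: one bool per instruction, True if it accesses data.
    Shared bus: a data access costs an extra cycle after the fetch.
    Separate buses: fetch and data access complete in the same cycle."""
    if shared_bus:
        return sum(2 if needs_data else 1 for needs_data in instr_trace)
    return len(instr_trace)

trace = [True, True, False, True]
print(memory_cycles(trace, shared_bus=True))   # -> 7  (von Neumann)
print(memory_cycles(trace, shared_bus=False))  # -> 4  (Harvard)
```

The more often instructions touch data memory, the larger the Harvard advantage, which is why signal-processing chips in particular adopted it.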
Every generation of digital computers is characterized by its approach to the realization of arithmetic-logical data processing. An approach that increases speed when realized on one element base may even slow operation when realized on the elements of another (including a more advanced) element base. However, taking into account that the development of computing technology, like development in general, goes "in a spiral", the most successful technical decisions should incorporate the foundations of past arithmetic-logical data processing and must be examined from the standpoint of their evolution, when one technical decision displaces another and then, vice versa, is itself displaced by the earlier one. For example, many methods of accelerating addition, which once gave way to the simplest realization of the ALU, return with the up-to-date development of the element base.
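One classical example of such a returning acceleration method is carry-lookahead addition, which replaces the bit-by-bit carry ripple of the simplest ALU with carries computed from generate/propagate signals. A minimal bit-level sketch (illustrative only; the LSB-first list representation is an assumption of the example):

```python
# Ripple-carry vs. carry-lookahead addition on bit lists (LSB first).

def ripple_add(a_bits, b_bits):
    """Ripple-carry: each carry waits for the previous bit position."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out, carry

def lookahead_add(a_bits, b_bits):
    """Carry-lookahead: carries derived from generate (g) and
    propagate (p) signals, conceptually computable in parallel."""
    g = [a & b for a, b in zip(a_bits, b_bits)]
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    carries = [0]
    for i in range(len(a_bits)):
        carries.append(g[i] | (p[i] & carries[i]))
    out = [p[i] ^ carries[i] for i in range(len(a_bits))]
    return out, carries[-1]

# 6 + 3 = 9, i.e. 0110 + 0011 = 1001 (shown LSB first):
print(ripple_add([0, 1, 1, 0], [1, 1, 0, 0]))     # -> ([1, 0, 0, 1], 0)
print(lookahead_add([0, 1, 1, 0], [1, 1, 0, 0]))  # -> ([1, 0, 0, 1], 0)
```

Both produce the same sum; in hardware, however, the lookahead carries need only a fixed number of gate levels instead of a chain proportional to the word length, which is why the method becomes attractive again whenever the element base makes the extra gates cheap.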