Brief history of computer technology (2)

History, an account mostly false, of events mostly unimportant.

Ambrose Bierce

History books, which contain no lies, are extremely dull.

Fifth generation (1984-1990)

The development of the next generation of computer systems is characterized mainly by the acceptance of parallel processing. The fifth generation saw the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace - by 1990 it was possible to build chips with a million components - and semiconductor memories became standard on all computers.

Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Prior to 1985, large-scale parallel processing was viewed as a research goal. For example, a machine was designed in which 20 processors were connected to a single memory module. Each processor had its own local cache memory, a special memory subsystem that temporarily holds data or program instructions to improve overall computer performance. Most caches copy data from standard computer memory, RAM, to a type of memory that allows faster data access by the CPU.

Disk caches are designed to compensate for the speed discrepancy between the very fast CPU and the much slower disk drives. Internal and external memory caches are designed to compensate for the discrepancy between the CPU and the slower RAM chips. All caching systems are designed to prevent main memory, RAM, from being an information bottleneck between the CPU and the much slower hard disk drives.
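The idea behind all of these caches is the same: keep a small, fast copy of recently used data in front of a larger, slower store, so that most requests never reach the slow device. The sketch below illustrates that read-through behaviour in Python; the class and function names are invented for the illustration and do not correspond to any real cache hardware or operating-system interface.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Minimal read-through cache: a small, fast store in front of a slow one.
    Only the principle (hit, miss, eviction) is shown; real CPU and disk caches
    are implemented in hardware and in the operating system."""

    def __init__(self, slow_read, capacity=4):
        self.slow_read = slow_read          # function that fetches from the slow store
        self.capacity = capacity            # how many items the fast store can hold
        self.fast = OrderedDict()           # insertion order gives simple LRU eviction

    def read(self, address):
        if address in self.fast:            # cache hit: serve the fast copy
            self.fast.move_to_end(address)
            return self.fast[address]
        value = self.slow_read(address)     # cache miss: go to the slow store
        self.fast[address] = value          # keep a copy for next time
        if len(self.fast) > self.capacity:  # evict the least recently used item
            self.fast.popitem(last=False)
        return value

# Example: pretend the slow store is a disk that is expensive to read.
def slow_disk_read(address):
    return f"data at block {address}"

cache = ReadThroughCache(slow_disk_read, capacity=2)
print(cache.read(1))   # miss: fetched from the slow store
print(cache.read(1))   # hit: served from the cache
```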

On the other hand, Intel, instead of using one memory module, connected each processor to its own memory and used a network interface to connect the processors. This distributed memory architecture meant that large systems, using more processors, could be built. The largest machine had 128 processors.
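In a distributed-memory design of this kind a processor cannot simply read another processor's memory; data must be exchanged explicitly over the interconnect. The sketch below imitates that with two operating-system processes and pipes from Python's standard multiprocessing module; it is only an analogy for the programming model, not a description of how those Intel machines were actually programmed.

```python
from multiprocessing import Process, Pipe

def worker(conn, numbers):
    # Each worker owns its private memory (the list it was given) and
    # communicates its result only by sending a message over the pipe.
    conn.send(sum(numbers))
    conn.close()

if __name__ == "__main__":
    data = list(range(100))
    half = len(data) // 2
    parent_a, child_a = Pipe()
    parent_b, child_b = Pipe()
    # Two processes, each holding half of the data in its own memory.
    pa = Process(target=worker, args=(child_a, data[:half]))
    pb = Process(target=worker, args=(child_b, data[half:]))
    pa.start(); pb.start()
    # The "network" step: gather the partial sums as messages and combine them.
    total = parent_a.recv() + parent_b.recv()
    pa.join(); pb.join()
    print(total)  # 4950
```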

Toward the end of this period a third type of parallel processor was introduced to the market. In this style of machine, known as data-parallel or SIMD, there were several thousand very simple processors. All processors worked under the direction of a single control unit.
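The data-parallel (SIMD) idea is that one instruction is applied to many data elements at the same time. A convenient way to see the programming style today is an array operation in the NumPy library, sketched below; this is an analogy for the model only, not a description of those historical machines.

```python
import numpy as np

temperatures_f = np.array([32.0, 68.0, 98.6, 212.0])

# The single expression below is applied to every element of the array at
# once, which is the essence of the data-parallel / SIMD style.
temperatures_c = (temperatures_f - 32.0) * 5.0 / 9.0

print(temperatures_c)  # [  0.  20.  37. 100.]
```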

Scientific computing in this period was still dominated by vector processing. The term «vector» has two common meanings. The first is in the geometric sense: a vector defines a direction and magnitude. The second concerns the formatting of fonts. If a font is a vector font, it is defined as a line of relative size and direction rather than as a collection of pixels. This makes it easier to change the size of the font, but puts a bigger load on the device that has to display the fonts.
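For the geometric sense of «vector», the magnitude and direction follow directly from the components; a small sketch with arbitrary example values:

```python
import math

# A two-dimensional vector given by its components.
x, y = 3.0, 4.0

magnitude = math.hypot(x, y)                 # sqrt(x**2 + y**2) = 5.0
direction = math.degrees(math.atan2(y, x))   # angle from the x-axis, about 53.13 degrees

print(magnitude, direction)
```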

Most manufacturers of vector processors introduced parallel models, but there were very few processors in these parallel machines.

In the area of computer networking, both wide area network (WAN) and local area network (LAN) technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing environment toward a distributed computing environment in which each user has his own workstation for relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and plummeting costs for RAM brought tremendous gains in the computational power of relatively low-cost workstations and servers. This period also saw a marked increase in both the quality and quantity of scientific visualization.





Sixth generation (1990 - ...)

Transitions between generations in computer technology are hard to define, especially as they are taking place. Some changes, such as the switch from vacuum tubes to transistors, are immediately apparent as fundamental changes, but others become clear only in retrospect. Many of the developments in computer systems since 1990 reflect gradual improvements over established systems, and thus it is hard to claim they represent a transition to a new «generation», but other developments will prove to be significant changes.

This generation is beginning with gains in parallel computing, both in the hardware area and in an improved understanding of how to develop algorithms to exploit diverse, massively parallel architectures. Parallel systems now compete with vector processors in terms of total computing power, and most expect parallel systems to dominate the future.

Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has announced plans to build a system with over two hundred vector processors. Manufacturers have set themselves the goal of achieving teraflops (10¹² arithmetic operations per second) performance by the middle of the decade, and it is clear that only a system with a thousand processors or more will achieve this. Workstation technology has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel processing. This development has sparked an interest in heterogeneous computing: a program started on one workstation can find idle workstations elsewhere in the local network to run parallel subtasks.
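The arithmetic behind «a thousand processors or more» is easy to check under one assumption: that a single processor of the period sustains on the order of a gigaflop (10⁹ operations per second), a rough figure chosen for illustration rather than taken from the text.

```python
# Back-of-the-envelope estimate; the per-processor figure is an assumption.
target_flops = 1e12          # one teraflops: 10**12 arithmetic operations per second
per_processor_flops = 1e9    # assumed roughly 1 gigaflops per processor

processors_needed = target_flops / per_processor_flops
print(int(processors_needed))  # 1000
```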

One of the most dramatic changes in the sixth generation will be the explosive growth of wide area networking. Network bandwidth has expanded tremendously in the last few years and will continue to improve for the next several years. Network technology is becoming more widespread than its original strong base in universities and government laboratories as it is rapidly finding application in education, community networks and private industry.

The swiftly changing situation in computer progress is tightly connected with the impressive, permanent improvement of microprocessors, which are known as central processing units (CPUs). A microprocessor, which contains a huge number of transistors, is fabricated on the surface of a thin silicon layer with the help of a very complicated and precise semiconductor technology. Such electronic elements are usually called integrated circuits, or chips. Engineers' striving for a multifunctional processor with a high speed of operation forces them to increase the number of transistors and, as a consequence, to reduce their size. The dynamics of Intel CPU development are shown in the table below:

 

Name of processor | Date of introduction | Number of transistors | Size of transistors, microns | Speed, MIPS
8080              | 1974                 | 6,000                 |                              | 0.64
8086              | 1978                 | 29,000                |                              | 0.33
80286             | 1982                 | 134,000               | 1.5                          |
80386             | 1985                 | 275,000               | 1.5                          |
80486             | 1989                 | 1,200,000             |                              |
Pentium           | 1993                 | 3,100,000             | 0.8                          |
Pentium II        | 1997                 | 7,500,000             | 0.35                         |
Pentium III       | 1999                 | 9,500,000             | 0.25                         |
Pentium 4         | 2000                 | 42,000,000            | 0.18                         | 1,700
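One way to read the table is to estimate how quickly the transistor count grew. The short calculation below compares the first and last rows, taking 1974 and 2000 as the widely cited introduction years of the 8080 and the Pentium 4; the roughly two-year doubling time it yields is the familiar Moore's law trend.

```python
import math

# Transistor counts from the table; 1974 and 2000 are the widely cited
# introduction years of the 8080 and the Pentium 4.
transistors_8080, year_8080 = 6_000, 1974
transistors_p4, year_p4 = 42_000_000, 2000

growth = transistors_p4 / transistors_8080       # about a 7,000-fold increase
years = year_p4 - year_8080                      # 26 years
doubling_time = years / math.log2(growth)        # roughly two years per doubling

print(round(growth), years, round(doubling_time, 1))  # 7000 26 2.0
```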
