Performance of Different Memory Types

Memory type | Category    | Erasure method    | Byte alterable | Volatile | Typical application
SRAM        | Read/Write  | Electric          | Yes            | Yes      | Level-2 cache memory
DRAM        | Read/Write  | Electric          | Yes            | Yes      | Main memory
ROM         | Read only   | Impossible        | No             | No       | Large-volume devices
PROM        | Read only   | Impossible        | No             | No       | Small-volume devices
EPROM       | Read-mostly | Ultraviolet light | No             | No       | Device prototyping
EEPROM      | Read-mostly | Electric          | Yes            | No       | Device prototyping
Flash       | Read/Write  | Electric          | No             | No       | Digital cameras

The most modern type of EEPROM is flash memory. Unlike EPROM, which is erased with ultraviolet light, and EEPROM, which is erased byte by byte, flash memory is written and erased in blocks. Like any EEPROM, flash memory can be erased without removing it from the circuit. Many manufacturers produce small circuit boards with hundreds of megabytes of flash memory; they are used for storing images in digital cameras and for other purposes. Flash memory may eventually replace disks, since its access time is about 100 ns. Its main technical limitation is the finite number of erase cycles, about 10,000, whereas disks can be used for years regardless of how many times they are rewritten.
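The write/erase constraints described above can be sketched in a toy model. The block size and class names here are hypothetical, chosen only for illustration; the erase-cycle limit of 10,000 comes from the text:

```python
# Toy model of the flash-memory constraints described above: data can be
# written only into erased cells, erasure happens a whole block at a time,
# and each block survives a limited number of erase cycles (~10,000).

BLOCK_SIZE = 4        # words per block (hypothetical, for illustration)
ERASE_LIMIT = 10_000  # typical endurance figure cited in the text

class FlashBlock:
    def __init__(self):
        self.cells = [0xFF] * BLOCK_SIZE  # the erased state is all-ones
        self.erase_count = 0

    def write(self, offset, value):
        # A cell that already holds data must be erased (with its whole
        # block) before it can be rewritten.
        if self.cells[offset] != 0xFF:
            raise RuntimeError("cell not erased; erase the whole block first")
        self.cells[offset] = value

    def erase(self):
        if self.erase_count >= ERASE_LIMIT:
            raise RuntimeError("block worn out")
        self.cells = [0xFF] * BLOCK_SIZE
        self.erase_count += 1

block = FlashBlock()
block.write(0, 0x12)
block.erase()          # rewriting offset 0 first costs an erase of all cells
block.write(0, 0x34)
```

The block-granular erase is exactly why the erase-cycle budget matters: rewriting even one word consumes an erase cycle for the entire block.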

For example, the microJava computer typically uses flash memory, connected via the PCI bus as shown in Fig. 1.33.

Virtual Memories. Virtual memory is a technique that uses main memory as a "cache" for secondary storage. In any computer system in which the currently active programs and data do not fit into the physical RAM space, secondary storage devices such as magnetic disks or tapes are used to hold the overflow. This space problem was first solved by requiring programmers to explicitly move programs or parts of programs from secondary storage to RAM when they are to be executed. However, the problem of management of the available RAM space is a machine-dependent problem that should not need to be solved by the programmer. The general techniques of automatically moving the required program and data blocks into the physical RAM for execution are called virtual-memory techniques. Programs, and hence the CPU, reference an instruction and data space that is independent of the physical RAM space. The binary addresses that the CPU issues for either instructions or data are called virtual or logical addresses. The mechanism that operates on these virtual addresses and translates them into actual locations in the physical hierarchy is usually implemented by a combination of hardware and software components. If the result of translating (or mapping) a specific virtual address is a physical RAM location, the contents of that location are used immediately as required. On the other hand, if the location is not in the RAM its contents must be brought into a suitable location in the RAM and then used.

The simplest form of translation method is based on the assumption that all programs and data are composed of fixed-length pages. These pages are basic units of word blocks that must always occupy contiguous locations, whether they are resident in the RAM or in secondary storage. Pages are commonly 512 or 1024 words long. They constitute the basic unit of information that is moved back and forth between the RAM and secondary storage whenever the translation mechanism determines that a move is required. This discussion clearly parallels many of the ideas that were introduced in the cache memory section. The cache concept is intended to bridge the speed gap between the CPU and the RAM. Hence the cache control is implemented in the hardware. The virtual-memory idea is primarily meant to bridge the size gap between the RAM and secondary storage. It is usually implemented in part by software techniques. Conceptually, cache techniques and virtual-memory techniques involve very similar ideas. They differ mainly in the details of their implementation.



An address translation method based on the concept of fixed-length pages is shown schematically in Fig. 1.34. Each virtual address generated by the CPU, whether it is for an instruction fetch or an operand fetch/store operation, is interpreted as a page number (high-order bits) followed by a word number (low-order bits). A page table in the RAM specifies the location of the pages that are currently in the RAM. The starting address of the page table is kept in the page table base register. By adding the page number to the contents of this register, the address of the corresponding entry in the page table is obtained. The contents of this location name the block of the RAM where the requested page currently resides; or if the page is not in the RAM, the page table entry points to where the page is in the secondary storage. The control bits indicate whether or not the page is present in the RAM. They also may contain some past usage information, etc., for purposes of implementing the page replacement algorithm.
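The lookup just described can be sketched in a few lines. The page size follows the 1024-word figure from the text; the page-table contents and disk addresses are hypothetical placeholders:

```python
# Sketch of the page-table lookup of Fig. 1.34: a virtual address is split
# into a page number (high-order bits) and a word offset (low-order bits);
# the page number indexes the page table, whose entry either names a RAM
# block or points to the page's location in secondary storage.

PAGE_SIZE = 1024      # words per page, as in the text
OFFSET_BITS = 10      # log2(PAGE_SIZE)

# Hypothetical page table: page number -> (present bit, RAM block number
# or secondary-storage address). Real tables also keep usage control bits.
page_table = {0: (True, 5), 1: (False, "disk:0x3A00"), 2: (True, 9)}

def translate(virtual_address):
    page = virtual_address >> OFFSET_BITS
    offset = virtual_address & (PAGE_SIZE - 1)
    present, location = page_table[page]
    if not present:
        # Page fault: the entry instead tells where the page resides.
        raise LookupError(f"page fault: page {page} is at {location}")
    return location * PAGE_SIZE + offset  # physical RAM address

print(translate(2 * PAGE_SIZE + 7))  # page 2, word 7 -> 9*1024 + 7 = 9223
```

Note that a reference to page 1 raises a page fault here; in a real system that fault triggers the transfer from secondary storage described below.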

If the page table is stored in the RAM unit, as assumed above, then two RAM accesses need to be made for every RAM access requested by a program. This degradation in execution speed by a factor of 2 is the price that is paid for the programming convenience of a wider addressable memory space. It is not necessary that the page table be implemented in the RAM unit. The system can operate faster if the page table is stored in a small fast memory.

When a new page is to be brought from secondary storage to the RAM, the page table may provide the details of where this data can be found on a magnetic disk. On the other hand, it may provide an address pointer to a block of words in the RAM where this detailed information is stored. In either case, a long delay is now incurred while the page transfer into the RAM takes place. An I/O channel or DMA operation usually does this. At this point, the CPU may be used to execute another task whose pages are in the RAM.

The general problem of deciding which page is to be removed from a full RAM, when a new page is to be brought in, is just as critical here as in the cache situation. The notion that programs tend to spend most of their time in a few localized areas is also equally applicable. Since RAMs are considerably larger than cache memories, it should be possible to keep relatively larger portions of a program in the RAM. This will reduce the frequency of transfers to and from secondary storage. Concepts like the Least Recently Used (LRU) replacement algorithm can be applied to page replacement. The data that determines the LRU page or set of pages can be kept in the page table control bits. The need to update the LRU information on every RAM reference is another reason why the page table should be implemented in a high-speed memory.
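A minimal sketch of LRU page replacement, using the usage ordering that the text says can be kept in the page-table control bits. The frame count and reference string are illustrative, not from the source:

```python
# Least Recently Used (LRU) page replacement: on a fault with all frames
# full, evict the page whose last reference lies furthest in the past.
from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    frames = OrderedDict()   # page -> None, ordered least- to most-recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

# 3 frames, reference string 1 2 3 4 1 2 5 1 2 3 4 5
print(simulate_lru([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # → 10
```

Only two of the twelve references hit here; with more frames, the locality the text mentions would turn more of them into hits.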

Computer components develop very unevenly. While CPU speed grows very quickly (about 60% per year), memory speed increases by only about 7% per year, which means that the CPU's transistors sit idle for most of the operating time. This widening gap between CPU speed and memory speed degrades communication between the CPU and its environment [53].

