Hybrid Distributed-Shared Memory

The largest and fastest computers in the world today employ both shared and distributed memory architectures.

The shared memory component is usually a cache coherent SMP machine. Processors on a given SMP can address that machine's memory as global.

The distributed memory component is the networking of multiple SMPs. SMPs know only about their own memory - not the memory on another SMP. Therefore, network communications are required to move data from one SMP to another.

Current trends seem to indicate that this type of memory architecture will continue to prevail and increase at the high end of computing for the foreseeable future.

Advantages and Disadvantages: whatever is common to both shared and distributed memory architectures applies here as well.
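As a hedged illustration of how a hybrid machine of this kind is commonly programmed (the pairing of MPI between SMP nodes with POSIX threads inside a node is an assumption made for this sketch, not something prescribed above), a minimal C example might look like the following. Each MPI task represents one SMP; its threads share that node's memory, and MPI carries data over the network between nodes.

/* Hedged sketch: MPI between SMP nodes, POSIX threads within a node.
 * One MPI task is assumed to run per SMP; each task spawns threads that
 * share that node's memory. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static double node_data[NTHREADS];       /* shared only within this SMP node */

static void *node_work(void *arg)
{
    int id = *(int *)arg;
    node_data[id] = id;                  /* threads use the node's shared memory */
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;
    pthread_t t[NTHREADS];
    int ids[NTHREADS];

    /* request a thread level that allows the main thread to call MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < NTHREADS; i++) { /* shared memory part: threads */
        ids[i] = i;
        pthread_create(&t[i], NULL, node_work, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);

    double local = 0.0, global = 0.0;
    for (int i = 0; i < NTHREADS; i++)
        local += node_data[i];

    /* distributed memory part: network communication between SMPs */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}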

Message Passing Model

The message passing model demonstrates the following characteristics:

* A set of tasks that use their own local memory during computation. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.

* Tasks exchange data through communications by sending and receiving messages.

* Data transfer usually requires cooperative operations to be performed by each process. For example, a send operation must have a matching receive operation (see the sketch after this list).
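To make these characteristics concrete, here is a minimal C sketch using MPI (discussed further below); it is an illustration only, and it assumes an MPI implementation is installed and that the program is launched with at least two tasks, for example with mpirun -np 2 ./a.out.

/* Minimal sketch of the message passing model with MPI: two tasks with
 * separate local memories cooperate, one sending and one receiving. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which task am I? */

    if (rank == 0) {
        value = 42;                          /* data lives in task 0's local memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* the matching receive for task 0's send */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("task 1 received %d from task 0\n", value);
    }

    MPI_Finalize();
    return 0;
}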

Models

There are several parallel programming models in common use:

o Shared Memory

o Threads

o Message Passing

o Data Parallel

o Hybrid

Parallel programming models exist as an abstraction above hardware and memory architectures.

Although it might not seem apparent, these models are NOT specific to a particular type of machine or memory architecture. In fact, any of these models can (theoretically) be implemented on any underlying hardware. Two examples:


1. Shared memory model on a distributed memory machine: Kendall Square Research (KSR) ALLCACHE approach.

Machine memory was physically distributed, but appeared to the user as a single shared memory (global address space). Generically, this approach is referred to as "virtual shared memory". Note: although KSR is no longer in business, there is no reason to suggest that a similar implementation will not be made available by another vendor in the future.


2. Message passing model on a shared memory machine: MPI on SGI Origin.

The SGI Origin employed the CC-NUMA type of shared memory architecture, where every task has direct access to global memory. However, the ability to send and receive messages with MPI, as is commonly done over a network of distributed memory machines, is not only implemented but is very commonly used.

* Which model to use is often a combination of what is available and personal choice. There is no "best" model, although there certainly are better implementations of some models over others.

* The following sections describe each of the models mentioned above, and also discuss some of their actual implementations.

Shared Memory Model (detailed)



In the shared-memory programming model, tasks share a common address space, which they read and write asynchronously.

Various mechanisms such as locks / semaphores may be used to control access to the shared memory.
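As an illustrative sketch (the choice of POSIX threads and a mutex as the lock mechanism is an assumption, not something specified here), the following C fragment shows two tasks asynchronously updating the same shared variable, with a lock serializing the updates.

/* Two threads read and write the same counter in a common address space;
 * the mutex controls access so updates are not lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                       /* shared address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);             /* acquire the lock */
        counter++;                             /* protected update  */
        pthread_mutex_unlock(&lock);           /* release the lock  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);        /* always 200000 with the lock */
    return 0;
}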

An advantage of this model from the programmer's point of view is that the notion of data "ownership" is lacking, so there is no need to specify explicitly the communication of data between tasks. Program development can often be simplified.

An important disadvantage in terms of performance is that it becomes more difficult to understand and manage data locality.

Keeping data local to the processor that works on it conserves memory accesses, cache refreshes and bus traffic that occurs when multiple processors use the same data.

Unfortunately, controlling data locality is hard to understand and beyond the control of the average user.

Threads Model

In the threads model of parallel programming, a single process can have multiple, concurrent execution paths. Perhaps the simplest analogy that can be used to describe threads is the concept of a single program that includes a number of subroutines:

The main program a.out is scheduled to run by the native operating system. a.out loads and acquires all of the necessary system and user resources to run. a.out performs some serial work, and then creates a number of tasks (threads) that can be scheduled and run by the operating system concurrently.

Each thread has local data, but also shares the entire resources of a.out. This saves the overhead associated with replicating a program's resources for each thread. Each thread also benefits from a global memory view because it shares the memory space of a.out.

A thread's work may best be described as a subroutine within the main program. Any thread can execute any subroutine at the same time as other threads.
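The description above can be sketched in C with POSIX threads (an assumed implementation choice; the thread count and the subroutine body are invented for illustration):

/* a.out does some serial work, then spawns threads that all share its
 * global data while keeping their own local (stack) data. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static double shared_array[NTHREADS];          /* shared: part of a.out's memory */

static void *subroutine(void *arg)
{
    int id = *(int *)arg;                      /* local data: private to this thread */
    shared_array[id] = id * 2.0;               /* every thread sees the same array */
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    int ids[NTHREADS];

    /* serial work would happen here */

    for (int i = 0; i < NTHREADS; i++) {       /* create concurrent threads */
        ids[i] = i;
        pthread_create(&threads[i], NULL, subroutine, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);        /* wait for all threads to finish */

    for (int i = 0; i < NTHREADS; i++)
        printf("shared_array[%d] = %.1f\n", i, shared_array[i]);
    return 0;
}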

 

Message Passing Implementations:

* From a programming perspective, message passing implementations commonly comprise a library of subroutines that are embedded in source code. The programmer is responsible for determining all parallelism.

* Historically, a variety of message passing libraries have been available since the 1980s. These implementations differed substantially from each other, making it difficult for programmers to develop portable applications.

* In 1992, the MPI Forum was formed with the primary goal of establishing a standard interface for message passing implementations.

* Part 1 of the Message Passing Interface (MPI) was released in 1994. Part 2 (MPI-2) was released in 1996. Both MPI specifications are available on the web at http://www-unix.mcs.anl.gov/mpi/.

* MPI is now the "de facto" industry standard for message passing, replacing virtually all other message passing implementations used for production work. Most, if not all, of the popular parallel computing platforms offer at least one implementation of MPI. A few offer a full implementation of MPI-2.

* For shared memory architectures, MPI implementations usually don't use a network for task communications. Instead, they use shared memory (memory copies) for performance reasons.

