


Data Parallel Model

The data parallel model demonstrates the following characteristics:

* Most of the parallel work focuses on performing operations on a data set. The data set is typically organized into a common structure, such as an array or cube.

* A set of tasks works collectively on the same data structure; however, each task works on a different partition of that structure.

* Tasks perform the same operation on their partition of work, for example, "add 4 to every array element".

* On shared memory architectures, all tasks may have access to the data structure through global memory. On distributed memory architectures the data structure is split up and resides as "chunks" in the local memory of each task.
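The characteristics above can be sketched in a few lines. In the sketch below (Python, with tasks simulated sequentially for clarity; the `partition` and `add4` helper names are illustrative, not part of any data parallel library), a shared array is split into partitions and every task performs the same "add 4" operation on its own chunk:

```python
def partition(data, num_tasks):
    """Split data into num_tasks contiguous chunks (sizes differ by at most 1)."""
    base, extra = divmod(len(data), num_tasks)
    chunks, start = [], 0
    for t in range(num_tasks):
        size = base + (1 if t < extra else 0)
        chunks.append(data[start:start + size])
        start += size
    return chunks

def add4(chunk):
    """The common operation every task performs on its partition."""
    return [x + 4 for x in chunk]

data = list(range(10))
chunks = partition(data, 3)   # task 0 -> [0..3], task 1 -> [4..6], task 2 -> [7..9]
result = [y for c in chunks for y in add4(c)]
print(result)                 # [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
```

On a shared memory machine the chunks would all live in one global array; on a distributed memory machine each chunk would sit in a different task's local memory.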

Programming with the data parallel model is usually accomplished by writing a program with data parallel constructs. The constructs can be calls to a data parallel subroutine library or compiler directives recognized by a data parallel compiler.

# Fortran 90 and 95 (F90, F95): ISO/ANSI standard extensions to Fortran 77.

* Contains everything that is in Fortran 77

* New source code format; additions to character set

* Additions to program structure and commands

* Variable additions - methods and arguments

* Pointers and dynamic memory allocation added

* Array processing (arrays treated as objects) added

* Recursive and new intrinsic functions added

* Many other new features

Implementations are available for most common parallel platforms.
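The "arrays treated as objects" feature is what lets F90 express "add 4 to every array element" as a single whole-array expression (`A = A + 4`) with no explicit loop. A rough Python analogue using operator overloading (the `Arr` class is hypothetical, purely for illustration):

```python
class Arr:
    """Toy whole-array type: arithmetic applies elementwise, F90-style."""
    def __init__(self, items):
        self.items = list(items)

    def __add__(self, scalar):
        # Elementwise addition, like the F90 array expression A + scalar.
        return Arr(x + scalar for x in self.items)

    def __eq__(self, other):
        return self.items == other.items

    def __repr__(self):
        return f"Arr({self.items})"

a = Arr([1, 2, 3])
b = a + 4            # whole-array operation: no explicit element loop
print(b)             # Arr([5, 6, 7])
```

Because the whole-array expression names no loop order, a data parallel compiler is free to split the elementwise work across tasks.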


# High Performance Fortran (HPF): Extensions to Fortran 90 to support data parallel programming.

* Contains everything in Fortran 90

* Directives to tell the compiler how to distribute data added

* Assertions that can improve optimization of generated code added

* Data parallel constructs added (now part of Fortran 95)

HPF compilers were common in the 1990s, but are no longer commonly implemented.

# Compiler Directives: Allow the programmer to specify the distribution and alignment of data. Fortran implementations are available for most common parallel platforms.

# Distributed memory implementations of this model usually have the compiler convert the program into standard code with calls to a message passing library (usually MPI) to distribute the data to all the processes. All message passing is done invisibly to the programmer.
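What the compiler generates for distributed memory roughly amounts to a scatter / local compute / gather pattern. The sketch below simulates it with Python processes and queues standing in for MPI messages (the function and variable names are illustrative, not any real compiler's output):

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Receive (rank, chunk), apply the local part of "add 4", send back.
    rank, chunk = inbox.get()
    outbox.put((rank, [x + 4 for x in chunk]))

def scatter_compute_gather(data, num_tasks):
    # Cyclic distribution: element i goes to task i % num_tasks.
    chunks = [data[r::num_tasks] for r in range(num_tasks)]
    inboxes = [Queue() for _ in range(num_tasks)]
    outbox = Queue()
    procs = [Process(target=worker, args=(inboxes[r], outbox))
             for r in range(num_tasks)]
    for p in procs:
        p.start()
    for r in range(num_tasks):
        inboxes[r].put((r, chunks[r]))                     # "scatter"
    results = dict(outbox.get() for _ in range(num_tasks))  # "gather"
    for p in procs:
        p.join()
    # Undo the cyclic distribution to rebuild the global array.
    out = [None] * len(data)
    for r in range(num_tasks):
        out[r::num_tasks] = results[r]
    return out

if __name__ == "__main__":
    print(scatter_compute_gather(list(range(8)), 2))  # [4, 5, ..., 11]
```

The source program the user writes contains none of this plumbing; that hiding of the message passing is the point of the model.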

Other Models

Other parallel programming models besides those previously mentioned certainly exist, and will continue to evolve along with the ever-changing world of computer hardware and software. Only three of the more common ones are mentioned here.


Hybrid

# In this model, any two or more parallel programming models are combined.

# Currently, a common example of a hybrid model is the combination of the message passing model (MPI) with either the threads model (POSIX threads) or the shared memory model (OpenMP). This hybrid model lends itself well to the increasingly common hardware environment of networked SMP machines.

# Another common example of a hybrid model is combining data parallel with message passing. As mentioned in the data parallel model section previously, data parallel implementations (F90, HPF) on distributed memory architectures actually use message passing to transmit data between tasks, transparently to the programmer.
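The MPI-plus-threads hybrid can be simulated in miniature: processes stand in for the message passing level (one per SMP "node") and a thread pool stands in for the shared memory level inside each node. This is a hedged sketch, not how a real MPI + OpenMP code is written:

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing import Pool

def node_work(chunk):
    """Runs in one process ("node"); threads share the chunk in memory."""
    def thread_part(sub):
        return sum(x + 4 for x in sub)
    halves = [chunk[:len(chunk) // 2], chunk[len(chunk) // 2:]]
    with ThreadPoolExecutor(max_workers=2) as threads:
        return sum(threads.map(thread_part, halves))

if __name__ == "__main__":
    data = list(range(8))
    node_chunks = [data[:4], data[4:]]      # "MPI" level: one chunk per node
    with Pool(2) as nodes:
        partials = nodes.map(node_work, node_chunks)
    print(sum(partials))                    # 28 + 4*8 = 60
```

The two-level split mirrors the hardware: message passing between nodes, shared memory threading within a node.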

SPMD (Single Program Multiple Data)

SPMD is actually a "high level" programming model that can be built upon any combination of the previously mentioned parallel programming models.

# A single program is executed by all tasks simultaneously.

# At any moment in time, tasks can be executing the same or different instructions within the same program.

# SPMD programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute. That is, tasks do not necessarily have to execute the entire program - perhaps only a portion of it.

# All tasks may use different data
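The branching logic described above can be sketched as one function that every task runs, with behavior selected by a task id. Ranks are simulated in a loop here; in a real SPMD run each rank would be a separate process (names like `spmd_main` are illustrative only):

```python
def spmd_main(rank, size, data):
    """The single program that all tasks execute."""
    if rank == 0:
        # Only rank 0 executes this branch (e.g. coordination or I/O).
        log = f"task {rank}: coordinating {size} tasks"
    else:
        log = f"task {rank}: computing"
    # All ranks execute this common section, each on its own slice.
    local = data[rank::size]
    return log, sum(local)

size = 4
data = list(range(12))
outputs = [spmd_main(r, size, data) for r in range(size)]
total = sum(partial for _, partial in outputs)
print(total)   # 66 == sum(range(12))
```

One program text, four different execution paths and four different data partitions: that is SPMD in miniature.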

MPMD (Multiple Program Multiple Data)

Like SPMD, MPMD is actually a "high level" programming model that can be built upon any combination of the previously mentioned parallel programming models.

MPMD applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same or different program as other tasks.

# All tasks may use different data
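The contrast with SPMD can be shown with a task table in which each task pairs its own program with its own data (the `producer`/`reducer` programs below are hypothetical examples):

```python
def producer(data):
    """One program: square every element."""
    return [x * x for x in data]

def reducer(data):
    """A different program: reduce to a single sum."""
    return sum(data)

# Task table: (program, data) pairs -- multiple programs, multiple data.
tasks = [(producer, [1, 2, 3]), (reducer, [10, 20, 30])]
results = [prog(d) for prog, d in tasks]
print(results)   # [[1, 4, 9], 60]
```

In a real MPMD run each entry would be a separately built executable launched as its own task.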


Date: 2016-03-03