Some General Parallel Terminology

Task - A logically discrete section of computational work. A task is typically a program or program-like set of instructions that is executed by a processor.

Parallel Task - A task that can be executed by multiple processors safely (yields correct results).

Serial Execution - Execution of a program sequentially, one statement at a time. In the simplest sense, this is what happens on a one-processor machine. However, virtually all parallel programs have sections that must be executed serially.

Parallel Execution - Execution of a program by more than one task, with each task able to execute the same or a different statement at the same moment in time.
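
The difference can be seen with Python's standard multiprocessing module. This is a minimal sketch, not taken from the text: the function count_primes and the list of limits are invented for illustration, and a pool of four worker processes stands in for four processors.

```python
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` by deliberately naive trial division."""
    return sum(all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(2, limit))

if __name__ == "__main__":
    limits = [20000, 30000, 40000, 50000]

    # Serial execution: one call finishes before the next begins.
    serial_results = [count_primes(n) for n in limits]

    # Parallel execution: each task may run on a different processor.
    with Pool(processes=4) as pool:
        parallel_results = pool.map(count_primes, limits)

    assert serial_results == parallel_results  # same answers, different schedule
```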

Pipelining - Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line; a type of parallel computing.
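
As a rough illustration of the assembly-line idea, the sketch below (an assumption, not part of the original text) wires two invented stages together with queues; each item flows through the first stage and then the second while later items follow behind it.

```python
from multiprocessing import Process, Queue

STOP = None  # sentinel marking the end of the input stream

def stage_square(inbox, outbox):
    """First stage: square each incoming item and pass it on."""
    while (item := inbox.get()) is not STOP:
        outbox.put(item * item)
    outbox.put(STOP)

def stage_print(inbox):
    """Second stage: consume and print finished results."""
    while (item := inbox.get()) is not STOP:
        print(item)

if __name__ == "__main__":
    q1, q2 = Queue(), Queue()
    stages = [Process(target=stage_square, args=(q1, q2)),
              Process(target=stage_print, args=(q2,))]
    for s in stages:
        s.start()

    for x in range(10):   # inputs stream through like parts on an assembly line
        q1.put(x)
    q1.put(STOP)

    for s in stages:
        s.join()
```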

Shared Memory - From a strictly hardware point of view, describes a computer architecture where all processors have direct (usually bus-based) access to common physical memory. In a programming sense, it describes a model where parallel tasks all have the same "picture" of memory and can directly address and access the same logical memory locations regardless of where the physical memory actually exists.
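
A minimal sketch of the programming-model sense, assuming Python's multiprocessing primitives: four invented worker tasks update one shared counter, and each sees the same logical memory location.

```python
from multiprocessing import Process, Value, Lock

def add_many(counter, lock, n):
    """Increment the shared counter n times, guarding each update with a lock."""
    for _ in range(n):
        with lock:
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)   # one integer placed in shared memory
    lock = Lock()
    workers = [Process(target=add_many, args=(counter, lock, 1000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)      # 4000: every task saw the same "picture" of memory
```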

Symmetric Multi-Processor (SMP) -Hardware architecture where multiple processors share a single address space and access to all resources; shared memory computing.

Distributed Memory - In hardware, refers to network-based memory access for physical memory that is not common. As a programming model, tasks can only logically "see" local machine memory and must use communications to access memory on other machines where other tasks are executing.
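
Operating-system processes do not share an address space, which loosely mirrors the distributed-memory model on a single machine. In this invented sketch, a change made by the worker is never visible to the parent because it was never communicated.

```python
from multiprocessing import Process

local_data = []   # each process ends up with its own private copy

def worker():
    local_data.append("written by worker")
    print("inside worker:", local_data)   # ['written by worker']

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()
    print("inside parent:", local_data)   # []: the update stayed local to the worker
```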

Communications - Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; however, the actual event of data exchange is commonly referred to as communications regardless of the method employed.
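
A minimal sketch of one such exchange, assuming a multiprocessing Queue as the channel; in a distributed-memory system the same pattern would run over a network instead.

```python
from multiprocessing import Process, Queue

def producer(outbox):
    """Compute a partial result and send it to another task."""
    outbox.put(sum(range(1000)))

if __name__ == "__main__":
    channel = Queue()
    p = Process(target=producer, args=(channel,))
    p.start()
    # The receiver blocks until the data arrives; that exchange is the
    # communication event, whatever transport carries it.
    print("received:", channel.get())
    p.join()
```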

Synchronization - The coordination of parallel tasks in real time, very often associated with communications. Often implemented by establishing a synchronization point within an application where a task may not proceed further until another task (or tasks) reaches the same or a logically equivalent point.

Synchronization usually involves waiting by at least one task, and can therefore cause a parallel application's wall clock execution time to increase.
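
A minimal sketch of a synchronization point, assuming a multiprocessing Barrier and an invented two-phase task: no task starts its second phase until every task has finished its first, so the fastest tasks spend wall-clock time waiting.

```python
import random
import time
from multiprocessing import Barrier, Process

def task(barrier, rank):
    time.sleep(random.random())                 # phase 1 finishes at different times
    print(f"task {rank} reached the barrier")
    barrier.wait()                              # synchronization point: wait for the others
    print(f"task {rank} begins phase 2")

if __name__ == "__main__":
    n = 4
    barrier = Barrier(n)
    procs = [Process(target=task, args=(barrier, r)) for r in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```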

Granularity - In parallel computing, granularity is a qualitative measure of the ratio of computation to communication (a short sketch contrasting the two cases follows the list below).

* Coarse: relatively large amounts of computational work are done between communication events

* Fine: relatively small amounts of computational work are done between communication events
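
The sketch below contrasts the two cases using the chunksize argument of Pool.map; the workload (squaring integers) is invented. With chunksize=1 every item is a separate communication event (fine-grained), while a large chunksize does far more computation per communication (coarse-grained) and usually runs faster for cheap work like this.

```python
import time
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(50_000))
    with Pool(4) as pool:
        t0 = time.perf_counter()
        pool.map(square, data, chunksize=1)       # fine-grained
        fine = time.perf_counter() - t0

        t0 = time.perf_counter()
        pool.map(square, data, chunksize=10_000)  # coarse-grained
        coarse = time.perf_counter() - t0

    print(f"fine-grained: {fine:.2f}s   coarse-grained: {coarse:.2f}s")
```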

Observed Speedup - Observed speedup of a code which has been parallelized, defined as:

speedup = (wall-clock time of serial execution) / (wall-clock time of parallel execution)

This is one of the simplest and most widely used indicators of a parallel program's performance.
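
Applying the definition is a single division; the wall-clock times below are made up for illustration.

```python
serial_time = 120.0    # seconds for the serial run (hypothetical)
parallel_time = 20.0   # seconds for the parallel run on 8 processors (hypothetical)

speedup = serial_time / parallel_time
print(f"observed speedup: {speedup:.1f}x")   # 6.0x, short of the ideal 8x
```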



Parallel Overhead - The amount of time required to coordinate parallel tasks, as opposed to doing useful work (a small measurement sketch follows the list below).

Parallel overhead can include factors such as:

* Task start-up time

* Synchronizations

* Data communications

* Software overhead imposed by parallel compilers, libraries, tools, operating system, etc.

* Task termination time
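
The sketch below measures just one of these factors, assuming Python processes as the tasks: eight workers that do no useful work at all, so the elapsed time is almost entirely start-up and termination overhead.

```python
import time
from multiprocessing import Process

def do_nothing():
    pass

if __name__ == "__main__":
    t0 = time.perf_counter()
    procs = [Process(target=do_nothing) for _ in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    overhead = time.perf_counter() - t0
    print(f"start-up + termination overhead for 8 empty tasks: {overhead:.3f}s")
```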

Massively Parallel - Refers to hardware comprising a given parallel system that has many processors.

The meaning of "many" keeps increasing, but currently the largest parallel computers are composed of hundreds of thousands of processors.

Embarrassingly Parallel - Solving many similar but independent tasks simultaneously; little to no need for coordination between the tasks.
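
A minimal sketch, with an invented workload: sixteen independent simulations that share nothing but the code, so the only coordination is handing out inputs and collecting outputs.

```python
import random
from multiprocessing import Pool

def simulate(seed):
    """Each run depends only on its own seed, never on the other runs."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000))

if __name__ == "__main__":
    with Pool() as pool:                      # one worker per available core by default
        results = pool.map(simulate, range(16))
    print(len(results), "independent results")
```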

Scalability - Refers to a parallel system's (hardware and/or software) ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include the following (a small sketch follows the list):

* Hardware - particularly memory-CPU bandwidth and network communications

* Application algorithm

* Parallel overhead

* Characteristics of your specific application and coding
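
The sketch below checks scalability empirically, with an invented CPU-bound workload: if the system scaled perfectly, the printed speedup would match the worker count; overhead and the other factors above keep it lower in practice.

```python
import time
from multiprocessing import Pool

def work_chunk(n):
    return sum(i * i for i in range(n))    # purely CPU-bound filler work

if __name__ == "__main__":
    chunks = [2_000_000] * 16

    t0 = time.perf_counter()
    for n in chunks:
        work_chunk(n)
    serial = time.perf_counter() - t0

    for workers in (1, 2, 4, 8):
        t0 = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(work_chunk, chunks)
        elapsed = time.perf_counter() - t0
        print(f"{workers} workers: speedup {serial / elapsed:.2f}x")
```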

Multi-core Processors - Multiple processors (cores) on a single chip.

Cluster Computing - Use of a combination of commodity units (processors, networks or SMPs) to build a parallel system.

Supercomputing / High Performance Computing - Use of the world's fastest, largest machines to solve large problems.

