Traditionally, software has been written for serial computation (a minimal code sketch follows this list):
* To be run on a single computer having a single Central Processing Unit (CPU);
* A problem is broken into a discrete series of instructions;
* Instructions are executed one after another;
* Only one instruction may execute at any moment in time.
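For illustration only (a minimal sketch, not part of the original text), the following C program carries out a computation serially: a single instruction stream on a single CPU, processing one array element at a time.

```c
/* Serial computation: one CPU, one instruction stream, executed in order. */
#include <stdio.h>

#define N 8

int main(void) {
    int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};
    long sum = 0;

    /* Instructions execute one after another; only one element
       is being processed at any moment in time. */
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("serial sum = %ld\n", sum);
    return 0;
}
```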
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem (a parallel version of the same sketch follows this list):
* To be run using multiple CPUs;
* A problem is broken into discrete parts that can be solved concurrently;
* Each part is further broken down to a series of instructions;
* Instructions from each part execute simultaneously on different CPUs.
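The sketch below parallelizes the same sum. OpenMP is used here as one common way to divide the work among CPUs; this is an illustrative assumption, since the text does not prescribe any particular programming interface.

```c
/* Parallel computation: the problem is broken into parts whose
   instructions execute simultaneously on different CPUs. */
#include <stdio.h>
#include <omp.h>

#define N 8

int main(void) {
    int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};
    long sum = 0;

    /* Loop iterations are divided among threads; each thread sums its
       own part concurrently, and the partial sums are combined. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("parallel sum = %ld (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

Compiled with OpenMP enabled (for example, gcc -fopenmp), the iterations are distributed across however many cores are available.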
The compute resources can include (see the sketch after this list):
* A single computer with multiple processors;
* An arbitrary number of computers connected by a network;
* A combination of both.
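To make the networked case concrete, here is a minimal MPI sketch (an assumption for illustration; MPI is one widely used interface, but the text does not name it). Each process may run on a different machine, on a different processor of the same machine, or any combination of both.

```c
/* Multiple cooperating processes, possibly spread across networked computers. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id            */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);     /* machine this process runs on */

    printf("Process %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

Launched with, for example, mpirun -np 4 ./a.out, the same program runs as four cooperating processes, whether they share one multiprocessor machine or are spread over several machines on a network.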
The computational problem usually exhibits characteristics such as the ability to be:
* Broken apart into discrete pieces of work that can be solved simultaneously;
* Executed as multiple program instructions at any moment in time;
* Solved in less time with multiple compute resources than with a single compute resource.
The Universe is Parallel:
Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time, yet within a sequence. For example:
* Galaxy formation
* Planetary movement
* Weather and ocean patterns
* Tectonic plate drift
* Rush hour traffic
* Automobile assembly line
* Building a space shuttle
* Ordering a hamburger at the drive-through.
Uses for Parallel Computing:
Historically, parallel computing has been considered to be "the high end of computing" and has been used to model difficult scientific and engineering problems found in the real world.
Today, commercial applications provide an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. For example:
* Databases, data mining
* Oil exploration
* Web search engines, web based business services
* Medical imaging and diagnosis
* Pharmaceutical design
* Management of national and multi-national corporations
* Financial and economic modeling
* Advanced graphics and virtual reality, particularly in the entertainment industry
* Networked video and multi-media technologies
* Collaborative work environments
Why Use Parallel Computing?
Main Reasons:
a. Save time and/or money: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel clusters can be built from cheap, commodity components.
b. Solve larger problems: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. For example:
* "Grand Challenge" (en.wikipedia.org/wiki/Grand_Challenge) problems requiring PetaFLOPS and PetaBytes of computing resources.
* Web search engines/databases processing millions of transactions per second
c. Provide concurrency: A single compute resource can only do one thing at a time, whereas multiple compute resources can do many things simultaneously. For example, the Access Grid (www.accessgrid.org) provides a global collaboration network where people from around the world can meet and conduct work "virtually".
d. Use non-local resources: Compute resources on a wide area network, or even the Internet, can be used when local compute resources are scarce. For example:
* SETI@home (setiathome.berkeley.edu) uses over 330,000 computers to deliver over 528 TeraFLOPS of compute power (as of August 4, 2008)
* Folding@home (folding.stanford.edu) uses over 340,000 computers to deliver 4.2 PetaFLOPS of compute power (as of November 4, 2008)
e. Limits to serial computing: Both physical and practical reasons pose significant constraints on simply building ever-faster serial computers:
* Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements (a short worked example follows this list).
* Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.
* Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
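The transmission-speed argument can be made concrete with a rough calculation. The sketch below uses the figures quoted above; the 3 GHz clock rate is an illustrative assumption, not a number from the text.

```c
/* How far can a signal travel during one clock cycle? */
#include <stdio.h>

int main(void) {
    double light_cm_per_ns  = 30.0;  /* speed of light (from the text above)      */
    double copper_cm_per_ns = 9.0;   /* copper wire limit (from the text above)   */
    double clock_ghz        = 3.0;   /* assumed clock rate, for illustration only */

    double cycle_ns = 1.0 / clock_ghz;   /* one cycle lasts ~0.33 ns at 3 GHz */

    printf("Cycle time at %.1f GHz: %.3f ns\n", clock_ghz, cycle_ns);
    printf("Max signal distance per cycle (light):  %.1f cm\n",
           light_cm_per_ns * cycle_ns);   /* ~10 cm */
    printf("Max signal distance per cycle (copper): %.1f cm\n",
           copper_cm_per_ns * cycle_ns);  /* ~3 cm  */
    return 0;
}
```

At a few gigahertz, a signal cannot cross more than a few centimeters of copper within a single cycle, which is why faster serial machines require ever more tightly packed processing elements.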
Decision: Current computer architectures are increasingly relying upon hardware-level parallelism to improve performance: