parallel computing, parallel processing


Published on April 14, 2009

Author: apex123

Source: slideshare.net

Description

A brief overview of parallel computing.

Parallel Computing


Traditionally, software has been written for serial computation:

To be run on a single computer having a single Central Processing Unit (CPU)

A problem is broken into a discrete series of instructions.

Instructions are executed one after another.

Only one instruction may execute at any moment in time.
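A minimal sketch of that serial model in Python (the sum-of-squares task and the function name serial_sum_of_squares are invented for illustration): one instruction stream, one CPU, one step at a time.

```python
# Serial computation: a single instruction stream on a single CPU.
# The workload (summing squares) is illustrative only.
def serial_sum_of_squares(n):
    total = 0
    for i in range(n):    # instructions execute one after another
        total += i * i    # only one instruction executes at any moment
    return total

print(serial_sum_of_squares(1_000_000))
```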


Limits to serial computing - both physical and practical reasons pose significant constraints to simply building ever faster serial computers:

Transmission speeds - the speed of a serial computer is directly dependent upon how fast data can move through hardware. Absolute limits are the speed of light (30 cm/nanosecond) and the transmission limit of copper wire (9 cm/nanosecond). Increasing speeds necessitate increasing proximity of processing elements.

Limits to miniaturization - processor technology is allowing an increasing number of transistors to be placed on a chip. However, even with molecular or atomic-level components, a limit will be reached on how small components can be.

Economic limitations - it is increasingly expensive to make a single processor faster. Using a larger number of moderately fast commodity processors to achieve the same (or better) performance is less expensive.
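A back-of-the-envelope check of the transmission-speed numbers above (a sketch; the assumption that a signal must cross the chip once per clock cycle is a simplification, not from the source):

```python
# How far a signal can travel in one clock cycle, using the limits quoted
# above. Simplifying assumption: one chip crossing per cycle.
LIGHT_CM_PER_NS = 30.0    # speed of light, from the text
COPPER_CM_PER_NS = 9.0    # copper-wire limit, from the text

for clock_ghz in (1, 3, 5):
    cycle_ns = 1.0 / clock_ghz
    print(f"{clock_ghz} GHz: {LIGHT_CM_PER_NS * cycle_ns:.1f} cm by light, "
          f"{COPPER_CM_PER_NS * cycle_ns:.1f} cm in copper per cycle")
```

At 3 GHz a signal in copper covers only 3 cm per cycle, which is why increasing clock speeds force processing elements ever closer together.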


What is Parallel Computing?

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.

To be run using multiple CPUs

A problem is broken into discrete parts that can be solved concurrently

Each part is further broken down to a series of instructions

Instructions from each part execute simultaneously on different CPUs
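A minimal sketch of those four points using Python's standard multiprocessing module (the summing workload and the chunking scheme are invented for illustration):

```python
# The problem (summing a large range) is broken into discrete parts; each
# part runs the same series of instructions, simultaneously on its own CPU.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker executes this series of instructions on its own part.
    return sum(chunk)

if __name__ == "__main__":
    data = range(1_000_000)
    n_parts = 4
    chunks = [list(data[i::n_parts]) for i in range(n_parts)]  # discrete parts

    with Pool(processes=n_parts) as pool:
        partials = pool.map(partial_sum, chunks)  # parts execute concurrently

    print(sum(partials))  # combine the partial results: 499999500000
```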

Why Do Parallel Computing?

Limits of single CPU computing

Available memory

Performance

Parallel computing allows us to:

Solve problems that don't fit in a single CPU's memory space

Solve problems that can't be solved in a reasonable time

We can run…

Larger problems

Faster

More cases

Parallel computing: use of multiple computers or processors working together on a common task.

Each processor works on its section of the problem

Processors can exchange information

[Figure: a grid of the problem to be solved, divided into four regions along x and y; CPUs #1-#4 each work on one region and exchange data across shared boundaries.]
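A hedged sketch of that decomposition, shrunk to a 1-D array split between two processes that swap boundary values over a multiprocessing Pipe (the worker function and its smoothing update are invented for illustration):

```python
# Each process owns one section of the "grid" and exchanges its edge value
# with its neighbor before updating the shared boundary point.
from multiprocessing import Process, Pipe

def worker(section, conn, neighbor_on_right):
    conn.send(section[-1] if neighbor_on_right else section[0])  # my edge
    ghost = conn.recv()                                          # their edge
    if neighbor_on_right:                    # toy smoothing at the boundary
        section[-1] = (section[-1] + ghost) / 2.0
    else:
        section[0] = (section[0] + ghost) / 2.0
    print(section)

if __name__ == "__main__":
    left, right = [1.0, 2.0, 3.0], [7.0, 8.0, 9.0]
    conn_l, conn_r = Pipe()  # one duplex channel between the two halves
    p1 = Process(target=worker, args=(left, conn_l, True))
    p2 = Process(target=worker, args=(right, conn_r, False))
    for p in (p1, p2): p.start()
    for p in (p1, p2): p.join()
```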


Basic Components of a Parallel (or Serial) Computer

[Figure: each node pairs a CPU with its own local memory (MEM); a parallel machine replicates many such CPU-MEM building blocks.]

Classes of parallel computers

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism. This classification is broadly analogous to the distance between basic computing nodes. These classes are not mutually exclusive.

Multicore computing

A multicore processor is a processor that includes multiple execution units ("cores"). These processors differ from superscalar processors, which can issue multiple instructions per cycle from one instruction stream (thread); by contrast, a multicore processor can issue multiple instructions per cycle from multiple instruction streams. Each core in a multicore processor can potentially be superscalar as well; that is, on every cycle, each core can issue multiple instructions from one instruction stream.
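A minimal sketch of multiple instruction streams on a multicore machine, using Python's standard library (busy_sum is an invented CPU-bound placeholder):

```python
# One independent instruction stream per detected core; the operating system
# can schedule each worker process on a different core.
import os
from concurrent.futures import ProcessPoolExecutor

def busy_sum(n):
    return sum(i * i for i in range(n))  # a CPU-bound instruction stream

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    print(f"detected {cores} cores")
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(busy_sum, [10**6] * cores))
    print(results[0])
```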

Symmetric multiprocessing

A symmetric multiprocessor (SMP) is a computer system with multiple identical processors that share memory and connect via a bus. Bus contention prevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors. Because of the small size of the processors and the significant reduction in the requirements for bus bandwidth achieved by large caches, such symmetric multiprocessors are extremely cost-effective, provided that a sufficient amount of memory bandwidth exists.
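A loose sketch of the shared-memory idea in Python (the counter workload is invented; the Lock stands in, very roughly, for the serialization that bus and memory contention impose on a real SMP):

```python
# Several processes update one counter that lives in shared memory.
from multiprocessing import Process, Value, Lock

def increment(counter, lock, times):
    for _ in range(times):
        with lock:               # contention point: one process at a time
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)      # an integer in memory shared by all workers
    lock = Lock()
    procs = [Process(target=increment, args=(counter, lock, 10_000))
             for _ in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(counter.value)         # 40000, correct only because of the lock
```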

Distributed computing

A distributed computer (also known as a distributed memory multiprocessor) is a distributed memory computer system in which the processing elements are connected by a network. Distributed computers are highly scalable.

Cluster computing

A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is a cluster implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network.
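A hedged sketch of distributed-memory message passing, assuming the third-party mpi4py package (launched with something like mpiexec -n 2 python demo.py; the payload is invented):

```python
# Each rank is a separate process with its own private memory, possibly on a
# different machine; data moves only through explicit messages.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"part": "boundary values"}, dest=1)  # explicit network send
    reply = comm.recv(source=1)
    print("rank 0 received:", reply)
elif rank == 1:
    data = comm.recv(source=0)
    comm.send("ack: " + data["part"], dest=0)
```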

Massively parallel processing

A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but they are usually larger, typically having far more than 100 processors. In an MPP, each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect.

Grid computing

Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, grid computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware, software that sits between the operating system and the application to manage network resources and standardize the software interface.
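A minimal sketch of the embarrassingly parallel workloads that grid computing favors (Monte Carlo estimation of pi is an invented example): every task is fully independent, so nothing needs to be communicated until the final combine.

```python
# Independent work units with no inter-task communication.
import random
from multiprocessing import Pool

def hits(n_samples):
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

if __name__ == "__main__":
    tasks = [100_000] * 8                    # eight independent work units
    with Pool() as pool:
        total = sum(pool.map(hits, tasks))   # combine only at the end
    print(4 * total / sum(tasks))            # rough estimate of pi
```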


Types of Parallelism: Two Extremes

Data parallel

Each processor performs the same task on different data

Example - grid problems

Task parallel

Each processor performs a different task

Example - signal processing

Most applications fall somewhere on the continuum between these two extremes (see the sketch after this list)

There are two other types of parallelism:

Bit-level parallelism

Instruction-level parallelism
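A hedged sketch contrasting the two extremes (all function names are invented placeholders): data parallelism maps the same function over partitioned data, while task parallelism runs different functions at the same time.

```python
from multiprocessing import Pool

def smooth(cell):            # data parallel: same task, different data
    return cell * 0.5

def detect_peaks(signal):    # task parallel: one distinct task...
    return max(signal)

def measure_noise(signal):   # ...and a different one, run concurrently
    return sum(abs(x) for x in signal) / len(signal)

if __name__ == "__main__":
    grid = [1.0, 2.0, 3.0, 4.0]
    signal = [0.1, -0.4, 0.9, -0.2]
    with Pool() as pool:
        print(pool.map(smooth, grid))              # data parallelism
        peaks = pool.apply_async(detect_peaks, (signal,))
        noise = pool.apply_async(measure_noise, (signal,))
        print(peaks.get(), noise.get())            # task parallelism
```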

