Parallel Computing

Published on October 5, 2008

Author: ameyawaghmare

Source: slideshare.net

Description

Please contact me to download this presentation. A comprehensive presentation on the field of Parallel Computing, whose applications are growing exponentially day by day. A useful seminar covering the basics, classification, and implementation thoroughly.
Visit www.ameyawaghmare.wordpress.com for more info

A Presentation on Parallel Computing - Ameya Waghmare (Roll No. 41, BE CSE). Guided by Dr. R. P. Adgaonkar (HOD), CSE Dept.

Parallel computing is a form of computation in which many instructions are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel).

Why is it required?

With the increased use of computers in every sphere of human activity, computer scientists are faced with two crucial issues today:

Processing has to be done faster than ever before

Larger and more complex computational problems need to be solved

Increasing the number of transistors as per Moore's Law isn't a solution, as it also drives up frequency scaling and power consumption.

Power consumption has been a major issue recently, as it causes the problem of processor heating.

The perfect solution is PARALLELISM, in hardware as well as in software.

Difference With Distributed Computing

When different processors/computers work on a single common goal, it is parallel computing.

E.g., ten men pulling a rope to lift one rock; supercomputers implement parallel computing.

Distributed computing is where several different computers work separately on a multi-faceted computing workload.

E.g., ten men pulling ten ropes to lift ten different rocks, or employees in an office each doing their own work.

Difference With Cluster Computing

A computer cluster is a group of linked computers, working together closely so that in many respects they form a single computer.

E.g., in an office of 50 employees, a group of 15 does some work, 25 do some other work, and the remaining 10 do something else.

Similarly, in a network of 20 computers, 16 work on a common goal, whereas 4 work on some other common goal.

Cluster Computing is a specific case of parallel computing.

Difference With Grid Computing

Grid Computing makes use of computers communicating over the Internet to work on a given problem.

E.g., when three persons, one from the USA, another from Japan, and a third from Norway, work together online on a common project.

Websites like Wikipedia, Yahoo! Answers, YouTube, and Flickr, or open-source OSes like Linux, are examples of grid computing.

Again, it serves as an example of parallel computing.

The Concept Of Pipelining

In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements.
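
As a rough illustration, the stages of a pipeline can be modelled in software with Python generators chained in series, so each stage consumes the output of the previous one. This is a minimal sketch; the stage names (`produce`, `square`, `offset`) are invented for the example, not taken from the slides.

```python
# Minimal sketch of a three-stage pipeline built from Python generators.
# Each stage is a data processing element whose input is the previous
# stage's output, mirroring the definition above.

def produce(data):
    for x in data:           # stage 1: emit raw values
        yield x

def square(stream):
    for x in stream:         # stage 2: transform each value
        yield x * x

def offset(stream, k):
    for x in stream:         # stage 3: final adjustment
        yield x + k

# Chain the stages in series and drain the pipeline.
pipeline = offset(square(produce([1, 2, 3, 4])), k=10)
print(list(pipeline))        # [11, 14, 19, 26]
```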

Approaches To Parallel Computing

Flynn’s Taxonomy

SISD (Single Instruction, Single Data)

SIMD (Single Instruction, Multiple Data)

MISD (Multiple Instruction, Single Data)

MIMD (Multiple Instruction, Multiple Data)

Approaches Based On Computation

Massively Parallel

Embarrassingly Parallel

Grand Challenge Problems

Massively Parallel Systems

It signifies the presence of many independent units, or entire microprocessors, that run in parallel.

The term 'massive' connotes hundreds, if not thousands, of such units.

Example: the Earth Simulator (the world's fastest supercomputer from 2002 to 2004).

Embarrassingly Parallel Systems

An embarrassingly parallel system is one for which no particular effort is needed to segment the problem into a very large number of parallel tasks.

Examples include surfing two websites simultaneously, or running two applications on a home computer.

They lie at one end of the spectrum of parallelisation, where tasks can be readily parallelised (see the sketch below).
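
A minimal sketch of an embarrassingly parallel workload, assuming Python's `multiprocessing.Pool`: the hypothetical `visit_site` task below stands in for any fully independent piece of work, so the pool needs no coordination between tasks.

```python
# Sketch: independent tasks mapped over a process pool with no
# communication between them - the embarrassingly parallel case.
from multiprocessing import Pool

def visit_site(name):
    # Placeholder for independent work (e.g., loading one website).
    return f"done: {name}"

if __name__ == "__main__":
    sites = ["site-a", "site-b", "site-c", "site-d"]
    with Pool(processes=4) as pool:
        results = pool.map(visit_site, sites)  # no task depends on another
    print(results)
```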

Grand Challenge Problems

A grand challenge is a fundamental problem in science or engineering, with broad applications, whose solution would be enabled by the application of high performance computing resources that could become available in the near future.

Grand Challenges were US policy terms set as goals in the late 1980s for funding high-performance computing and communications research, in part in response to the Japanese Fifth Generation (or Next Generation) 10-year project.

Types Of Parallelism

Bit-Level

Instruction-Level

Data

Task

Bit-Level Parallelism

When an 8-bit processor needs to add two 16-bit integers, it has to be done in two steps.

The processor must first add the 8 lower-order bits from each integer using the standard addition instruction,

Then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition (see the sketch below).
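
The two-step addition can be simulated in ordinary Python. This sketch assumes unsigned 16-bit operands; the low bytes are added first, and the carry out of that addition feeds the add-with-carry on the high bytes.

```python
# Sketch: how an 8-bit processor adds two 16-bit unsigned integers.

def add16_on_8bit(a, b):
    lo = (a & 0xFF) + (b & 0xFF)        # step 1: add the low-order bytes
    carry = lo >> 8                     # carry out of the low-byte add
    hi = (a >> 8) + (b >> 8) + carry    # step 2: add-with-carry on high bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF)

assert add16_on_8bit(0x12FF, 0x0001) == 0x1300  # carry propagates upward
```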

Instruction-Level Parallelism

The instructions given to a computer for processing can be divided into groups, or re-ordered and then processed without changing the final result.

This is known as instruction-level parallelism (ILP).

An Example

1. e = a + b

2. f = c + d

3. g = e * f

Here, instruction 3 is dependent on instructions 1 and 2.

However, instructions 1 and 2 can be processed independently, as sketched below.
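
Real ILP is exploited inside the processor, but the dependency structure can be mirrored with threads. A hedged sketch in which instructions 1 and 2 run concurrently and instruction 3 waits for both results:

```python
# Sketch: instructions 1 and 2 are independent, so they may run
# concurrently; instruction 3 must wait for both of their results.
from concurrent.futures import ThreadPoolExecutor

a, b, c, d = 1, 2, 3, 4
with ThreadPoolExecutor(max_workers=2) as pool:
    fe = pool.submit(lambda: a + b)   # instruction 1: e = a + b
    ff = pool.submit(lambda: c + d)   # instruction 2: f = c + d
    g = fe.result() * ff.result()     # instruction 3: g = e * f (waits on both)
print(g)                              # (1 + 2) * (3 + 4) = 21
```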

Data Parallelism

Data parallelism focuses on distributing the data across different parallel computing nodes.

It is also called loop-level parallelism.

An Illustration

In a data parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half of the matrices.

Since the two processors work in parallel, the job of performing matrix addition would take half the time of performing the same operation in serial using one CPU alone (see the sketch below).
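
A minimal sketch of this illustration, assuming two small matrices stored as lists of rows: two worker processes stand in for CPU A (top half) and CPU B (bottom half).

```python
# Sketch: data-parallel matrix addition - each worker adds one half
# of the rows, and the halves are concatenated at the end.
from multiprocessing import Pool

def add_rows(pair):
    rows_x, rows_y = pair
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(rows_x, rows_y)]

if __name__ == "__main__":
    A = [[1, 2], [3, 4], [5, 6], [7, 8]]
    B = [[8, 7], [6, 5], [4, 3], [2, 1]]
    mid = len(A) // 2
    halves = [(A[:mid], B[:mid]), (A[mid:], B[mid:])]  # CPU A's and CPU B's share
    with Pool(processes=2) as pool:
        top, bottom = pool.map(add_rows, halves)       # both halves in parallel
    print(top + bottom)  # [[9, 9], [9, 9], [9, 9], [9, 9]]
```

Note the same operation runs on different pieces of the data, which is what distinguishes this from the task-parallel example that follows.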

Task Parallelism

Task Parallelism focuses on distribution of tasks across different processors.

It is also known as functional parallelism or control parallelism.

An Example

As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the runtime of the execution (see the sketch below).
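
A minimal sketch of the two-task example using Python threads; the bodies of the hypothetical `task_a` and `task_b` are placeholders, not from the slides.

```python
# Sketch: task parallelism - two *different* tasks run simultaneously
# on two workers, rather than the same operation on split data.
import threading

def task_a():
    print("task A running")   # e.g., compress a file

def task_b():
    print("task B running")   # e.g., scan a directory

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start()        # both tasks proceed at the same time
ta.join(); tb.join()          # wait for both to finish
```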

Key Difference Between Data And Task Parallelism

Data Parallelism

It is the division of threads (processes), instructions, or tasks internally into sub-parts for execution.

A task ‘A’ is divided into sub-parts and then processed.

Task Parallelism

It is the division among threads (processes), instructions, or tasks themselves for execution.

A task ‘A’ and task ‘B’ are processed separately by different processors.

Implementation Of Parallel Computing In Software

When implemented in software (or rather, in algorithms), it is termed 'parallel programming'.

An algorithm is split into pieces and then executed, as seen earlier.

Important Points In Parallel Programming

Dependencies - a typical scenario is when line 6 of an algorithm depends on lines 2, 3, 4, and 5.

Application Checkpoints - like saving the algorithm's state, or creating a backup point.

Automatic Parallelisation - identifying dependencies and parallelising algorithms automatically. This has achieved only limited success.

Implementation Of Parallel Computing In Hardware

When implemented in hardware, it is called 'parallel processing'.

Typically, a chunk of the execution load is divided up for processing by units like cores, processors, CPUs, etc.

An Example: Intel Xeon Series Processors

References

http://portal.acm.org/citation.cfm?id=290768&coll=portal&dl=ACM

http://www-users.cs.umn.edu/~karypis/parbook/

www.cs.berkeley.edu/~yelick/cs267-sp04/lectures/01/lect01-intro

www.cs.berkeley.edu/~demmel/cs267_Spr99/Lectures/Lect_01_1999b

http://www.intel.com/technology/computing/dual-core/demo/popup/dualcore.swf

www.parallel.ru/ftp/computers/intel/xeon/24896607.pdf

www.intel.com

ANY QUERIES?

Thank You!
