
CSC 2013: Exascale in the US (Towns)

Category: Technology

Published on March 3, 2014

Author: jtownsil

Source: slideshare.net

Description

Conference on Scientific Computing 2013 (CSC 2013)

Invited talk.

Title: Exascale in US

Exascale in the US
John Towns
Director, Collaborative eScience Program Office, NCSA
jtowns@ncsa.illinois.edu
National Center for Supercomputing Applications, University of Illinois at Urbana–Champaign

Setting Some Context
• It is a bad idea to project too far into the future based on current technology
  • but it provides a basis for worst-case scenarios
• Challenge: build a system that will support sustained 1.0 Eflop/s performance
  • assume code can sustain 5% of peak performance, so we need a 20 Eflop/s system
  • assume no constraints on power, parallelism, MTBF, scale of interconnects, …
  • assume the application can be written, i.e. ignore scaling, thread counts, message passing issues, memory size constraints, languages, libraries, development tools, …
• Let’s try to do this with today’s technology
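The sizing argument here is simple arithmetic. A minimal sketch (peak figures taken from the system slides that follow):

```python
# Back-of-envelope sizing for a sustained 1.0 Eflop/s system, using the
# slide's assumption that a real code sustains 5% of peak performance.

SUSTAINED_TARGET_EFLOPS = 1.0  # required sustained performance
EFFICIENCY = 0.05              # assumed fraction of peak a real code achieves

peak_needed_eflops = SUSTAINED_TARGET_EFLOPS / EFFICIENCY
print(f"Required peak: {peak_needed_eflops:.0f} Eflop/s")  # 20 Eflop/s

# How many copies of each current US system (by peak) would be needed?
peak_pflops = {"Titan": 27.0, "Sequoia": 20.0, "Mira": 10.0,
               "Stampede": 9.6, "Blue Waters": 13.3}
for name, pf in peak_pflops.items():
    scale = peak_needed_eflops * 1000 / pf  # 1 Eflop/s = 1000 Pflop/s
    print(f"{name:12s} would need a ~{scale:,.0f}x scale-up")
```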

Current Systems in the US: Titan (Cray XK7 @ ORNL)
• Performance
  • peak: 27 Pflop/s
  • HPL: 17.6 Pflop/s (#2 in 11/13 TOP500)
  • sustained: ??
• Floating Point Support
  • 18,688 compute nodes
  • 299,008 AMD Opteron “cores” (16 “cores”/node)
  • 18,688 NVIDIA Kepler (K20) GPUs (1 GPU/node)
• Data Support
  • 710 TiB of memory: 584 TiB with CPUs (2 GB/core), 126 TiB on GPUs (6 GB/GPU)
  • 10 PB of disk storage
• Facility Support
  • 4,352 sqft / 404 m²
  • 8.2 MW

Current Systems in the US: Sequoia (IBM BlueGene/Q @ LLNL)
• Performance
  • peak: 20 Pflop/s
  • HPL: 17.2 Pflop/s (#3 in 11/13 TOP500)
  • sustained: ??
• Floating Point Support
  • 98,304 compute nodes
  • 1.57M PowerPC A2 cores (16 cores/node)
• Data Support
  • 1.5 PiB of memory (1 GB/core)
  • 50 PB of disk storage
• Facility Support
  • 3,000 sqft / 280 m²
  • 7.9 MW

Current Systems in the US: Mira (IBM BlueGene/Q @ ANL)
• Performance
  • peak: 10 Pflop/s
  • HPL: 8.59 Pflop/s (#5 in 11/13 TOP500)
  • sustained: ??
• Floating Point Support
  • 49,152 compute nodes
  • 786,432 PowerPC A2 cores (16 cores/node)
• Data Support
  • 768 TiB of memory (1 GB/core)
  • 35 PiB of disk storage
• Facility Support
  • 1,500 sqft / 140 m² (estimated)
  • 4.0 MW

Current Systems in the US: Stampede (Dell C8220 @ TACC)
• Performance
  • peak: 9.6 Pflop/s
  • HPL: 5.17 Pflop/s (#7 in 11/13 TOP500)
  • sustained: ??
• Floating Point Support
  • 6,400 compute nodes
  • 102,400 Intel SandyBridge cores (16 cores/node)
  • 6,880 Xeon Phi co-processors (1 Phi/node)
• Data Support
  • 270 TiB of memory (2 GB/core for most nodes)
  • 14 PB of disk storage
• Facility Support
  • 8,000 sqft / 745 m²
  • 4.5 MW

Current Systems in the US: Blue Waters (Cray XE6/XK7 @ NCSA)
• Performance
  • peak: 13.3 Pflop/s
  • HPL: n/a (NOT in TOP500)
  • sustained: 1.0 Pflop/s
• Floating Point Support
  • 26,864 compute nodes
  • 396,032 AMD Opteron “cores” (16 “cores”/node)
  • 4,224 NVIDIA Kepler (K20) GPUs (1 GPU/node on those nodes)
• Data Support
  • 1.5 PiB of memory (2 GB/core)
  • 26.4 PiB of disk storage
• Facility Support
  • sqft / m²: n/a
  • 14 MW
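The per-system figures above support a quick efficiency comparison. A short sketch, using only numbers copied from the slides (Blue Waters reports no HPL result):

```python
# Quick efficiency comparison across the systems above; peak, HPL, and
# power figures are copied from the slides (Blue Waters has no HPL run).

systems = {
    # name: (peak Pflop/s, HPL Pflop/s or None, power MW)
    "Titan":       (27.0, 17.6,  8.2),
    "Sequoia":     (20.0, 17.2,  7.9),
    "Mira":        (10.0, 8.59,  4.0),
    "Stampede":    (9.6,  5.17,  4.5),
    "Blue Waters": (13.3, None, 14.0),
}

for name, (peak, hpl, mw) in systems.items():
    if hpl is None:
        print(f"{name:12s} HPL: n/a")
        continue
    # HPL/peak is the usual Linpack efficiency; HPL/MW is delivered
    # Linpack performance per megawatt of facility power.
    print(f"{name:12s} HPL/peak: {hpl / peak:5.1%}   "
          f"HPL per MW: {hpl / mw:.2f} Pflop/s")
```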

Brain Dead Projection
• Simply build larger versions of current systems
  • of course this is stupid, but we can learn a few things
• Assume:
  • a real application can get 5% of peak on the system (optimistic!)
  • applications can be scaled to the necessary levels

Scaling each system to a 20 Eflop/s peak:

              CPU Execution  Accelerators  Threads   Memory    Power    Space
              Cores (M)      (M)           (B)       (PiB)     (MW)     (M sqft)
Titan           226.8          14.2          35.6      525.9    6,220      3.3
Stampede        218.5          14.7           1.9      562.5    9,600     17.1
Blue Waters     609.8           6.5          16.8    2,309.8   21,558      7.7
Sequoia       1,607.7           0.0           1.6    1,536.0    8,090      3.1

• No avoiding O(1B) threads of execution
• A commodity solution will not get there first
• Likely need some combination of lower-power processors and accelerators
  • ARM + Phi, anyone?
• Memory will likely need to be << 0.5 GB/thread
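The projection behind this table can be sketched as a linear scale-up of each system's published specs. Base figures come from the preceding slides; the slide's own scale factors differ slightly (e.g. Titan appears to be scaled by roughly 758x rather than 20000/27), so these results only approximate the table. Blue Waters floor space is omitted because the slide leaves it blank.

```python
# "Brain dead" projection: scale each system's published specs linearly
# until its peak reaches 20 Eflop/s. Only an approximation of the table;
# the slide's exact per-system factors differ slightly.

TARGET_PEAK_PFLOPS = 20_000  # 20 Eflop/s peak for 1 Eflop/s sustained at 5%

base = {
    # name: (peak Pflop/s, CPU cores, accelerators, memory TiB, power MW)
    "Titan":       (27.0,   299_008, 18_688,   710,  8.2),
    "Stampede":    (9.6,    102_400,  6_880,   270,  4.5),
    "Blue Waters": (13.3,   396_032,  4_224, 1_536, 14.0),
    "Sequoia":     (20.0, 1_572_864,      0, 1_536,  7.9),
}

for name, (peak, cores, accel, mem_tib, mw) in base.items():
    f = TARGET_PEAK_PFLOPS / peak  # linear scale-up factor
    print(f"{name:12s} x{f:6,.0f}  cores {cores * f / 1e6:7.1f} M  "
          f"accel {accel * f / 1e6:5.1f} M  "
          f"memory {mem_tib * f / 1024:7.1f} PiB  power {mw * f:8,.0f} MW")
```

Even this crude sketch shows why the slide concludes that megawatt budgets, not flops, are the binding constraint: every scaled system lands in the multi-gigawatt range.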

Step-by-Step Instructions
• How to create an exascale computing system in 1,200 easy steps …

Trying to Move Forward in the US: Politics
• “Exascale” fell out of favor on the US political scene
  • it is recovering, but the lapse slowed things down
• Congress has recently become much more interested in the scale of systems being deployed elsewhere around the world
• Legislation recently introduced to provide funding to support moving forward
  • suggests DOE-university partnerships
  • could result in US$220M

Trying to Move Forward in the US: Actions
• DARPA UHPC program: 1 PFLOPS rack at 57 KW in 2015 (50 GFLOPS/W)
  • http://www.darpa.mil/Our_Work/MTO/Programs/Ubiquitous_High_Performance_Computing_(UHPC).aspx
  • 3 phases, like HPCS
  • pushed by previous DARPA leadership; seems to have less support from current leadership
• DARPA studies:
  • ExtremeScale Hardware Study (Kogge): http://users.ece.gatech.edu/~mrichard/ExascaleComputingStudyReports/exascale_final_report_100208.pdf
  • ExtremeScale Software Study (Sarkar): http://users.ece.gatech.edu/~mrichard/ExascaleComputingStudyReports/ECSS%20report%20101909.pdf
  • ExtremeScale Resiliency Study (Elnozahy): http://institutes.lanl.gov/resilience/docs/IBM%20Mootaz%20White%20Paper%20System%20Resilience.pdf
• NSF efforts: no word…

Trying to Move Forward in the US: Community Efforts
• NSF supporting Big Data and Extreme-Scale Computing: http://www.exascale.org/bdec/
  • US + EU + Japan; builds on IESP
  • looks at crosscutting issues of Big Data and extreme-scale computing
• DOE has sponsored a series of workshops recently:
  • Workshop on Modeling & Simulation of Exascale Systems & Applications (Sep 2013): http://hpc.pnl.gov/modsim/2013/
  • Workshop on Applied Mathematics Research for Exascale Computing (Aug 2013): https://collab.mcs.anl.gov/display/examath/ExaMath13+Workshop
  • Productive Programming Models for Exascale (Aug 2012): http://xsci.pnnl.gov/ppme/
