Supercomputing

Published on October 2, 2007

Author: Danielle

Source: authorstream.com

VORTONICS: Vortex Dynamics on Transatlantic Federated Grids
US-UK TG-NGS joint projects, supported by NSF, EPSRC, and TeraGrid

Vortex Cores
- Evident coherent structures in Navier-Stokes flow
- Intuitively useful: a tornado or a smoke ring
- Theoretically useful: helicity and linking number
- No single agreed-upon mathematical definition
- Difficulties with visualization
- Vortex interactions poorly understood…

Scientific & Computational Challenges
Physical challenges: reconnection and dynamos
- Vortical reconnection governs the establishment of steady state in Navier-Stokes turbulence
- Magnetic reconnection governs heating of the solar corona
- The astrophysical dynamo problem
- The exact mechanisms and space/time scales are unknown and represent important theoretical challenges
Mathematical challenges:
- Identification of vortex cores, and discovery of new topological invariants associated with them
- Discovery of new and improved analytic solutions of the Navier-Stokes equations for interacting vortices
Computational challenges:
- Enormous problem sizes, memory requirements, and long run times
- Algorithmic complexity scales as the cube of Re
- Substantial postprocessing for vortex-core identification
- The largest present runs, and most future runs, will require geographically distributed domain decomposition (GD3)
- Is GD3 on grids a sensible approach?
- Homogeneous turbulence driven by forcing of Arnold-Beltrami-Childress (ABC) form (a small sketch of the ABC field and its helicity follows these slides)

Simulations to Study Reconnection
- Aref & Zawadzki (1992) presented numerical evidence that two nearby elliptical vortex rings will partially link
- A benchmark problem in vortex dynamics
- Used the vortex-in-cell (VIC) method for 3D Euler flow
- Some numerical diffusion is associated with the VIC method, but it is very small

Example: Hopf Link
- Two linked circular vortex tubes as the initial condition
- Lattice Boltzmann algorithm for Navier-Stokes with very low viscosity (0.002 in lattice units)
- ELI variational result in dark blue and red; vorticity thresholding in light blue
- The dark blue and red curves do not unlink on the time scale of this simulation!

Example: Aref & Zawadzki's Ellipses: Front View
- Parameters obtained by correspondence with Aref & Zawadzki
- Lattice Boltzmann simulation with very low viscosity
- They do not link on the time scale of this simulation!

Same Ellipses: Side View
- Note that not all minima are shown in the late stages of this evolution, only the time continuation of the original pair of ellipses
- Again: they do not link on the time scale of this simulation!
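The slides above mention helicity and linking number as theoretically useful handles on vortex cores, and forcing of Arnold-Beltrami-Childress (ABC) form for driving homogeneous turbulence. As a minimal sketch (not the VORTONICS code itself), the snippet below builds an ABC velocity field on a small periodic grid with NumPy, computes its vorticity with spectral derivatives, and evaluates the helicity, the volume integral of u·ω; for the unit-wavenumber ABC field the vorticity equals the velocity (a Beltrami flow), which the snippet checks numerically. The grid size and the coefficients A, B, C are arbitrary choices, not values from the presentation.

```python
import numpy as np

# Minimal illustration (not the VORTONICS code): build an ABC velocity field
# on a periodic grid, take its curl spectrally, and evaluate the helicity
# H = integral of u . omega over the box. Grid size and A, B, C are assumed.
N = 64                       # lattice points per dimension (assumed)
L = 2.0 * np.pi              # periodic box size
A, B, C = 1.0, 1.0, 1.0      # ABC coefficients (assumed)

x = np.linspace(0.0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# ABC flow: u = (A sin z + C cos y, B sin x + A cos z, C sin y + B cos x)
u = np.stack([A * np.sin(Z) + C * np.cos(Y),
              B * np.sin(X) + A * np.cos(Z),
              C * np.sin(Y) + B * np.cos(X)])

k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # integer wavenumbers for this box
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")

def deriv(f, K):
    """Spectral derivative of the periodic field f along the direction of K."""
    return np.real(np.fft.ifftn(1j * K * np.fft.fftn(f)))

# Vorticity omega = curl u
omega = np.stack([deriv(u[2], KY) - deriv(u[1], KZ),
                  deriv(u[0], KZ) - deriv(u[2], KX),
                  deriv(u[1], KX) - deriv(u[0], KY)])

dV = (L / N) ** 3
print("helicity        =", np.sum(u * omega) * dV)
# Beltrami check: for the unit-wavenumber ABC field, curl u = u exactly.
print("max |omega - u| =", np.abs(omega - u).max())
```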
Lattice Remapping, Fourier Resizing, and Computational Steering
- At its lowest level, VORTONICS contains a general remapping library for dynamically changing the layout of the computational lattice across the processors (pencils, blocks, slabs) using MPI
- All data on the computational lattice can be Fourier resized (FFT, augmentation or truncation in k space, inverse FFT) as it is remapped (a single-process sketch of this resizing step follows these slides)
- All data-layout features are dynamically steerable
- VTK is used for visualization (each rank computes polygons locally)
- Grid-enabled with MPICH-G2, so that simulation, visualization, and steering can be run anywhere, or even across sites

Vortex Generator Component
- Given a parametrization of a knot or link (future: "draw" a vortex knot)
- Superpose the contributions from each
- Each site on the 3D grid performs a line integral
- Divergenceless and parameter-independent
- Periodic boundary conditions require an Ewald-like sum over image knots
- Poisson solve (FFT) to get the velocity field

Components for Fluid Dynamics
Navier-Stokes codes:
- Multiple-relaxation-time lattice Boltzmann
- Entropic lattice Boltzmann
- Pseudospectral Navier-Stokes solver
All codes parallelized with MPI (MPICH-G2):
- Domain decomposition
- Halo swapping

Components for Visualization: Extremal Line Integral (ELI) Method
- Intuition: the line integral of vorticity along a vortex core is large
- Definition: a vortex core is a curve along which the line integral of vorticity is a local maximum in the space of all curves in the fluid domain, with appropriate boundary conditions:
  - For a smoke ring, periodic BCs
  - For a tornado, or the trailing vortex on an airplane wing, one end is attached to a zero-velocity boundary and the other is at infinity
  - For a "hairpin" vortex, both ends are attached to the boundary
- The result is a one-dimensional curve along the vortex core
- Two references available (Phil. Trans. & Physica A)

ELI Algorithm
- Evolve the curve in "fictitious time" t
- A Ginzburg-Landau equation for which the line integral is a Lyapunov functional
- An "equilibrium" of the GL equation is a vortex core

Computational Steering
- All components use computational steering
- Almost all parameters are steerable: time step, frequency of checkpoints, outputs, logs, graphics, stop and restart, read from checkpoint, even the spatial lattice dimensions (dynamic lattice resizing) and the halo thickness

Scenarios for Using the TFD Toolkit
- Run with periodic checkpointing until a topological change is noticed
- Rewind to the last checkpoint before the topological change; refine the spatial and temporal discretization and the viscosity
- Postprocessing of the vorticity field and the search for vortex cores can be migrated
- All components are portable and may run locally or on geographically separated hardware

Cross-Site Runs Before, During, and After SC05
Federated grids:
- US TeraGrid: NCSA, San Diego Supercomputer Center, Argonne National Laboratory, Texas Advanced Computing Center, Pittsburgh Supercomputing Center
- UK National Grid Service: CSAR
Task distribution:
- GD3: is it sensible for large computational lattices?

Run Sizes to Date / Performance
- Multiple-relaxation-time lattice Boltzmann (MRTLB) model
- 600,000 site updates per second (SUPS) per processor when run on one multiprocessor
- Performance scales linearly with the number of processors when run on one multiprocessor
- 3D lattice sizes up to 645³ run prior to SC05 across six sites: NCSA, SDSC, ANL, TACC, PSC, CSAR
- 528 CPUs to date, and larger runs in progress as we speak!
- Amount of data injected into the network: the runs are strongly bandwidth limited
- Effective SUPS/processor is reduced by a factor approximately equal to the number of sites
- Therefore total SUPS is approximately constant as the problem grows in size
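The remapping slide above describes Fourier resizing of lattice data: a forward FFT, augmentation or truncation in k space, and an inverse FFT. The sketch below shows that operation for a single 3D scalar field in NumPy on one process; it is a simplified stand-in, since the actual library performs the equivalent steps on MPI-distributed data during a remap. The function name fourier_resize and the example field are illustrative, not taken from VORTONICS.

```python
import numpy as np

def fourier_resize(field, new_shape):
    """Resample a periodic 3D scalar field to new_shape: forward FFT,
    augmentation (zero-padding) or truncation of the centered spectrum in
    k space, inverse FFT. Single-process sketch; VORTONICS performs the
    equivalent operation on MPI-distributed lattices during a remap."""
    old_shape = field.shape
    spec = np.fft.fftshift(np.fft.fftn(field))      # centered spectrum
    out = np.zeros(new_shape, dtype=complex)

    # Copy the overlapping low-wavenumber block between the two spectra.
    src, dst = [], []
    for n_old, n_new in zip(old_shape, new_shape):
        n_keep = min(n_old, n_new)
        c_old, c_new = n_old // 2, n_new // 2
        src.append(slice(c_old - n_keep // 2, c_old - n_keep // 2 + n_keep))
        dst.append(slice(c_new - n_keep // 2, c_new - n_keep // 2 + n_keep))
    out[tuple(dst)] = spec[tuple(src)]

    # Back to real space; rescale so grid-point amplitudes are preserved.
    resized = np.fft.ifftn(np.fft.ifftshift(out))
    return np.real(resized) * (np.prod(new_shape) / np.prod(old_shape))

# Example: refine a smooth periodic field from 32^3 to 48^3 lattice points;
# passing a smaller shape instead would truncate (coarsen) the field.
x = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
coarse = np.sin(X) * np.cos(2.0 * Y) + np.sin(3.0 * Z)
fine = fourier_resize(coarse, (48, 48, 48))
print(coarse.shape, "->", fine.shape)
```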
Discussion / Performance Metric
- We are aiming for lattice sizes that cannot reside at any one supercomputing center, but…
- Bell, Gray, and Szalay, "PetaScale Computational Systems: Balanced CyberInfrastructure in a Data-Centric World" (September 2005):
  - If data can be regenerated locally, don't send it over the grid (10^5 ops/byte)
  - Higher disk-to-processing ratios; large disk farms
- Thought experiment: an enormous lattice, local to one SC center, handled by swapping n sublattices to a disk farm
  - If we cannot exceed this performance, it is not worth using the Grid for GD3
  - Make the very optimistic assumption that disk access time is not limiting
  - Clearly the total SUPS is constant, since it is one single multiprocessor
  - Therefore SUPS/processor degrades by 1/n
- We can do that now; that is precisely the scaling that we see now. GD3 is a win! And things are only going to improve… (the arithmetic is spelled out in the sketch after the Conclusions slide)
- Improvements in store:
  - UDP with added reliability (UDT) in MPICH-G2 will improve bandwidth
  - Multithreading in MPICH-G2 will overlap communication with computation to hide latency and bulk data transfers
  - Disk swap in the volume, interprocessor communication on the surface, keep it in the processors!

Conclusions
- GD3 is already a win on today's TeraGrid/NGS, with today's middleware
- With improvements to MPICH-G2, TeraGrid infrastructure, and middleware, GD3 will become still more desirable
- The TeraGrid will enable scientific computation with larger lattice sizes than have ever been possible
- It is worthwhile to consider algorithms that push the envelope in this regard, including relaxation of PDEs in 3+1 dimensions
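The performance-metric argument compares cross-site GD3 runs against a thought experiment in which a single center holds the enlarged lattice by swapping n sublattices to a disk farm, concluding that both give roughly constant total SUPS and hence SUPS/processor falling like 1/n. The arithmetic below spells out one reading of that comparison; the 600,000 SUPS/processor, six sites, and 528 CPUs come from the slides, while the even split of processors across sites and the normalization choice are assumptions.

```python
# Illustrative arithmetic for the GD3 performance argument; values marked
# "from the slides" are quoted there, the even split across sites is assumed.
sups_per_proc_single = 600_000   # from the slides: SUPS/processor on one multiprocessor
n_sites = 6                      # from the slides: NCSA, SDSC, ANL, TACC, PSC, CSAR
total_procs = 528                # from the slides: CPUs used to date
procs_per_site = total_procs // n_sites   # assumed even split (88 per site)

# Disk-swap thought experiment: one center holds an n-times-larger lattice by
# swapping n sublattices to a disk farm. Total SUPS is still that of a single
# multiprocessor, so, normalized by the processor count a fully in-memory
# distributed run would need, SUPS/processor degrades by 1/n.
total_sups_disk = sups_per_proc_single * procs_per_site
per_proc_disk = total_sups_disk / total_procs

# Observed GD3 scaling (per the slides): effective SUPS/processor drops by a
# factor of roughly the number of sites, so total SUPS is again constant.
per_proc_gd3 = sups_per_proc_single / n_sites
total_sups_gd3 = per_proc_gd3 * total_procs

print(f"disk-swap bound: {per_proc_disk:,.0f} SUPS/proc, {total_sups_disk:,.0f} total")
print(f"GD3 as observed: {per_proc_gd3:,.0f} SUPS/proc, {total_sups_gd3:,.0f} total")
# The two match, which is the sense in which "GD3 is a win": today's cross-site
# runs already equal the optimistic single-center disk-swap bound, and UDT and
# overlapped communication in MPICH-G2 can only improve on it.
```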
