QsNetIII, An HPC Interconnect for PetaScale Systems


Published on February 16, 2009

Author: federicapisani

Source: slideshare.net



QsNetIII: An HPC Interconnect for PetaScale Systems
Duncan Roweth, Quadrics Ltd
ISC08, Dresden, June 2008

Quadrics Background
• Develops interconnect products for the HPC market
  – HPC Linux systems
  – AlphaServer SC systems
• Quadrics is owned by the Finmeccanica group
• Quadrics will be 12 years old in July

Interconnect Network – QsNet
• QsNetIII Network
  – Multi-stage switch network
  – Evolution of the QsNetII design
  – Increased use of commodity hardware
  – Increasing support for standard software
• QsNetIII Components
  – ASICs Elan5 and Elite5
  – Adapters, switches, cables
  – Firmware, drivers, libraries
  – Diagnostics, documentation

Elan5 Adapter Overview
• QsNetIII 2 × 25 Gbit/s links (CX4)
• PCIe, PCIe2 host interface
• Multiple packet engines
• 512KB of high-bandwidth on-chip local memory
• SDRAM interface to optional local memory
• Buffer manager, object cache
[Block diagram: Elan5 adapter with two CX4/QsNetIII links; packet engines each with a 16K instruction cache and 9K data buffers; fabric; bridge; host interface; local memory and local functions; object cache tags; TLB; buffer manager; command launch; free list; external SDRAM interface; on-chip RAM of 16K × 8 × 8 banks = 1MB with ECC; PLL/clocks; SERDES; external EEPROM; DDRII; PCIe ×16 lanes]

QsNetIII Adapter Overview
• QM700, PCIe x16
• 128MB adapter memory
• 2 QSFP links
• Half-height, low-profile
• Adapter variants
  – PCIe Gen2
  – Blade formats
  – 10Gbit/s Ethernet (10GBase-CX4)

Elite5 – Overview
• Physical layer: DDR XAUI
  – 4 × 6.25Gbit/s (2.5Gbytes/s) in each direction
• 32-way crosspoint router
• 32 virtual channels per link
• Fat tree or mesh topologies
• Adaptive routing
• Broadcast & barrier support
• Memory-mapped stats & error counters accessed via the control network
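
The 2.5 Gbytes/s per direction figure is consistent with the quoted lane rates if XAUI's standard 8b/10b line encoding is assumed; a minimal check of the arithmetic:

    /* Back-of-envelope check of the Elite5 link rate quoted above.
     * Assumes XAUI-style 8b/10b encoding (10 line bits per data byte);
     * any further framing overhead of QsNetIII links is ignored here. */
    #include <stdio.h>

    int main(void)
    {
        const double lanes         = 4.0;
        const double lane_gbit     = 6.25;                  /* raw signalling rate per lane */
        const double raw_gbit      = lanes * lane_gbit;     /* 25 Gbit/s raw per direction  */
        const double payload_gbit  = raw_gbit * 8.0 / 10.0; /* 8b/10b -> 20 Gbit/s of data  */
        const double payload_gbyte = payload_gbit / 8.0;    /* 2.5 Gbytes/s                 */

        printf("raw: %.1f Gbit/s, payload: %.1f Gbit/s = %.1f Gbytes/s per direction\n",
               raw_gbit, payload_gbit, payload_gbyte);
        return 0;
    }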

QsNetIII Adaptive Routing
• Packet-by-packet dynamic routing
  – Single-cycle routing decision
• Selects route based on
  – Link state, errors etc.
  – Number of pending acks
• High-radix switches
  – 2 routing decisions for 2048 nodes
• More flexible than QsNetII
  – Operates on groups of links
  – Can adaptively route up or down
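
As an illustration of the routing policy described above (skip links that are down or erroring, prefer the link with the fewest outstanding acknowledgements), here is a small software sketch. The Elite5 makes this decision in hardware in a single cycle; the types and fields below are hypothetical.

    /* Illustrative sketch only: pick an output link the way the slide
     * describes.  Not the Elite5 hardware logic. */
    #include <limits.h>
    #include <stdio.h>

    struct link_state {
        int up;            /* link trained and in the route tables  */
        int error;         /* recent CRC/transmission errors        */
        int pending_acks;  /* packets sent and not yet acknowledged */
    };

    /* Return the index of the best candidate link, or -1 if none is usable. */
    int choose_uplink(const struct link_state *links, int nlinks)
    {
        int best = -1;
        int best_acks = INT_MAX;

        for (int i = 0; i < nlinks; i++) {
            if (!links[i].up || links[i].error)
                continue;                        /* adaptive routing avoids failed links */
            if (links[i].pending_acks < best_acks) {
                best_acks = links[i].pending_acks;
                best = i;
            }
        }
        return best;
    }

    int main(void)
    {
        struct link_state links[4] = {
            { 1, 0, 3 },   /* up, busy             */
            { 1, 0, 1 },   /* up, least loaded     */
            { 0, 0, 0 },   /* down                 */
            { 1, 1, 0 },   /* up but erroring      */
        };
        printf("chosen link: %d\n", choose_uplink(links, 4));   /* prints 1 */
        return 0;
    }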

Bandwidth scalability – 1024 nodes
• Bandwidth achieved when 1024 nodes all communicate at the same time
• QsNetII provides better average bandwidth and a much narrower spread between best- and worst-case performance

    System    Interconnect    Min    Max    Average
    Atlas     InfiniBand       95    762    263
    Thunder   QsNetII         248    403    369

Data from Lawrence Livermore National Lab, published at the Sonoma OpenFabrics workshop, June 2007.
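
The kind of measurement summarised above, where every node drives traffic at once and each node's achieved bandwidth is recorded, can be sketched with a simple MPI exchange test. This is only an illustration under assumed parameters (4MB messages, paired ranks, an even number of ranks), not the benchmark LLNL used.

    /* Minimal sketch of an "all nodes communicate at once" bandwidth test. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MSG_BYTES (4 * 1024 * 1024)
    #define REPS      100

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int partner = rank ^ 1;                 /* pair ranks 0-1, 2-3, ...; assumes even size */
        char *sbuf = malloc(MSG_BYTES), *rbuf = malloc(MSG_BYTES);
        memset(sbuf, 0, MSG_BYTES);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++)
            MPI_Sendrecv(sbuf, MSG_BYTES, MPI_BYTE, partner, 0,
                         rbuf, MSG_BYTES, MPI_BYTE, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        double secs = MPI_Wtime() - t0;

        printf("rank %d: %.0f Mbytes/s\n", rank,
               (double)MSG_BYTES * REPS / secs / 1e6);

        free(sbuf); free(rbuf);
        MPI_Finalize();
        return 0;
    }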

QsNetIII Device Overview
• Semi-custom ASICs, 500MHz system clock
• Manufacturing partner: LSI/TSMC, G90 process
• High-performance BGA packages
  – Elan5: 672-pin, 17W
  – Elite5: 982-pin, 18W

QsNetIII – Federated Network Switches
• Node switch chassis
  – 128 links up, 128 down
• Same chassis provides multiple top-switch configurations:
  – 64 × 4 for 512-way systems
  – 32 × 8 for 1024-way systems
  – 16 × 16 for 2048-way systems
  – 8 × 32 for 4096-way systems
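
Reading each configuration as (top switches per chassis) × (links per top switch), the four pairs each account for the 256 external links of a chassis. The sketch below works through that sizing arithmetic; this interpretation is an assumption, not stated explicitly on the slide.

    /* Fat-tree sizing sketch.  Assumes a 256-link chassis (128 up + 128
     * down when used as a node switch) and one top-switch link to each
     * node-switch chassis.  Interpretation of the slide, not a spec. */
    #include <stdio.h>

    int main(void)
    {
        const int chassis_links = 256;
        const int node_links_per_chassis = 128;        /* links down to nodes */
        const int system_sizes[] = { 512, 1024, 2048, 4096 };

        for (unsigned i = 0; i < sizeof(system_sizes) / sizeof(system_sizes[0]); i++) {
            int nodes = system_sizes[i];
            int node_switches = nodes / node_links_per_chassis;   /* first-stage chassis  */
            int links_per_top = node_switches;                    /* one per node switch  */
            int tops_per_chassis = chassis_links / links_per_top;

            printf("%4d-way: %2d node-switch chassis, top switches of %2d links, %2d per chassis\n",
                   nodes, node_switches, links_per_top, tops_per_chassis);
        }
        return 0;
    }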

QsNetIII Network 4096–way

QsNetIII cables
• QSFP connectors throughout
• Optical cables (e.g. Luxtera), 5-300m
  – PVDF plenum rated
  – LSZH available as an option
• Active copper cables (Gore), 8-20m
• Copper cables (Gore), 1-10m
• No longer Quadrics proprietary
• Bit error rates are a big issue at 5Gbps and above
  – Optical cables between switches
  – Short copper cables from nodes

QsNetIII for HP BladeSystem
• Elan5 mezzanine adapter
  – 2 QsNet links
  – PCI-E x8 (initially)
  – 128 MB of memory
• Elite5 switch module
  – Full bandwidth
  – 16 links to the blades (via backplane)
  – 16 links to the back of the module

2048-way QsNetIII BladeSystem Network

Building a 16K node system in 2009/10
• Single water-cooled rack will provide 1000-2000 standard cores, ~12-25 TF
• Single fibre cable per node
• 8 blade switches per rack
• Connect 128 of these racks with 1024-way top switches for full bi-section bandwidth
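
A quick check of the totals this implies, assuming 128 nodes per rack (so that 128 racks give 16K nodes) and taking the per-rack core and TF ranges from the slide:

    /* Quick arithmetic on the 16K-node configuration sketched above. */
    #include <stdio.h>

    int main(void)
    {
        const int racks          = 128;
        const int nodes_per_rack = 128;        /* assumption: 16K nodes / 128 racks */
        const int nodes          = racks * nodes_per_rack;              /* 16384    */

        printf("nodes: %d\n", nodes);
        printf("cores: %d - %d\n", racks * 1000, racks * 2000);         /* 128K - 256K   */
        printf("peak:  %.1f - %.1f PF\n",
               racks * 12.0 / 1000.0, racks * 25.0 / 1000.0);           /* ~1.5 - 3.2 PF */
        return 0;
    }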

QsNetIII Fault Tolerance
• All of the QsNetII features
  – CRCs on every packet
  – Automatic retransmission
  – Adaptive routing avoids failed links
  – Redundant routes
  – Redundant, hot-pluggable PSUs and fans
• Plus full line-rate testing of each link as it comes up
  – Switches generate CRPAT, CJPAT or PRBS packets
  – Links are only added to the route tables when they are (a) up, (b) connected to the right place, and (c) able to transfer data without error
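
As an illustration of per-packet CRCs with automatic retransmission, here is a small software sketch; the packet type, the software CRC and the lossy link model are hypothetical stand-ins, since in QsNetIII this is done in the adapter and switch hardware.

    /* Illustration only: CRC-protected packets retried until acknowledged. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct packet {
        uint8_t  payload[256];
        size_t   len;
        uint32_t crc;                        /* CRC over payload[0..len) */
    };

    /* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
    static uint32_t crc32_compute(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ ((crc & 1u) ? 0xEDB88320u : 0u);
        }
        return ~crc;
    }

    /* Fake link that corrupts ~20% of packets; the "receiver" acks only
     * packets whose CRC still matches the payload. */
    static bool link_send(const struct packet *p)
    {
        struct packet rx = *p;
        if (rand() % 5 == 0)
            rx.payload[0] ^= 0xFF;                   /* simulated bit errors */
        return crc32_compute(rx.payload, rx.len) == rx.crc;
    }

    /* Send a packet, retrying until it is acked or the budget runs out. */
    static bool send_reliable(struct packet *p, int max_retries)
    {
        p->crc = crc32_compute(p->payload, p->len);
        for (int attempt = 0; attempt <= max_retries; attempt++)
            if (link_send(p))
                return true;
        return false;          /* candidate link for adaptive routing to avoid */
    }

    int main(void)
    {
        struct packet p = { .payload = "hello", .len = 6 };
        printf("delivered: %s\n", send_reliable(&p, 8) ? "yes" : "no");
        return 0;
    }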

Software Model – Firmware & Drivers
• Base firmware in the ROMs
• Firmware modules loadable with the device driver
  – Elan, OpenFabrics, 10GE Ethernet, …
• Kernel modules
  – elan5, elan, rms
• Device-dependent library (libelan5)
• Device-independent library (libelan)
• User libraries

Software Model – Elan Libraries
• Point-to-point message passing
• One-sided put/get
• Transparent rail striping
• Optimised collectives
• Locks and atomic ops
• Global memory allocation
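
The Elan libraries' own interfaces are Quadrics-specific, so as an illustration of the communication styles listed above here is a minimal sketch in standard MPI: a one-sided put into a remote window followed by a collective reduction, the kind of traffic these libraries carry and optimise.

    /* Illustration in standard MPI, not the libelan API. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* One-sided put/get: expose one int per rank in an RMA window. */
        int local = 0;
        MPI_Win win;
        MPI_Win_create(&local, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        int value = rank;
        MPI_Win_fence(0, win);
        MPI_Put(&value, 1, MPI_INT, (rank + 1) % size, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);            /* 'local' now holds the left neighbour's rank */

        /* Collective: global sum of the received values. */
        int sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks = %d\n", sum);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }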

Why Quadrics?
• Focus on the most demanding HPC applications
• Delivers large-system scalability
  – All nodes achieve host adapter bandwidth at the same time
  – Minimal spread between best- and worst-case performance
  – Low and uniform latency
  – Highly optimised collectives
• Single supplier of interconnect hardware, software and support
• Stability of our products
• Track record of delivering production systems
• European company
