Waters Grid & HPC Course

Information about Waters Grid & HPC Course
Technology

Published on January 22, 2009

Author: jimliddle

Source: slideshare.net

Description

This is the course that was presented by James Liddle and Adam Vile for Waters in September 2008.

The book of this course can be found at: http://www.lulu.com/content/4334860

Data Cache: Dealing with Compromise

Dr Adam Vile, Head of Grid, HPC and Technical Computing, Excelian
Jim Liddle, CEO, Jana Technology Services

http://www.excelian.com

Agenda

We are running two connected sessions about data grids today:

- Introductions
- Brainstorm: Objectives
- Session 1 – Moving data on the grid
- Break
- Session 2 – Building a Data Grid
- Summary and wrap-up

Introductions

Who are we, and why are we here?

Possible Objectives

Some suggested objectives:

- Be aware of the variety of approaches to moving data around a large distributed system
- Understand the limitations and benefits of these approaches
- Understand the different data cache topologies and replication strategies
- Know the compromises that must be made in combining scalability, low latency and data movement on a grid
- Understand, at a high level, which architectures and topologies are appropriate for each problem
- Understand the data centre requirements to meet growth in grids in relation to data
- Have a view on utility computing and its applicability for performance and efficiency in compute and data grids

We are flexible, so let's focus on some of the things you are interested in.

Session 1 – Moving Data on the Grid

Agenda:

- Presentation: Approaches to data movement on the grid
- Brainstorm: Data storage and movement use cases
- Break
- Presentation: Data cache topologies – issues for scaling data
- Brainstorm: Data cache scenarios, use cases and applications
- Summary

Brainstorm: Objectives

Presentation: Approaches to Data Movement on the Grid

Compute Grid – Where Are We?

(Compute) grid has addressed a set of needs of the finance industry:

- More (and more) resource
- Scalability
- Robustness
- Higher utilisation
- Control of the hardware cost base

Putting this in context, grid has enabled the business:

- Pricing and risking of more complex instruments
- Pricing and risking of more instruments
- Making completion of risk runs overnight and T0 P&L a reality
- Keeping up with increased volumes

There is a set of new issues to address:

- Scalability is not unlimited (cf. Amdahl's law)
- As the grid gets wider, data movement becomes a problem
- Low latency requirements cannot be satisfied by grid

Scaling the Compute Grid

Compute problems in finance are embarrassingly parallel. To achieve maximum scalability, the compute time must outweigh any grid overhead, which is made up of:

- Resource allocation
- Task transfer time
- Task start-up time
- Data transfer time

To make grid work effectively, communication must be kept to a minimum. Hence tasks should be:

- Independent: data is not shared between tasks
- Stateless: data is not persistent on compute engines

While desirable, this is not always possible. The problem of scalability becomes a problem of how to get the data to the right place at the right time.
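The effect of grid overhead on scalability can be sketched with some simple arithmetic. This is an illustrative model only (the function name and figures are made up for the example), assuming perfectly parallel, independent tasks:

```python
def grid_speedup(compute_s, overhead_s, workers):
    """Effective speedup for `workers` independent tasks, each doing
    `compute_s` seconds of useful work with `overhead_s` seconds of grid
    overhead (resource allocation, task transfer, start-up, data transfer)."""
    serial = compute_s * workers      # runtime on a single machine
    parallel = compute_s + overhead_s  # tasks run concurrently, overhead per task
    return serial / parallel

# Overhead dominates short tasks: a 1 s task with 4 s of overhead on 100
# workers yields only 20x, not 100x; a 60 s task gets close to linear.
print(round(grid_speedup(1.0, 4.0, 100), 1))   # → 20.0
print(round(grid_speedup(60.0, 4.0, 100), 1))  # → 93.8
```

This is why the slide stresses that compute time must outweigh grid overhead: the shorter the task, the more the fixed overhead (much of it data movement) caps the achievable speedup.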

Getting the Data to the Right Place at the Right Time

The key to reducing job turnaround time is to ensure that data and compute have good locality of reference.

Data movement patterns for grid:

- Move the data with the compute. Data and compute task are packaged together in the client and then distributed by the grid. This is fine for small grids and small data packets (KB to low MBs).
- Move the data and the compute to the same place at the same time. This is currently difficult to achieve, as it requires communication between the grid and the data delivery vendor software.
- Move the data to where the compute is. Fine for smaller data packets; typically achieved with a shared file system, which has limited scale.
- Move the compute to where the data is. Often the most efficient and achievable; a good use case for this pattern is calibration.

Data Movement Mechanisms – File Systems

Shared file systems:

- Are at best a temporary solution
- Performance and scalability issues eventually create a network bottleneck
- Only a low number of simultaneous reads is supported (~500)

Parallel file systems:

- Good for large amounts of data (> 50 GB)
- Simple, well understood interface
- Good scalability (4026 nodes on GPFS)

File system limitations:

- Single point of failure and contention (although there is clustering)
- Disk based, increasing read and write time
- Limited support for Windows (for parallel file systems)
- Limited support for file replication across regions
- Infrastructure-centric, rather than application-centric

Data Movement Mechanisms – Data Grid

Level 0 data grid:

- Distributes large sets of static data to compute nodes
- Focus is on moving and sharing terabytes to petabytes of data
- Examples: CERN (high energy physics), ROADNet (real-time observatories)
- Typically relies on a Storage Resource Broker (SRB) – middleware that provides an interface to heterogeneous data storage resources over a network
- Supports shared file systems, databases, real-time data sources, etc.

Level 1 data grid:

- Distributes and manages dynamic data over large sets of compute nodes
- Supports transactions and events
- Focus is on ensuring that data is available in near real time
- Example: a real-time pricer
- Technology: data cache

What Do You Get in a Level 1 Data Grid Technology?

- Access methods: map and/or minimal SQL interface
- Management capabilities and policies
- Data integrity
- Data recoverability
- Event notification
- Transactional support: can support distributed transactions, both two-phase commit and compensated transactions
- Synchronization of data: optimistic vs pessimistic (locking / version control), synchronous or asynchronous, peer-to-peer or centralised
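Optimistic synchronization, mentioned above, can be sketched in a few lines. This is a generic illustration (class and method names are invented for the example), not any vendor's API:

```python
class VersionedCache:
    """Minimal optimistic-concurrency cache: each entry carries a version
    number, and a write succeeds only if the writer saw the latest version."""
    def __init__(self):
        self._data = {}  # key -> (value, version)

    def read(self, key):
        return self._data.get(key, (None, 0))

    def write(self, key, value, expected_version):
        _, current = self._data.get(key, (None, 0))
        if current != expected_version:
            return False  # lost the race: caller must re-read and retry
        self._data[key] = (value, current + 1)
        return True

cache = VersionedCache()
_, v = cache.read("EURUSD")
assert cache.write("EURUSD", 1.47, v)      # first writer wins
assert not cache.write("EURUSD", 1.48, v)  # stale version is rejected
```

Pessimistic locking would instead block the second writer up front; the optimistic style avoids holding locks but pushes retry logic onto the caller.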

A Word on Data Cache Technology

- Used to reduce contention on the database
- Used to handle transient data state: not all data needs to be persisted
- Used to increase performance and reduce the latency of reading and writing data
- Locates data near the computation to be performed
- Held in memory for speed
- Represents the state of data at a point in time

Brainstorm

What use cases are supported by the following technologies in relation to grid and large-scale distribution of data?

- A central database
- Replicated databases
- Shared file system
- GridFTP
- Data grid
- Data cache

You Say Compute Grid, I Say Data Grid!

The terms are often used interchangeably to describe distributed computing, but they are not the same.

- If compute power is the limiting resource, a compute grid is needed (e.g. DataSynapse, Platform). Example: computing a Monte Carlo simulation.
- If access to data, or computing over lots of data, is the limiting resource, a data grid is needed: a combination of compute grid and cache, or a single product (Oracle Coherence, Gemstone GemFire, GigaSpaces XAP). Example: foreign currency exchange.

Compute Grid Topology: Master / Worker

Caching Topologies: Embedded Local Cache

(Diagram labels: Master / Local Cache, Load)

Caching Topologies: Master / Local

(Diagram labels: Master / Local Cache; Load; Load on Demand; Data Tier; Read on Demand)

Caching Topologies: Replicated Cache

(Diagram labels: Put, Get)

Caching Topologies: Partitioned Cache

(Diagram labels: Put, Get)
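The idea behind a partitioned cache is that each key is owned by exactly one node, chosen by hashing the key, so capacity scales with the node count rather than being copied everywhere as in the replicated topology. A minimal sketch (all names are illustrative, and each "node" is just a dict standing in for a remote process):

```python
import hashlib

class PartitionedCache:
    """Each key lives on exactly one node, chosen by hashing the key,
    so total capacity grows with the number of nodes."""
    def __init__(self, nodes):
        self.nodes = [dict() for _ in range(nodes)]

    def _owner(self, key):
        # Stable hash so every client routes a key to the same partition
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key, value):
        self._owner(key)[key] = value

    def get(self, key):
        return self._owner(key).get(key)

cache = PartitionedCache(nodes=4)
cache.put("trade:1001", {"ccy": "GBP"})
print(cache.get("trade:1001"))           # → {'ccy': 'GBP'}
# Data is spread across partitions rather than copied to every node:
print(sum(len(n) for n in cache.nodes))  # → 1
```

Real products add backup copies per partition and rebalancing when nodes join or leave, which this sketch omits.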

Caching Topologies: Hierarchical Cache

(Diagram: Get – the entry exists in the cache after the request)

Caching Topologies: Write-Through Cache

(Diagram: the entry exists in the cache after the write)
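The write-through behaviour can be shown in a few lines: the write goes synchronously to the backing store and the entry is left in the cache. This is a generic sketch (names invented for the example; a plain dict stands in for the database):

```python
class WriteThroughCache:
    """Writes go to the cache and synchronously to the backing store,
    so the store is always current and the entry is cached after the write."""
    def __init__(self, store):
        self.store = store  # system of record (dict standing in for a DB)
        self.cache = {}

    def put(self, key, value):
        self.store[key] = value  # synchronous write to the store
        self.cache[key] = value  # entry now also lives in the cache

db = {}
c = WriteThroughCache(db)
c.put("trade:42", "booked")
print(db["trade:42"], c.cache["trade:42"])  # → booked booked
```

The cost is that every write pays the store's latency; write-behind variants defer the store write to trade durability for speed.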

Caching Topologies: Read-Through Cache

(Diagram: if not already cached, the entry exists in the cache after the read)
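Read-through is the mirror image: on a miss the cache itself loads from the backing store, so the entry exists in the cache after the first read. A minimal sketch (names invented for the example):

```python
class ReadThroughCache:
    """On a miss, the cache itself loads the value from the backing
    store; the entry exists in the cache after the first read."""
    def __init__(self, loader):
        self.loader = loader  # e.g. a database query function
        self.cache = {}
        self.loads = 0        # count of trips to the backing store

    def get(self, key):
        if key not in self.cache:
            self.loads += 1
            self.cache[key] = self.loader(key)
        return self.cache[key]

c = ReadThroughCache(loader=lambda k: f"row-for-{k}")
c.get("trade:7")
c.get("trade:7")
print(c.loads)  # → 1  (second read served from cache)
```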

Workshop 1 – The Scenario

A bank wants to build a new risk application that calculates risk across all books within the global market – a common enough scenario. To achieve this, we want to implement a distributed application that has access to real-time data. The business wants the system to be scalable enough to cope with all current deal scenarios, but also to cope with five times the volume growth over the next three years.

We have four different topologies for how we could approach this:

- What are the pros and cons of each?
- Are there any more topologies we could use?

Topology 1

Topology 2

Topology 3

Topology 4

Workshop 2

The business wants a new trading client system to allow traders to monitor the market and submit trades.

- The read/write ratio is extremely high
- Events have to be delivered in as close to real time as possible
- Data is monitored and trades are executed in three locations: London, New York and Singapore
- The current approach uses mostly messaging (IIOP, JMS, sockets), but is suffering from broadcast issues, and scaling is difficult

What are the challenges of designing such a system, and how could you implement it using a caching-based solution?

We started you off:

NY ↔ London, 10 Mb/s replication link. Challenge: bandwidth.

Solution:

- Batching
- Compression
- Async replication
- Data is kept local
- Updates are local, based on ownership
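The batching-plus-compression idea above can be sketched as follows. This is an illustrative toy (names invented; `send` here just appends to a list, and a real system would flush from a background thread or timer rather than explicitly):

```python
import pickle
import queue
import zlib

class AsyncReplicator:
    """Queues local updates, then ships them to the remote site in one
    compressed batch, trading replication lag for bandwidth."""
    def __init__(self, send, batch_size=100):
        self.pending = queue.Queue()
        self.send = send          # transport to the remote site
        self.batch_size = batch_size

    def update(self, key, value):
        self.pending.put((key, value))  # local write returns immediately

    def flush(self):
        batch = []
        while not self.pending.empty() and len(batch) < self.batch_size:
            batch.append(self.pending.get())
        if batch:
            # One compressed payload instead of one message per update
            self.send(zlib.compress(pickle.dumps(batch)))

sent = []
r = AsyncReplicator(send=sent.append)
for i in range(5):
    r.update(f"trade:{i}", i)
r.flush()
print(len(sent), len(pickle.loads(zlib.decompress(sent[0]))))  # → 1 5
```

Five updates cross the WAN as a single compressed message, which is exactly the bandwidth trade the slide describes; the price is that the remote site lags until the next flush.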

NY and London clusters: synchronous replication within a site, asynchronous replication between sites over the 10 Mb/s link. Challenge: reliability.

NY and London databases, with load and update flows. Challenge: audit of record.

What other challenges?

Workshop 3

The business wants to build a new algorithmic trading application that gives them scale and performance and allows them to concentrate on the analysis and algorithms.

- What are the options for building this using caching?
- What other products could they look at?
- What are the pros and cons of the approaches?

Summary: Synchronisation and Replication

Brainstorm: Come up with a matrix of scenarios, topologies and replication strategies that match the following use cases:

- I want to run my risk reports on a snapshot of data taken at 5:00 pm. I run them on a grid split between London and Singapore and collect data from both locations.
- I want to cache results from intra-day pricing calculations for a period of time, so that I can avoid recalculating them if I need the price.
- I want to run my overnight batch on 10,000 nodes and write my results back to the results database as I calculate them.

Session 1 ends – see you back for Session 2 after coffee.

Session 2 – Building a Data Grid

- Presentation: Achieving low latency
- Brainstorm: Data cache infrastructure
- Brainstorm: Utility and cloud computing
- Presentation: Data cache vendors, open source and selection criteria
- Summary

Presentation: Achieving Really Low Latency

There are three aspects of data in a distributed architecture that are difficult to manage simultaneously:

- Scalability. Is all of the data required for all tasks, or can we benefit from partitioning and cache regions? Peer-to-peer replication implies unlimited scalability.
- Consistency. Does every compute task have to have the same data available? If one task writes data to the store, does every node need that data (transactions)? A hierarchical cache improves consistency.
- Low latency. Requires data and compute task to be in the same place at the same time: good locality of reference and/or data affinity. A near cache makes access faster.
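A near cache is simply a small local cache kept in front of the remote distributed cache, so repeated reads of the same key avoid the network hop. A minimal sketch (names invented for the example; a dict stands in for the remote cluster):

```python
class NearCache:
    """Small local (near) cache in front of a remote distributed cache:
    repeated reads of the same key avoid the network round trip."""
    def __init__(self, remote):
        self.remote = remote    # the distributed cache (dict-like here)
        self.local = {}
        self.remote_gets = 0    # count of trips to the remote tier

    def get(self, key):
        if key in self.local:
            return self.local[key]
        self.remote_gets += 1
        value = self.remote.get(key)  # network round trip in real life
        self.local[key] = value
        return value

nc = NearCache(remote={"curve:GBP": [0.050, 0.051]})
nc.get("curve:GBP")
nc.get("curve:GBP")
nc.get("curve:GBP")
print(nc.remote_gets)  # → 1
```

The catch, and the reason for the compromise discussed below, is invalidation: once the remote value changes, the near copy is stale until it is evicted or the cluster pushes an update.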

But What Can We Really Do to Achieve Low Latency at Scale, with Consistency?

It is all to do with what you move and how you move it.

Preferably, don't move anything anywhere: use a big machine with lots of CPUs and large amounts of onboard RAM (multicore, SMP, supercomputers).

If you must move it:

- Move it as fast as you can: use data caching with InfiniBand, 10G Ethernet or Myrinet
- Only move what you need to: reduce the granularity of data
- Once it has moved, keep it there and don't change it: capitalise on data cache and grid capabilities to support temporal and geographical affinity

Laying Out the Compromise

"Large-scale data distribution" and "near real time" are largely incompatible; it's more about compromise. Which do you want?

- Scalability × Consistency = High latency
- Scalability × Low latency = Inconsistency
- Consistency × Low latency = Low scalability

Brainstorm: Add to your matrix of scenarios the following:

- I want to drive algorithmic pricing off a small grid (I need a small grid because some of the models would take too long to run without parallelisation)
- I want to recalculate my model prices in response to changes in market data as soon as possible, using a 100-node grid. What happens if I need to use a 1,000-node grid?

Discussion: Physical Data Grid Infrastructure

Assume you want to take the load off a central database that the grid compute nodes access during a calculation, and that you place a data cache in front of the database. What do you think the correct ratio of compute to data grid nodes would be for:

- Read only
- Read and write

Justify your decision. What physical infrastructure would you need to build out to enable this? Think about:

- CPU
- Memory
- I/O
- Network

Exploiting Cloud and Utility Compute – The Case for Data Grid?

Cloud & Utility Compute Overview

Vendors to Look Out For

- CohesiveFT: servers as a service; hypervisor transformation management
- rPath: appliances as a service; appliance hypervisor transformation
- FlexiScale: enterprise cloud, based on Xen
- RightScale: fine-tuning the cloud
- Elastra: deploy and manage services on public and private clouds
- vCloud: VMware cloud computing; an operating system for the data centre
- GigaSpaces: scale-out application server for the cloud

Brainstorm: Add to your matrix of scenarios the following:

- I want to make use of my outsourced compute facility to run grid calculations. For this I need current market and static data. My enterprise market and static data store is 70 GB in size.

Now create other scenarios and add them to your matrix.

Presentation: Practical Considerations – The Vendors and Selection Criteria

The main vendors:

- GigaSpaces XAP. Originally based on JavaSpaces; can function as compute grid + data grid. The API is Java, based on POJOs and Spring; you can still use the JavaSpaces API, and it also supports .NET, C++, scripting and JDBC.
- Oracle Coherence. Originally from a JCache implementation; early success as a bolt-on to J2EE. The API is Java, based on a distributed HashMap; also supports .NET and C++.
- Gemstone GemFire. Two versions, Java and C++; native C++ is a big selling point. Native C++ API, but also supports .NET and JDBC.

All vendors partner with DataSynapse.

What Is the Developer's View?

Reading a Trade Object: a simple scenario to read and write a trade object using the major vendors.

Reading/Writing a Trade Object: GigaSpaces (code sample shown on slide)

Reading/Writing a Trade Object: Gemstone (code sample shown on slide)

Reading/Writing a Trade Object: Coherence (code sample shown on slide)

So What Does This Tell Us?

Results:

- All implementations are easy to use: easy to create a cache, easy to write to and read from the cache
- All provide in-memory implementations
- All provide ways to add indexing for fast reads
- All provide mechanisms for advanced querying (GigaSpaces: supports SQL; Gemstone: supports SQL; Coherence: SQL-like)
- All provide the ability to add listeners on cache data changes
- All provide transactions
- All provide locking
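The listener feature in the last results can be illustrated generically. This sketch is not any vendor's API (class and callback names are invented for the example); it just shows the event-notification pattern all three products offer:

```python
class ObservableCache:
    """Cache that notifies registered listeners on every change,
    mirroring the cache-event feature the vendors provide."""
    def __init__(self):
        self.data = {}
        self.listeners = []

    def add_listener(self, fn):
        self.listeners.append(fn)

    def put(self, key, value):
        old = self.data.get(key)
        self.data[key] = value
        for fn in self.listeners:
            fn(key, old, value)  # callback gets (key, old value, new value)

events = []
cache = ObservableCache()
cache.add_listener(lambda k, old, new: events.append((k, old, new)))
cache.put("px:VOD.L", 141.2)
cache.put("px:VOD.L", 141.5)
print(events)
# → [('px:VOD.L', None, 141.2), ('px:VOD.L', 141.2, 141.5)]
```

In the real products the callback typically fires on a remote client that registered interest in a key or query, which is what makes the pattern useful for driving pricing off market-data changes.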

What Are the Open Source Data Caching Choices?

- EHCache: pure Java in-process cache; acts as a pluggable cache for Hibernate 2.1
- Cache4J: cache for Java objects; simple API; designed for multi-threading
- SwarmCache: simple distributed cache; optimised for read-only use
- JCache: reference implementation of JSR-107; JSR-107 has been static for a long time
- Memcached: in-memory hash table; lacks security / authentication

What Are the Considerations for Choosing a Vendor?

What should you think about when choosing a caching product?

- What topologies does it support?
- HA / resiliency
- What management and monitoring features does it have?
- API support
- Versioning
- Performance
- Scalability
- Product and API integration
- Replication strategies
- Authentication / security
- Largest supported cluster size
- Number of clients that can connect
- Network requirements: unicast / multicast
- Is read-mostly, read/write or write-mostly access required?
- Think about the features you need now and in the future: collections support, lease management, queries, continuous queries, etc.

Review
