DOEPPDGScottVisit hbn052202s


Published on March 25, 2008

Author: Tommaso

Source: authorstream.com

The Caltech CMS/L3 Group: Physics, Software and Computing; Grids and Networks for HENP

CALTECH L3/CMS GROUP
- E. Aslakson, J. Bunn, G. Denis, P. Galvez, M. Gataullin, K. Holtman, S. Iqbal, I. Legrand, X. Lei, V. Litvin, H. Newman, S. Ravot, S. Shevchenko, S. Singh, E. Soedermadji, C. Steenberg, R. Wilkinson, L. Zhang, K. Wei, Q. Wei, R. Y. Zhu
- L3 at LEP: 1981-2002; CMS at LHC: 1994-2020+
- Search for Higgs, SUSY, New Physics from Electroweak to Quantum Gravity
- Precision Electroweak to the TeV scale; emphasis on precision e/γ measurements
- MINOS at FNAL: 2001-2006+; neutrino oscillations and flavor mixing

L3/CMS GROUP Personnel in FY 2002
- Harvey Newman: Professor
- Julian Bunn: Senior Staff Scientist (CACR/HEP; ALDAP)
- Vladimir Litvin, Iosif LeGrand: US CMS Software Engineers; Distributed Computing and Data Systems
- Sylvain Ravot (from 8/01): Network Engineer (LHCNet)
- Takako Hickey (to 5/02): CS: Grid Production Systems (PPDG)
- Conrad Steenberg: Software Engineer: CMS Grid SW (PPDG)
- Koen Holtman: CS: CMS Grid SW and Databases (ALDAP)
- Suresh Singh (from 3/01): Grid Production and Tier2 Support (GriPhyN/iVDGL)
- Saima Iqbal (from 11/01): ORACLE Databases (ALDAP)
- Edwin Soedermadji (from 11/01): CMS Grid SW + Tier2 Support (iVDGL)
- Eric Aslakson (from 12/01): Grid Software (PPDG)
- Rick Wilkinson: Staff Scientist: CMS Core SW + Reconstruction
- Marat Gataullin, Xia Lei: Graduate Students on L3
- Renyuan Zhu: Member of the Professional Staff
- Sergey Shevchenko: Senior Research Fellow
- Philippe Galvez: Senior Network Engineer
- Gregory Denis, Kun Wei: Multimedia Engineers (VRVS)
- Liyuan Zhang, Qing Wei: Visiting Scientists (laser optics specialists)
- N. Wisniewski, T. Lee: Students, part-time: photon reconstruction

L3 PHYSICS RESULTS and the CALTECH GROUP
- Of the 255 L3 physics papers published to date:
  39 have been written by Caltech group members, and rely on their analysis
  38 more have been produced under Caltech group leadership

Slide 5: THE CALTECH L3/CMS GROUP: L3 THESES and PHYSICS ANALYSIS
- LEP 1 (Z0 peak: Ecm = 88-93 GeV): led 3 of 8 analysis groups: New Particles, Taus, QCD
- Precision Electroweak: τ+τ-(γ), M. Gruenewald 1993; e+e-(γ), W. Lu 1997; Inclusive Hard Photons with Jets, D. Kirkby 1995
- LEP 2 (to Ecm = 209 GeV): led 2 of 3 particle search groups: SUSY + Exotics
- W Physics: Mass, σ and Electroweak Couplings, A. Shvorob 2000
- Physics with Single or Multi-γ and Missing Energy (Anomalous Couplings; SUSY; ν Counting), M. Gataullin 2002
- Searches for Supersymmetric Leptons, Lei Xia 2002

Slide 6: Evidence for the Higgs at LEP at M ~ 115.5 GeV
- The LEP program has now ended
- ALEPH and L3 H → νν candidate (?): two well b-tagged jets, m ~ 114.4 GeV (error ~ 3 GeV)
The Large Hadron Collider (2007-)
- The next-generation particle collider; the largest superconductor installation in the world
- Bunch-bunch collisions at 40 MHz, each generating ~20 interactions
- Only one in a trillion may lead to a major physics discovery
- Real-time data filtering: Petabytes per second to Gigabytes per second
- Accumulated data of many Petabytes/year
- Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams

Four LHC Experiments: The Petabyte to Exabyte Challenge
- ATLAS, CMS, ALICE, LHCb
- Higgs + new particles; quark-gluon plasma; CP violation
- Data stored: ~40 Petabytes/year and up; CPU: 0.30 PetaFLOPS and up
- 0.1 to 1 Exabyte (1 EB = 10^18 bytes) for the LHC experiments (2007 to ~2012?)

Higgs Events in CMS
- Higgs to two photons; Higgs to four muons (full CMS simulation)
- General-purpose pp detector, well adapted to lower initial luminosity
- Crystal ECAL for precise electron and photon measurements
- Precise all-silicon tracker (223 m^2); three pixel layers
- Excellent muon ID and precise momentum measurements (tracker + standalone muon)
- Hermetic jet measurements with good resolution

Higgs, SUSY and Dark Matter Discovery Reach at CMS
- The full range of SM Higgs masses will be covered: MH < 1 TeV
- SUSY signals likely to be visible in the first (few) fb^-1; LHC first runs in 2007
- In the MSSM Higgs sector (Mh < 130 GeV maximum), nearly all of the parameter space will be explored
- Discovery reach for SUSY squarks and gluinos to M > 2 TeV (not sensitive to SM backgrounds)
- Cosmologically interesting region of SUSY parameter space covered; SUSY leptons

CALTECH CONTRIBUTIONS and LEADERSHIP in CMS and LHC
- US CMS Collaboration Board Chair (1998-2000; 2000-2002; re-nominated in May 2002)
- Originated and helped launch the US CMS S&C Project; led the MONARC Project
- Original LHC Grid Data Hierarchy model; set computing, data and network requirements
- Co-PI on PPDG, GriPhyN/iVDGL and ALDAP Grid projects; Grid software and systems development
- Tier2 prototype: Caltech and UCSD
- Regional Center and network system design for CMS: India, Pakistan, China, Brazil, Romania, ...
- High-bandwidth networking and remote collaboration systems for LHC and HENP
- Co-PI of TAN WG; ICFA-SCIC Chair; PI of I2 HENP WG
- VRVS system for global collaboration in HENP
- CMS ECAL

Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
- US-CERN link: 622 Mbps this month
- DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link by approx. mid-2003
- Transoceanic networking integrated with the Abilene, TeraGrid, regional nets and continental network infrastructures in the US, Europe, Asia and South America
- Baseline evolution typical of major HENP links 2001-2006

Transatlantic Net WG (HN, L. Price) Bandwidth Requirements [*]
- [*] Installed BW; maximum link occupancy 50% assumed
- See http://gate.hep.anl.gov/lprice/TAN (US-CERN update)
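The numbers above invite a quick back-of-the-envelope check. Below is a minimal sketch (Python) that converts the ~40 PB/year stored-data figure into an average data rate, and then applies the working group's 50% maximum-occupancy rule to size an installed transatlantic link. The 10 PB/year transatlantic transfer volume is an assumption for illustration only; the OC levels are the standard SONET rates quoted later in this document.

    # Rough consistency checks on the data-volume and link-sizing figures above.
    # The ~40 PB/year and the 50% occupancy rule come from the slides;
    # the 10 PB/year transatlantic volume is an assumed, illustrative number.

    SECONDS_PER_YEAR = 3.15e7

    def avg_gbit_per_s(bytes_per_year: float) -> float:
        """Average sustained rate (Gbit/s) needed to move a yearly volume."""
        return bytes_per_year * 8 / SECONDS_PER_YEAR / 1e9

    stored = 40e15  # ~40 PB/year written by the LHC experiments
    rate = avg_gbit_per_s(stored)
    print(f"~40 PB/year stored -> {rate:.1f} Gbit/s = {rate / 8:.1f} GByte/s average "
          "(i.e. 'Gigabytes per second' after real-time filtering)")

    # Hypothetical: ship 10 PB/year of that across the US-CERN link.
    needed = avg_gbit_per_s(10e15)
    MAX_OCCUPANCY = 0.5                      # WG sizes *installed* BW at <= 50% occupancy
    installed = needed / MAX_OCCUPANCY

    oc_levels = {"OC-3": 0.155, "OC-12": 0.622, "OC-48": 2.5, "OC-192": 10.0}  # Gbit/s
    fit = min((name for name, cap in oc_levels.items() if cap >= installed),
              key=lambda n: oc_levels[n], default="beyond OC-192")
    print(f"10 PB/year transatlantic -> {needed:.1f} Gbit/s average, "
          f"{installed:.1f} Gbit/s installed at 50% occupancy -> {fit}")

With these assumptions the assumed transatlantic share alone already points at an OC-192-class (10 Gbps) installed link, consistent with the baseline evolution sketched on the slide.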
Slide 14: [Network map: Geneva, Abilene, ESnet, CALREN, New York, STAR TAP, StarLight]

DataTAG Project
- EU-solicited project: CERN, PPARC (UK), Amsterdam (NL) and INFN (IT), with US (DOE/NSF: UIC, NWU and Caltech) partners
- Main aims: ensure maximum interoperability between US and EU Grid projects; transatlantic testbed for advanced network research
- 2.5 Gbps wavelength triangle 7/02 (10 Gbps triangle in 2003)

TeraGrid (NCSA, ANL, SDSC, Caltech)
- [Network map: NCSA/UIUC, ANL, UIC, multiple carrier hubs, Starlight / NW Univ, Ill Inst of Tech, Univ of Chicago, Indianapolis (Abilene NOC), I-WIRE, Pasadena, San Diego]
- DTF backplane (4 x 40 Gbps); Abilene: Chicago, Indianapolis, Urbana
- OC-48 (2.5 Gb/s, Abilene); multiple 10 GbE (Qwest); multiple 10 GbE (I-WIRE dark fiber)
- Solid lines in place and/or available in 2001; dashed I-WIRE lines planned for Summer 2002
- Source: Charlie Catlett, Argonne; StarLight: international optical peering point (see www.startap.net)
- A preview of the Grid hierarchy and networks of the LHC era

Internet2 HENP WG [*]
- Mission: to help ensure that (1) the required national and international network infrastructures (end-to-end), (2) standardized tools and facilities for high-performance and end-to-end monitoring and tracking, and (3) collaborative systems are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the at-large scientific community
- To carry out these developments in a way that is broadly applicable across many fields
- Formed an Internet2 WG as a suitable framework: Oct. 26, 2001
- [*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); Sec'y: J. Williams (Indiana)
- Website: http://www.internet2.edu/henp; also see the Internet2 End-to-End Initiative: http://www.internet2.edu/e2e

HENP Projects: Object Databases and Regional Centers to Data Grids
- RD45, GIOD: networked object databases
- MONARC: LHC Regional Center computing model: architecture, simulation, strategy, politics
- ALDAP: novel database structures and access methods for astrophysics and HENP data
- PPDG, GriPhyN: production-scale Data Grids; iVDGL: international testbeds as Grid laboratories
- EU Data Grid

MONARC Project: Models Of Networked Analysis At Regional Centers
- Caltech, CERN, Columbia, FNAL, Heidelberg, Helsinki, INFN, IN2P3, KEK, Marseilles, MPI Munich, Orsay, Oxford, Tufts
- Project goals: developed the "Baseline Models" for LHC; specified the main parameters characterizing the Model's performance: throughputs, latencies, bottlenecks; verified resource requirement baselines: computing, data handling, networks
- Technical goals achieved: defined the analysis process; defined Regional Center architectures and services; provided guidelines for the final models; provided a simulation toolset for further model studies
- [Diagram, Model Circa 2006: CERN ~700k SI95, 1000+ TB disk, robot; FNAL/BNL ~200k SI95, 650 TByte disk, robot; Tier2 center ~50k SI95, ~100 TB disk, robot; universities; links of 0.6-2.5 Gbps, 2.5 Gbps and N x 2.5 Gbps]

Slide 19: Modeling and Simulation: the MONARC System (I. Legrand)
- The simulation program developed within MONARC (Models Of Networked Analysis At Regional Centers) uses a process-oriented approach for discrete event simulation, and provides a realistic modelling tool for large-scale distributed systems
- SIMULATION of complex distributed systems for LHC
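To make the idea of discrete event simulation of regional centres concrete, here is a minimal event-driven sketch in Python: jobs arrive at centres with finite CPU capacity, queue, and finish after random service times. This only illustrates the kind of model being described; the actual MONARC simulator is a process-oriented Java tool with a far richer model, and every centre name, capacity and rate below is a made-up placeholder.

    import heapq
    import random

    # Minimal event-driven sketch of jobs arriving at regional centres.
    # Centre names, CPU counts and job statistics are invented placeholders;
    # this is not the MONARC simulator, only an illustration of the idea.

    CENTRES = {"CERN": 30, "FNAL": 20, "Tier2": 10}   # CPUs per centre (assumed)
    SIM_TIME = 1_000.0                                 # simulated seconds

    def simulate(seed: int = 1) -> dict:
        rng = random.Random(seed)
        events, seq = [], 0                            # heap of (time, seq, kind, centre)
        busy = {c: 0 for c in CENTRES}                 # CPUs in use
        queue = {c: 0 for c in CENTRES}                # jobs waiting
        done = {c: 0 for c in CENTRES}                 # jobs completed

        def schedule(t, kind, centre):
            nonlocal seq
            seq += 1
            heapq.heappush(events, (t, seq, kind, centre))

        for c in CENTRES:                              # first arrival at each centre
            schedule(rng.expovariate(1 / 5.0), "arrival", c)

        while events:
            t, _, kind, c = heapq.heappop(events)
            if t > SIM_TIME:
                break
            if kind == "arrival":
                queue[c] += 1
                schedule(t + rng.expovariate(1 / 5.0), "arrival", c)   # next arrival
            else:                                       # a job finished
                busy[c] -= 1
                done[c] += 1
            while queue[c] > 0 and busy[c] < CENTRES[c]:               # start waiting jobs
                queue[c] -= 1
                busy[c] += 1
                schedule(t + rng.expovariate(1 / 60.0), "finish", c)   # job length ~60 s
        return done

    print(simulate())   # jobs completed per centre over the simulated period

Extending a model like this with data-transfer delays and site policies is exactly the kind of study the MONARC toolset was built to support at a realistic level of detail.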
LHC Data Grid Hierarchy (2007)
- [Hierarchy diagram: Experiment and Online System (~PByte/s) feeding the Tier 0+1 centre at CERN (700k SI95; ~1 PB disk; tape robot) at ~100-400 MBytes/s]
- Tier 1 centres, linked at ~2.5-10 Gbps: FNAL (200k SI95; 600 TB), IN2P3 Center, INFN Center, RAL Center
- Tier 2 centres at ~2.5 Gbps; Tier 3: institutes (~0.25 TIPS, physics data cache) at 0.1-10 Gbps; Tier 4: workstations
- Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
- CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1

Slide 21: [VRVS statistics]
- 11,020 hosts; 6,205 registered users in 65 countries; 42 (7 I2) reflectors; annual growth 2 to 3x

The Particle Physics Data Grid (PPDG)
- First round: optimized cached read access to 10-100 GBytes drawn from a total data set of 0.1 to ~1 Petabyte
- Site-to-site data replication service at 100 MBytes/s
- ANL, BNL, Caltech, FNAL, JLAB, LBNL, SDSC, SLAC, U. Wisc/CS; Florida
- Multi-site cached file access service
- [Diagram: primary sites (DAQ, tape, CPU, disk, robot), satellite sites (tape, CPU, disk, robot) and university sites (CPU, disk, users)]

Particle Physics Data Grid Collaboratory Pilot (2001-2003)
- DOE MICS/HENP partnership
- DB file/object-collection replication, caching, catalogs, end-to-end
- Practical orientation: networks, instrumentation, monitoring
- Computer Science Program of Work:
  CS1: Job Description Language
  CS2: Schedule and Manage Data Processing and Placement Activities
  CS3: Monitoring and Status Reporting
  CS4: Storage Resource Management
  CS5: Reliable Replication Services
  CS6: File Transfer Services
  ...
  CS11: Grid-enabled Analysis Tools (led by J. Bunn, Caltech)
- "The Particle Physics Data Grid Collaboratory Pilot will develop, evaluate and deliver vitally needed Grid-enabled tools for data-intensive collaboration in particle and nuclear physics. Novel mechanisms and policies will be vertically integrated with Grid middleware, experiment-specific applications and computing resources to provide effective end-to-end capability."

Slide 24: PPDG: Focus and Foundations
- Mission focus: allow thousands of physicists to share data and computing resources, for scientific processing and analyses
- Technical focus: end-to-end applications and integrated production systems using robust data replication, intelligent job placement and scheduling, management of storage resources, and monitoring and information global services
- Foundation: cooperation with, and reliance on, other SciDAC projects
- Security: uniform authentication, authorization; reliable high-speed data transfer; network management
- Common, interoperable middleware services: "de facto standards"
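CS5 above (Reliable Replication Services) and the robust-data-replication focus can be illustrated with a toy copy-verify-retry loop. This is only a sketch of the idea in Python, not PPDG's GDMP/Globus-based services; the checksum choice, paths and retry policy are all assumptions.

    import hashlib
    import shutil
    import time
    from pathlib import Path

    # Toy "reliable replication": copy a file, verify the replica with a
    # checksum, and retry with backoff on failure. PPDG's real replication
    # services were built on GDMP and Globus tools; this only shows the
    # verify-and-retry pattern.

    def checksum(path: Path, chunk: int = 1 << 20) -> str:
        h = hashlib.md5()
        with path.open("rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def replicate(src: Path, dst: Path, retries: int = 3, backoff_s: float = 2.0) -> bool:
        """Copy src to dst and confirm the replica is byte-identical."""
        want = checksum(src)
        for attempt in range(1, retries + 1):
            try:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copyfile(src, dst)
                if checksum(dst) == want:
                    return True              # replica verified
            except OSError as err:
                print(f"attempt {attempt}: {err}")
            time.sleep(backoff_s * attempt)  # simple backoff before retrying
        return False

    # Example (paths are placeholders):
    # ok = replicate(Path("/data/run1/events.root"), Path("/replica/run1/events.root"))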
CMS: Productions and Computing Data Challenges
- Already completed:
  2000-01: single-site production challenges with up to 300 nodes; ~5 million events, pileup for 10^34 luminosity
  2000-01: Grid-enabled prototypes demonstrated
  2001-02: worldwide production infrastructure; 12 Regional Centers comprising 21 computing installations
- Underway now: worldwide production of 10 million events for the DAQ TDR; 1000 CPUs in use; production and analysis at CERN and offsite
- Being scheduled:
  Single-site production challenges: test code performance, computing performance bottlenecks, etc.
  Multi-site production challenges: test infrastructure, Grid prototypes, networks, replication, ...
  Single- and multi-site analysis challenges: stress local and Grid prototypes under the quite different conditions of analysis

Grid-Related R&D Projects in CMS: Caltech, FNAL, UCSD, UWisc, UFL (1)
- Installation, configuration and deployment of prototype Tier1 and Tier2 centers at FNAL, Caltech/UCSD, Florida
- Co-authored Grid Data Management Pilot (GDMP), with the EU DataGrid
- Detailed CMS Grid requirements documents: CMS Notes 2001/037, 2001/047; revised PPDG/GriPhyN architecture; division of labor
- Large-scale automated distributed simulation production:
  DTF "TeraGrid" prototype: CIT, Wisconsin Condor, NCSA
  Distributed (automated) Monte Carlo Production (MOP): FNAL
- "MONARC" distributed systems modeling; simulation system applications to Grid hierarchy management:
  Site configurations, analysis model, workload for LHC
  Applications to strategy development, e.g. inter-site load balancing using a "Self-Organizing Neural Net" (SONN)
- Agent-based system architecture for distributed dynamic services

Grid-Related R&D Projects in CMS: Caltech, FNAL, UCSD, UWisc, UFL (2)
- Large-scale data query optimization: "Bit-Sliced TAGs" for data exploration (see the sketch after this list)
- Development of prototypes for object-collection extraction and delivery (SC2001; COJAC)
- Development of a network-efficient interactive remote analysis service (Java or C++ clients; C++ servers): "CLARENS"
- Development of a "Grid-Enabled Analysis Environment"
- Development of security infrastructure for managing a "Virtual Organization"
- Robust (scalable, fault-tolerant) execution service: "RES"
- High-throughput network developments; network monitoring systems in the US and at CERN
- Work with CERN External Network, SLAC IEPM, I2 HENP and I2 E2E groups
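The "Bit-Sliced TAGs" item refers to indexing compact per-event tag attributes so that selections can be evaluated with bitwise operations before any full event data is touched. Here is a minimal sketch of the general bit-sliced (bitmap-style) index idea in Python; the attribute names, binning and API are illustrative assumptions, not the CMS TAG implementation.

    from collections import defaultdict

    # Minimal sketch of a bit-sliced / bitmap index over event "TAG" attributes.
    # Each attribute is binned; each bin keeps a bitmask over event numbers, so
    # a selection becomes a few bitwise ORs and ANDs instead of an event scan.
    # Attribute names and binning are made up; candidates would still be
    # refined against the full tag values afterwards.

    class TagIndex:
        def __init__(self, bin_width: float):
            self.bin_width = bin_width
            self.bitmaps = defaultdict(int)   # (attribute, bin) -> bitmask of events
            self.n_events = 0

        def add_event(self, tags: dict) -> None:
            bit = 1 << self.n_events
            for attr, value in tags.items():
                self.bitmaps[(attr, int(value // self.bin_width))] |= bit
            self.n_events += 1

        def select(self, attr: str, lo: float, hi: float) -> int:
            """Bitmask of candidate events whose attribute bin overlaps [lo, hi]."""
            mask = 0
            for b in range(int(lo // self.bin_width), int(hi // self.bin_width) + 1):
                mask |= self.bitmaps.get((attr, b), 0)
            return mask

    # Tiny usage example with made-up jet/MET tag values (GeV):
    idx = TagIndex(bin_width=10.0)
    for jet_et, met in [(120, 35), (45, 80), (150, 60), (90, 15)]:
        idx.add_event({"jet_et": jet_et, "missing_et": met})

    hits = idx.select("jet_et", 100, 200) & idx.select("missing_et", 30, 100)
    print([i for i in range(idx.n_events) if (hits >> i) & 1])   # -> [0, 2]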
Cal-Tier2 Prototype (contd.)

California Tier2 Prototype Work Plan
- R&D on the distributed computing model: Tier2s have ~1/3 of the organized resources
- Contribute to R&D on optimization of site facilities; leverage expertise at CACR and SDSC; network and system expertise at Caltech and UCSD
- Strategies for production processing and analysis; delivery of CMS production milestones (PRS)
- Support US-based physics analysis and some data distribution among CA universities: CIT, UCD, UCLA, UCR, UCSB, UCSD
- 30 enthusiastic users; common interests: EMU, and e/Gamma (with Tracker); using our local software and systems expertise

Slide 30: [Photos/diagrams: MONARC simulation system validation; CMS proto-Tier1 production farm at FNAL; CMS farm at CERN]

Grid-enabled Data Analysis: SC2001 Demo by K. Holtman, J. Bunn (CMS/Caltech)
- Demonstration of the use of Virtual Data technology for interactive CMS physics analysis at Supercomputing 2001, Denver (Nov 2001)
- Interactive subsetting and analysis of 144,000 CMS QCD events (105 GB)
- Tier 4 workstation (Denver) gets data from two Tier 2 servers (Caltech and San Diego)
- Prototype tool showing the feasibility of these CMS computing model concepts:
  Navigates from tag data to full event data
  Transparently accesses "virtual" objects through a Grid API
  Reconstructs on demand (= virtual data materialisation)
  Integrates the object persistency layer and the Grid layer
- Peak throughput achieved: 29.1 MByte/s; 78% efficiency on 3 Fast Ethernet ports
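The 78% efficiency quoted for the SC2001 demo follows directly from the other numbers on the slide: three Fast Ethernet ports provide 3 x 100 Mbit/s, i.e. 37.5 MByte/s of raw capacity. A one-line check in Python, using only the quoted values:

    # Check of the SC2001 demo figures quoted above.
    ports, fast_ethernet_mbit_s = 3, 100.0                 # 3 Fast Ethernet ports
    capacity_mbyte_s = ports * fast_ethernet_mbit_s / 8    # = 37.5 MByte/s raw
    peak_mbyte_s = 29.1                                    # measured peak from the slide
    print(f"efficiency = {peak_mbyte_s / capacity_mbyte_s:.0%}")   # -> 78%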
COJAC: CMS ORCA Java Analysis Component (Java3D, Objectivity, JNI, Web Services)

Upcoming Grid Challenges: Global Secure Workflow Management and Optimization
- Workflow management, balancing policy versus moment-to-moment capability to complete tasks:
  Balance high levels of usage of limited resources against better turnaround times for priority jobs
  Goal-oriented, according to (yet to be developed) metrics
- Maintaining a global view of resources and system state:
  Global system monitoring, modeling, realtime tracking; feedback on the macro- and micro-scales
  Realtime error detection, redirection and recovery
- Global distributed system optimization:
  Adaptive learning: new paradigms for execution optimization and decision support
  New mechanisms; new metrics
- User-Grid interactions: the Grid-Enabled Analysis Environment; guidelines, agents

Agent-Based Distributed System: JINI Prototype (Caltech/Pakistan)
- Includes "Station Servers" (static) that host mobile "Dynamic Services"
- Servers are interconnected dynamically to form a fabric in which mobile agents travel, with a payload of physics analysis tasks
- Prototype is highly flexible and robust against network outages
- Amenable to deployment on leading-edge and future portable devices (WAP, iAppliances, etc.): "the" system for the travelling physicist
- The design and studies with this prototype use the MONARC simulator, and build on SONN studies
- See http://home.cern.ch/clegrand/lia/

Globally Scalable Monitoring Service, CMS (Caltech and Pakistan)
- [Architecture diagram: RC Monitor Service; Farm Monitor; Client (other service); Lookup Services; Registration; Discovery; Proxy; Component Factory; GUI marshaling; Code Transport; RMI data access; Push & Pull; rsh & ssh; existing scripts; SNMP]

US CMS Prototypes and Test-beds
- Tier-1 and Tier-2 prototypes and test-beds operational
- Facilities for event simulation including reconstruction; sophisticated processing for pile-up simulation
- User cluster and hosting of data samples for physics studies
- Facilities and Grid R&D: e.g. MOP; VDT; monitoring systems

Slide 37: Application Architecture: Interfacing to the Grid
- (Physicists') Application Codes
- Experiments' Software Framework Layer: modular and Grid-aware; architecture able to interact effectively with the lower layers
- Grid Applications Layer (parameters and algorithms that govern system operations): policy and priority metrics and parameters; workflow evaluation metrics, results; task-site coupling proximity metrics, results
- Global End-to-End System Services Layer: monitoring and tracking of component performance; workflow monitoring and evaluation mechanisms; error recovery and redirection mechanisms; system self-monitoring, evaluation and optimization mechanisms

MONARC SONN: 3 Regional Centres Learning to Export Jobs (Day 9), by I. Legrand
- [Simulation snapshot: NUST 20 CPUs, CERN 30 CPUs, CALTECH 25 CPUs; links of 1 MB/s at 150 ms RTT, 1.2 MB/s at 150 ms RTT, and 0.8 MB/s at 200 ms RTT; Day = 9; <E> = 0.73, 0.66, 0.83]
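The SONN result above is a learned inter-site load-balancing policy. As a much-simplified stand-in (no neural network, no learning), the sketch below in Python shows only the underlying decision: score candidate sites by queue pressure and by the network cost of shipping a job's input, and export the job to the cheapest site. The cost model, weights and site numbers are invented for illustration; the real study learned these trade-offs inside the MONARC simulator.

    from dataclasses import dataclass

    # Simplified stand-in for SONN-style inter-site job export: choose the
    # site with the lowest combined queue-pressure and data-shipping cost.
    # All weights and site figures below are made up for illustration.

    @dataclass
    class Site:
        name: str
        cpus: int
        queued_jobs: int
        mb_per_s: float     # achievable throughput from the submitting site
        rtt_ms: float

    def export_cost(site: Site, input_mb: float,
                    cpu_weight: float = 1.0, net_weight: float = 1.0) -> float:
        wait_proxy = site.queued_jobs / site.cpus                    # crude queue-pressure term
        transfer_s = input_mb / site.mb_per_s + site.rtt_ms / 1000.0  # crude transfer-time term
        return cpu_weight * wait_proxy + net_weight * transfer_s / 60.0

    def choose_site(sites, input_mb: float) -> Site:
        return min(sites, key=lambda s: export_cost(s, input_mb))

    sites = [
        Site("CERN", cpus=30, queued_jobs=45, mb_per_s=1.2, rtt_ms=150),
        Site("CALTECH", cpus=25, queued_jobs=10, mb_per_s=1.0, rtt_ms=150),
        Site("NUST", cpus=20, queued_jobs=5, mb_per_s=0.8, rtt_ms=200),
    ]
    print(choose_site(sites, input_mb=200).name)   # site chosen for a 200 MB job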
Slide 39: Focus on the Grid-Enabled Analysis Environment (GAE)
- Development of the Grid-enabled production environment is progressing, BUT most of the physicists' effort, and half of the resources, will be devoted to analysis: so focus on the "Grid-Enabled Analysis Environment" (GAE)
- This is where the real "Grid Challenges" lie:
  Use by a large, diverse community; 100s-1000s of tasks with different technical demands, priorities and security challenges
  The problem of high resource usage versus reasonable turnaround time for tasks
  Need to study and generate guidelines for users, to get work done
  Need to understand how, and how much, one can or should automate operations with Grid tools
- The GAE is where the keys to "success" or "failure" are:
  Where the physics gets done; where physicists "live"
  Where the Grid E2E Services and Grid Applications layers get built

GriPhyN: PetaScale Virtual Data Grids
- [Architecture diagram: Interactive User Tools; Production Team, Individual Investigator, Workgroups; Virtual Data Tools; Request Planning & Scheduling Tools; Request Execution & Management Tools; Transforms; Raw data source; Distributed resources (code, storage, computers, and network); Resource Management Services; Security and Policy Services; Other Grid Services]

PPDG Collaboratory Grid Pilot
- "In coordination with complementary projects in the US and Europe, this proposal is aimed at meeting the urgent needs for advanced Grid-enabled technology and strengthening the collaborative foundations of experimental particle and nuclear physics. Our research and development will focus on the missing or less developed layers in the stack of Grid middleware and on issues of end-to-end integration and adaptation to local requirements. Each experiment has its own unique set of computing challenges, giving it a vital function as a laboratory for CS experimentation. At the same time, the wide generality of the needs of the physicists for effective distributed data access, processing, analysis and remote collaboration will ensure that the Grid technology that will be developed and/or validated by this proposed collaboratory pilot will be of more general use."

Launching the GAE: Recasting a Mainstream CMS NTuple Analysis
- Strategic oversight and direction: Harvey Newman
- PPDG/CS11 Analysis Tools team leader: Julian Bunn
- Security infrastructure for managing a VO: Conrad Steenberg
- Analysis architecture and Grid integration: Koen Holtman
- Ntuple to AOD + RDBMS conversion (JETMET and general ntuple): Eric Aslakson
- Reproduction of PAW-based ntuple analysis on Tier2, timing measurements, identification of possible optimization: Edwin Soedarmadji
- ROOT version of ntuple analysis, reproduction of results, timing measurements, data access via CLARENS server: Conrad Steenberg
- RDBMS population with analysis objects from ntuple data; Web services: Julian Bunn (SQLServer), Saima Iqbal (Oracle 9i), Eric Aslakson (tools), Edwin Soedarmadji (optimisations/stored procedures)
- Tier2 operations and system support: Suresh Singh
- CMS reconstruction software and production: Rick Wilkinson, Vladimir Litvin, Suresh Singh
- Monitoring, simulation, optimization: I. Legrand (+ PK, RO groups)
- Interactive environment; object-collection prototypes: all
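One of the tasks above is populating an RDBMS with analysis objects extracted from ntuple data, so that selections run server-side. Below is a minimal sketch in Python using SQLite purely as a stand-in for the SQLServer and Oracle 9i back ends named on the slide; the table layout, column names and values are invented for illustration.

    import sqlite3

    # Minimal sketch of loading ntuple-style rows into a relational table so
    # that analysis cuts can run as SQL queries. SQLite stands in for the
    # SQLServer / Oracle 9i back ends mentioned above; the schema is invented.

    rows = [                     # (run, event, jet_et, missing_et), made-up values in GeV
        (1, 101, 120.3, 35.2),
        (1, 102, 45.0, 80.1),
        (2, 17, 150.9, 60.4),
    ]

    con = sqlite3.connect(":memory:")
    con.execute("""
        CREATE TABLE jetmet_tags (
            run INTEGER, event INTEGER,
            jet_et REAL, missing_et REAL,
            PRIMARY KEY (run, event)
        )
    """)
    con.executemany("INSERT INTO jetmet_tags VALUES (?, ?, ?, ?)", rows)

    # The analysis cut now runs in the database instead of looping over the ntuple:
    selected = con.execute(
        "SELECT run, event FROM jetmet_tags "
        "WHERE jet_et > 100 AND missing_et > 30 ORDER BY run, event"
    ).fetchall()
    print(selected)    # -> [(1, 101), (2, 17)]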
Additional Slides on CMS Work Related to Grids
- Some extra slides on CMS work on Grid-related activities, and associated issues, follow

Computing Challenges: Petabytes, Petaflops, Global VOs
- Geographical dispersion of people and resources; complexity of the detector and the LHC environment; scale: tens of Petabytes per year of data
- 5000+ physicists; 250+ institutes; 60+ countries
- Major challenges associated with: communication and collaboration at a distance; managing globally distributed computing and data resources; cooperative software development and physics analysis
- New forms of distributed systems: Data Grids

Links Required to US Labs and Transatlantic [*]
- [*] Maximum link occupancy 50% assumed; OC3 = 155 Mbps; OC12 = 622 Mbps; OC48 = 2.5 Gbps; OC192 = 10 Gbps

LHC Grid Computing Project DRAFT – High-Level Milestones
- Prototype of Hybrid Event Store (Persistency Framework)
- Hybrid Event Store available for general users
- Distributed production using Grid services
- First Global Grid Service (LCG-1) available
- Distributed end-user interactive analysis
- Full Persistency Framework
- LCG-1 reliability and performance targets
- "50% prototype" (LCG-3) available
- LHC Global Grid TDR
- [Timeline chart: Applications and Grid development tracks]

GriPhyN: There are Many Interfaces to the Grid
- "Views" of the Grid (by various "actors"): from inside a running program; by users running interactive sessions; from agent (armies) gathering and disseminating information; from operations console(s) and supervisory processes/agents; from workflow monitors and "event" handlers; from Grid software developers/debuggers; from Grid and Grid-site administrators
- Nature of "queries" to the Grid: from running processes (e.g. via DB systems); by users via a DB system and/or a query language; by Grid-instrumentation processes, for Grid operations; by agents gathering information

CMS Interfaces to the Grid
- The CMS "Data Grid", from the user's point of view, is a way to deliver processed results: object collections (REC, AOD, DPDs, etc.)
- From the Grid's own view it is a system to monitor, track, marshall and co-schedule resources (CPU, storage, networks) to deliver results and information to: users, users' batch jobs, managing processes (agents), and Grid operators and developers
- Early Grid architecture is file-based: identifying (naming) object collections, and extracting them to build a file list for transport, are CMS' responsibility
- We will have to learn how to deal with the Grid replica catalogs for data/metadata
- Grid tools will be general, not too elaborate; interfacing these tools to CARF and the ODBMS will be a CMS task
- Additional tools will be needed for data browsing, Grid progress and state tracking; perhaps some work scheduling

PPDG Deliverable Set 1: Distributed Production File Service (CS-5, CS-6, CS-2)
- Based on GDMP and Globus tools; build on existing CS collaborations with the Globus, Condor and LBL storage management teams. CMS requirements include:
- Global namespace definition for files
- Specification language and user interface to specify: a set of jobs, job parameters, input datasets, job flow, and dataset disposition (a toy example of such a specification is sketched at the end of this section)
- Pre-staging and caching of files; resource reservation (storage, CPU; perhaps network)
- HRM integration with HPSS, Enstore and Castor, using GDMP
- Globus information infrastructure, metadata catalog, digital certificates; PKI adaptation to security at HENP labs
- System language to describe distributed system assets and capabilities; match to requests; define priorities and policies
- Monitoring tools and displays to: locate datasets; track task progress, data flows, and estimated time to task completion; display site facilities' state (utilization, queues, processes) using SNMP, ...

PPDG Deliverable Set 2: Distributed Interactive Analysis Service (CS-1, CS-2)
- CMS requirements include:
- User interface for locating, accessing, processing and/or delivering (APD) files for analysis
- Tools to display systems' availability, quotas and priorities to the user
- Monitoring tools for cost estimation, tracking APD of files; request redirection
- Display of file replicas, properties, and estimated time for access or delivery
- Integration into the CMS distributed interactive user analysis environment (UAE)

PPDG Deliverable Set 3: Object-Collection Access – Extensions to CS-1, CS-2
- CMS requirements include:
- Object-collection extraction, transport and delivery
- Integration with the ODBMS
- Metadata catalog concurrent support
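Deliverable Set 1 calls for a specification language for a set of jobs, their parameters, input datasets and job flow. As a toy example only (this is not PPDG's actual Job Description Language, which CS-1 was still to define), here is a minimal sketch in Python of what a declarative job-set specification and a trivial job-flow consistency check might look like; every field name, executable name and logical file name is invented.

    from dataclasses import dataclass, field

    # Toy example of a declarative job-set specification of the kind called for
    # in PPDG Deliverable Set 1. Every field name and value here is invented.

    @dataclass
    class JobSpec:
        name: str
        executable: str
        parameters: dict
        input_datasets: list                              # logical names in a global file namespace
        output_dataset: str
        depends_on: list = field(default_factory=list)    # job-flow edges

    job_set = [
        JobSpec(
            name="simulate",
            executable="cmsim",                           # placeholder executable name
            parameters={"events": 10000, "pileup": True},
            input_datasets=["lfn:/cms/generator/qcd_sample_01"],
            output_dataset="lfn:/cms/sim/qcd_sample_01",
        ),
        JobSpec(
            name="reconstruct",
            executable="orca_reco",                       # placeholder executable name
            parameters={"calibration": "v3"},
            input_datasets=["lfn:/cms/sim/qcd_sample_01"],
            output_dataset="lfn:/cms/reco/qcd_sample_01",
            depends_on=["simulate"],
        ),
    ]

    def check_flow(jobs):
        """Trivial consistency check: every dependency names a job in the set."""
        names = {j.name for j in jobs}
        for j in jobs:
            for dep in j.depends_on:
                if dep not in names:
                    raise ValueError(f"{j.name} depends on unknown job {dep!r}")
        return True

    print(check_flow(job_set))   # -> True

A real service would hand a validated specification like this to the scheduling and placement machinery (CS-2) along with the replica catalog, rather than checking names locally.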
