IHEP in EGEE ver4


Published on September 27, 2007

Author: Danielle



Slide 1: Participation of IHEP in EGEE Project
Vadim Petukhov (IHEP, Protvino, Russia)
E-mail:

Slide 2: Talk Outline
IHEP within the RDIG Consortium;
EGEE activities in IHEP;
Russian CIC, ROC and RCs relation schema;
Existing resources (farms, storage, links);
Usage of resources;
Software (middleware versions);
Requirements for the Russian Tier2 to support the LHC experiments.

Slide 3: IHEP within RDIG
The RDIG Consortium (Russian Data Intensive GRID) was formed to ensure full-scale participation of Russia in the EGEE project. It is also planned to involve Russian organizations from various areas of science and education in the EGEE project. The Institute of High Energy Physics (IHEP) is one of the 8 institutes comprising the consortium and is the largest particle accelerator centre in Russia, with about 2,500 personnel. 20 specialists are involved in the EGEE project, with overall activity of about 8 FTE.

Slide 4: IHEP Activity in EGEE Project
NA2 – Dissemination and Outreach (5 specialists)
NA3 – User Training and Induction (8 specialists)
NA4 – Application Identification and Support (3 specialists)
SA1 – European Grid Operations, Support and Management (17 specialists)

Activity   NA2    NA3    NA4    SA1    Total
FTE        0.53   1.35   0.8    5.25   7.93

Slide 5: Activity NA2
The main areas of dissemination at IHEP are:
Translation of the main EGEE documentation into Russian;
Preparation and support of the Web site.

Slide 6: Activity NA2 (cont.)
Attracting Russian scientific organizations to EGEE (especially from the Ministry for Atomic Energy);
Support of mailing lists and web-based collaborative tools;
Organizing meetings and workshops: Workshop "GRID-EGEE infrastructure in Russia for support of scientific research", Protvino, 17–19 January 2005.

Slide 7: Activity NA3
IHEP has a leading role in the NA3 activity in RDIG:
Organizing training of users in GRID software;
Preparation of training courses and materials;
Maintenance of the distributed courses.
Elena Slabospitskaya is the leader of the NA3 activity in RDIG.

Slide 8: Activity NA4
The main areas of IHEP in the NA4 activity are:
Management of the ATLAS VO in Russia;
Support of the ALICE, CMS and LHCb pilot applications;
Preparation for Data Challenges (DCs);
Testing GRID software against the specific targets of the various experiments;
Discussion and working out of decisions on the resources required.

Application SW
ALICE: VO-alice-ALIROOT-v4-03-04; VO-alice-AliEn-1.33.15; VO-alice-ALIROOT-CVS HEAD; ROOT V5-02-00
ATLAS: VO-atlas-release-10.0.1; VO-atlas-lcg-release-0.0.3; VO-atlas-release-9.0.4; ROOT V5-03-01
LHCB: VO-lhcb-Gauss-v15r13; VO-lhcb-Gauss-v19r4; VO-lhcb-Gaudi-v15r3; VO-lhcb-Gaudi-v15r5; VO-lhcb-DaVinchi-v12r4; VO-lhcb-DaVinchi-v12r11; VO-lhcb-RTTC-v1; VO-lhcb-Boole-v8r4
CMS: VO-cms-CMKIN_4_2_0_dar
DTEAM: teamd-testingthesite1

CIC, ROC and RCs relation for SA1 in Russia
OMC – Operations Management Centre
dCIC – distributed Core Infrastructure Centre
dROC – distributed Regional Operations Centre
RC – Resource Centre

Slide 11: SA1/ROC
The EGEE-RDIG federation runs a distributed ROC, focused on middleware deployment and runtime support for the sites in the region. The middleware repository for the RDIG resource centres has been created and is supported. This repository is located at IHEP and includes a CVS (Concurrent Versioning System) deployment area where all the resource centres can store and retrieve their configuration files, as is done in LCG-2.

Slide 12: A list of informative notifications for the administrators of RCs is maintained.
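As a sketch of the CVS workflow described for the deployment area (the repository root and module names below are hypothetical illustrations, not taken from the talk), a resource centre might store and retrieve its configuration files like this:

```python
import subprocess

# Hypothetical CVS root for the RDIG deployment area; the real repository
# location at IHEP is not given in the talk.
CVSROOT = ":pserver:rc_admin@cvs.example.org:/cvs/rdig"

def cvs_cmd(*args):
    """Build the argv for a CVS command against the deployment area."""
    return ["cvs", "-d", CVSROOT, *args]

def cvs(*args):
    """Run the command; requires a CVS client and network access."""
    return subprocess.run(cvs_cmd(*args), capture_output=True,
                          text=True, check=True).stdout

# A resource centre would retrieve its configuration module with
#   cvs("checkout", "site-ihep/config")
# edit the files locally, and then store the new revision with
#   cvs("commit", "-m", "update CE queue limits", "site-ihep/config")
```

The point of the central deployment area is that every resource centre's configuration is versioned in one place, so a broken change can be rolled back to any earlier revision.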
Slide 13: SA1/ROC
For participation in the Pre-Production Service (PPS), a cluster consisting of seven computers was created.

Slide 14: Resources in IHEP for EGEE/LCG

Slide 15: EGEE resources in IHEP
Computing resources: 55 KSI2K
Disks: 5 TB
Tapes: 5 TB
External link: 100 Mb/s

Slide 18: Software status in IHEP

Slide 19: It is planned in 2006 (under the agreement with Gaztelecom) to organize a channel using Gigabit Ethernet technology and to lease so-called "dark" optical fibres to extend the IHEP optical cable to the international communication node in Moscow.

Slide 20: Link IHEP – Internet load

Slide 21: Russian Tier2 Cluster
A distributed Tier2 aggregates the resources of the constituent institutes to provide a significant amount of resources. While each site alone may not have the manpower to develop the expertise to run a production service, pooling technical knowledge and support makes this possible. Because the resources are aggregated, if one site encounters problems and has to go offline, the Tier2 as a whole can still provide a production-quality service.
Participating institutes:
Moscow: ITEP, SINP MSU, RRC KI
Moscow region: JINR, IHEP
St. Petersburg: PNPI RAS
The Russian Tier2 Cluster is planned to be connected to the CERN Tier1 Centre.

Slide 22: Russian Tier2 Cluster Planning
Table 1. Resources requested by the experiments. The resources indicated are assumed to be available for use by October of each year.

            CPU (KSI2K)                Disk, usable (TB)          Tape, active (TB)
Year  ALICE  ATLAS   CMS  LHCb    ALICE  ATLAS   CMS  LHCb    ALICE  ATLAS   CMS  LHCb
2006    573    370   800   400      130    160   200   100       87     60   300     –
2007   1000    711  1250  1000      251    320   310   300      163    122   460   100
2008   1319   1296  1625  1500      456    600   410   440      345    332   670   380
2009   1673   2396  2000  2000      844   1000   670   820      696    482  1450   360
2010   2173   3496  3250  2500     1344   1800   970  1200     1246    682  2000   440

Summary parameters of the Russian Tier2 Cluster

Slide 24: Some general requirements for the Cluster architecture
The farm has to have an integrated, uniform hardware structure. One farm is usually shared by several VOs, and the corresponding restructuring of CPU resources can be done by means of the local scheduler. Scalability: the farm resources grow steadily in time, and each extension of the farm should not require a major rearrangement. Flexibility, reliability of critical elements and, of course, price/performance are crucial points.
The main elements of the farm are the following:
A farm building block is 20 WNs attached to an Ethernet switch with 24 1 Gb/s ports. One 1 Gb/s port is "external", i.e. it is used for the connection with the MSS. There are some SEs with I/O Ethernet lines of 1 Gb/s each. To unite all these elements into a whole, an environment with 40–160 Gb/s throughput is needed; it should be able to connect each incoming line to all the others. This commutator can be constructed on the basis of stackable Ethernet switches.

Slide 25: A proposal on building blocks for the RU-Tier2 Cluster

Slide 26: Thanks for your attention.

Slide 32: Activity SA1
The main areas of IHEP in the SA1 activity are:
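The per-experiment requests in Table 1 can be summed to see the aggregate capacity the Tier2 cluster must provide in a given year; a minimal sketch, with the 2008 row transcribed from the table:

```python
# Per-experiment resource requests for 2008, transcribed from Table 1.
cpu_ksi2k_2008 = {"ALICE": 1319, "ATLAS": 1296, "CMS": 1625, "LHCb": 1500}
disk_tb_2008   = {"ALICE": 456,  "ATLAS": 600,  "CMS": 410,  "LHCb": 440}
tape_tb_2008   = {"ALICE": 345,  "ATLAS": 332,  "CMS": 670,  "LHCb": 380}

# Aggregate capacity the distributed cluster must provide in 2008.
total_cpu  = sum(cpu_ksi2k_2008.values())
total_disk = sum(disk_tb_2008.values())
total_tape = sum(tape_tb_2008.values())

print(total_cpu, total_disk, total_tape)  # 5740 1906 1727
```

So by October 2008 the cluster as a whole needs roughly 5.7 MSI2K of CPU, 1.9 PB of usable disk and 1.7 PB of active tape for the four LHC experiments.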
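The building-block arithmetic of slide 24 can be sketched as follows. The block layout (20 WNs on a 24-port switch with one external 1 Gb/s uplink) is taken from the slide; the farm size and the number of SE lines used in the example are illustrative assumptions, not figures from the talk:

```python
import math

WN_PER_BLOCK = 20   # worker nodes per building block (slide 24)
PORT_GBPS = 1       # each WN port and each uplink runs at 1 Gb/s
SWITCH_PORTS = 24   # ports per block switch: 20 WNs + 1 external uplink, 3 spare

def blocks_needed(worker_nodes):
    """Number of building blocks required for a given farm size."""
    return math.ceil(worker_nodes / WN_PER_BLOCK)

def backbone_gbps(worker_nodes, se_lines):
    """Throughput the central commutator needs if every block uplink and
    every SE line can talk to all the others at full rate."""
    lines = blocks_needed(worker_nodes) + se_lines
    return lines * PORT_GBPS

# Illustrative farm: 800 WNs and 10 SE lines -> 40 blocks and a 50 Gb/s
# backbone, inside the 40-160 Gb/s range quoted on the slide.
print(blocks_needed(800), backbone_gbps(800, 10))  # 40 50
```

Note the 20:1 oversubscription at each block: 20 WNs at 1 Gb/s share a single 1 Gb/s external uplink, which is what keeps the backbone requirement in the tens rather than hundreds of Gb/s.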
