LUCIFER ACON Workshop May 2006 Ottawa Sergi


Published on October 16, 2007

Author: Arkwright26


Slide 1: Sergi Figuerola, ACON Workshop, May 11th 2006, Ottawa, CA
LUCIFER: Lambda User Controlled Infrastructure For European Research

Slide 2: Disclaimer
The name "LUCIFER" is not intended to have any religious meaning in the context of the present proposal; it is used with its original meaning in Latin. From the dictionary: Lu·ci·fer (lū'sə-fər) [Lat., = light-bearing] [Middle English, from Old English, morning star, from Latin Lūcifer, from lūcifer, light-bringer: lūx, lūcis, light + -fer]. The word comes from the Latin "lucifer", a compound of "lux, lucis" (light) and the suffix "-fer", from the irregular verb "fero, fers, tuli, latum, ferre" (to bring, carry, bear), so it simply means "light bearer / carrier / bringer". Based on this etymology, the name LUCIFER has been deemed appropriate for a project dealing with fully optical networks and fibres: a fibre, after all, is a "light carrier".

Lambda User Controlled Infrastructure For European Research
- EU Research Networking Test-beds, FP6 IST programme
- 30-month project, to begin in 3Q 2006
- Partners and supporters:
  - Research networks: CESNET, PSNC, SURFnet, FCCN, RedIRIS, GARR, GN2, CANARIE
  - National test-beds: Viola, OptiCAT, UKLight
  - Equipment manufacturers: Adva, Hitachi, Nortel
  - Tech SMEs: Nextworks
  - Research and academic institutions: RESIT - AIT, Fraunhofer SCAI, Fraunhofer IMK, Fundació i2CAT, IBBT, Research Centre Jülich, University of Amsterdam, University of Bonn, University of Essex, University of Wales-Swansea, SARA
  - Non-EU research institutes: MCNC (US), CCT (US), CRC (Canada), UCSD (US)

Vision & Mission
- Address some of the key technical challenges that enable on-demand, end-to-end Grid network services across multiple domains
- Treat the underlying network as a first-class Grid resource
- Demonstrate solutions and functionalities across a test-bed involving GÉANT2, European NRENs, CBDF and GLIF facilities
- Demonstrate on-demand service delivery across an access-independent, multi-domain/multi-vendor research network test-bed on a European and international scale

LUCIFER in the overall picture
LUCIFER sits between the network layer and the Grid layer, alongside test-beds and research infrastructures such as NOBEL and NOBEL-II, GridCC, MUPBED, GÉANT, GÉANT2, EUMEDconnect, SEEREN2, CBDF and GLIF. LUCIFER will interact with:
- GN2 (GN2 JRA3, JRA1 & JRA5)
- International activities: DRAGON, EnLIGHTened, UCLPv2
- Possible relationships with other EU projects:
  - Focused on network-layer technologies: NOBEL 1 & 2, EuQoS
  - Focused on the Grid layer: EGEE-II, GridCC
  - Test-bed oriented: MUPBED

The steps forward
- Diversity of transport infrastructures (SDH, SONET, GE, dark fibre), of network resource provisioning systems / control planes, and of demanding requirements from advanced users (e.g. the Grid community)
- Need to find a leading overall architecture to address this EU-specific environment
- GN2 JRA3 & JRA5 work to provide an operational solution for this: a pan-European umbrella for a single control and management facility for network resource provisioning (Bandwidth-on-Demand, inter-domain operations, AAI: Authentication and Authorization Infrastructure), which tends to preserve the specificity of the network resource provisioning systems within the various NRENs
- LUCIFER will define, implement and assess a new, integrated architecture for NRENs' resource provisioning (based on both the Control Plane and NRPSs), for Grid-specific network services, involving additional choices of experimental network facilities (CBDF, GLIF)

The System Chain
- Phase I: Grid Application, Grid Middleware, NRPS, OUNI, GMPLS, Optical Network, Grid Resource
- Phase II: Grid Application, Grid Middleware, NRPS, G-OUNI, G²MPLS, Optical Network, Grid Resource
This solution will be finalized progressively during the project: starting from existing Grid applications, middleware, NRPSs and NCPs, we will develop an end-to-end user-controlled environment over heterogeneous infrastructure, deploying two mutually unaware layers (i.e. Grid and network). The G²MPLS Control Plane is the evolution of this approach, making the NCP Grid-aware. LUCIFER will provide GMPLS and G²MPLS Control Plane prototypes to be attached to the commercial equipment at the NRENs.

Technical scope and rationale
- The network should support generic transport services for both Grids and less demanding users, but with special care for Grids
- Network and Grid-specific computational resources are controlled and set up at the same time and with the same priority, with a set of seamlessly integrated procedures
- The Service and Control Planes (Grid middleware, NRPS, GMPLS / G²MPLS) will be integrated in a hierarchy of architectures that interwork to build the GNS (GGF-GHPN)
- Optical test-bed at the Optical Network Layer (G²MPLS, inter-NRPS communications) and at the Grid Layer (middleware extensions, APIs and policies), EU-wide and spanning to the US and Canada
- Three-layer perspective: Application Service Plane (Grids), Network Resource Provisioning Plane (DRAC, UCLP, ARGON), Network Control Plane (Grid-GMPLS, G²MPLS)

The LUCIFER Project Key Features/Objectives I
Develop integration between application middleware and transport networks, based on three planes:
- Service plane:
  - Middleware extensions and APIs to expose and reserve network and Grid resources
  - Policy mechanisms (AAA) for networks participating in a global hybrid network infrastructure, allowing both network resource owners and applications to have a stake in the decision to allocate specific network resources
- Network Resource Provisioning plane:
  - Adaptation of existing Network Resource Provisioning Systems (NRPS) to support the framework of the project (UCLP, DRAC, ARGON)
  - Implementation of interfaces between different NRPSs to allow multi-domain interoperability with LUCIFER's resource reservation system
- Control plane:
  - Enhancements of the GMPLS Control Plane (G²MPLS) to provide optical network resources as a first-class Grid resource
  - Interworking of GMPLS-controlled network domains with NRPS-based domains, i.e. interoperability between G²MPLS and UCLP, DRAC and ARGON

The LUCIFER Project Key Features/Objectives II
Studies to investigate and evaluate the project outcomes further:
- Study resource management and job scheduling algorithms incorporating network-awareness, constraint-based routing and advance reservation techniques
- Develop a simulation environment supporting the LUCIFER network scenario
- Disseminate the project experience, outcomes, toolkits and middleware to EU NRENs and their users, such as supercomputing centres

Integrated Mechanism for Grid Resource Brokering, Phase 2
The integrated approach:
- Network resources must be treated as a "first-class" Grid resource, in the same way as storage and processing resources
- New approach to control and network architectures
- GMPLS signalling can be extended for Grid resources (G²MPLS): extending GMPLS signalling to accommodate Grid information in the exchanged messages is feasible

Provide a New Mechanism for Grid Resource Brokering
Assumptions:
- A direct connection between the Grid (applications and resources) and the optical network is made through the Grid Optical User Network Interface (G-OUNI), which is implemented on a Grid edge device.
- The Grid info system is integrated with the GMPLS control plane (G²MPLS), which contains information about the optical network resources. As a result, the discovery and selection process manages "traditional" compute, storage, etc. resources/services as well as optical network resources.
- The Grid edge device initiates and performs the coordinated establishment of the chosen optical path and the Grid cluster.
Actions:
1. The Grid client submits its service request to the Grid middleware, which processes it and forwards it to the Grid edge device.
2. The Grid edge device requests a connection between the Grid client and a Grid cluster through the Optical Control Plane.
3. The Optical Control Plane performs discovery of Grid resources coupled with optical network resources and returns the results, with their associated costs, to the Grid broker.
4. The Grid broker chooses the most suitable resource, and a light-path is set up using GMPLS signalling.

LUCIFER Architecture: Integration & interoperation of architectures
[Architecture diagram: two LUCIFER test-beds spanning five layers: Grid Application Layer, Grid Middleware Layer (MW services), NRPS Layer, (G-)GMPLS Layer and Optical Transport Layer; interfaces include (G.)O-UNI, (G.)E-NNI, east/west and north interfaces, and resource configuration via SNMP, TL1 and CLI.]

An overview of the LUCIFER test-bed (EU scope)
[Map of the EU-scope test-bed.]

International extensions
[Map of the international extensions.]

Initial Grid applications
- WISDOM (Wide In Silico Docking On Malaria): large-scale molecular docking on malaria, computing millions of compounds with different software and parameter settings (in silico experimentation). The goal within LUCIFER is the deployment of a CPU-intensive application generating large data flows to test the Grid infrastructure and the compute and network services.
- KoDaVis (distributed visualisation): adapt KoDaVis to the LUCIFER environment to make scheduled synchronous reservations of its resources via the UNICORE middleware: compute capacity on the data server and the visualisation clients, and network bandwidth and QoS between server and clients.
- Streaming of ultra-high-resolution data sets over lambda networks (FHG, SARA)
- Distributed data storage systems (PSNC, HEL, FZJ, FHG)

For any further details, please feel free to contact: Sergi Figuerola, Artur Binczewski (Project Leader). Thanks
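The four-step brokering flow above can be sketched in a few lines of code. This is only an illustrative toy, not LUCIFER's actual middleware, NRPS or G²MPLS interfaces: the `Candidate`, `discover_candidates` and `broker_request` names, and the cost values, are all hypothetical.

```python
# Illustrative sketch of the brokering Actions described above.
# All names and values are hypothetical; the real discovery would be
# performed by the Optical Control Plane, not hard-coded lists.

from dataclasses import dataclass


@dataclass
class Candidate:
    """A Grid cluster paired with an optical path, as returned by
    the (hypothetical) Optical Control Plane discovery (step 3)."""
    cluster: str
    light_path: str
    cost: float  # combined compute + network cost


def discover_candidates():
    # Step 3: discovery returns Grid resources coupled with optical
    # network resources and their associated costs.
    return [
        Candidate("cluster-A", "path-1", cost=7.0),
        Candidate("cluster-B", "path-2", cost=4.5),
        Candidate("cluster-C", "path-3", cost=9.2),
    ]


def broker_request(candidates):
    # Step 4: the Grid broker picks the most suitable candidate
    # (here simply the cheapest); in LUCIFER the chosen light-path
    # would then be set up via GMPLS signalling.
    return min(candidates, key=lambda c: c.cost)


if __name__ == "__main__":
    chosen = broker_request(discover_candidates())
    print(f"selected {chosen.cluster} over {chosen.light_path}")
```

The point of the sketch is the division of labour: discovery couples compute and network resources into joint candidates with a single cost, so the broker's choice (step 4) automatically selects the light-path along with the cluster.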
