Slide 1: Status of the software for PAMELA data reduction at CNAF
Massimo Bongi, 7th PAMELA Software meeting, Rome, 06 Oct 2006

Slide 2: PAMELA data reduction software
The software is installed on AFS:
- MySQL (≥ 4.1)
- cernlib (2005)
- ROOT (5.12.00e)
- RawReader
- log4cxx (0.9.7)
- yoda (6_3/11)
- YodaProfiler (CVS 03 Oct 2006)
- YodaExtractor + YodaCleaner
- DarthVader (CVS 03 Oct 2006)
- PamelaLevel2 (CVS 03 Oct 2006)
- eventviewer (v1r03)
- (JDK, ant, ant-contrib, cpptasks, QuickLook scripts)

Slide 3: AFS: What, Where, Why
- a world-wide file system that you can access as if it were a local file system
- all the software sits in a place over which we have quite easy control: /afs/
- easy to maintain
- visible from (almost) everywhere (it needs an AFS client!)
- also useful for data reduction (and analysis?) at the home institutes
- better performance than NFS? (/opt/exp_software at CNAF)
- it adds (very) little complexity to the user's life, but some help is needed here: what about file permissions and access control lists? an unavailable network? a local backup installation? compiling on the node?

Slide 4: How the software is organized
- Scientific Linux CERN release 3.0.X (SL@CNAF) and SLC 4 at CNAF (but in principle others can work fine too)
- the @sys trick avoids compiling on the worker node
- "tars" (CVS) + "installed" directories
- source set_pam_env.sh to set up everything automagically (PAMELA software directory structure, PATH, LD_LIBRARY_PATH)
- …let's have a look at the software

Slide 5: Concerning databases
- (not ui01-lcg)
- pamelaprod database created
- GL_PARAM table filled
- users (standard password): (root), pamelaprod_rw, pamelaprod_ro (just SELECT)
- accessible from outside; GRANTed access from localhost, *, * (only pamelaprod_ro) (a sample read-only connection is sketched after slide 7)
- define the best access policy? periodic backup?
- other databases? (to manage the data reduction)

Slide 6: About disk space and data
CASTOR:
- the destination for data coming from MEPHI
- the only "Grid-visible" disk space we have (now, thanks to the ui-pamela certificate, we can make file transfers automatic)
- an archive, not suitable for data reduction
- access with Grid tools (e.g. edg-gridftp-ls) from outside
- access with rf* or ns* tools (e.g. rfdir or nsls) from CNAF (both routes are sketched after slide 7)
- 2 TB: how much free space is left? buy more tapes? not very easy to deal with…
"ONLINE" disk space:
- where the data reduction takes place
- visible from (almost) every CNAF worker node, but only from inside CNAF
- 1.8 TB: not enough for the reduction (15 TB? reliable hardware?)
- …let's have a look at the (present) directory structure

Slide 7: How do we reduce data?
- do we start from RR? what about cln1? (at the moment we have part of the pre-RR and part of the post-RR data at CNAF)
- up to now I have only run some tests on a few files: start a (test) mass reduction (from yoda?)
- how do we deal with the limited disk space?
  - copy a group of files from CASTOR to ONLINE
  - reduce them with Grid jobs
  - move the Level2 output files back to CASTOR
  - Grid-copy them to the home institutions for analysis
- how do we implement the (hopefully automatic) transfers and job submission? (bash) scripts + cron jobs + a database / simple table? (a sketch follows below)
- discuss with the software developers how the software works: validation, reprocessing, different versions, etc.
- users and groups: who reduces data? can everyone create/delete files?
- file permissions: CNAF vs Grid users
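
As referenced in slide 5, a minimal sketch of a read-only connection to pamelaprod with the standard mysql command-line client; the server name is a placeholder, since the slides do not spell out the host:

    # connect as the SELECT-only user; <db-host> is a placeholder,
    # the actual server name is not given on the slides
    mysql -h <db-host> -u pamelaprod_ro -p pamelaprod \
          -e "SHOW TABLES; SELECT COUNT(*) FROM GL_PARAM;"

pamelaprod_rw works the same way for read/write access; anything structural would need the root account.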
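
The two CASTOR access routes of slide 6 side by side; the server name and the /castor/ path are placeholders, not the actual CNAF namespace:

    # from outside CNAF, with a valid Grid proxy (see slide 9):
    edg-gridftp-ls gsiftp://<castor-se>/castor/cnaf.infn.it/pamela/
    # from a CNAF machine, with the CASTOR client tools:
    nsls -l /castor/cnaf.infn.it/pamela/
    rfdir /castor/cnaf.infn.it/pamela/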
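
And the sketch promised in slide 7: one hypothetical way to drive the cycle from cron with a plain bash script. All paths and names below are placeholders, and the per-file JDLs are assumed to be prepared separately (see the example after slide 9):

    #!/bin/bash
    # hypothetical sketch of the cycle: CASTOR -> ONLINE -> Grid jobs -> CASTOR
    # run periodically from cron with "stage", "submit" or "store" as argument
    LIST=/online/pamela/todo.txt      # the "simple table": files still to reduce
    BATCH=$(head -n 10 "$LIST")       # work on a group of 10 files at a time

    case "$1" in
      stage)   # copy a group of raw files from CASTOR to the ONLINE area
        for f in $BATCH; do
          rfcp /castor/cnaf.infn.it/pamela/raw/"$f" /online/pamela/raw/"$f"
        done ;;
      submit)  # submit one Grid reduction job per file
        for f in $BATCH; do
          edg-job-submit -o jobId_list.txt jdl/reduce_"$f".jdl
        done ;;
      store)   # once edg-job-status reports Done: Level2 output back to CASTOR
        for f in $BATCH; do
          rfcp /online/pamela/level2/"$f" /castor/cnaf.infn.it/pamela/level2/"$f"
        done
        sed -i '1,10d' "$LIST" ;;     # drop the processed group from the list
    esac

A valid proxy (voms-proxy-init -voms pamela, slide 9) must already exist when the submit step runs; under cron that means a sufficiently long-lived one.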
Slide 8: Something about the Grid
Some help (actually just a summary of various web sources and my personal experience):
INSTALLATION:
- get a user certificate: INFN CA… or ask Wolfgang or Ian
- register to the PAMELA Virtual Organization: e-mail Francesco (or me??!)
- access a User Interface (install one / ask for an account / use AFS)
- copy your certificate there
Please note that accessing a UI is very simple if you use AFS: source /afs/… and your own machine turns into a Grid UI, also in Bari!

Slide 9: Deeper and deeper into the Grid…
In order to run a Grid job you have to:
- generate a proxy certificate:
    [bongi]$ voms-proxy-init -voms pamela
- write a Job Description Language file, like:
    [bongi]$ more mytest.jdl
    Executable = "/bin/echo";
    Arguments = "Ciriciao!";
    StdOutput = "std.out";
    StdError = "std.err";
    InputSandbox = "";
    OutputSandbox = {"std.out","std.err"};
- submit it:
    [bongi]$ edg-job-submit -o jobId_list.txt mytest.jdl
- hope it does not crash, and/or check on it with:
    [bongi]$ edg-job-status -i jobId_list.txt
- when done, get the output:
    [bongi]$ edg-job-get-output -i jobId_list.txt
…but of course there are also a lot of other commands!
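
To tie this back to slide 7: the toy JDL above generalizes to a reduction job. A hypothetical sketch, assuming a wrapper script reduce_file.sh (not part of this deck) that sources set_pam_env.sh and runs the reduction chain on the raw file named in Arguments:

    [bongi]$ more reduce_one.jdl
    Executable = "reduce_file.sh";
    Arguments = "<raw file name>";
    StdOutput = "std.out";
    StdError = "std.err";
    InputSandbox = {"reduce_file.sh"};
    OutputSandbox = {"std.out","std.err"};
    [bongi]$ edg-job-submit -o jobId_list.txt reduce_one.jdl

One such file per raw file is what the cron sketch after slide 7 assumes under jdl/.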
