Mini Data Challenge Status, September 10, 2012
From GlueXWiki
Revision as of 13:16, 10 September 2012
Started submitting jobs Friday afternoon, August 24.
Jobs
- bggen
  - no input file
  - create 400 k events per job with bggen
  - run mcsmear on hdgeant output
  - write results to tape library
  - about 7 hours of CPU time
  - output file is about 14 GB (35 kB per event)
- hd_root
  - get bggen data file from tape library
  - run reconstruction using hd_root
  - write resulting ROOT file to tape library
  - several jobs dying due to exceeding self-imposed 4 GB memory limit
  - several percent dying due to an exception being thrown
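The per-job output size quoted above is easy to cross-check: 400 k events at roughly 35 kB per event should come to about 14 GB. A quick sketch of that arithmetic (the constants are taken from the list above; the helper function is illustrative only):

```python
# Sanity check on the bggen job numbers quoted above:
# 400 k events/job at ~35 kB/event should give ~14 GB/job.
EVENTS_PER_JOB = 400_000
KB_PER_EVENT = 35

def expected_output_gb(events: int, kb_per_event: int) -> float:
    """Return the expected output file size in GB (using 1 GB = 10**6 kB)."""
    return events * kb_per_event / 1_000_000

size_gb = expected_output_gb(EVENTS_PER_JOB, KB_PER_EVENT)
print(size_gb)  # 14.0
```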
Status
As of September 10.
bggen
mysql> select count(*), sum(submitted), sum(output), sum(jput_submitted), sum(silo) from bggen;
+----------+----------------+-------------+---------------------+-----------+
| count(*) | sum(submitted) | sum(output) | sum(jput_submitted) | sum(silo) |
+----------+----------------+-------------+---------------------+-----------+
|     1000 |           1000 |         895 |                1000 |      1000 |
+----------+----------------+-------------+---------------------+-----------+
hd_root
mysql> select count(*), sum(submitted), sum(output), sum(jput_submitted), sum(silo) from hd_root;
+----------+----------------+-------------+---------------------+-----------+
| count(*) | sum(submitted) | sum(output) | sum(jput_submitted) | sum(silo) |
+----------+----------------+-------------+---------------------+-----------+
|     1000 |           1000 |         992 |                 992 |       998 |
+----------+----------------+-------------+---------------------+-----------+
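The bookkeeping queries above can be reproduced against any SQL backend with the same per-job flag columns. A minimal sketch using an in-memory SQLite table in place of the actual MySQL database; the table name and column names mirror the query above, but the one-row-per-job schema here is an assumption:

```python
import sqlite3

# Stand-in for the MySQL job-tracking table: one row per job, with 0/1
# flags matching the columns used in the status queries above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE bggen (
    submitted INTEGER, output INTEGER, jput_submitted INTEGER, silo INTEGER)""")

# Populate to match the bggen snapshot: 1000 jobs submitted, 895 with
# output so far, all with jput submitted and on the silo.
con.executemany("INSERT INTO bggen VALUES (1, ?, 1, 1)",
                [(1,)] * 895 + [(0,)] * 105)

row = con.execute("""SELECT count(*), sum(submitted), sum(output),
                            sum(jput_submitted), sum(silo) FROM bggen""").fetchone()
print(row)  # (1000, 1000, 895, 1000, 1000)
```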
Farm Usage Snapshots
[[File:Farm usage 2012-09-10.png|thumb|Farm Usage Chart]]
[[File:Disk on fire.jpg|thumb|link=https://halldweb1.jlab.org/talks/2012-3Q/disk-usage.jsp.html|Volatile Disk Usage]]
To Do
- put staging disk and library location into config file
- finesse memory problem
- do large scale job with REST output
- analyze ROOT files
- do REST output validation
- save farm output and error files
- check for job success
  - error file check
  - size of output data
- allow two output files
- add configuration lines for all steps
- add "output once existed" column
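The planned job-success check (empty error file, output file of plausible size) could look something like the sketch below. The paths, function name, and size threshold are hypothetical placeholders, not the production values:

```python
import os
import tempfile

def job_succeeded(err_path: str, out_path: str, min_bytes: int) -> bool:
    """Sketch of the planned per-job success check: a missing or empty
    error file, plus an output file of at least the expected size."""
    # Error file check: a non-empty stderr capture counts as failure.
    if os.path.exists(err_path) and os.path.getsize(err_path) > 0:
        return False
    # Size-of-output-data check.
    return os.path.exists(out_path) and os.path.getsize(out_path) >= min_bytes

# Demo with throwaway files standing in for the saved farm output/error files.
with tempfile.TemporaryDirectory() as d:
    out = os.path.join(d, "job.root")
    with open(out, "wb") as f:
        f.write(b"x" * 1024)
    print(job_succeeded(os.path.join(d, "job.err"), out, 1024))  # True
```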