Mini Data Challenge Status, September 10, 2012
From GlueXWiki
Latest revision as of 05:23, 1 April 2015
Started submitting jobs Friday afternoon, August 24.
Jobs
- bggen
- no input file
- create 400k events per job with bggen
- run mcsmear on hdgeant output
- write results to tape library
- about 7 hours of CPU time
- output file is about 14 GB (35 kB per event)
- hd_root
- get bggen data file from library
- run reconstruction using hd_root
- write resulting root file to tape library
- several jobs dying after exceeding the self-imposed 4 GB memory limit
- several percent dying due to an exception being thrown
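The two job types above can be summarized as command sequences. A minimal Python sketch follows; the bggen, mcsmear, hd_root, jput, and jget program names come from this page, while all file names and the staging directory are hypothetical placeholders.

```python
# Sketch of the two job types described above. The staging path and
# file naming scheme are invented for illustration.

def bggen_job(job_id, staging="/volatile/halld/dc1"):
    """Generation job: no input file; bggen creates 400k events,
    mcsmear smears the hdgeant output, the result goes to tape."""
    hdgeant_out = f"{staging}/bggen_{job_id:04d}.hddm"
    smeared = f"{staging}/bggen_{job_id:04d}_smeared.hddm"
    return [
        ["bggen"],                              # ~7 h CPU, ~14 GB output
        ["mcsmear", hdgeant_out],               # smear hdgeant output
        ["jput", smeared, "/mss/halld/dc1/"],   # write to tape library
    ]

def hd_root_job(job_id, staging="/volatile/halld/dc1"):
    """Reconstruction job: fetch the bggen file from the library,
    run hd_root, write the resulting ROOT file back to tape."""
    smeared = f"{staging}/bggen_{job_id:04d}_smeared.hddm"
    rootfile = f"{staging}/hd_root_{job_id:04d}.root"
    return [
        ["jget", f"/mss/halld/dc1/bggen_{job_id:04d}_smeared.hddm", staging],
        ["hd_root", smeared],
        ["jput", rootfile, "/mss/halld/dc1/"],
    ]

if __name__ == "__main__":
    for step in bggen_job(1) + hd_root_job(1):
        print(" ".join(step))
```

This is a dry-run sketch (it only builds the command lists); a real driver would submit each sequence to the farm.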
Status
As of September 10.
bggen
mysql> select count(*), sum(submitted), sum(output), sum(jput_submitted), sum(silo) from bggen;
+----------+----------------+-------------+---------------------+-----------+
| count(*) | sum(submitted) | sum(output) | sum(jput_submitted) | sum(silo) |
+----------+----------------+-------------+---------------------+-----------+
|     1000 |           1000 |         895 |                1000 |      1000 |
+----------+----------------+-------------+---------------------+-----------+
hd_root
mysql> select count(*), sum(submitted), sum(output), sum(jput_submitted), sum(silo) from hd_root;
+----------+----------------+-------------+---------------------+-----------+
| count(*) | sum(submitted) | sum(output) | sum(jput_submitted) | sum(silo) |
+----------+----------------+-------------+---------------------+-----------+
|     1000 |           1000 |         992 |                 992 |       998 |
+----------+----------------+-------------+---------------------+-----------+
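The status tables above follow a simple bookkeeping pattern: one row per job, with a 0/1 flag per stage, so summing each column counts how many jobs completed that stage. A minimal sqlite3 sketch of the pattern (table and column names follow the queries above; the row counts here are invented for illustration):

```python
import sqlite3

# In-memory sketch of the per-job bookkeeping table: one row per job,
# one 0/1 flag per stage. Column names match the queries above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hd_root (
    job_id INTEGER PRIMARY KEY,
    submitted INTEGER, output INTEGER,
    jput_submitted INTEGER, silo INTEGER)""")

# Pretend 10 jobs were submitted and one never produced output.
for j in range(10):
    ok = 0 if j == 3 else 1
    con.execute("INSERT INTO hd_root VALUES (?, 1, ?, ?, ?)",
                (j, ok, ok, ok))

row = con.execute("""SELECT count(*), sum(submitted), sum(output),
                            sum(jput_submitted), sum(silo)
                     FROM hd_root""").fetchone()
print(row)  # → (10, 10, 9, 9, 9)
```

Any gap between adjacent columns (e.g. sum(output) < sum(submitted)) localizes where jobs are failing or still in flight.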
Farm Usage Snapshots

(Two figures appeared here in the original page: "Farm Usage Chart" (Farm usage 2012-09-10.png) and "Volatile Disk Usage" (Disk on fire.jpg, linked to https://halldweb.jlab.org/talks/2012-3Q/disk-usage.jsp.html).)
To Do
- put staging disk and library location into config file
- finesse memory problem
- do large scale job with REST output
- analyze root files
- do REST output validation
- save farm output and error files
- check for job success
- error file check
- size of output data
- allow two output files
- add configuration lines for all steps
- add "output once existed" column