Offline Monitoring of Archived Data
Set Up the Software & Environment
1) Update the version XML file to indicate which versions of the software will be used. For hdds & sim-recon, set the names to those of the tags that you will create in a later step below:
~/builds/version_monitoring_launch.xml
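A fragment of this file might look like the following. This is a hypothetical sketch: the gversions/package element and attribute names are assumptions based on common GlueX build conventions, so match whatever structure your copy of the file already uses.

```xml
<gversions>
  <!-- Hypothetical entries: point hdds & sim-recon at the tags you will create -->
  <package name="hdds"      version="offmon-2018_01-ver02"/>
  <package name="sim-recon" version="offmon-2018_01-ver02"/>
</gversions>
```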
2) Source the environment. This overrides the HDDS and sim-recon settings in the version*.xml file, using the monitoring-launch working-area builds instead. Call:
source ~/env_monitoring_launch
3) Updating & building hdds:
cd $HDDS_HOME
git pull          # Get latest software
scons -c install  # Clean out the old install: EXTREMELY IMPORTANT for cleaning out stale headers
scons install -j4 # Rebuild and re-install with 4 threads
4) Updating & building sim-recon:
cd $HALLD_HOME/src
git pull          # Get latest software
scons -c install  # Clean out the old install: EXTREMELY IMPORTANT for cleaning out stale headers
scons install -j4 # Rebuild and re-install with 4 threads
5) Updating RCDB (used for run-number queries during job submission):
cd $RCDB_HOME/
git pull
6) Create a new sqlite file containing the very latest calibration constants. The original documentation on creating sqlite files is here.
cd $GLUEX_MYTOP/../sqlite/
$CCDB_HOME/scripts/mysql2sqlite/mysql2sqlite.sh -hhallddb.jlab.org -uccdb_user ccdb | sqlite3 ccdb.sqlite
mv ccdb.sqlite ccdb_monitoring_launch.sqlite
7) Tag the software, where "<type>" below is either "offmon" for offline monitoring launches, or "recon" for full reconstruction launches:
cd $HALLD_HOME/src/
git tag -a <type>-201Y_MM-verVV -m "Used for offline monitoring 201Y-MM verVV started on 201y/mm/dd"
git push origin <type>-201Y_MM-verVV
cd $HDDS_HOME/
git tag -a <type>-201Y_MM-verVV -m "Used for offline monitoring 201Y-MM verVV started on 201y/mm/dd"
git push origin <type>-201Y_MM-verVV
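As a concrete (hypothetical) example of the naming pattern, the tag for a 2018-01 monitoring launch, version 02, would be built as follows; keeping the name in a variable keeps the sim-recon and hdds tag commands in sync:

```shell
# Hypothetical values -- substitute your own run period and version
TYPE=offmon      # "offmon" or "recon"
PERIOD=2018_01   # 201Y_MM
VER=02           # VV
TAG="${TYPE}-${PERIOD}-ver${VER}"
echo "${TAG}"    # offmon-2018_01-ver02
```

The same $TAG is then passed to both git tag -a and git push origin in each repository.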
8) Check out (or svn update) the launch scripts if needed:
cd ~/
svn co https://halldsvn.jlab.org/repos/trunk/scripts/monitoring/
Prepare and Do the Launch
1) Update the appropriate job config file, depending on the type of launch (e.g. jobs_offmon.config for monitoring launches, jobs_recon.config for full reconstruction). Be sure to update RUNPERIOD, VERSION, and, if jobs are submitted in batches, BATCH.
~/monitoring/launch/jobs_offmon.config
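The relevant entries might look something like the following. This is a hypothetical excerpt: the key names are the ones called out above, but the values and layout are illustrative only, so follow the format already present in your copy of the file.

```
# Hypothetical excerpt from jobs_offmon.config (values are examples)
RUNPERIOD  2016-02
VERSION    05
BATCH      1
```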
2) Update the appropriate jana config file, depending on the type of launch (e.g. jana_offmon.config for monitoring launches, jana_recon.config for full reconstruction). This contains the command-line arguments given to JANA. Be sure to update REST:DATAVERSIONSTRING and JANA_CALIB_CONTEXT.
~/monitoring/launch/jana_offmon.config
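The two parameters called out above might look something like this. This is a hypothetical excerpt: the parameter names come from the text, but the values are illustrative only, so follow the conventions already present in your copy of the file.

```
# Hypothetical excerpt from jana_offmon.config (values are illustrative)
REST:DATAVERSIONSTRING  offmon_RunPeriod-2016-02_ver05
JANA_CALIB_CONTEXT      "variation=default calibtime=2016-06-01-00-00-01"
```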
3) Create the SWIF workflow. The workflow should have a name like "offmon_2016-02_ver05" for monitoring launches and "recon_2016-02_ver01_batch1" for full reconstruction launches. It should also match the workflow name in the job config file (e.g. jobs_offmon.config).
swif create -workflow <my_workflow>
4) Back up the software versions & the appropriate jana config file, using names appropriate for the launch, e.g.:
cp ~/monitoring/launch/jana_recon.config /group/halld/data_monitoring/run_conditions/RunPeriod-2016-02/jana_recon_2016_02_ver01.config
cp ~/builds/version_monitoring_launch.xml /group/halld/data_monitoring/run_conditions/RunPeriod-2016-02/version_recon_2016_02_ver01.xml
5) Register jobs for the workflow, where <job_config_file> is (e.g.) "~/monitoring/launch/jobs_offmon.config":
~/monitoring/launch/launch.py <job_config_file> <run_min> <run_max>
You can optionally specify specific file numbers to use. For example, to submit jobs for the first 5 files of each run:
~/monitoring/launch/launch.py <job_config_file> <run_min> <run_max> -f '00[0-4]'
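Assuming the -f argument is matched as a shell-style glob against the file-number field of each input file name (as the example above suggests), the pattern '00[0-4]' selects file numbers 000 through 004. A small illustration of how such a pattern partitions file numbers:

```shell
# Illustrative only: which file numbers a glob like '00[0-4]' would select
MATCHED=""
for f in 000 001 002 004 005 010; do
  case "$f" in
    00[0-4]) MATCHED="$MATCHED $f" ;;  # would be submitted
  esac
done
echo "matched:$MATCHED"   # matched: 000 001 002 004
```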