Calibration Train


Proposal

  • Use case: Streamline tape library usage and provide a common environment for production and development
    • Example: Users develop plugins on their favorite files/runs, then use the train to run over a larger data set
  • Run every week (Wednesday to avoid conflict with monitoring?)
  • Uses subset of runs
  • Users provide:
    • DANA plugin
    • Optional post-processing scripts to be run after each run and/or after all runs
      • Curated in SVN
  • Results stored in standard location(s)
    • Possible results: ROOT files, images, calibration constants, web pages
  • Uses SWIF (JLab's scientific workflow tool)? (buzzword compliance); see the driver sketch below
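
As an illustration only, here is a minimal Python sketch of what a train driver could look like, assuming the user's DANA plugin is run through hd_root and the post-processing steps are plain scripts. The run list, file paths, plugin name, script names, and hd_root options are all placeholders, and in practice the per-run jobs would presumably be handed to SWIF or the batch farm rather than run locally.

  #!/usr/bin/env python
  # Sketch of a calibration-train driver: loop over a run list, run the
  # user's DANA plugin on each run's files, then call optional
  # post-processing scripts.  Paths, names, and hd_root options below
  # are placeholders (assumptions), not an agreed interface.
  import glob
  import subprocess

  RUNS = [1514, 1769, 1777, 2931, 3079]   # subset of runs (see "Runs to Use" below)
  PLUGIN = "my_calib_plugin"              # user-supplied DANA plugin (hypothetical name)

  def input_files(run):
      """Locate the raw data files for a run; the path pattern is an assumption."""
      return sorted(glob.glob("/mss/halld/RunPeriod-2014-10/rawdata/Run%06d/*.evio" % run))

  for run in RUNS:
      files = input_files(run)
      if not files:
          continue
      # Run the plugin over this run's files.  In the real train this would
      # become a SWIF/batch job on the farm rather than a local call.
      subprocess.call(["hd_root", "-PPLUGINS=%s" % PLUGIN, "-PNTHREADS=4"] + files)
      # Optional per-run post-processing script, curated in SVN (hypothetical name).
      subprocess.call(["python", "post_process_run.py", str(run)])

  # Optional post-processing over the combined results of all runs (hypothetical name).
  subprocess.call(["python", "post_process_all.py"])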

Runs to Use

Several possibilities:

  1. Large/Popular Runs
    • Fall 2014: 1514, 1769, 1777, 1787, 1803, 1807, 1810, 1825, 1847, 1852, 1854, 1871, 1872, 2138, 2206, 2207, 2209, 2223, 2397
    • Spring 2015: 2931, 3079, 3179, 3180, 3183, 3185
  2. Other big sets of runs:
    • Fall 2014: 1516-1520, 1769-1777, 1900, 2179-2184, 2205, 2228, 2407-2409, 2416-2420
    • Look for groups of runs with similar conditions (a grouping sketch follows this list)
  3. Others
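
To make option 2 concrete, here is a small Python sketch that expands run ranges like "1516-1520" into explicit run numbers and then groups the runs by their conditions. The get_conditions() lookup is only a stand-in for whatever the monitoring database or RCDB actually provides, and the grouping key is an assumption.

  # Sketch: expand run ranges and group runs with similar conditions.
  # get_conditions() is a stand-in for a real monitoring-DB/RCDB query.
  from collections import defaultdict

  FALL_2014 = ["1516-1520", "1769-1777", "1900", "2179-2184",
               "2205", "2228", "2407-2409", "2416-2420"]

  def expand(ranges):
      """Turn strings like "1516-1520" or "1900" into a flat list of run numbers."""
      runs = []
      for r in ranges:
          if "-" in r:
              lo, hi = (int(x) for x in r.split("-"))
              runs.extend(range(lo, hi + 1))
          else:
              runs.append(int(r))
      return runs

  def get_conditions(run):
      """Placeholder: return the conditions that define 'similar' runs."""
      return ("unknown_beam", "unknown_trigger")   # would come from the monitoring DB / RCDB

  groups = defaultdict(list)
  for run in expand(FALL_2014):
      groups[get_conditions(run)].append(run)

  for conditions, runs in groups.items():
      print(conditions, runs)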


This also raises the question: what are the good runs?

One proposal:

  • Tag each run in the monitoring database with one of three values:
    • 0 - non-production quality
    • 1 - production quality
    • 2 - production quality, used for calibrations
  • The idea is that a value >= 1 means the run is good to use for physics
    • Finer-grained information can be stored in the RCDB
  • To determine production quality, develop quality metrics for each subdetector and use a combination of those metrics and an eye test (a sketch follows below)
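
As a sketch of how this flag could be assigned and used, the Python snippet below combines hypothetical per-subdetector quality metrics into the 0/1/2 value and then selects runs for physics (flag >= 1) and for calibrations (flag == 2). The metric names, thresholds, and the hand-picked calibration set are placeholders; the real decision would also involve the eye test mentioned above.

  # Sketch: assign the 0/1/2 quality flag from per-subdetector metrics and
  # select runs for physics and calibrations.  Metric names, thresholds,
  # and the hand-picked calibration set are assumptions, not agreed values.

  # Hypothetical per-subdetector quality metrics (0.0 = bad, 1.0 = perfect).
  metrics = {
      1514: {"cdc": 0.98, "fdc": 0.97, "bcal": 0.99, "fcal": 0.96},
      1769: {"cdc": 0.40, "fdc": 0.95, "bcal": 0.97, "fcal": 0.95},
  }

  CALIBRATION_RUNS = {1514}   # runs hand-picked for calibrations (placeholder)
  THRESHOLD = 0.9             # per-subdetector cut (placeholder)

  def quality_flag(run):
      """Return 0 (non-production), 1 (production), or 2 (production, used for calibrations)."""
      if any(value < THRESHOLD for value in metrics[run].values()):
          return 0
      return 2 if run in CALIBRATION_RUNS else 1

  physics_runs = [run for run in metrics if quality_flag(run) >= 1]
  calib_runs = [run for run in metrics if quality_flag(run) == 2]
  print("physics:", physics_runs)
  print("calibrations:", calib_runs)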