Calibration Train
Revision as of 18:11, 19 May 2015

Proposal

  • Use case: Streamline tape library usage, provide common environment for production and development
    • Example: users develop plugins on their favorite files/runs, then use the train to run them over a larger data set (see the driver sketch after this list)
  • Run every week (Wednesday to avoid conflict with monitoring?)
  • Uses subset of runs
  • Users provide:
    • DANA plugin
    • Optional post-processing scripts, run after each run and/or after all runs finish
      • Curated in SVN
  • Results stored in standard location(s)
    • Possible results: ROOT files, images, calibration constants, web pages
  • Uses SWIF? (buzzword compliance)
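
A minimal sketch of how a weekly train driver could assemble one job per run, assuming the user's DANA plugin is loaded through hd_root with -PPLUGINS (the file locations, plugin name, and the direct execution step are placeholders; a real train would hand the jobs to SWIF or the batch farm):

#!/usr/bin/env python
# Sketch of a weekly train driver (illustrative only).
# Assumptions: the user's DANA plugin is loaded via hd_root -PPLUGINS=...,
# and the input location below is a placeholder, not a real tape path.

import subprocess

INPUT_TEMPLATE = "/path/to/rawdata/Run{run:06d}/file.evio"   # placeholder

def build_job(run, plugins):
    """Return the command line for one run of the train."""
    infile = INPUT_TEMPLATE.format(run=run)
    return ["hd_root", "-PPLUGINS=" + ",".join(plugins), infile]

def run_train(runs, plugins, dry_run=True):
    """Build one job per run; print them, or execute directly.
    SWIF/batch submission would replace the direct call in a real train."""
    for run in runs:
        cmd = build_job(run, plugins)
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)

if __name__ == "__main__":
    # Example: a few of the "large/popular" Fall 2014 runs, one user plugin
    run_train([1514, 1769, 1777], ["my_calib_plugin"], dry_run=True)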

Runs to Use

Several possibilities:

  1. Large/Popular Runs
    • Fall 2014: 1514, 1769, 1777, 1787, 1803, 1807, 1810, 1825, 1847, 1852, 1854, 1871, 1872, 2138, 2206, 2207, 2209, 2223, 2397
    • Spring 2015: 2931, 3079, 3179, 3180, 3183, 3185
  2. Other big sets of runs:
    • Fall 2014: 1516-1520, 1769-1777, 1900, 2179-2184, 2205, 2228, 2407-2409, 2416-2420
    • Look for groups of runs with similar conditions (a helper for expanding these ranges is sketched after this list)
  3. Others
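
For reference, a small helper like the one below (illustrative only; the range string is the Fall 2014 set listed above) could expand the range notation into explicit run numbers before the train picks them up; grouping by similar conditions would then be done against RCDB:

# Expand run-range strings like "1516-1520" into explicit run numbers.
# Grouping runs by similar conditions would be layered on top of this,
# e.g. by querying the run-conditions database (RCDB), not shown here.

def expand_runs(spec):
    """Turn a comma-separated list of runs/ranges into a sorted list of ints."""
    runs = set()
    for token in spec.replace(" ", "").split(","):
        if "-" in token:
            lo, hi = (int(x) for x in token.split("-"))
            runs.update(range(lo, hi + 1))
        else:
            runs.add(int(token))
    return sorted(runs)

if __name__ == "__main__":
    fall_2014 = "1516-1520, 1769-1777, 1900, 2179-2184, 2205, 2228, 2407-2409, 2416-2420"
    print(len(expand_runs(fall_2014)), "runs:", expand_runs(fall_2014))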


This also raises the question: what are the good runs?

One proposal:

  • Tag each run in monitoring database with one of three values:
    1. 0 - non-production quality
    2. 1 - production quality
    3. 2 - production quality, used for calibrations
  • Idea is that a flag ≥ 1 (i.e. 1 or 2) means the run is good to use for physics
    • Finer grained information can be stored in RCDB
  • To determine production quality, develop quality metrics for each subdetector and use a combination of those metrics and an eye test (a sketch of the flag logic follows below)
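
A minimal sketch of the proposed flag scheme, assuming the per-subdetector metrics reduce to pass/fail values (the detector names and the combination rule are illustrative, not an existing interface):

# Illustrative encoding of the proposed run-quality flag:
#   0 - non-production quality
#   1 - production quality
#   2 - production quality, used for calibrations
# Finer-grained (per-subdetector) information would live in RCDB instead.

NON_PRODUCTION = 0
PRODUCTION = 1
CALIBRATION = 2

def quality_flag(subdetector_ok, used_for_calibration=False):
    """Combine per-subdetector pass/fail results into the single run flag.

    subdetector_ok: dict like {"CDC": True, "FCAL": True, "BCAL": False}
    In practice an 'eye test' would supplement these automatic metrics.
    """
    if not all(subdetector_ok.values()):
        return NON_PRODUCTION
    return CALIBRATION if used_for_calibration else PRODUCTION

def good_for_physics(flag):
    """Per the proposal, any flag >= 1 marks a run as usable for physics."""
    return flag >= PRODUCTION

if __name__ == "__main__":
    metrics = {"CDC": True, "FDC": True, "BCAL": True, "FCAL": True}
    flag = quality_flag(metrics, used_for_calibration=True)
    print(flag, good_for_physics(flag))   # -> 2 True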