Goals for the Dec GlueX online Data Challenge 2013

8-Oct-2013 E. Wolin


The primary goal of the Dec 2013 Online Data Challenge is to test the entire Trigger/DAQ/monitoring chain from the front-end ROCs to the tape silo, using production computing and networking systems at relatively low rates with simulated event data.

We will use the full TS/TD/TI/SD system unless the trigger fibers are not ready, in which case polling on the TIs will be used instead. Triggers will come from a random pulser in the TS, i.e. the CTP/SSP/GTP system will not be used. CODA3 will be used to collect data from the ROCs (as many as are available, minimum 12), build the fragments into full events in two stages, and pass them to the event recorder via the L3 farm. A preliminary version of the farm manager CODA component will manage the L3 and monitoring farm processes. The CODA3 run control facility will be used to manage the DAQ system. We will also use the production RAID-to-silo transfer mechanism, initiated manually since the CC has not yet developed an automated system. Various L3 rejection algorithms, including no rejection, will be employed. RootSpy will be used to aggregate monitoring histograms, and all detectors will be monitored.
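
As a concrete but purely illustrative picture of what "various L3 rejection algorithms, including no rejection" could look like inside a farm process, the standalone C++ sketch below treats each algorithm as an interchangeable filter function applied to toy events. The event fields, filter names and the energy cut are invented placeholders, not the JANA/CODA interfaces the real L3 processes will use.

  // l3_filter.cc -- illustrative only; event fields, filter names and cuts are
  // placeholders, not the actual JANA/CODA L3 interface.
  #include <cstdint>
  #include <functional>
  #include <iostream>
  #include <map>
  #include <random>
  #include <string>

  // Toy stand-in for a built event arriving at an L3 farm process.
  struct Event {
      uint64_t number;
      double   energySum;   // e.g. a summed calorimeter energy (made up)
  };

  using L3Filter = std::function<bool(const Event&)>;   // true = keep the event

  int main() {
      // Interchangeable rejection algorithms, including "no rejection".
      std::map<std::string, L3Filter> filters = {
          {"no_rejection", [](const Event&)   { return true; }},
          {"energy_cut",   [](const Event& e) { return e.energySum > 0.5; }},
      };

      std::mt19937 rng(0);
      std::uniform_real_distribution<double> energy(0.0, 2.0);

      for (const auto& [name, keep] : filters) {
          uint64_t accepted = 0, total = 10000;
          for (uint64_t i = 0; i < total; ++i) {
              Event ev{i, energy(rng)};
              if (keep(ev)) ++accepted;   // kept events would be forwarded to the ER
          }
          std::cout << name << ": kept " << accepted << " of " << total << " events\n";
      }
      return 0;
  }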

Input to the ROCs will come from simulated data converted to EVIO format via the MC2CODA package. The front-end ROCs will read the simulated data from EVIO files and forward the relevant data to CODA as if they had read it out of the front-end digitizer boards. Note that only a subset of all ROCs will participate, so the remaining simulated event data will be added at a separate stage downstream of the ROCs.
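
The following standalone C++ sketch illustrates the playback idea: on each simulated trigger the ROC-emulation code pops the next pre-built event fragment from a file and hands it on, exactly as if the fragment had been read from the digitizer boards. The length-prefixed record format and the forwardToEventBuilder() stub are hypothetical stand-ins, not the real MC2CODA/EVIO file format or the CODA readout-list API.

  // roc_playback.cc -- illustrative only; the record format and the forwarding
  // step are made up, not real EVIO or CODA calls.
  #include <cstddef>
  #include <cstdint>
  #include <fstream>
  #include <iostream>
  #include <vector>

  // Read one length-prefixed fragment (toy stand-in for a simulated event) from the file.
  static bool readFragment(std::ifstream& in, std::vector<char>& frag) {
      uint32_t nbytes = 0;
      if (!in.read(reinterpret_cast<char*>(&nbytes), sizeof nbytes)) return false;
      frag.resize(nbytes);
      return static_cast<bool>(in.read(frag.data(), nbytes));
  }

  // Placeholder for handing the fragment to the DAQ transport.
  static void forwardToEventBuilder(const std::vector<char>& frag) {
      std::cout << "forwarded fragment of " << frag.size() << " bytes\n";
  }

  int main(int argc, char* argv[]) {
      if (argc < 2) { std::cerr << "usage: roc_playback <fragment-file>\n"; return 1; }
      std::ifstream in(argv[1], std::ios::binary);
      if (!in) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

      std::vector<char> frag;
      std::size_t ntriggers = 0;
      // Each iteration models one pulser trigger: the ROC "reads out" the next
      // pre-generated fragment instead of its digitizer boards and forwards it.
      while (readFragment(in, frag)) {
          forwardToEventBuilder(frag);
          ++ntriggers;
      }
      std::cout << "played back " << ntriggers << " simulated triggers\n";
      return 0;
  }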


The goals of the Dec 2013 ODC are:

  1. Test DAQ stream from ROCs to silo, including the TS/TD/TI/SD system:
    • generate simulated data with L1 rejection applied, create simulated EVIO data files via MC2CODA package
    • create appropriate COOL configuration files, use CODA run control to start and configure DAQ system
    • use low-rate random pulser in TS as trigger, interrupt ROCs at the rate expected during production running
    • upon interrupt the ROC reads MC2CODA data and forwards the relevant information to the event builders as if the data had been read out of the front-end modules
    • build events using two event builder stages
    • add remaining ROC data after the final event builder stage using a special program written for this purpose
    • send data to L3 farm, implement and evaluate various L3 rejection algorithms including no rejection
    • forward accepted data to ER to write to RAID storage disks
    • transfer data files to silo using the production mechanism involving a dedicated tape unit
    • measure data rates, cpu performance and other relevant parameters at all stages
  2. Test complete RootSpy detector monitoring system:
    • implement plugins for all detectors and for L3 monitoring
    • deploy L3 and monitoring farm processes, ET systems and ET transfer facilities
    • use RootSpy to collect, aggregate and display histograms on multiple monitors in the counting house (a toy aggregation sketch appears after this list)
    • compare histograms to archived reference set
    • archive histograms at the end of each run
    • automatically produce web pages showing L3 and monitoring histograms
  3. Test farm manager:
    • implement codaObject communications in JANA-based L3 farm process
    • develop farm manager CODA component
    • use farm manager to start/stop/monitor L3 farm processes at run start/stop
    • cripple the farm so it fails to meet minimum requirements and verify that the farm manager takes appropriate actions (see the decision-logic sketch after this list)
  4. Monitor system performance:
    • use Ganglia to monitor all aspects of system performance
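
For item 2, the histogram aggregation step amounts to summing the per-node monitoring histograms produced by the farm processes into a single histogram for display in the counting house. The plain-ROOT C++ sketch below shows only that summing step with a made-up occupancy histogram; RootSpy's own network transport, GUI and archiving are not shown, and the node count and histogram contents are invented.

  // aggregate_hists.cc -- toy aggregation step only; the producer histograms are
  // generated locally here, this is not the RootSpy transport or GUI.
  #include <TH1D.h>
  #include <TRandom3.h>
  #include <TString.h>
  #include <iostream>
  #include <memory>
  #include <vector>

  int main() {
      TRandom3 rng(0);

      // Pretend three monitoring farm processes each filled a local occupancy histogram.
      std::vector<std::unique_ptr<TH1D>> perNode;
      for (int node = 0; node < 3; ++node) {
          auto h = std::make_unique<TH1D>(Form("occ_node%d", node),
                                          "wire occupancy;wire;hits", 96, 0, 96);
          for (int i = 0; i < 10000; ++i) h->Fill(rng.Integer(96));
          perNode.push_back(std::move(h));
      }

      // Aggregation: sum the per-node histograms into the one shown on the monitors.
      TH1D summed("occ_all", "wire occupancy (all nodes);wire;hits", 96, 0, 96);
      for (const auto& h : perNode) summed.Add(h.get());

      std::cout << "aggregated " << perNode.size() << " histograms, "
                << summed.GetEntries() << " total entries\n";
      return 0;
  }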
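
For the farm manager test in item 3, the intended behavior when the farm is deliberately crippled is roughly: compare each node's reported status and L3 rate against a minimum requirement and take a remedial action when the requirement is not met. The C++ sketch below shows only that decision logic; the node names, rates, 500 Hz threshold and suggested actions are invented for illustration and are not the codaObject interface or the real farm manager component.

  // farm_check.cc -- decision logic only; node names, rates and the minimum-rate
  // requirement are invented, not the real farm manager / codaObject API.
  #include <iostream>
  #include <string>
  #include <vector>

  struct NodeStatus {
      std::string name;     // hypothetical farm node name
      bool        alive;    // did the node respond to the last status request?
      double      rateHz;   // L3 event rate it reported
  };

  // Decide what the farm manager should do with each node, given a minimum
  // per-node rate the farm must sustain.
  static void checkFarm(const std::vector<NodeStatus>& nodes, double minRateHz) {
      for (const auto& n : nodes) {
          if (!n.alive)
              std::cout << n.name << ": no response -> restart L3 process and raise alarm\n";
          else if (n.rateHz < minRateHz)
              std::cout << n.name << ": " << n.rateHz << " Hz < " << minRateHz
                        << " Hz minimum -> restart or drop node, redistribute load\n";
          else
              std::cout << n.name << ": OK (" << n.rateHz << " Hz)\n";
      }
  }

  int main() {
      // One healthy node, one deliberately "crippled" node, one dead node.
      std::vector<NodeStatus> nodes = {
          {"farmnode01", true,  950.0},
          {"farmnode02", true,  120.0},   // crippled: far below the requirement
          {"farmnode03", false,   0.0},   // not responding
      };
      checkFarm(nodes, 500.0);            // hypothetical 500 Hz per-node minimum
      return 0;
  }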