Revision as of 17:11, 8 October 2013

Goals for the Dec GlueX online Data Challenge 2013

8-Oct-2013 E. Wolin


The primary goal of the Dec 2013 Online Data Challenge is to test the entire Trigger/DAQ/monitoring chain from the front-end ROCs to the tape silo, using production computing and networking systems at relatively low rates with simulated event data.

We will use the full TS/TD/TI/SD system unless the trigger fibers are not ready, in which case polling on the TIs will be used instead. Triggers will come from a random pulser in the TS, i.e. the CTP/SSP/GTP system will not be used. CODA3 will be used to collect data from the ROCs (as many as are available, minimum 12), build them into full events in two stages, and pass them to the event recorder via the L3 farm. A preliminary version of the farm manager CODA component will manage the L3 and monitoring farm processes, and the CODA3 run control facility will be used to manage the DAQ system. We will further use the production RAID-to-silo transfer mechanism, initiated manually since the CC has not yet developed an automated system. Various L3 rejection algorithms, including no rejection, will be employed. RootSpy will be used to aggregate monitoring histograms, and all detectors will be monitored.
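The "no rejection" mode amounts to a pass-through algorithm in a pluggable filter chain. A minimal sketch of that idea, assuming a generic event dictionary and made-up algorithm names (this is not the actual CODA3/JANA L3 code):

```python
# Hypothetical sketch of a pluggable L3 filter chain; the algorithm
# names and event structure are illustrative, not the real L3 code.

def no_rejection(event):
    """Baseline mode: accept every event (pass-through)."""
    return True

def energy_threshold(event, min_energy=0.5):
    """Example rejection: keep only events above a total-energy cut."""
    return event.get("total_energy", 0.0) >= min_energy

def run_l3(events, algorithm):
    """Apply one L3 algorithm; return accepted events and the rejected fraction."""
    accepted = [ev for ev in events if algorithm(ev)]
    rejected_frac = 1.0 - len(accepted) / len(events) if events else 0.0
    return accepted, rejected_frac

events = [{"total_energy": e} for e in (0.2, 0.7, 1.1, 0.4)]
kept_all, frac_all = run_l3(events, no_rejection)      # pass-through keeps all 4
kept_cut, frac_cut = run_l3(events, energy_threshold)  # cut keeps 0.7 and 1.1
```

Running several algorithms over the same simulated sample this way makes their rejection fractions directly comparable.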

Input to the ROCs will come from simulated data converted to EVIO format via the MC2CODA package. The front-end ROCs will read the simulated data from EVIO files and forward the relevant data to CODA as if they had read it out of the front-end digitizer boards. Note that only a subset of all ROCs will participate, so the remaining simulated event data will be added at a separate stage downstream of the ROCs.
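The split described here — participating ROCs forwarding only their own fragments, with the remainder merged in downstream — can be illustrated with a toy sketch; the crate names and data layout are hypothetical, not the MC2CODA format:

```python
# Illustrative sketch of the split/merge described above: participating
# ROCs forward only their own crate's fragment, and a downstream stage
# adds the remaining simulated fragments. All names are hypothetical.

SIMULATED_CRATES = ["roc1", "roc2", "roc3", "roc4"]
PARTICIPATING = {"roc1", "roc2"}  # ROCs actually present in the test DAQ

def simulated_event(evt_num):
    """One fully simulated event: a data fragment per crate."""
    return {crate: f"{crate}-data-{evt_num}" for crate in SIMULATED_CRATES}

def roc_readout(event):
    """What the participating ROCs forward to the event builders."""
    return {c: frag for c, frag in event.items() if c in PARTICIPATING}

def add_remaining(built, full_event):
    """Downstream stage: merge in fragments from non-participating crates."""
    merged = dict(built)
    merged.update({c: f for c, f in full_event.items() if c not in merged})
    return merged

evt = simulated_event(1)
partial = roc_readout(evt)              # only roc1/roc2 fragments
complete = add_remaining(partial, evt)  # all four crates restored
```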


The goals of the Dec 2013 ODC are:

  1. Test DAQ stream from ROCs to silo, including the TS/TD/TI/SD system:
    • generate simulated data with L1 rejection applied, create simulated EVIO data files via MC2CODA package
    • create appropriate COOL configuration file, use CODA run control to start and configure DAQ system
    • use the low-rate random pulser in the TS as the trigger, interrupting the ROCs at the rate expected during production running
    • upon interrupt, each ROC reads MC2CODA data and forwards the relevant information to the event builders as if the data had been read out of the front-end modules
    • build events using two event builder stages
    • add remaining ROC data after the final event builder stage, using a special program written for this purpose
    • send data to L3 farm, implement and evaluate various L3 rejection algorithms including no rejection
    • forward accepted data to ER to write to RAID storage disks
    • transfer data files to the silo using the production mechanism involving a dedicated tape unit
    • measure data rates, CPU performance and other relevant parameters at all stages
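The last item, measuring rates at every stage, could be backed by a simple throughput meter like the following sketch (illustrative only, not a CODA3 facility):

```python
# A minimal throughput meter of the kind that could back the
# "measure data rates ... at all stages" item; entirely illustrative.

import time

class RateMeter:
    """Accumulate event and byte counts; report rates over the elapsed window."""

    def __init__(self):
        self.t0 = time.monotonic()
        self.events = 0
        self.bytes = 0

    def record(self, nbytes):
        self.events += 1
        self.bytes += nbytes

    def rates(self):
        dt = max(time.monotonic() - self.t0, 1e-9)  # guard against divide-by-zero
        return self.events / dt, self.bytes / dt    # (events/s, bytes/s)

meter = RateMeter()
for _ in range(1000):
    meter.record(4096)  # pretend each event is a 4 kB block
hz, bps = meter.rates()
```

One meter per stage (ROC, EB, L3, ER) would give the end-to-end rate profile the goal calls for.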


  2. Test complete RootSpy detector monitoring system:
    • implement plugins for all detectors and for L3 monitoring
    • deploy L3 and monitoring farm processes, ET systems and ET transfer facilities
    • use RootSpy to collect, aggregate and display histograms on multiple monitors in the counting house
    • compare histograms to archived reference set
    • archive histograms at the end of each run
    • automatically produce web pages showing L3 and monitoring histograms
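The comparison against an archived reference set reduces, in the simplest case, to a bin-wise chi-square test. A sketch under that assumption (RootSpy itself operates on ROOT histograms; the function and threshold below are illustrative):

```python
# Sketch of the reference-histogram comparison step. The real system
# compares ROOT histograms; here a histogram is just a list of bin counts.

def chi2_per_bin(hist, reference):
    """Poisson-style chi-square per bin between a live and a reference histogram."""
    if len(hist) != len(reference):
        raise ValueError("binning mismatch")
    chi2, nbins = 0.0, 0
    for h, r in zip(hist, reference):
        if r > 0:
            chi2 += (h - r) ** 2 / r
            nbins += 1
    return chi2 / nbins if nbins else 0.0

reference = [100, 200, 300, 200, 100]
good_run  = [ 98, 205, 295, 202,  99]
bad_run   = [ 10, 400, 300, 200, 100]

ok_score  = chi2_per_bin(good_run, reference)  # small: consistent with reference
bad_score = chi2_per_bin(bad_run, reference)   # large: flags a detector problem
```

A shift crew would only need a single threshold on the score to decide whether a histogram warrants attention.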


  3. Test farm manager:
    • implement codaObject communications in JANA-based L3 farm process
    • develop farm manager CODA component
    • use farm manager to start/stop/monitor L3 farm processes at run start/stop
    • cripple the farm so that it fails to meet minimum requirements, and ensure the farm manager takes appropriate action
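The last bullet implies the farm manager applies a minimum-requirements policy when deciding how to react. A toy version of such a decision rule, with made-up thresholds and action names (not the actual farm manager component):

```python
# Toy version of the check described above: the farm manager verifies
# that enough L3 processes are alive and reacts otherwise. The minimum
# count and the action names are illustrative assumptions.

def farm_action(alive, min_alive=4):
    """Decide what the farm manager should do given the live-process count."""
    if alive >= min_alive:
        return "continue"    # farm meets minimum requirements
    if alive > 0:
        return "pause_run"   # degraded: hold triggers until the farm recovers
    return "abort_run"       # nothing alive: end the run

healthy  = farm_action(8)
crippled = farm_action(2)
dead     = farm_action(0)
```

The "cripple the farm" test would then check that killing processes actually drives the manager through these states.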


  4. Monitor system performance:
    • use Ganglia to monitor all aspects of system performance


Additional Goals
----------------
These goals will depend on the success of the goals listed above and the state of the software development at the time of the ODC.

  1. Test deferred processing system for L3:
    • investigate different monitoring/L3 architectures
    • multiple nodes pulling events from the EB node and writing to local files
    • a separate process on the L3 nodes detects the files, reads them in and writes ~10% of them to the ER node, where they are aggregated into a single file