Goals for the Dec GlueX online Data Challenge 2013

8-Oct-2013 E. Wolin


The primary goal of the Dec 2013 Online Data Challenge is to test the entire DAQ/monitoring chain from the front-end ROCs (as many as are available, minimum 12) to the tape silo, using the production computing and networking systems being installed in October.

We will use the full TS/TD/SD/TI system unless the trigger fibers are not ready, in which case polling on the TIs will be used instead. CODA3 will be used to collect data from the ROCs, build them into full events in two stages, and pass them to the event recorder. A preliminary version of the farm manager CODA component will manage the L3 and monitoring farm processes. We will also use the production RAID-to-silo transfer mechanism, initiated manually since the CC has not yet written the automated scripting system. Various L3 rejection algorithms, including no rejection, will be tested.
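
As one concrete illustration of the last point, a minimal standalone sketch of the accept/reject bookkeeping an L3 test might exercise, including the "no rejection" case, is shown below. The event structure, the energy quantity, and the cut value are hypothetical placeholders, not the actual GlueX L3 code, which runs inside the farm framework.

  // Minimal sketch of an L3 rejection pass (hypothetical event type and cut).
  // A threshold of zero or below corresponds to the "no rejection" algorithm.
  #include <cstdint>
  #include <iostream>
  #include <vector>

  struct Event {             // hypothetical stand-in for a fully built event
      uint64_t number;
      double   totalEnergy;  // quantity a trigger-level cut might use
  };

  bool l3Accept(const Event& ev, double threshold) {
      return threshold <= 0.0 || ev.totalEnergy > threshold;
  }

  int main() {
      std::vector<Event> events;
      for (uint64_t i = 0; i < 1000; ++i)
          events.push_back({i, static_cast<double>(i % 100)});  // fake event data

      const double threshold = 50.0;  // hypothetical cut; 0.0 would pass everything
      uint64_t accepted = 0;
      for (const auto& ev : events)
          if (l3Accept(ev, threshold)) ++accepted;  // accepted events would be marked and passed downstream

      std::cout << "L3 accepted " << accepted << " of " << events.size() << " events\n";
      return 0;
  }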

Input to the ROCs will come from simulated data converted to EVIO format via the MC2CODA package. The front-end ROCs will read the simulated readout data from EVIO files and forward it to CODA as if it had been read out of the front-end digitizer boards. Note that only a subset of all ROCs will participate, so the remaining simulated event data will be added at a separate stage downstream from the ROCs.
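
A minimal sketch of this ROC-side feeder idea is shown below. It assumes the evOpen/evRead/evClose calls of the EVIO C library; the file name, buffer size, and the forwardToCoda() step are placeholders rather than the actual DAQ code.

  // Sketch of a feeder that reads pre-generated events from an EVIO file
  // and hands them off as if they had been read from digitizer boards.
  #include <cstdint>
  #include <cstdio>
  #include "evio.h"

  static void forwardToCoda(const uint32_t* event) {
      // placeholder: here the ROC would inject the event into the CODA stream
      std::printf("read event, length word = %u\n", event[0]);
  }

  int main() {
      const size_t BUFLEN = 100000;      // buffer size in 32-bit words (placeholder)
      static uint32_t buffer[100000];

      char fname[] = "sim_events.evio";  // placeholder input file from MC2CODA
      char mode[]  = "r";
      int  handle  = 0;
      if (evOpen(fname, mode, &handle) != 0) {
          std::fprintf(stderr, "cannot open EVIO input file\n");
          return 1;
      }

      // evRead() returns 0 (S_SUCCESS) while events remain in the file.
      while (evRead(handle, buffer, BUFLEN) == 0)
          forwardToCoda(buffer);

      evClose(handle);
      return 0;
  }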

(to be worked on...)

The goals of the Dec 2013 ODC are:


  1. Test the RootSpy system in the counting house environment:
    • run monitoring processes on multiple nodes that read events from a common ET system and create ROOT histograms
    • view summed monitoring histograms on at least 4 separate monitors in the counting room simultaneously
    • create an archive (ROOT file) of summed monitoring histograms
    • read in archived histograms and compare them with "live" histograms via overlay (a minimal sketch of this archive/overlay step follows the list)
  2. Test data rates of EVIO formatted raw data files:
    • use bggen-generated data files that have been passed through an L1 event filter and converted into EVIO format, representing what is expected for the real data.
    • read events from a file and place them into the ET system on a node designated as the Event Builder (EB) node.
    • read events from the EB node's ET system into L3 client processes running on farm nodes, and write them to the Event Recorder (ER) node, which records to the RAID disk.
    • transfer events from the ER node's ET system to an ET system on the remote monitoring server, which serves them to remote monitoring nodes.
  3. Test prototype L3 rejection algorithm:
    • run the rejection algorithm on the farm nodes, mark accepted events based on the algorithm results, and reject some events.
    • create and view histograms that display L3 rejection results.
  4. Test data rate to tape silo:
    • transport files written to the RAID disk to the tape silo while simultaneously recording data on the RAID disk.
  5. Monitor the health of farm and DAQ system nodes (counting house only) via farm monitoring software (such as Ganglia).
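
For the archive/overlay step in goal 1, a minimal standalone sketch using plain ROOT (not the RootSpy API) is given below; histogram names, binning, and file names are placeholders. It sums per-node histograms, writes the sum to a ROOT file, then reads the archive back and overlays it with a "live" histogram.

  // Sketch of the RootSpy-style archive/overlay step using plain ROOT.
  #include "TFile.h"
  #include "TH1D.h"
  #include "TCanvas.h"
  #include "TRandom3.h"

  int main() {
      TRandom3 rng(0);

      // Stand-ins for histograms produced by monitoring processes on two nodes.
      TH1D node1("hE_node1", "Energy;E;counts", 100, 0., 10.);
      TH1D node2("hE_node2", "Energy;E;counts", 100, 0., 10.);
      for (int i = 0; i < 10000; ++i) { node1.Fill(rng.Gaus(5, 1)); node2.Fill(rng.Gaus(5, 1)); }

      // Sum the per-node histograms and archive the result to a ROOT file.
      TH1D summed(node1); summed.SetName("hE_summed"); summed.Add(&node2);
      { TFile archive("monitoring_archive.root", "RECREATE"); summed.Write(); }

      // Later: read the archive back and overlay it with a "live" histogram.
      TFile in("monitoring_archive.root", "READ");
      TH1D* archived = (TH1D*)in.Get("hE_summed");
      TH1D live("hE_live", "Energy;E;counts", 100, 0., 10.);
      for (int i = 0; i < 20000; ++i) live.Fill(rng.Gaus(5, 1));

      TCanvas c("c", "archived vs live", 800, 600);
      live.SetLineColor(kBlue);     live.Draw("HIST");
      archived->SetLineColor(kRed); archived->Draw("HIST SAME");
      c.SaveAs("overlay.png");
      return 0;
  }

Compiled against the ROOT libraries (for example with the flags reported by root-config), this produces overlay.png with the archived histogram drawn over the live one.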




Additional Goals


These goals will depend on the success of the goals listed above and the state of the software development at the time of the ODC.

  1. Test deferred processing system for L3:
    • Investigate different monitoring/L3 architectures
    • Multiple nodes pull events from the EB node and write them to local files
    • A separate process on each L3 node detects the files, reads them in, and writes ~10% of them to the ER node, where they are aggregated into a single file (a minimal sketch of this sampling step follows below)
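
A minimal sketch of the file-sampling step in the last bullet is given below; the directory names and the ~10% fraction are placeholders, and a real implementation would also keep track of files it has already forwarded.

  // Sketch of a deferred-processing sampler: scan an L3 node's local output
  // directory and forward roughly 10% of the files toward the ER node.
  #include <filesystem>
  #include <iostream>
  #include <random>

  namespace fs = std::filesystem;

  int main() {
      const fs::path local("/local/raid/l3_output");   // hypothetical L3 output area
      const fs::path outbox("/local/raid/to_er");      // hypothetical staging area for the ER node
      const double keepFraction = 0.10;                // ~10% of files forwarded

      std::mt19937 rng(std::random_device{}());
      std::uniform_real_distribution<double> flat(0.0, 1.0);

      for (const auto& entry : fs::directory_iterator(local)) {
          if (!entry.is_regular_file()) continue;
          if (flat(rng) < keepFraction) {
              fs::copy_file(entry.path(), outbox / entry.path().filename(),
                            fs::copy_options::overwrite_existing);
              std::cout << "forwarded " << entry.path().filename() << "\n";
          }
      }
      return 0;
  }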