Topics for the 2015 Software Review

Revision as of 13:56, 17 January 2015

Overall Theme of the Presentation(s)

GlueX ran very successfully from early November until late December of 2014. All aspects of the DAQ and offline software systems were stressed, and no show-stoppers were identified. Many issues that could only be found and fixed in a real-data environment were identified and repaired, and based on that experience a well-defined plan for the April 2015 run has been laid out and is being implemented.

  • Procedures for tuning the beam into the experiment were developed.
  • The level-one (hardware) trigger was implemented and expanded through the run.
  • Beam was successfully delivered to the GlueX detector, and with the very first events, tracks were being reconstructed and clusters identified in the calorimeters.
  • Data were taken at up to 600 MB/s over one weekend, roughly twice the expected maximum rate; the system ran well (a rough volume estimate is sketched after this list).
  • Online vertex reconstruction from tracks was consistent with that expected from the beam tune.
  • dE/dx was useful almost immediately.
  • All detectors reported in the same event.
  • pi0s were identified in the forward calorimeter in the first sizable data runs.
  • Online monitoring of all detectors ran very well.
  • Offline running of monitoring processes ran well and agreed with online.
  • Run cataloging via database was implemented (we did not initially expect enough data to merit this).
  • Photons were successfully tagged and correlated with the detector events.
  • Calibration updates were loaded into the database.
  • Active calibration efforts are now ongoing for all detectors.
  • pi0s are now easily seen in both calorimeters (the diphoton mass calculation is sketched after this list).
  • rho -> pi+ pi- and omega -> pi+ pi- pi0 decays have been seen.
  • Skimming software for events is running.
  • Data production is working.
  • All events are being regularly processed with updated calibrations.
  • Data are regularly pushed offsite using globus-ftp.
  • The collaboration feels that we reached a number of milestones that we did not expect to see until we were well into the April 2015 run.
  • The system ran in FADC (raw pulse) modes for much larger data samples than we expected would be possible.
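
To put the 600 MB/s weekend in perspective, a rough volume estimate is easy to make. Only the 600 MB/s peak rate and the ~300 MB/s nominal maximum come from the run itself; the run length and duty factor below are assumed round numbers for illustration, not logged values.

  # Rough volume estimate for the 600 MB/s weekend (Python).
  # The 48 hours and 50% duty factor are assumptions, not logged values.
  rate_mb_per_s = 600.0      # peak rate reported for the weekend
  hours = 48.0               # assumed length of the weekend run
  duty_factor = 0.5          # assumed fraction of time actually taking data
  volume_tb = rate_mb_per_s * 3600.0 * hours * duty_factor / 1.0e6
  print("about %.0f TB written while running at roughly twice the nominal 300 MB/s budget" % volume_tb)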
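
The pi0 signals mentioned above come from the standard diphoton invariant mass, m^2 = 2 E1 E2 (1 - cos theta), computed from two calorimeter cluster energies and their opening angle. The numbers in the example below are invented simply to show the formula landing near the pi0 mass of about 0.135 GeV.

  import math

  def diphoton_mass(e1, e2, opening_angle):
      """Invariant mass (GeV) of a photon pair from cluster energies (GeV)
      and opening angle (radians), assuming massless photons."""
      return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

  # Two ~0.5 GeV clusters about 16 degrees apart reconstruct near the pi0 mass.
  print(diphoton_mass(0.5, 0.5, math.radians(15.6)))   # ~0.136 GeV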

Preferred Format

Given that this is the 3rd software review, we expect that the entire committee would like to see where things stand. As such, we would prefer to have only a plenary presentation.

Topics to be Covered

  1. Report on Successful Data Challenges
    • DC1 - December 2012/ January 2013
      • 5 billion events - OSG, JLab, CMU
      • 1200 Concurrent Jobs at JLab.
    • DC2 - March/April 2014
      • 10 billion events with EM backgrounds included - OSG, JLab, MIT, CMU, FSU
      • 4500 Concurrent Jobs at JLab
      • Well under 0.1% failure rate
    • DC3 - January/February 2015
      • Read data in raw-event format from tape and produce DST (REST) files.
      • Load up as many JLab cores as possible.
      • Run Multi-threaded jobs
      • Already doing full reprocessing of the Fall 2014 data from tape every two weeks.
  2. Data Acquisition Successes - Running Fall 2014 (stealth data challenge).
    • Exceeded the experiment's 300 MB/s transfer-to-tape bandwidth.
      • ~500 million events.
      • 7000 files, 120 TB of data (per-event and per-file sizes are sketched after this topic list).
    • Most data were taken in full pulse mode of the Flash ADCs
      • Need to get final processing algorithms on the FPGAs in the FADCs
      • Need to clean raw data of massive unused headers.
    • Event rates of 2 kHz for the full experiment, much higher for individual components.
      • Need to move to block mode.
      • Need to move to FPGA processing to compress data.
    • Full DAQ chain to local RAID disk, transfer to tape, and automatic processing from tape.
    • Robustness issues with the system
      • Handle corrupted evio data
      • Problems with some FADCs getting out of sync.
    • Stealth Online Data Challenge
  3. Revisit data and computing spreadsheets
    • Update based on current software performance.
    • Update with best estimates of raw data footprint.
  4. Offline monitoring
    • browser
    • analyze data as it appears on the silo (a simple polling sketch follows this topic list)
    • reconstruction results
  5. Calibration committee
    • bi-weekly meeting
    • preliminary list of constants compiled
    • calibration still needs to be regularized
    • calibration database training
  6. CCDB successes
    • command line interface
    • SQLite form of database (a quick inspection example follows this topic list)
  7. Analysis results
    • electron identification in the FCAL
    • pi0 peak
    • proton ID with TOF
    • proton ID with dE/dx (a simple cut is sketched after this topic list)
    • rho meson in pi+ pi-
    • omega meson in pi+ pi- pi0
  8. Data transfer to CMU via Globus Online (a transfer sketch follows this topic list)
  9. Data management: event store, etc.
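
As a quick consistency check of the Fall 2014 numbers quoted under item 2 (roughly 500 million events, 7000 files, 120 TB), the average event size, average file size, and the time needed to write that volume at the nominal 300 MB/s follow directly. This is simple arithmetic on the quoted totals, not a measurement.

  # Back-of-the-envelope sizes from the Fall 2014 totals quoted above (Python).
  events = 500e6        # ~500 million events
  files = 7000          # raw data files
  volume_tb = 120.0     # total raw data volume in TB

  kb_per_event = volume_tb * 1.0e9 / events            # ~240 kB per event
  gb_per_file = volume_tb * 1.0e3 / files               # ~17 GB per file
  hours_at_300 = volume_tb * 1.0e6 / 300.0 / 3600.0     # ~110 hours at 300 MB/s

  print("%.0f kB/event, %.0f GB/file, %.0f hours at 300 MB/s"
        % (kb_per_event, gb_per_file, hours_at_300))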
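
For the offline-monitoring item (analyzing data as it appears on the silo), the basic idea can be illustrated with a minimal polling loop. The directory, file pattern, and submission step below are placeholders, not the actual GlueX monitoring scripts.

  import time
  from pathlib import Path

  WATCH_DIR = Path("/cache/halld/rawdata")   # placeholder staging area, not the real path
  seen = set()

  def launch_monitoring(path):
      # Placeholder for submitting the real monitoring job (e.g. a batch submission).
      print("would submit monitoring job for", path.name)

  while True:
      for f in sorted(WATCH_DIR.glob("*.evio")):
          if f not in seen:
              seen.add(f)
              launch_monitoring(f)
      time.sleep(60)   # poll once a minute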
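
The SQLite form of the calibration database (item 6) can be inspected with nothing more than the Python standard library. The file name below is a placeholder, and the query only lists tables rather than assuming any particular CCDB schema.

  import sqlite3

  conn = sqlite3.connect("ccdb.sqlite")   # placeholder path to an SQLite snapshot
  cur = conn.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
  for (table,) in cur:
      print(table)
  conn.close()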
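
The dE/dx proton identification listed under item 7 typically amounts to a momentum-dependent cut on the measured energy loss. The functional form and parameter values below are made up for the sketch; they are not the GlueX calibration.

  import math

  def is_proton_dedx(dedx, p, a=1.0, b=2.0, offset=1.0):
      """Accept a track as a proton if its dE/dx (arbitrary units) lies above a
      falling, momentum-dependent threshold; a, b, offset are illustrative only."""
      threshold = a * math.exp(-b * p) + offset
      return dedx > threshold

  # A slow track (0.6 GeV/c) with large energy loss passes; a fast one does not.
  print(is_proton_dedx(3.5, 0.6), is_proton_dedx(1.0, 2.0))   # True False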
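
The offsite transfers with globus-ftp / Globus Online (item 8) can be driven from a small wrapper. The endpoint host names and paths below are placeholders, and only widely used globus-url-copy options are shown (-vb for transfer statistics, -p for parallel TCP streams).

  import subprocess

  # Placeholder GridFTP endpoints and file path; not the real JLab or CMU hosts.
  src = "gsiftp://source.example.org/mss/halld/rawdata/run001234_000.evio"
  dst = "gsiftp://dest.example.org/data/rawdata/run001234_000.evio"

  # -vb prints transfer statistics, -p 4 requests four parallel TCP streams.
  subprocess.run(["globus-url-copy", "-vb", "-p", "4", src, dst], check=True)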