GlueX Software Meeting
Tuesday, August 20, 2019
3:00 pm EDT
JLab: CEBAF Center A110
BlueJeans: 968 592 007

Agenda

  1. Announcements
    1. /mss/halld/halld-scratch will be zeroed
    2. Cache files where the disk version was different from that on tape
  2. Online Skims (David)
  3. Review of minutes from the last Software Meeting (all)
  4. Review of recent issues and pull requests:
    1. halld_recon
    2. halld_sim
    3. CCDB
    4. RCDB
  5. Review of recent discussion on the GlueX Software Help List (all)
  6. Action Item Review (all)

Slides

Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2019 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2019/ .

Minutes

Present:

  • CMU: Naomi Jarvis
  • JLab: David Abbott, Stuart Fegan, Mark Ito (chair), David Lawrence, Simon Taylor, Carl Timmer, Beni Zihlmann

Announcements

  1. /mss/halld/halld-scratch will be zeroed (announcement: https://mailman.jlab.org/pipermail/halld-offline/2019-August/003734.html). The exact schedule has not been set.
  2. Some files on the cache disk differed from their copies on tape (announcement: https://mailman.jlab.org/pipermail/halld-offline/2019-August/003735.html). These files were lost.

Online Skims

David described not only the new system for performing skims of special triggers online, but also the new architecture for writing data to disk in the counting room as it comes out of CODA. See his slides (https://docs.google.com/presentation/d/1NuU8b5qo6M_WnNfpc4F1w-vhaFkP7atXN_5DnzzVIxs/edit?usp=sharing) for details.

  • The basic idea is to do skims of special triggers (BCAL-LED triggers, random triggers, sync events, etc.) in the counting room while we take data. Those skim files are then immediately available for calibrations. This could speed up calibrations needed in advance of the first reconstruction pass on the data.
  • To speed up the process, blocks that contain no special triggers can be skipped whole, saving the time it would otherwise take to disentangle their events. As a consequence, the skimmed output will contain fewer PS triggers: every block contains PS triggers, but not every PS trigger is in a block that also contains a special trigger. (A sketch of this selection logic follows this list.)
  • Information from the block headers will be put into a relational database: the number of each type of trigger, the first and last events in each output file, and so on. These quantities can be migrated to the RCDB later. (A minimal schema sketch also follows this list.)
  • The new Hall D data-recording architecture will use fast copies via remote direct memory access (RDMA) to move data from one InfiniBand (IB) interface to another without involving the CPU. RAID servers will send and receive data through ramdisks, with the data written from memory to arrays of conventional partitions on multiple RAID servers. From there the jmigrate system will look for data to ship to the Computer Center for storage on tape.
  • Some issues remain, including more thorough testing and the definition of a back-pressure mechanism (needed especially if the skim process cannot keep up).
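
To make the block-skipping idea concrete, here is a minimal sketch in Python of the selection logic. It is illustrative only: the Block type, the trigger bit values, and SPECIAL_TRIGGERS are hypothetical stand-ins for this note, not the actual EVIO/CODA structures used online.

  # Illustrative sketch of block-level skim selection (hypothetical types;
  # not the actual EVIO/CODA structures used in the counting room).
  from dataclasses import dataclass
  from typing import Iterable, Iterator, List

  # Hypothetical trigger bits for the special-trigger skims mentioned above.
  BCAL_LED = 1 << 0
  RANDOM = 1 << 1
  SYNC = 1 << 2
  PS = 1 << 3  # pair-spectrometer trigger, present in essentially every block

  SPECIAL_TRIGGERS = BCAL_LED | RANDOM | SYNC

  @dataclass
  class Block:
      """A stand-in for one entangled block of events as read from CODA."""
      trigger_masks: List[int]  # one trigger mask per event in the block

  def blocks_to_skim(blocks: Iterable[Block]) -> Iterator[Block]:
      """Yield only blocks containing at least one special trigger.

      Blocks with no special triggers are skipped whole, so they are never
      disentangled; the PS triggers in skipped blocks never reach the skim,
      which is why the skimmed output has fewer PS triggers than the run.
      """
      for block in blocks:
          if any(mask & SPECIAL_TRIGGERS for mask in block.trigger_masks):
              yield block  # disentangle and write out events from this block

  # Example: only the second block survives the skim.
  run = [Block([PS, PS]), Block([PS, BCAL_LED | PS]), Block([PS])]
  kept = list(blocks_to_skim(run))
  assert len(kept) == 1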

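The bookkeeping database design was not specified at the meeting; the following is a hedged sketch, using an in-memory SQLite database for self-containment, of the kind of relational schema the bullet above describes. The table and column names (skim_files, trigger_counts, and the sample values) are assumptions made for illustration.

  # Minimal sketch of the relational bookkeeping described above.
  # The schema is hypothetical, chosen only to illustrate storing
  # per-file trigger counts and first/last event numbers.
  import sqlite3

  conn = sqlite3.connect(":memory:")  # a real system would use a server DB
  conn.executescript("""
  CREATE TABLE skim_files (
      file_id     INTEGER PRIMARY KEY,
      run_number  INTEGER NOT NULL,
      file_name   TEXT    NOT NULL,
      first_event INTEGER NOT NULL,   -- first event number in the file
      last_event  INTEGER NOT NULL    -- last event number in the file
  );
  CREATE TABLE trigger_counts (
      file_id      INTEGER REFERENCES skim_files(file_id),
      trigger_type TEXT    NOT NULL,  -- e.g. 'BCAL-LED', 'random', 'sync', 'PS'
      count        INTEGER NOT NULL
  );
  """)

  # Record one hypothetical output file and its trigger counts.
  conn.execute(
      "INSERT INTO skim_files VALUES (1, 71350, 'skim_071350_000.evio', 1, 250000)")
  conn.executemany(
      "INSERT INTO trigger_counts VALUES (1, ?, ?)",
      [("BCAL-LED", 1200), ("random", 5400), ("sync", 300), ("PS", 80000)])

  for row in conn.execute(
          "SELECT trigger_type, count FROM trigger_counts WHERE file_id = 1"):
      print(row)

Quantities recorded this way could later be migrated to the RCDB, as noted above.
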
Review of minutes from the last Software Meeting

We went over the minutes from August 6. David reported that the reconstruction launch at NERSC got going again last week, but ran into problems over the weekend due to a change in how tape is handled at JLab.

Review of recent discussion on the GlueX Software Help List

We briefly discussed the issue with SQLite versions of the CCDB (https://groups.google.com/forum/#!topic/gluex-software/Krdl0FxwMGQ). There is still no clear-cut, works-everywhere solution.
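
Part of the difficulty is that different machines may link different SQLite library versions against the same CCDB file. As a hedged diagnostic sketch (the file name ccdb.sqlite is a placeholder, and sqlite3.connect will create an empty file if none exists), the following Python reports which SQLite library a given environment uses and whether it can read the file:

  # Quick diagnostic for SQLite-version mismatches: report the SQLite
  # library compiled into this Python, then open a CCDB SQLite file
  # (placeholder path) and run an integrity check.
  import sqlite3

  print("SQLite library version:", sqlite3.sqlite_version)

  conn = sqlite3.connect("ccdb.sqlite")  # placeholder path to a CCDB dump
  status = conn.execute("PRAGMA integrity_check").fetchone()[0]
  print("integrity_check:", status)  # 'ok' if the file is readable and intact

Comparing the reported library versions between machines where the file works and where it fails would help narrow down the incompatibility.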