GlueX Software Meeting, September 17, 2019

GlueX Software Meeting
Tuesday, September 17, 2019
2:00 pm EDT
JLab: CEBAF Center A110
BlueJeans: 968 592 007

Agenda

  1. Announcements
    1. Collaboration Meeting
  2. Review of minutes from the last Software Meeting (all)
  3. Report from the last HDGeant4 Meeting (all)
  4. Recent software updates (Sean)
  5. Software versions & simulations (Sean)
  6. CCDB Ancestry (Mark, Dmitry)
  7. Review of recent issues and pull requests:
    1. halld_recon
    2. halld_sim
    3. CCDB
    4. RCDB
  8. Review of recent discussion on the GlueX Software Help List (all)
  9. Action Item Review (all)

Slides

Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2019 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2019/ .

Minutes

Present:

  • CMU: Naomi Jarvis
  • FSU: Sean Dobbs
  • JLab: Alexander Austregesilo, Mark Ito (chair), David Lawrence, Simon Taylor, Beni Zihlmann

There is a recording of this meeting on the BlueJeans site. Use your JLab credentials to access it.

Announcements

  1. Collaboration Meeting: Sean has proposed a list of speakers for the Offline Session on Thursday. Alex will substitute for David and give a status report on data processing.
  2. New DB Servers -- HALLDDB-A and HALLDDB-B Online: the new servers were stood up to relieve stress on halldb.jlab.org (our main database server) from farm jobs. Testing is still in progress, but users are welcome to try them out.
  3. No online compression this Fall: David has discussed the issue with Graham and they agree that compression of raw data is not ready for the November run. In addition, use of a ramdisk on the front end, improvements in the Data Transfer Node (for off-site transfers), and expansion of disk space at JLab all reduce the need for immediate relief on data size.

Review of minutes from the last Software Meeting

We went over the minutes from September 3.

David gave us an update on NERSC and PSC.

  • At NERSC, batch 3 of the Fall 2018 data reconstruction is finished. 80% of the output has been brought back to the Lab.
  • At the Pittsburgh Supercomputing Center (PSC) there is a steady rate of about 300 jobs a day, slower than NERSC, but with fewer job failures. It is not clear why the pace is so slow.
  • At NERSC, Perlmutter will be coming on line next year with an attendant large increase in computing capacity.
  • The XSEDE proposal at PSC has been approved with 5.9 million units. October 1 is the nominal start date. Note that our advance award was 850 thousand units.

Report from the last HDGeant4 Meeting

We forgot to go over the minutes from the September 10 meeting. Maybe next time.

Reconstruction Software for the Upgraded Time-of-Flight

Sean went through and made the needed changes. The DGeometry class was modified to load the geometry information. The new DTOFGeometry class was changed to present the info in a more reasonable way. There were places where geometry parameters were hard-coded; these were changed to use the information from the CCDB-resident HDDS files. The process benefited from the structure where the DGeometry class parses the HDDS XML and the individual detector geometry classes turn that information into useful parametrizations.
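
The pattern behind these changes, replacing hard-coded numbers with values served by a geometry class, is sketched below in a minimal form. The class and method names are illustrative assumptions, not the actual DGeometry/DTOFGeometry interfaces.

    #include <map>
    #include <stdexcept>
    #include <string>
    #include <utility>

    // Hypothetical stand-in for the geometry layer: in halld_recon the values
    // would come from the CCDB-resident HDDS XML parsed by a DGeometry-like
    // class; here a plain map plays that role.
    class TOFGeometrySketch {
    public:
        explicit TOFGeometrySketch(std::map<std::string, double> hdds)
            : fParams(std::move(hdds)) {}

        double GetPaddleWidth()  const { return Get("TOF/paddle_width"); }
        double GetPaddleLength() const { return Get("TOF/paddle_length"); }

    private:
        double Get(const std::string& key) const {
            auto it = fParams.find(key);
            if (it == fParams.end())
                throw std::runtime_error("missing geometry parameter: " + key);
            return it->second;
        }
        std::map<std::string, double> fParams;
    };

    // Before the change:  double paddle_width = 6.0;                   // hard-coded
    // After the change:   double paddle_width = geom.GetPaddleWidth(); // from HDDS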

Right now hits are not showing up in the simulation (HDGeant4). Fixing this is the next task.

Fixing Crashes When Running over Data with Multiple Runs

Sean described his fix of a long-standing problem, first reported by Elton Smith, where the ReactionFilter crashes when run over data that contains multiple runs. This closes halld_recon issue #111. In particular, the DParticleID class assumed that the run number never changes, so the necessary refresh of constants from the CCDB at run-number boundaries was never done.
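
A minimal sketch of the kind of check involved is shown below, with hypothetical class and function names; the real fix lives in DParticleID and goes through the JANA/CCDB machinery.

    #include <cstdint>
    #include <vector>

    // Stand-in for a CCDB lookup keyed by run number (hypothetical helper).
    std::vector<double> FetchConstants(uint32_t run) {
        return {static_cast<double>(run)};
    }

    // Sketch of the fix: refresh constants whenever the run number changes
    // instead of assuming it stays fixed for the whole job.
    class ParticleIDSketch {
    public:
        void ProcessEvent(uint32_t run) {
            if (!fLoaded || run != fCurrentRun) {   // run boundary: reload from CCDB
                fConstants = FetchConstants(run);
                fCurrentRun = run;
                fLoaded = true;
            }
            // ... use fConstants for this event ...
        }

    private:
        bool fLoaded = false;
        uint32_t fCurrentRun = 0;
        std::vector<double> fConstants;
    };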

Tagger Counter Energy Assignment Bug

Beni brought to our attention an issue that was discussed at the last Beamline Meeting. Currently, tagger counter energies are set as a fraction of the endpoint energy. Since the electron beam energy can change from run to run, albeit by a small amount, the reported energy of a particular tagger counter changes as well, even though the tagged electron energy bin is really determined by the strength of the tagger magnet field. Richard Jones is working on a proposal on how this should be fixed.
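
A toy numerical example of the effect follows; the numbers are made up for illustration and are not actual GlueX calibration values.

    #include <cstdio>

    int main() {
        const double fraction      = 0.75;   // counter's fixed fraction of the endpoint
        const double endpoint_run1 = 11.60;  // GeV, electron beam energy in one run
        const double endpoint_run2 = 11.58;  // GeV, slightly different in another run

        // Current scheme: E_gamma = fraction * endpoint, so the reported energy
        // shifts between runs even though the counter's true energy bin is fixed
        // by the tagger magnet field.
        std::printf("run 1: E = %.3f GeV\n", fraction * endpoint_run1);
        std::printf("run 2: E = %.3f GeV\n", fraction * endpoint_run2);
        return 0;
    }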

Software Versions and Calibration Constant Compatibility

Sean led us through an issue he described in an earlier email to the Offline List. The basic issue is that older versions of mcsmear are not compatible with recent constants used in smearing the FCAL. We discussed the issue and concluded that the problem arose from changing the meaning of columns in an existing table, rather than creating a new calibration type with the new interpretation. Because of this situation, the software has to know which interpretation is correct for a given set of constants; old software versions are not instrumented to do so, of course. If the constants had been put under a different type, then the software would know which type it is using and do the right thing, and old software, only knowing about the old type, would do the right thing as well.
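
One way to picture the "new calibration type" alternative is sketched below; the table names and the FetchTable helper are hypothetical, not actual CCDB paths or API calls.

    #include <map>
    #include <string>
    #include <vector>

    using Table = std::vector<std::map<std::string, double>>;

    // Stand-in for a CCDB fetch; in this sketch only the original table exists.
    bool FetchTable(const std::string& name, Table& out) {
        if (name == "/FCAL/mc_smearing") { out = {}; return true; }
        return false;
    }

    // New software asks for the new type first and falls back to the old one;
    // old software only ever asks for the old type, whose meaning never changed.
    Table GetFCALSmearingConstants() {
        Table table;
        if (FetchTable("/FCAL/mc_smearing_v2", table))   // new interpretation
            return table;
        FetchTable("/FCAL/mc_smearing", table);          // original interpretation
        return table;
    }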

Sean is thinking about how we will address this going forward.

CCDB Ancestry Control

Mark presented a set of issues that arise with CCDB 2.0 (coming soon). See his slides for all of the dirty details.

In CCDB 1.x we can "freeze" calibration constants in time by setting a "calib-time" for the system to use. All calibration changes made after that time will be ignored. Because of the hierarchical structure of calibration "variations" there is a valid use case where the user may want constants at the level of the named variation to float, but freeze the constants coming from variations higher in the hierarchy. This use case is not supported under CCDB 1.x, but is provided for in CCDB 2.0. The implementation provides a rich set of choices for freezing (or not freezing) variations in the hierarchy. Too rich, in fact. The discussion was about how to limit the scope of what can be done so users are presented with an understandable, tractable set of options. There was a lot of discussion. See the recording if interested.
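
A conceptual sketch of the "freeze the ancestors, let the named variation float" lookup rule follows. It is not the CCDB 2.0 API; it only illustrates, under assumed data structures, the resolution order that was discussed.

    #include <ctime>
    #include <map>
    #include <optional>
    #include <string>
    #include <vector>

    // One stored set of constants and the time it was entered into the database.
    struct Assignment { std::time_t entered; std::vector<double> values; };

    // For each variation name, the assignments for a single table, oldest first.
    using VariationData = std::map<std::string, std::vector<Assignment>>;

    // Resolve constants along the variation chain (leaf first, "default" last):
    // the requested (leaf) variation floats, while constants inherited from
    // ancestor variations ignore anything entered after ancestor_calib_time.
    std::optional<std::vector<double>> Resolve(const VariationData& data,
                                               const std::vector<std::string>& chain,
                                               std::time_t ancestor_calib_time) {
        for (size_t i = 0; i < chain.size(); ++i) {
            auto it = data.find(chain[i]);
            if (it == data.end()) continue;
            for (auto a = it->second.rbegin(); a != it->second.rend(); ++a) {
                const bool frozen = (i > 0);            // ancestors frozen, leaf floats
                if (!frozen || a->entered <= ancestor_calib_time)
                    return a->values;
            }
        }
        return std::nullopt;                            // nothing found in the chain
    }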

No final decision was made, but at least by the end of the meeting everyone was aware of the nature of the problem.