GlueX Software Meeting, August 31, 2021


GlueX Software Meeting
Tuesday, August 31, 2021
3:00 pm EDT
BlueJeans: 968 592 007

Agenda

  1. Announcements
    1. New version set (4.45.0) with new versions of Diracxx (2.0.0) and HDGeant4 (2.28.0) (Mark I.)
    2. /work/halld is back, /work/halld3 did not move (Mark I.)
    3. New required packages: python3-devel and boost-python36-devel (Mark I.)
    4. New build: complete GlueX software stack, GCC 5.3.0 via module, RHEL Workstation release 7.6 (gluons), as requested by A. Somov (Mark I.)
    5. /work/halld3: transition to new server on Thursday morning (Mark I.)
  2. Review of Minutes from the Last Software Meeting (all)
  3. Review of Minutes from the Last HDGeant4 Meeting (all)
  4. FAQ of the Fortnight: What is the scratch disk?
  5. Update: getting started with gluupy (Jon)
  6. Review of recent issues and pull requests:
    1. halld_recon
    2. halld_sim
    3. CCDB
    4. RCDB
    5. MCwrapper
    6. gluex_root_analysis
  7. Review of recent discussion on the GlueX Software Help List (all)
  8. Meeting time change? (all)
  9. Action Item Review (all)

Minutes

Present: Alex Austregesilo, Edmundo Barriga, Nathan Brei, Sergey Furletov, Nathaniel D. Hoffman, Mark Ito (chair), Igal Jaegle, Naomi Jarvis, David Lawrence, Simon Taylor, Jon Zarling

There is a recording of this meeting. Log into the BlueJeans site first to gain access (use your JLab credentials).

Announcements

  1. New version set (4.45.0) with new versions of Diracxx (2.0.0) and HDGeant4 (2.28.0); default version set reverted: 4.45.0 -> 4.44.0. The new release from last week, which used a new cmake-enabled version of Diracxx, had to be pulled back due to a non-functioning hdgeant4 binary. See this discussion on the software help list.
  2. /work/halld is back and /work/halld3 did not move; /work/halld3 transitions to the new server on Thursday morning. We are about to be fully moved to a new work disk server. The final step will be the morning of September 2.
  3. New required packages: python3-devel and boost-python36-devel. The new Diracxx brings these in.
  4. New build: complete GlueX software stack, GCC 5.3.0 via module, RHEL Workstation release 7.6 (gluons), as requested by A. Somov.

Review of Minutes from the Last Software Meeting

We went over the minutes from the meeting on August 17th.

  • On halld_recon issue #537, Problems with photon energies in MC samples, Sean Dobbs has fixed many random trigger files and will be releasing them into the wild soon. He also thinks that we should backport the software fixes related to this issue to previous recon launch versions and is preparing those branches.
  • Mark reported that there is more work to be done on the GCC 8 access schemes before they are ready for general use.
  • Alex called the meeting on maintaining the online version of halld_recon. Mark was able to do complete builds (all packages) on the gluons using both GCC 4.8.5 and GCC 5.3.0. The system has not changed yet; there is more work to do, but we are maintaining the current system for the start of the run.

Review of Minutes from the Last HDGeant4 Meeting

We went over the minutes from the meeting on August 24th. Alex has closed Issue #181: G3/G4 Difference in FDC wire efficiency at the cell boundary. Thanks to Alex, Richard Jones, and Lubomir Pentchev for all the work that went into new functions for modeling FDC efficiency in mcsmear. If more work needs to be done on this we will open an issue against halld_sim.

FAQ of the Fortnight: What is the scratch disk?

Mark reviewed the FAQ. David asked why we need both a volatile disk and a scratch disk. Mark pointed out that since volatile is on Lustre, it is suitable for large data files only. Also, volatile is only available from the farm, whereas scratch is (or can be) mounted nearly everywhere at JLab.

Update: getting started with gluupy

Jon described recent work making it easier to adapt gluupy to users' needs. He also clarified some requirements and behaviors. Please see his slides for the details.

Crashes with minimal DSelector upon writing output trees, probably memory leak

This is gluex_root_analysis Issue #156. Naomi led us through this "long-standing problem with running DSelector jobs" on the CMU cluster. Please see the issue itself for a complete description. She has provided information so that others can try to reproduce the problem.

[Added in press: Alex was able to duplicate the crash on the ifarm at JLab. It seems intermittent there as well.]

Meeting Time

Mark received no objections to moving the meeting time to 2 pm. Stay tuned.