GlueX Offline Software Meeting, October 16, 2018

GlueX Offline Software Meeting
Tuesday, October 16, 2018
3:00 pm EDT
JLab: CEBAF Center A110
BlueJeans: 968 592 007

Agenda

  1. Announcements
    1. New version of MCwrapper: version 2.0.2
    2. New version of build_scripts: version 1.4.2
    3. New version of halld_recon: recon-ver03.2
    4. More new versions: version_3.7_jlab.xml, featuring halld-recon 3.2.0, halld-sim 3.5.0, and gluex_root_analysis 0.5
  2. Review of minutes from the October 2 meeting (all)
  3. Computing Review (David, Curtis, Mark)
  4. Getting to the ROOT of things... (Mark)
  5. Hardware needs, feedback to Chip (David, Curtis, Mark)
  6. Meta topics:
    1. Encouraging wider participation in these meetings.
    2. Rename this meeting?
  7. Review of Offline Work Packages
  8. Review of recent pull requests
  9. Review of recent discussion on the GlueX Software Help List (all)
  10. Action Item Review (all)

Communication Information

Slides

Talks can be deposited in the directory /group/halld/www/halldweb/html/talks/2018 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2018/.

Minutes

Present:

  • CMU: Naomi Jarvis
  • FIU: Mahmoud Kamel
  • FSU: Sean Dobbs
  • IU: Ahmed Foda
  • JLab: Alex Austregesilo, Thomas Britton, Mark Ito (chair), David Lawrence, Simon Taylor, Beni Zihlmann
  • W&M: Justin Stevens

There is a recording of this meeting on the BlueJeans site. Use your JLab credentials to access it.

Announcements

  1. New version of MCwrapper: version 2.0.2. The bot is now in beta!
  2. New version of build_scripts: version 1.4.2. A two-stage build process is supported.
  3. New version of halld_recon: recon-ver03.2. This version is being used in the monitoring launch at NERSC.
  4. More new versions: version_3.7_jlab.xml, featuring halld-recon 3.2.0, halld-sim 3.5.0, and gluex_root_analysis 0.5. This is a periodic-code-update version set.

Review of minutes from the October 2 meeting

We went over the minutes.

We spent some time discussing the kinematic fitter issue reported by Hao Li and Mike McCracken. See the thread on the software help list. Beni pointed out that this is an issue the working group should take on.

[Added in press: Sean created a GitHub issue to track progress on this problem. He assigned the issue to himself and Thomas.]

Computing Review

David and Mark reported that they met with Curtis two weeks ago to start planning for the review. Curtis has put together a wiki page to collect materials. They started a list of topics to address in the short time allowed, most importantly an updated estimate of future computing resource needs. They also want to highlight recent use of off-site computing resources.

Getting to the ROOT of things...

Mark reviewed the recent email from Graham Heyes on opening a communication channel between local ROOT users and the ROOT development team. Bob Michaels is organizing a meeting with Axel Naumann of CERN. The meeting will be announced when plans are set. Interested parties are welcome.

Hardware needs, feedback to Chip

We reviewed Chip's presentation (user=writer...(contact me for more info)) on options we have for spending on computing resources. Several of us (Mark, David, Alex, Sean, Thomas, Curtis, and Richard) met with him about two weeks ago, where he solicited input on what our needs are vis-à-vis FY19 equipment purchases. Discussion points:

  • Alex told us that the 2017-01 data produced 110 TB of REST files. The associated ROOT trees are about the same size.
  • The 2018-01 data should be about three times the size of 2017-01.
  • We have been running into problems with limited Lustre-based disk space for the past year or so.
  • David told us that the launches at NERSC would benefit from dedicated space to stage the raw data files, so that the effort does not compete with others.

In the end we settled on rough proportions of where we would like our share of resources to go (on a dollar basis):

  • 50% Lustre-based disk space (about a petabyte of space)
  • 40% Computing nodes (16 40-core nodes (80 hyper-threads each), or about 5.6 million core-hours per year; see the arithmetic sketch after this list)
  • 10% SSD disk space (about 25 TB on top of the existing 25 TB dedicated to raw-data input staging; could be much cheaper, i.e., yield more space, if other Halls want a like amount)
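
For the computing-node line above, here is a minimal arithmetic sketch of where the 5.6 million core-hours figure comes from. The node count and cores per node are taken from the list; counting only physical cores (not hyper-threads) and assuming continuous year-round running are assumptions.

  # Rough check of the quoted yearly capacity: 16 nodes, 40 physical cores each.
  # Assumes core-hours are counted on physical cores running all year.
  nodes = 16
  cores_per_node = 40          # 80 hyper-threads per node
  hours_per_year = 365 * 24    # 8760 hours

  core_hours_per_year = nodes * cores_per_node * hours_per_year
  print(f"{core_hours_per_year:,} core-hours/year")  # 5,606,400, i.e. about 5.6 million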

We would want to defer the purchase of the computing nodes under the assumption that a later purchase might get us more computing per dollar spent.

The need for SSD disk space is less certain, but several applications could benefit. This small fraction would let us gain experience and see whether more high-speed disk would help us.

Mark will talk to Hall B about what they are planning.

Review of Offline Work Packages

We went over the list of Analysis Software Work Packages. Mark did a pass at marking up the list and Sean commented on the mark-ups on the corresponding "discussion" page.

If people have ideas about other work packages, please add them to the list.

Mark agreed to break the list into two categories (Analysis and General) and fill in names of those he knows will volunteer to supervise packages.

Meta topics

At the last meeting several topics were raised.

Encouraging wider participation in these meetings

  • The work packages may be a way to get new collaboration members to attend once they get volunteered to do the work.
  • Some of the expert-level discussion might not be of general interest.
  • Many of the topics discussed at other working group meetings, especially the Analysis Working Group, might be more appropriately hosted at this meeting. Those topics should be identified by people who attend these other meetings.

Rename this meeting

One of us (a.k.a. Naomi) argued that "Offline" in the title did not have an auspicious connotation, e.g., "off the main line" or "off topic", i.e., "irrelevant". We formed a consensus around "Hall D Software" rather than "Offline Software". We are reprinting the business cards now.

Tutorials and Workfests

Past gatherings have proved useful. There was a lot of discussion, but we arrived at the following:

  1. having occasional workfests focused on specific work packages, limited to those interested in the specific topic.
  2. once a year, on the day before the Spring Collaboration Meeting, having a half-day software tutorial like we did last Spring.

Review of recent pull requests

David explained his recent pull request (#38). There is now an option whereby DANA applications can create a local copy of the CCDB SQLite file indicated by JANA_CALIB_URL. This should help with the slow start-up seen on the monitoring farm and in similar applications, where many processes would otherwise hit a single shared file.
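
To illustrate the idea only (this is not the actual DANA implementation; the function name, the staging location, and the assumption that JANA_CALIB_URL has the usual sqlite:///<path> form are all hypothetical), here is a minimal sketch of copying the SQLite file to node-local scratch space before use:

  import os
  import shutil
  import tempfile

  def local_ccdb_copy():
      """Hypothetical sketch: stage the CCDB SQLite file locally.

      If JANA_CALIB_URL points at an SQLite file (sqlite:///<path>),
      copy that file to node-local scratch space and return a URL for
      the copy, so that many processes on the node do not all read one
      shared file over the network.
      """
      url = os.environ.get("JANA_CALIB_URL", "")
      prefix = "sqlite:///"
      if not url.startswith(prefix):
          return url  # not an SQLite URL; leave it alone

      src = url[len(prefix):]
      local_dir = tempfile.mkdtemp(prefix="ccdb_")   # node-local scratch (assumed)
      dst = os.path.join(local_dir, os.path.basename(src))
      shutil.copy2(src, dst)                         # one copy per job
      return prefix + dst

  # Usage: point the calibration layer at the local copy instead,
  # e.g. os.environ["JANA_CALIB_URL"] = local_ccdb_copy()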