GlueX Offline Meeting, January 10, 2018
GlueX Offline Software Meeting
Wednesday, January 10, 2018
11:00 am EST
JLab: CEBAF Center F326/327
BlueJeans: 968 592 007
- New simulation branch (Sean)
- MCwrapper 1.12 (Thomas)
- Review of minutes from the last meeting (all)
- Refresh ROOT version? (all)
- New track matching on the master branch (all)
- Docker Containers + GlueX (David L.)
- Review of recent pull requests (all)
- Review of recent discussion on the GlueX Software Help List (all)
- Action Item Review (all)
Talks can be deposited in the directory
/group/halld/www/halldweb/html/talks/2018 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2018/.
- CMU : Curtis Meyer
- FIU : Mahmoud Kamel, Joerg Reinhold
- Glasgow : Peter Pauli
- JLab : Alex Austregesilo, Amber Boehnlein, Thomas Britton, Mark Ito (chair), David Lawrence, Simon Taylor
- Yerevan : Hrach Marukyan
There is a recording of this meeting on the BlueJeans site. Use your JLab credentials to access it.
- New simulation branch. Sean's email identifies the branch that should be used for simulation with the latest reconstruction launch.
- MCwrapper 1.12. Thomas has released a new version. It supports submission to the Open Science Grid. Changes coming in the next release:
- Fix to a problem identified by Jon Zarling having to do with RCDB on RHEL6/CentOS6.
- Fix to a problem pointed out by Nacer Hamdi having to do with amorphous radiator runs.
Review of minutes from the last meeting
We looked at the minutes of the meeting on December 13.
We noted that we still need a tagged version of CCDB.
Refresh ROOT version?
We looked at the list of releases of ROOT and noted that the version that we are using at present, 6.08.06, is already marked as "Old" on the site. Normally we would consider upgrading.
- David reported that the latest version changes the interface into the TMVA routines.
- Alex pointed out that we are right at the beginning of a run, not a great time to change the software.
- Others pointed out that the next break in running is (hopefully) several months away.
- No one present had an example of a new feature that we would benefit from.
We decided for now to do nothing. If collaborators have opinions about an upgrade, particularly if there are new features they want to take advantage of, please write to the offline list or contact Mark.
New track matching on the master branch
We went through Mark's email from before the holidays describing reduced efficiency for FCAL photons associated with a large change in the track matching code from Simon.
- Simon reported that he thinks the anomalies are due to a lack of tuning of matching parameters with the new algorithm. He will look into this.
- Alex noted that a significant change like this makes comparison with previous reconstruction results difficult when trying to monitor incoming data.
- Alex also pointed out some strangeness in the TOF occupancy for track-matched hits.
- We discussed options for maintaining availability of the old algorithm:
- The latest tag was applied before the change, so that can be used. Alex remarked that there are changes made after that tag that one might want to have.
- We discussed how hard it would be to reverse the changes on the master branch. There is some fear that it might not be easy.
- A third option is to create a parallel branch that has all changes except those brought in for the new algorithm. This faces difficulties similar to the previous option.
We decided to keep the changes on the master branch for now while Simon pursues his parameter-tuning studies. In the meantime Mark will look at the feasibility of implementing the third option, the parallel branch.
We discussed how to merge changes into the master branch when the number of changed lines is large and the effects potentially significant. The majority of pull requests are clearly not of this nature. We coalesced around a policy: if there is concern about a large change, the proposed branch should be tested by someone other than the author, beyond the light testing we get from the pull-request auto-build. For example, the offline monitoring suite can be run against the branch. Collaborators should not blithely merge a large pull request without some discussion in the pull-request conversation on GitHub. Here "large" is somewhat vague; we hope we will collectively recognize a large change when we see one.
Docker Containers + GlueX
David has been working on getting reconstruction jobs running at NERSC in the context of his LDRD grant for JANA2. Doing that involves using containers, a technology that has gained widespread use in recent years. He described his recent experience and plans for further work. Please see his slides for all of the details.
Hall D Disk Usage
Alex brought our attention to the level of use of the work, cache, and volatile disks to support recent reconstruction and analysis launches. We are near the upper limits on all of them; see the SciComp webpages for the current status. The work disk especially has been a problem. With the move to the new fileserver, we run into hard limits when we exceed our allotment, and that allotment is much smaller than what we were using before the move. The following table was shown, giving work disk use as of December 31 as the sum of all files owned by each user.

|Rank|Total Size (GB)|User|
David suggested sending this table out to the offline email list on a regular basis. In any case, collaborators are encouraged to evaluate the amount of data they need to keep spinning; the rest should be archived to tape.
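A per-user table like the one above can be produced with a short script that walks a directory tree and sums file sizes by owner. The sketch below is illustrative only; the path, function names, and reporting details are assumptions, not the actual SciComp tooling.

```python
import os
import pwd
from collections import defaultdict

def sizes_by_owner(root):
    """Walk `root` and sum file sizes (in bytes) per owning user."""
    totals = defaultdict(int)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: do not follow symlinks
            except OSError:
                continue  # file vanished or is unreadable; skip it
            try:
                user = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                user = str(st.st_uid)  # uid with no passwd entry
            totals[user] += st.st_size
    return totals

def ranked_table(root):
    """Return (rank, size_gb, user) rows, largest consumers first."""
    totals = sizes_by_owner(root)
    rows = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(i + 1, size / 1e9, user) for i, (user, size) in enumerate(rows)]
```

Running `ranked_table` over a work-disk area (e.g. a hypothetical `/work/halld`) would yield rows in the same Rank / Total Size (GB) / User form as the table above.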