GlueX Offline Software Meeting
Tuesday, December 11, 2018
3:00 pm EST
JLab: CEBAF Center A110
BlueJeans: 968 592 007
Agenda
- Review of minutes from the November 13 meeting (all)
- Report from the December 4 HDGeant4 Meeting (all)
- Report on Computing Review on November 27-28
- Review of recent issues and pull requests (all)
- Review of recent discussion on the GlueX Software Help List (all)
- Action Item Review (all)
Talks can be deposited in the directory
/group/halld/www/halldweb/html/talks/2018 on the JLab CUE. This directory is accessible from the web at https://halldweb.jlab.org/talks/2018/.
Present:
- CMU: Naomi Jarvis
- FSU: Sean Dobbs
- JLab: Alexander Austregesilo, Thomas Britton, Mark Ito (chair), David Lawrence, Justin Stevens, Simon Taylor, Beni Zihlmann
There is a recording of this meeting on the BlueJeans site. Use your JLab credentials to access it.
Review of minutes from the November 13 meeting
We went over the minutes.
Crashing Monitor Launches?
We had an extended discussion of the report of a 50% success rate for monitoring launches back in October. We attributed that to an as-yet-to-be-found problem in the code and were waiting to tag and use new versions pending finding and fixing it. The problem has been tricky to reproduce. Alex reports that a recent run with only the danarest and monitoring_histograms plug-ins included did not have a problem; the initial report came from a run using 50 plug-ins. We decided to wait no longer: a new version set will be released soon with the latest version of halld_recon. This will promote wider use of recent versions and may shed light on the problem, including whether it resides in the main reconstruction code.
As for which version to use on 2018 data, more testing will likely be necessary.
David reported that we received word back on our request for running time at NERSC for 2019. We were awarded 35 million units (a unit is roughly a core-hour) out of our request for 112 million. Hall B received 30 million units out of a 60-million-unit request. The basis for the award is not known.
David also mentioned that he is working on using cycles from supercomputer centers at Indiana and Carnegie Mellon.
We may be starting a reconstruction launch soon.
For NERSC, in 2018, we used 10 M units from our allocation of 50 M. That award was made for the entire Lab.
David noted that if we have multiple projects going at the same time, we run the risk of stepping on ourselves as far as Lustre access is concerned.
Report from the December 4 HDGeant4 Meeting
We reviewed issues from the last HDGeant4 meeting.
We discussed at length our approach to merging in changes from the DIRC-enabled branches of hdds, halld_recon, halld_sim, and hdgeant4. A merge of the changes for one repository requires that the changes to the other repositories be merged as well to get a working system.
After that happens, if we want to use an older version of reconstruction with a modern, DIRC-enabled simulation, there is a problem: DIRC-related constructs in halld_sim require a DIRC-enabled halld_recon, which the older releases lack.
There are two possible solutions that we discussed:
- Add preprocessor directives in halld_sim to exclude DIRC-aware code when building against old halld_recon releases.
- Add patches to selected old releases to enable the DIRC hits. These would only be needed to get halld_sim to build; DIRC hits would not have to be generated.
Both have drawbacks, and more discussion is needed. Also, this issue will come up again every time a new detector is added to the main development path.
Report on Computing Review on November 27-28
The review was held two weeks ago. We went over highlights from the committee's preliminary report as presented at review close-out. Some quotes from that report:
- Steps have been taken to reduce data processing burdens on analyzers through simple APIs/interfaces. This is strongly commended.
- Overall, GlueX plans and actual developments are excellent and appear to match what is needed to produce timely and important science.
- NERSC allocations are now an important resource to support ENP computing needs. This is a positive development. It is important to continue investigating additional offsite resources as part of future planning.
- More efficient data transfer mechanisms for OSG (e.g., XrootD) would allow for running reconstruction at these sites.
and the two recommendations:
- Prepare to support increasing interest in machine learning and modern data science tools, possibly in collaboration with other labs to leverage existing solutions.
- Consider increasing the central support for offsite resource access, especially for OSG and data transfers, leveraging work already done by GlueX and CLAS12 and at other laboratories.
So generally favorable stuff.
Review of recent issues, pull requests, and discussion on the help list
- halld_recon pull request #65: Hdview2 primex. This change from David allows drawing of the CompCal. It can be turned on and off in the GUI.
- halld_recon pull request #55: Tracking update oct18. Several small-ish changes to tracking from Simon including measures to preserve hits in the downstream FDC layers.
- halld_sim pull request #21: Gen amp baryons. Peter Pauli has enabled baryon resonance production at the lower vertex in certain cases.
Record of per file event ranges
Sean raised the idea of having a record of the first and last event present in each data file. This would allow us to know which file to interrogate for a particular event. David already has a program that will generate this information (along with a host of other items). He will look at running it online. The next question will be how to present the data to users.