GlueX Data Challenge Meeting
Friday, January 31, 2014
11:30 am, EST
JLab: CEBAF Center, A110
ESNet: 8542553
ReadyTalk: (866)740-1260, access code: 1833622
ReadyTalk desktop: http://esnet.readytalk.com/, access code: 1833622

Agenda

  1. Announcements
  2. Status of Preparations
    1. Finish reversed magnetic field fixes.
    2. Do we have standard build/submission stuff to run the challenge?
    3. Are we ready to produce the event sample?
    4. Test jobs with EM background to make sure that data size is reasonable and we are not introducing crashes.
    5. Target Distribution (where?).
    6. Are we able to reconstruct event genealogy?
    7. Test jobs in general with the updated REST format to check new data footprint.
    8. Pre-load sets of jobs with ~100 runs on the clusters participating in the data challenge to make sure that things run.
    9. Is the JLab CC ready for us?
    10. What ability will we have for SRM at Jefferson Lab?
  3. Proposed Schedule
  4. AOT

Minutes

Present:

  • CMU: Curtis Meyer, Paul Mattione
  • FSU: Aristeidis Tsaris
  • IU: Kei Moriya
  • JLab: Mark Ito (chair), Simon Taylor
  • MIT: Justin Stevens
  • NWU: Sean Dobbs
  • UConn: Alex Barnes[?], Richard Jones

This meeting was recorded: https://halldweb.jlab.org/talks/2014-1Q/data_challenge_2014-01-31/lib/playback.html

Agenda Items

  1. Finish reversed magnetic field fixes.
    • This has largely been addressed by recent changes checked in by Simon. There is still a lingering problem at φ=0. Simon is preparing a report for Richard; this seems like a vestige of the CDC-stereo-straw-geometry problem.
  2. Do we have standard build/submission stuff to run the challenge?
    • The standard version list has yet to be compiled. The structure we used for communicating the version and configuration information will be re-used, i.e., a Subversion directory with a web page and configuration files. Mark would like to enhance this with an XML-based version definition (a sketch of reading such a file appears after this list).
    • We established (after some discussion) that the now-preferred system for staging common files on the OSG, a FUSE file system, and the new resource facility of JANA will work well together. Resources are fetched only once; the FUSE partition provides a convenient target disk and can in fact be pre-loaded so no network fetch is necessary.
  3. Are we ready to produce the event sample?
    • We need to document the mechanism for recording random number seeds in the produced output files and for regenerating identical data from those seeds (the second sketch after this list illustrates the idea).
  4. Test jobs with EM background to make sure that data size is reasonable and we are not introducing crashes.
    • Kei will study the feasibility of including EM background using the built-in mechanism in HDGeant.
    • We discussed whether we want to simulate data with EM background at 10^7, at 10^8, with no EM background at all, or with multiple data sets, each with different conditions. We decided to wait for Kei's study before deciding.
    • Nominal goal: 10 billion events
  5. Target Distribution
    • The mechanism for distributing the events spatially in the target is already built into HDGeant.
  6. Are we able to reconstruct event genealogy?
    • Kei will take a look at the current scheme.
  7. Test jobs in general with the updated REST format to check new data footprint.
    • Sean reports that the event size looks about 50% larger than for data challenge 1. This is mainly due to the new matching information.
  8. Pre-load sets of jobs with ~100 runs on the clusters participating in the data challenge to make sure that things run.
    • We will try to do this as soon as possible.
  9. Is the JLab CC ready for us?
    • Mark will talk to Sandy about our plans.
  10. What ability will we have for SRM at Jefferson Lab?
    • Mark will talk to Sandy about the status of the system.
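
The XML-based version definition mentioned in item 2 does not exist yet. The following is a minimal sketch, in Python, of how such a file might be read once a format is agreed on; the gversions/package tag names, attribute names, and version strings shown are hypothetical, not an adopted layout.

    # Minimal sketch: read a hypothetical XML version definition of the form
    #   <gversions>
    #     <package name="hdds" version="1.2"/>
    #     <package name="sim-recon" version="dc-2.1"/>
    #   </gversions>
    # Tag/attribute names and the version strings are illustrative only.
    import xml.etree.ElementTree as ET

    def read_versions(path):
        """Return a dict mapping package name -> version string."""
        root = ET.parse(path).getroot()
        return {p.get("name"): p.get("version") for p in root.findall("package")}

    if __name__ == "__main__":
        for name, version in read_versions("versions.xml").items():
            print(f"{name:12s} {version}")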

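The second sketch, below, illustrates the general idea behind item 3: write the seed used by a generation job into a small provenance record next to the output, then reproduce an identical sample from that record. The JSON file and its field names are invented for illustration; the real mechanism lives in HDGeant and is not described in these minutes.

    # Illustration only: record the random seed used to generate a sample and
    # re-use it later to regenerate identical "events". The provenance file
    # and its fields are invented; this is not the HDGeant mechanism.
    import json
    import random

    def generate(n_events, seed, provenance_file):
        """Generate stand-in data from `seed` and record the seed beside it."""
        rng = random.Random(seed)
        events = [rng.random() for _ in range(n_events)]  # stand-in for real events
        with open(provenance_file, "w") as f:
            json.dump({"n_events": n_events, "seed": seed}, f)
        return events

    def regenerate(provenance_file):
        """Reproduce an identical sample from the recorded seed."""
        with open(provenance_file) as f:
            p = json.load(f)
        return generate(p["n_events"], p["seed"], provenance_file)

    first = generate(5, seed=20140131, provenance_file="prov.json")
    second = regenerate("prov.json")
    assert first == second  # identical because both runs start from the same seed
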
Additional Items

  1. Richard needs to report to OSG management about what will be different this time. He will take a look at Curtis's report on the last challenge.
  2. We need to think about how to catalog the data. Sean will see if anything can be learned from the LHC experiments.
  3. We decided on a photon energy range of 7.0 GeV to the endpoint. Last time we used 8.4 to 9.0 GeV (i.e., the coherent peak only).
  4. We will need to do a survey of the disk space available for the output data. Space will come mainly from UConn and Northwestern. The estimate is that this comes to about 40 TB (a rough cross-check follows this list).
  5. We discussed saving some of the "raw" data, the output of HDGeant. Mark thought that a small amount should be kept.
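
As a rough cross-check of the ~40 TB estimate in item 4, the arithmetic below assumes an average REST event size of about 4 kB. That per-event figure is an assumption consistent with the "about 50% larger than data challenge 1" remark, not a number quoted in the meeting.

    # Back-of-the-envelope check of the quoted ~40 TB output estimate.
    n_events = 10e9          # nominal goal: 10 billion events
    bytes_per_event = 4e3    # assumed average REST event size (~4 kB, not from the minutes)
    total_tb = n_events * bytes_per_event / 1e12
    print(f"{total_tb:.0f} TB")  # -> 40 TB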

Next Meeting

We decided to meet weekly at this time.

Action Items

Richard:

  • look at DC1 document
  • respond to Simon's bug report

Kei:

  • take a look at the current scheme for particle genealogy
  • understand issues with EM background

Mark:

  • talk to Sandy about farm availability
  • talk to Sandy about SRM status
  • submit a few jobs

Paul:

  • do some tests at CMU