2016 mcsmear Tracking Updates


This page contains notes on how to proceed with matching the tracking performance between data and simulation.

The goals are to focus on changes we can implement in the next month and to develop a plan for future studies.

Previous notes

We can proceed in two steps: update the detector hit objects, then study the performance of reconstructed tracks.

Detector hits

Two main issues to focus on: efficiency and resolution

  • Efficiency
    • Both the CDC and FDC thresholds should be correctly implemented in the simulation [see link above]
    • CDC
      • Include known dead/inefficient channels in the simulation. Mike Staib can send a list of these to Sean, who can include them in the CCDB.
      • Inefficiency as a function of drift time: eventually we will want some per-wire correction, but it would be useful to see what happens if we apply an average correction first. From Mike's previous studies, the difference does not seem large. (A hit-level sketch of how these efficiency effects could be applied follows this list.)
    • FDC
      • We have started to include information on dead wires in the CCDB. Alex A. and Sean should review this list.
      • What is the best way to parameterize the FDC inefficiency? Is the easiest way to do this at the pseudohit level?
  • Resolution
    • CDC
      • I believe the CDC residuals are pretty close, based on the last results Mike presented. Perhaps a small adjustment to the drift resolutions can still be made.
    • FDC
      • I am not sure where we are with FDC hit residuals. Again, should we look at the pseudohit level?
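Below is a minimal hit-level sketch (C++) of how the efficiency effects above could be applied during smearing: dropping hits on known dead channels, applying the same readout threshold as the data, and throwing against a drift-time-dependent efficiency. The struct layout, function names, efficiency parameterization, and CCDB-style dead-channel table are illustrative assumptions, not the actual mcsmear code or table format.

  // Hedged sketch (not the actual mcsmear code): apply per-channel dead-channel
  // masks, readout thresholds, and a drift-time-dependent efficiency to
  // simulated CDC hits.  All names and shapes here are assumptions.

  #include <random>
  #include <set>
  #include <utility>
  #include <vector>

  struct CDCHit {
      int ring;        // CDC ring number
      int straw;       // straw number within the ring
      double t;        // drift time (ns)
      double q;        // integrated charge (arbitrary units)
  };

  // Channels flagged as dead/inefficient, e.g. loaded from a CCDB table
  // (hypothetical layout: one (ring, straw) pair per row).
  using DeadChannelSet = std::set<std::pair<int, int>>;

  // Hypothetical average hit efficiency vs. drift time; a per-wire correction
  // would replace this with a lookup keyed on (ring, straw).
  double average_efficiency(double t_drift) {
      // Placeholder shape: flat plateau with a modest drop at large drift times.
      return (t_drift < 600.0) ? 0.98 : 0.92;
  }

  std::vector<CDCHit> smear_and_filter(const std::vector<CDCHit>& truth_hits,
                                       const DeadChannelSet& dead,
                                       double charge_threshold,
                                       std::mt19937& rng) {
      std::uniform_real_distribution<double> flat(0.0, 1.0);
      std::vector<CDCHit> out;
      for (const auto& hit : truth_hits) {
          // 1) Drop hits on known dead/inefficient channels.
          if (dead.count({hit.ring, hit.straw})) continue;
          // 2) Apply the same readout threshold used in the data.
          if (hit.q < charge_threshold) continue;
          // 3) Throw against the drift-time-dependent efficiency.
          if (flat(rng) > average_efficiency(hit.t)) continue;
          out.push_back(hit);
      }
      return out;
  }

The same structure would apply to FDC hits (or pseudohits), with the dead-wire list and threshold taken from the corresponding CCDB tables.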


Reconstructed Tracks

Some benchmark final states should be chosen to compare the performance. p 2pi? p 3pi? p 4pi?

From Paul:

Reconstruction benchmarks: Means, resolutions, errors, & efficiencies must match. If they don’t match:

  1. Means: Apply correction factors to the reconstructed data.
  2. Sigmas: Change the smearing as needed (see the sketch after this list).
  3. Errors: Apply calibration constants when building the covariance matrices as needed.
  4. Efficiencies: Cut regions from analyses where they don't match (e.g. edges of detector acceptance).
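
Below is a brief sketch (C++) illustrating items 2 and 3: adding the missing resolution in quadrature when smearing, and rescaling an error used in the covariance matrix so the pulls come out with the observed width. The quadrature-difference prescription and the function names are assumptions for illustration, not an agreed GlueX procedure.

  // Hedged sketch of resolution/error corrections; all names are assumptions.

  #include <algorithm>
  #include <cmath>
  #include <random>

  // Item 2: if the simulated resolution is narrower than the data, add the
  // missing width in quadrature when smearing.
  double extra_smearing_sigma(double sigma_data, double sigma_mc) {
      return std::sqrt(std::max(0.0, sigma_data * sigma_data - sigma_mc * sigma_mc));
  }

  double apply_extra_smearing(double value, double sigma_extra, std::mt19937& rng) {
      if (sigma_extra <= 0.0) return value;  // nothing to add
      std::normal_distribution<double> gaus(0.0, sigma_extra);
      return value + gaus(rng);
  }

  // Item 3: rescale a diagonal variance when building the covariance matrix so
  // the reported uncertainty matches the pull width observed in data.
  double corrected_variance(double variance_mc, double pull_width_data) {
      return variance_mc * pull_width_data * pull_width_data;
  }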