Inconsistent flux for 2019-11 #217
I also noticed two related issues (comparison attachments omitted). In between, the reverse lookup was introduced for the ReactionFilter to figure out the counter number from the stored photon energy: JeffersonLab/halld_recon#536
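For context, here is a minimal sketch of what such a reverse lookup amounts to, assuming counter energy bins built from the endpoint energy and the CCDB scaled-energy-range tables; the function name, bin layout, and numbers are illustrative, not the halld_recon implementation. It also hints at why the calibration matters: a shifted photon energy can fall into a neighboring counter's bin.

```python
# Illustrative reverse lookup from a stored photon energy back to the tagger
# counter number, given (counter_id, E_low, E_high) bins in GeV. Not the
# halld_recon code; all values below are placeholders.
def find_counter(e_gamma, counter_bins):
    """Return the counter id whose energy range contains e_gamma, or None."""
    for counter_id, e_low, e_high in counter_bins:
        if e_low <= e_gamma < e_high:
            return counter_id
    return None

endpoint = 11.6  # GeV, illustrative endpoint energy
# (counter id, low, high) as fractions of the endpoint, as in scaled_energy_range
scaled_ranges = [(1, 0.97, 0.98), (2, 0.96, 0.97)]
bins = [(cid, lo * endpoint, hi * endpoint) for cid, lo, hi in scaled_ranges]
print(find_counter(11.3, bins))  # -> 1
```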
Thanks @aaust. There are a few issues to address here.
Each of these will take some time to sort out, and we need to determine the best place to keep track of this information: either a new DB table or hard-coding the relevant transitions in this script.
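For illustration, one way the "hard-coded transitions in this script" option could look is a run-range to calibration-time map kept directly in the script; all run numbers and dates below are placeholders, not real transitions.

```python
# Sketch of the hard-coded option: map run ranges to the CCDB calibration time
# that matches the trees produced for those runs. Placeholder values only.
from datetime import datetime

# (first run, last run) -> CCDB calibration time to use for that range
CALIB_TRANSITIONS = [
    ((71350, 71999), datetime(2021, 6, 1)),
    ((72000, 73266), datetime(2022, 1, 1)),
]

def calib_time_for_run(run):
    for (lo, hi), calibtime in CALIB_TRANSITIONS:
        if lo <= run <= hi:
            return calibtime
    raise ValueError(f"Run {run} not covered by the hard-coded transitions")
```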
Thanks to @aaust for adding some additional tables to our dataVersion DB (https://halldweb.jlab.org/cgi-bin/data_monitoring/monitoring/dataVersions.py), we now have calibration times available for the relevant analysis launches that use the improved tagger energy calibration. An updated version of plot_flux_ccdb.py is on a branch. Note: it now requires the user to provide the REST and Analysis Launch versions of the ROOT trees they're using for analysis, since assuming the defaults is no longer sufficient.
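As an illustration of the new required inputs, a small sketch of how the script could take those versions on the command line; the option names are placeholders and may differ from the ones on the branch.

```python
# Sketch of the new required inputs: the REST version and the analysis-launch
# version of the ROOT trees, which together pick the matching CCDB calibration
# time from the dataVersions DB. Option names here are placeholders.
from argparse import ArgumentParser

parser = ArgumentParser(description="Tagged-flux lookup for a run range")
parser.add_argument("--rest-ver", required=True,
                    help="REST production version of the analyzed trees")
parser.add_argument("--ana-ver", required=True,
                    help="Analysis-launch version of the analyzed trees")
parser.add_argument("--begin-run", type=int, required=True)
parser.add_argument("--end-run", type=int, required=True)
args = parser.parse_args()

# (args.rest_ver, args.ana_ver) would then be mapped to a CCDB calibtime,
# e.g. via the dataVersions DB or a hard-coded transition table.
```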
plot_flux_ccdb.py returns wrong values for the 2020 data if it is executed for the full run range 71350-73266. Only when running the script on individual runs or on the 12 batches separately does one obtain the correct number of photons, and only then do the flux-normalized yields look smooth.
(attached plot: flux-normalized yields across the 2020 run range)
I assume the CCDB calib time has to be adapted while looping through the run range.
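A minimal sketch of that per-run approach, assuming the ccdb Python API pattern already used by the psflux scripts (AlchemyProvider plus get_assignment(path, run, variation, calibtime)); the table path, variation, flux column, and the calib_time_for_run helper are assumptions.

```python
# Sketch: sum the tagged flux run by run, updating the CCDB calibration time
# inside the loop instead of using a single calibtime for the whole range.
# Table path, variation, column index, and calib_time_for_run are placeholders.
from datetime import datetime
import ccdb

def calib_time_for_run(run):
    # Placeholder: in practice this would come from the dataVersions DB or a
    # hard-coded transition map such as the one sketched above.
    return datetime(2021, 6, 1) if run < 72000 else datetime(2022, 1, 1)

provider = ccdb.AlchemyProvider()
provider.connect("mysql://ccdb_user@hallddb.jlab.org/ccdb")
provider.authentication.current_user_name = "psflux_user"

total_flux = 0.0
for run in range(71350, 73267):
    calibtime = calib_time_for_run(run)   # adapt the calib time per run
    try:
        assignment = provider.get_assignment(
            "/PHOTON_BEAM/pair_spectrometer/lumi/tagged", run, "default", calibtime)
    except Exception:
        continue                          # skip runs without flux constants
    for row in assignment.constant_set.data_table:
        total_flux += float(row[1])       # flux column index is an assumption

print("total tagged flux for 71350-73266:", total_flux)
```

The key point is simply that the calibration time is re-evaluated for every run rather than fixed once for the whole 71350-73266 range.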