As part of the PN-fix changes several years ago, I changed the algorithm for how the Bertini Cascade decides which "target" nucleon/dinucleon within the nucleus is hit by an incoming "bullet" particle. I didn't expect this change to affect hadronic interactions, but some testing revealed that it did change them in a subtle way and worsened agreement with data. The simplest way to fix this is to add more casework to the algorithm: use Geant's original algorithm for hadrons, and the "fixed" algorithm only for photon bullets.
Here is a bit more detail on the two algorithms, why the fix was implemented, and why it goes wrong.

Original Bertini algorithm, for a given intra-nuclear step, in `G4NucleiModel::generateParticleFate` (a rough code sketch follows the list):
A1. Assign a path length for interacting with each target particle type, sampled from the physical distribution for hadrons or the 'forced' distribution for photons. These are stored in a member array `thePartners` by `G4NucleiModel::generateInteractionPartners()`.
A2. Attempt to interact with the first target (or, if a boundary between nuclear 'zones' precedes the first candidate interaction point, make that transition and go back to A1 in the next zone).
A3. If the interaction fails, repeat A2 on successive targets until an interaction succeeds.
A4. Proceed to a new step, propagating the final-state particles from the location of the successful interaction.
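To make that control flow concrete, here is a minimal C++ sketch of the A-loop. The names (`Partner`, `generatePartners`, `tryInteraction`, `zoneBoundaryDistance`) are hypothetical stand-ins, not the actual Geant4 interfaces, and the stub bodies just return dummy values:

```cpp
#include <algorithm>
#include <random>
#include <vector>

struct Partner {       // one candidate target (nucleon or dinucleon)
  double pathLength;   // sampled distance to this candidate interaction
  int    targetType;   // e.g. proton, neutron, NN pair
};

// Dummy stubs standing in for the real model internals.
std::mt19937 rng{42};
std::uniform_real_distribution<double> uni(0.0, 1.0);
std::vector<Partner> generatePartners(int /*zone*/) {          // A1
  return { {uni(rng), 0}, {uni(rng), 1} };
}
bool tryInteraction(const Partner& /*p*/) { return uni(rng) < 0.5; }
double zoneBoundaryDistance(int /*zone*/) { return 1e9; }      // never crossed here

bool stepAlgorithmA(int zone) {
  // A1: one sampled path length per target type, walked in order of distance.
  std::vector<Partner> partners = generatePartners(zone);
  std::sort(partners.begin(), partners.end(),
            [](const Partner& a, const Partner& b) {
              return a.pathLength < b.pathLength;
            });

  // A2: if a zone boundary precedes the first candidate, transition and restart.
  if (!partners.empty() &&
      zoneBoundaryDistance(zone) < partners.front().pathLength)
    return stepAlgorithmA(zone + 1);

  // A2/A3: try successive targets until one interaction succeeds.
  for (const Partner& p : partners)
    if (tryInteraction(p)) return true;  // A4: propagate final state from here

  return false;  // no interaction in this zone
}
```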
Note that the path lengths used in A1 for incident photons are 'forced', i.e. sampled from a nearly uniform distribution within a zone, with the same weight for different targets, irrespective of target density or cross-section. This is why the fix was necessary in the first place: dinucleon dissociation was over-generated by 1-2 orders of magnitude due to this bug. The contrast is sketched below.
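As a hedged illustration of that difference (the function names and the uniform-over-zone form are my reading of the description above, not the literal G4 code): physical sampling draws an exponential in the mean free path, while forced sampling is flat across the zone and blind to density and cross-section:

```cpp
#include <cmath>
#include <random>

static std::mt19937 gen{7};
static std::uniform_real_distribution<double> u01(0.0, 1.0);

// Physical: exponential with mean free path lambda, so targets with higher
// density or cross-section (smaller lambda) are hit proportionally more often.
double physicalLength(double lambda) { return -lambda * std::log(u01(gen)); }

// 'Forced': nearly uniform across the zone, same weight for every target,
// independent of density or cross-section -- the source of the dinucleon
// over-generation described above.
double forcedLength(double zoneLength) { return zoneLength * u01(gen); }
```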
Natalia's PN-fix algorithm, for a given intra-nuclear step (again sketched in code after the list):
B1. Assign a single path length, sampled from the total mean free path (or uniformly for an incident photon). At the same time, select a reaction to attempt at that point along the path, always weighted by the physical inverse mean free paths.
B2. Attempt the selected interaction (or, if a boundary between zones comes first, pass through the boundary and repeat B1 in the new zone).
B3. If the interaction fails, move the particle to the endpoint of the step and go back to B1. If it succeeds, proceed to a new step, propagating the final-state particles from the location of the successful interaction.
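A matching sketch of the B-loop, reusing the hypothetical `uni`/`rng`/`zoneBoundaryDistance` stubs from the A sketch above; `samplePathFromTotalMFP` and `pickReactionByInverseMFP` are illustrative names, not the real PN-fix code:

```cpp
// Illustrative stubs (dummy values, not the real model).
double samplePathFromTotalMFP(int /*zone*/) { return uni(rng); }   // B1
int    pickReactionByInverseMFP(int /*zone*/) { return 0; }        // B1
bool   tryReaction(int /*reaction*/) { return uni(rng) < 0.5; }
void   advanceToEndpoint(double /*len*/) {}

bool stepAlgorithmB(int zone) {
  while (true) {
    double len = samplePathFromTotalMFP(zone);      // B1: one length per step
    int reaction = pickReactionByInverseMFP(zone);  // B1: physical weights
    if (zoneBoundaryDistance(zone) < len)
      return stepAlgorithmB(zone + 1);              // B2: cross boundary, restart
    if (tryReaction(reaction)) return true;         // B3: success -> new step
    advanceToEndpoint(len);                         // B3: failure -> move on, resample
  }
}
```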
This algorithm is almost equivalent to the "original Bertini" algorithm, except for a subtlety in the A1/B1 path-length code that discards interactions with path length less than a threshold value `young_cut` (the contrast is sketched below). In algorithm B, when the sampled path length is less than `young_cut`, the bullet just traverses the zone without interacting. In algorithm A, when the earliestt sampled path length is less than `young_cut`, a second process is sampled instead before rejecting the event altogether. (Both of these seem like weird, ad hoc, unphysical behaviors to me -- but A has been validated to match data after tuning, and B hasn't been tuned or validated.) This difference seems to matter enough to generate noticeable deviations from HARP hadron-nuclear scattering data: algorithm B performs worse than the original algorithm in a global fit to that data, though it actually performs better at some energies.
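A minimal sketch of how I read that `young_cut` difference (the threshold value and function names are illustrative, not the actual code):

```cpp
constexpr double young_cut = 1e-3;  // threshold path length; value illustrative

// Algorithm B: a sampled length below young_cut means the bullet simply
// traverses the zone without interacting.
bool survivesYoungCutB(double sampledLen) {
  return sampledLen >= young_cut;   // false -> no interaction this zone
}

// Algorithm A: if the earliest candidate is below young_cut, one more
// process is sampled before the event is rejected altogether.
bool survivesYoungCutA(double firstLen, double retryLen) {
  if (firstLen >= young_cut) return true;  // earliest candidate stands
  return retryLen >= young_cut;            // single retry, then reject
}
```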
Short of improving and re-tuning the whole Bertini cascade treatment of young particles, the way to preserve the successes of both algorithms is to use both: B for forced (photon) bullets, and A for everything else.
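Tying the sketches together, the proposed casework would amount to a dispatch like this (again hypothetical; the real change would live inside `G4NucleiModel::generateParticleFate`):

```cpp
// Use the PN-fix loop (B) only for 'forced' photon bullets, and the
// original, validated loop (A) for hadrons and everything else.
bool generateParticleFateSketch(bool bulletIsForced, int zone) {
  return bulletIsForced ? stepAlgorithmB(zone)   // forced (photon) sampling
                        : stepAlgorithmA(zone);  // tuned hadron path
}
```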