diff --git a/chap2.tex b/chap2.tex index 3332fb7..b1c6be9 100644 --- a/chap2.tex +++ b/chap2.tex @@ -9,7 +9,7 @@ \section{Neutrino interactions at the GeV-scale} \nu_\mu n \rightarrow \mu^- p. \label{eq:CCQEInteraction} \end{equation} -For Neutral Current Elastic (NCE) interactions, the incident neutrino remains after the interaction has occurred and no nucleon conversion takes place. Because of this fact, the target nucleon in a NCE interaction need not be a neutron. So, for $\nu_\mu$ NCE interactions, there are two channels available +For \DomTC{Neutral Current Elastic (NCE) interactions}, the incident neutrino remains after the interaction has occurred and no nucleon conversion takes place. Because of this fact, the target nucleon in an NCE interaction need not be a neutron. So, for $\nu_\mu$ NCE interactions, there are two channels available \begin{equation} \nu_\mu n \rightarrow \nu_\mu n, \label{eq:NCEInteractionNeutronTarget} @@ -97,7 +97,7 @@ \section{Neutrino interactions with heavy nuclei} %As introduced above, consideration of nuclear effects in cross-section measurements is important. This is especially true for neutrino interactions on heavy target nuclei. As one can imagine, the presence of a nucleus can dramatically affect the interactions that are observed in a detector. A popular model for the nucleus is the Relativistic Fermi-Gas (RFG) model~\cite{Smith:1972xh}. The RFG model treats the nucleus as a collection of non-interacting nucleons sitting in a potential well. The nucleons are stacked in the potential well according to the Pauli exclusion principle. This leads to a uniform momentum distribution of the nucleons up to the Fermi momentum $p_F$. Importantly, the Pauli exclusion principle has a further effect. Because the final state nucleon is forbidden from occupying a state taken by another nucleon in the potential well, the energy transfer of the neutrino to the nucleon must result in a final state nucleon with a momentum above $p_F$, resulting in a reduction of the cross-section. \newline \newline -The RFG can only model the effect of the nucleus on the initial neutrino interaction which creates the final states. However, these final states are created within the nucleus and so additional interactions of the final states with the nucleus can occur. The Final-State Interactions (FSI) can significantly alter the momentum and direction of the final-state particles. As the final-state particles are used to infer neutrino properties, the FSI effects can alter the interpretation of the reconstructed events. In simulation, variations of the cascade model are typically used. This involves pushing the final-state particles through the nucleus in discreet steps and, at each step, probabilistically updating the particle properties. If at any point a final-state particles knocks out another nucleon, the additional nucleon is also pushed through the nucleus in parallel. The discreet stepping occurs until all relevant particles have escaped the nucleus. +The RFG can only model the effect of the nucleus on the initial neutrino interaction which creates the final states. However, these final states are created within the nucleus and so additional interactions of the final states with the nucleus can occur. These Final-State Interactions (FSI) can significantly alter the momentum and direction of the final-state particles. As the final-state particles are used to infer neutrino properties, the FSI effects can alter the interpretation of the reconstructed events.
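Before turning to the simulation of FSI below, the Pauli-blocking suppression discussed above can be illustrated with a toy Monte Carlo: sample an initial nucleon momentum uniformly inside the Fermi sphere, apply a momentum transfer $\vec{q}$, and accept the event only if the outgoing nucleon lies above the Fermi surface. This is a minimal sketch, not the RFG implementation of any generator; the Fermi momentum value, the fixed momentum transfers and the uniform sampling are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

P_FERMI = 0.220  # GeV/c; a typical Fermi momentum, assumed for illustration

def sample_fermi_sphere(rng):
    # Rejection-sample a momentum uniformly inside the Fermi sphere.
    while True:
        p = rng.uniform(-P_FERMI, P_FERMI, size=3)
        if np.linalg.norm(p) < P_FERMI:
            return p

def unblocked_fraction(q, rng, n_trials=100000):
    # Fraction of interactions whose outgoing nucleon lies above the
    # Fermi surface (|p + q| > p_F), i.e. is not Pauli-blocked.
    accepted = 0
    for _ in range(n_trials):
        if np.linalg.norm(sample_fermi_sphere(rng) + q) > P_FERMI:
            accepted += 1
    return accepted / n_trials

rng = np.random.default_rng(1)
# The suppression is strongest at low momentum transfer:
for qz in (0.05, 0.2, 0.5):
    print(qz, unblocked_fraction(np.array([0.0, 0.0, qz]), rng))
\end{verbatim}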
In simulation, variations of the cascade model are typically used. This involves pushing the final-state particles through the nucleus in discrete steps and, at each step, probabilistically updating the particle properties. If at any point a final-state particle knocks out another nucleon, the additional nucleon is also pushed through the nucleus in parallel. The discrete stepping occurs until all relevant particles have escaped the nucleus. \newline \newline To test such cross-section models, including nuclear effects, it is necessary to compare prediction with collected data. However, collected cross-section data for heavy nuclei is relatively sparse. In the case of lead, only two experiments have performed cross-section measurements. The first measurement was performed by the CHORUS~\cite{CHORUS_XSEC} experiment in 2003. The CHORUS detector, exposed to a wide-band $\nu_\mu$ beam from the CERN SPS with an average energy of 27~GeV, measured cross-sections for lead, iron, marble and polyethylene. However, because the absolute flux was not measured in the experiment, all of the cross-section measurements were normalised to a common constant. Their results are summarised in Fig.~\ref{fig:CHORUSXSec}\Yoshi{}{ADDRESSED - ``data/prediction'' is confusing because it looks like the ratio of the two. Say which experiment it is, and when the data was taken, in the caption, not just the body text}. diff --git a/chap4.tex b/chap4.tex index 4989c2a..363ea9e 100644 --- a/chap4.tex +++ b/chap4.tex @@ -26,7 +26,7 @@ \subsection{Neutrino interaction simulation} The main interaction modes at T2K energies are quasi-elastic scattering (CCQE), single pion production (CC1$\pi$) and Deep Inelastic Scattering (DIS), all of which have models in NEUT~\cite{LlewellynSmith1972261,Rein198179,1126-6708-2006-05-026}. \newline \newline -After the initial interactions, the final step is to simulate the final state interactions within the nucleus. Each particle involved in the interaction is pushed through the nucleus in discreet steps with the probability of a final state interaction being calculated at each step. If an interaction occurs, the final states of that interaction are also included in the subsequent steps. This interactive procedure models the particle cascade until all the final states have reached the nucleus boundary. At this point, all final state particles are recorded along with all of the information that created those particles. This information is stored in a vector file and passed on to the ND280 detector MC package which handles the detector's response to these final state particles. +After the initial interactions, the final step is to simulate the final state interactions within the nucleus. Each particle involved in the interaction is pushed through the nucleus in discrete steps with the probability of a final state interaction being calculated at each step. If an interaction occurs, the final states of that interaction are also included in the subsequent steps. This iterative procedure models the particle cascade until all the final states have reached the nucleus boundary. At this point, all final state particles are recorded along with all of the information that created those particles. This information is stored in a vector file and passed on to the ND280 detector MC package which handles the detector's response to these final state particles.
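To make the stepping logic above concrete, the following is a minimal sketch of a discrete-stepping cascade. It is emphatically not NEUT's implementation: the \texttt{Particle} class, the step length, the nuclear radius, the crude scattering and the flat per-step interaction probability are all placeholder assumptions.
\begin{verbatim}
import math
import random

STEP = 0.2            # fm per step, assumed
NUCLEAR_RADIUS = 6.7  # fm, roughly lead-sized, assumed

class Particle:
    """Hypothetical minimal particle: a position and a unit direction."""
    def __init__(self, pos, direction):
        self.pos = list(pos)
        self.dir = list(direction)

    def step(self):
        self.pos = [x + STEP * d for x, d in zip(self.pos, self.dir)]

    def radius(self):
        return math.sqrt(sum(x * x for x in self.pos))

def interact(particle):
    """Placeholder physics: deflect the particle and sometimes knock
    out a nucleon, returning any new particles to track."""
    particle.dir = [-d for d in particle.dir]  # crude 'scatter'
    if random.random() < 0.3:
        d = [random.uniform(-1, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in d)) or 1.0
        return [Particle(particle.pos, [x / n for x in d])]
    return []

def cascade(final_states, p_interact_per_step=0.01):
    escaped, in_flight = [], list(final_states)
    while in_flight:                      # particles still inside the nucleus
        particle = in_flight.pop()
        while particle.radius() < NUCLEAR_RADIUS:
            particle.step()
            if random.random() < p_interact_per_step:
                in_flight.extend(interact(particle))  # knocked-out nucleons
        escaped.append(particle)          # reached the nucleus boundary
    return escaped

# e.g. a single nucleon starting at the interaction point:
out = cascade([Particle((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))])
\end{verbatim}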
\newline \newline While the above description provides a general overview of the NEUT-based simulation of neutrino interactions, it only describes simulation of NEUT events within the ND280 detector itself. In reality, many interactions occur in the pit which surrounds the near detector, some of which have final state muons which enter ND280. So, a separate NEUT-based simulation of neutrino interactions from the T2K beam in the ND280 pit and the surrounding substrate is also generated. This kind of simulation will be referred to as sand MC (because the interaction target is largely sand in the surrounding pit) from now on. diff --git a/chap5.tex b/chap5.tex index b0a483b..bfe3cdc 100644 --- a/chap5.tex +++ b/chap5.tex @@ -138,7 +138,7 @@ \subsection{Discretisation of the parameter space} \section{ECal application of the Hough transform} \label{sec:ECalApplicationHoughTransform} -We must now address how the Hough transform can be used as a reconstruction tool in the ECal. To do this, \YoshiFinal{let us}{ADDRESS - was `let's'} consider a neutrino interaction which occurs in the ECal as illustrated in~\ref{fig:3StateInteractionNoReconstruction}. While the propagating neutrino is invisible to the ECal, the charged final state are definitely not. To first order, the final state particles propagate in straight lines depositing energy in the scintillator bars as they go. From this, we can infer that the hit bars arranged in straight lines should reveal the trajectory of the final states. As shown above, the Hough transform is capable of identifying straight lines from a set of coordinates. However, there are two complications in the ECal which the above sections have not addressed. We have only specifically discussed how to extract a single straight line from a pattern. As Fig.~\ref{fig:3StateInteractionNoReconstruction} shows, the number of final states can be, and is often, greater than one. This is merely a problem of computation which will be addressed in section~\ref{subsec:ParameterSpaceAnalysis}. A much more severe problem is that the above demonstrations only deal with patterns constructed from infinitesimal points. While the centre of a scintillator bar can be used as a point for parameter line generation, it is unlikely that a final state particle will pass through the central point of the scintillator bars that it propagates through. If this is not addressed, the Hough transform will be of little use in trajectory reconstruction. +We must now address how the Hough transform can be used as a reconstruction tool in the ECal. To do this, \YoshiFinal{let us}{ADDRESS - was `let's'} consider a neutrino interaction which occurs in the ECal as illustrated in Fig.~\ref{fig:3StateInteractionNoReconstruction}. While the propagating neutrino is invisible to the ECal, the charged final states are definitely not. To first order, the final state particles propagate in straight lines, depositing energy in the scintillator bars as they go. From this, we can infer that the hit bars arranged in straight lines should reveal the trajectory of the final states. As shown above, the Hough transform is capable of identifying straight lines from a set of coordinates. However, there are two complications in the ECal which the above sections have not addressed. We have only specifically discussed how to extract a single straight line from a pattern. As Fig.~\ref{fig:3StateInteractionNoReconstruction} shows, the number of final states can be, and often is, greater than one.
This is merely a problem of computation which will be addressed in section~\ref{subsec:ParameterSpaceAnalysis}. A much more severe problem is that the above demonstrations only deal with patterns constructed from infinitesimal points. While the centre of a scintillator bar can be used as a point for parameter line generation, it is unlikely that a final state particle will pass through the central point of the scintillator bars that it propagates through. If this is not addressed, the Hough transform will be of little use in trajectory reconstruction. \begin{figure} \centering \includegraphics[width=10cm]{images/hough_transform/3StateInteraction_SideLeftECal_NoReconstruction.eps} @@ -168,7 +168,7 @@ \subsection{Modelling the ECal bar} While the generated parameter line accurately represents every line which passes through the ECal bar, there are two problems with this approach. Firstly, the number of points to be Hough-transformed is very large, which results in a long CPU time. Secondly, there is a very high number of redundant calculations involved in the parameter line generation. Consider an exactly vertical line which passes through one of the points in the grid array. This line also passes through 10 other points in the same column of the grid. This means that when the parameter line is being generated, this vertical line is calculated 11 times for each column. Bearing this in mind, there are many points along the parameter line which are repeatedly calculated and provide no extra information. This would mean that any algorithm which uses this approach would be very CPU inefficient. \newline \newline -An alternative is to model the ECal bar as a set of points arranged in a cross as shown in Fig.~\ref{fig:ECalBarCrossRepresentation}. Assuming that the spacing between the points on each line of the cross is infinitesimal, any line which passes through the ECal bar would also have to pass through one of the points in the configuration. As the parameter space is discrete, the spacing between the points need not be infinitesimal but only small enough to ensure that no gaps appear in the parameter line. Using 45 points on each line of the cross, the ECal bar can be Hough-transformed by \YoshiFinal{Hough-}{ADDRESSED - was `Hough': There can be too many Houghs in a sentence}transforming each point in the cross configuration. An example of this result is shown in Fig.~\ref{fig:ECalBarHoughTransformCrossRepresentation} using the same ECal bar used to generate Fig.~\ref{fig:ECalBarHoughTransformGridRepresentation}. Clearly, Fig.~\ref{fig:ECalBarHoughTransformGridRepresentation} and Fig.~\ref{fig:ECalBarHoughTransformCrossRepresentation} are identical showing that the cross model achieves the same result as the grid model. Comparing the two, the cross model uses a 90 point representation whereas the grid model uses a 451 point representation. This should mean that an algorithm utilising the cross model would be a factor of five faster than one using a grid model. +An alternative is to model the ECal bar as a set of points arranged in a cross as shown in Fig.~\ref{fig:ECalBarCrossRepresentation}. Assuming that the spacing between the points on each line of the cross is infinitesimal, any line which passes through the ECal bar would also have to pass through one of the points in the configuration. As the parameter space is discrete, the spacing between the points need not be infinitesimal but only small enough to ensure that no gaps appear in the parameter line.
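As a concrete illustration of transforming a bar model into the parameter space, the following sketch accumulates the parameter lines of a set of points into a discretised $(\theta, r)$ histogram, assuming the usual normal-form parameterisation $r = x\cos\theta + y\sin\theta$. The binning granularity, the bar size and the cross layout used here are illustrative assumptions, not the values used by the ECal reconstruction.
\begin{verbatim}
import numpy as np

N_THETA, N_R, R_MAX = 180, 400, 2000.0  # assumed granularity (r in mm)
THETAS = np.linspace(0.0, np.pi, N_THETA, endpoint=False)

def hough_accumulate(points, acc):
    for x, y in points:
        r = x * np.cos(THETAS) + y * np.sin(THETAS)   # parameter line
        r_bin = np.floor((r + R_MAX) * N_R / (2 * R_MAX)).astype(int)
        ok = (r_bin >= 0) & (r_bin < N_R)
        acc[np.nonzero(ok)[0], r_bin[ok]] += 1

def cross_points(centre, half_diag, n=41):
    # Two diagonal lines of n points each through the bar centre
    # (an assumed cross orientation).
    cx, cy = centre
    t = np.linspace(-half_diag, half_diag, n)
    return [(cx + s, cy + s) for s in t] + [(cx + s, cy - s) for s in t]

acc = np.zeros((N_THETA, N_R))
for bar in [(100.0, 50.0), (120.0, 62.0), (140.0, 74.0)]:  # hypothetical hit bars
    hough_accumulate(cross_points(bar, 20.0), acc)
theta_bin, r_bin = np.unravel_index(acc.argmax(), acc.shape)  # best 2D line
\end{verbatim}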
Using \DomTC{41} points on each line of the cross, the ECal bar can be Hough-transformed by \YoshiFinal{Hough-}{ADDRESSED - was `Hough': There can be too many Houghs in a sentence}transforming each point in the cross configuration. An example of this result is shown in Fig.~\ref{fig:ECalBarHoughTransformCrossRepresentation} using the same ECal bar used to generate Fig.~\ref{fig:ECalBarHoughTransformGridRepresentation}. Clearly, Fig.~\ref{fig:ECalBarHoughTransformGridRepresentation} and Fig.~\ref{fig:ECalBarHoughTransformCrossRepresentation} are identical, showing that the cross model achieves the same result as the grid model. Comparing the two, the cross model uses an 82 point representation whereas the grid model uses a 451 point representation. This should mean that an algorithm utilising the cross model would be a factor of five faster than one using a grid model. \begin{figure}% \centering \subfloat[Cross representation of an ECal bar.]{\includegraphics[width=7cm]{images/ecal_hough_transform/ecal_cross_array.eps} \label{fig:ECalBarCrossRepresentation}} @@ -245,7 +245,7 @@ \subsection{3D track reconstruction} p_i = \frac{s_i}{s_i + b_i} \label{eq:BinProbabilityPDF}, \end{equation} -where $s_i$ is the number of correctly matched tracks in bin $i$ and $b_i$ is the number of incorrectly matched tracks in bin $i$. A discreet probability density distribution for $Q_{\textrm{ratio}}$ can then be formed by calculating $p_i$ for every bin. The discreet probability density distribution is then interpolated with splines to create the final probability distribution. An example of this for the two track, barrel case is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackQRatioPDF}. When a matching candidate pair is being considered, the value of $Q_{\textrm{ratio}}$ is calculated and used in the spline to retrieve $\mathcal{L}_{Q_{\textrm{ratio}}}$. +where $s_i$ is the number of correctly matched tracks in bin $i$ and $b_i$ is the number of incorrectly matched tracks in bin $i$. A discrete probability density distribution for $Q_{\textrm{ratio}}$ can then be formed by calculating $p_i$ for every bin. The discrete probability density distribution is then interpolated with splines to create the final probability distribution. An example of this for the two track, barrel case is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackQRatioPDF}. When a matching candidate pair is being considered, the value of $Q_{\textrm{ratio}}$ is calculated and used in the spline to retrieve $\mathcal{L}_{Q_{\textrm{ratio}}}$. \newline \newline \begin{figure}% @@ -257,7 +257,7 @@ \subsection{3D track reconstruction} \caption{$\Delta_{\textrm{layer, first}}$ and its probability density distribution in the barrel ECal for the two track case. NEUT-based simulation of ND280 beam events was used to produce the distributions.} \label{fig:DFL} \end{figure} -The second input to the likelihood, called $\Delta_{\textrm{layer, first}}$, is the difference in the starting layer of each 2D track which forms the matching candidate pair, where starting layer refers to the layer closest to the ND280 Tracker. For 2D tracks which should be matched together, $\Delta_{\textrm{layer, first}}$ should be 1. The separation ability of this variable for the two track, barrel is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackDFLSeparation}. The discreet probability density function was created using equation~\ref{eq:BinProbabilityPDF}. It was not necessary to interpolate using splines as $\Delta_{\textrm{layer, first}}$ is itself discreet.
The probability density function for $\Delta_{\textrm{layer, first}}$ is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackDFLPDF} for the two track, barrel case. For each matching candidate pair, the value of $\Delta_{\textrm{layer, first}}$ is calculated and the corresponding $\mathcal{L}_{\Delta_{\textrm{layer, first}}}$ is retrieved from the probability density function. +The second input to the likelihood, called $\Delta_{\textrm{layer, first}}$, is the difference in the starting layer of each 2D track which forms the matching candidate pair, where starting layer refers to the layer closest to the ND280 Tracker. For 2D tracks which should be matched together, $\Delta_{\textrm{layer, first}}$ should be 1. The separation ability of this variable for the two track, barrel case is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackDFLSeparation}. The discrete probability density function was created using equation~\ref{eq:BinProbabilityPDF}. It was not necessary to interpolate using splines as $\Delta_{\textrm{layer, first}}$ is itself discrete. The probability density function for $\Delta_{\textrm{layer, first}}$ is shown in Fig.~\ref{fig:3DMatchingBarrel2TrackDFLPDF} for the two track, barrel case. For each matching candidate pair, the value of $\Delta_{\textrm{layer, first}}$ is calculated and the corresponding $\mathcal{L}_{\Delta_{\textrm{layer, first}}}$ is retrieved from the probability density function. \newline \newline \begin{figure}% @@ -350,7 +350,7 @@ \section{Validation of the reconstruction} Because of the large scope of the reconstruction and the limited time available for the presented analysis, the validation of the reconstruction was done in parallel to the rest of the analysis and is still an ongoing effort. The validation that has been done can be split into two areas: validation of the performance of the algorithms purely using MC and comparisons of MC to data using control samples. \subsection{Validation of algorithm performance} \label{subsec:ValidationOfAlgorithmPerformance} -The first performance validation investigated the angular resolution using the enhanced reconstruction (by L. Pickering). This study calculated the angular resolution for MC muons fired into the side-left ECal for a range of entry angles. For each MC event, the cosine of the angular separation between the true particle angle and the reconstructed angle was calculated, $\cos\theta^{\textrm{Sep}}$. The values of $\cos\theta^{\textrm{Sep}}$ were then binned in a distribution. An outward scan from the peak of the distribution was then performed to find where the height decreased to $68\%$ of the peak. The value of $\theta^{\textrm{Sep}}$ and this point was taken as the angular resolution. These results are shown in Fig.~\ref{fig:MuonAngularResolutionDSECal}. Generally speaking, the found angular resolutions are very good. For long trajectories (200~mm), the angular resolution is within 15$^\circ$. It is only for short tracks (40~mm) that the angular resolution becomes large. +The first performance validation investigated the angular resolution using the enhanced reconstruction (by L. Pickering~\cite{LPickeringComm}). This study calculated the angular resolution for MC muons fired into the side-left ECal for a range of entry angles. For each MC event, the cosine of the angular separation between the true particle angle and the reconstructed angle was calculated, $\cos\theta^{\textrm{Sep}}$. The values of $\cos\theta^{\textrm{Sep}}$ were then binned in a distribution.
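To make the likelihood PDF construction above concrete before returning to the angular-resolution scan, the following is a minimal sketch of building $p_i = s_i / (s_i + b_i)$ per bin and interpolating it. The bin centres and the toy signal/background histograms are illustrative, and since the text does not specify the spline type, the choice of \texttt{CubicSpline} is an assumption.
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def binned_probability(s, b):
    s, b = np.asarray(s, float), np.asarray(b, float)
    return s / (s + b)                        # p_i = s_i / (s_i + b_i)

bin_centres = np.linspace(0.05, 0.95, 10)     # hypothetical Q_ratio binning
s_i = np.array([2, 5, 9, 20, 40, 80, 120, 90, 40, 10], float)  # matched
b_i = np.array([50, 40, 30, 25, 20, 15, 10, 5, 2, 1], float)   # mismatched

pdf = CubicSpline(bin_centres, binned_probability(s_i, b_i))

q_ratio = 0.62                # value computed for a matching candidate pair
likelihood_q = pdf(q_ratio)   # retrieved L_{Q_ratio}
\end{verbatim}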
An outward scan from the peak of the distribution was then performed to find where the height decreased to $68\%$ of the peak. The value of $\theta^{\textrm{Sep}}$ at this point was taken as the angular resolution. These results are shown in Fig.~\ref{fig:MuonAngularResolutionDSECal}. Generally speaking, the angular resolutions found are very good. For long trajectories (200~mm), the angular resolution is within 15$^\circ$. It is only for short tracks (40~mm) that the angular resolution becomes large. \begin{figure} \centering \includegraphics[width=12cm]{images/hough_validation/MuonAngularResolutionDSECal} diff --git a/chap7.tex b/chap7.tex index 124858b..3940639 100644 --- a/chap7.tex +++ b/chap7.tex @@ -191,7 +191,7 @@ \subsection{The vertex reconstruction algorithm} \caption{Parameters for the vertex reconstruction in the ECal.} \label{table:VertexReconParameters} \end{table} -The reconstruction now assesses the quality of the crossings and then attempts to cluster the good quality crossings together to form vertex candidates. The final step is to use the constituent tracks of each vertex candidate in a fit to estimate the position of the vertex. The following method was suggested by X. Lu. The position of the vertex, $\vec{P}$, is defined such that the sum of the squares of the distance of each track to $\vec{P}$ is minimised. An example setup of this is shown in Fig.~\ref{fig:VertexVectorDiagram} for three constituent tracks. By defining the square of the distance of a line, $l_i$, to $\vec{P}$ as $|\vec{r}_i|^2$, the function to minimise is +The reconstruction now assesses the quality of the crossings and then attempts to cluster the good quality crossings together to form vertex candidates. The final step is to use the constituent tracks of each vertex candidate in a fit to estimate the position of the vertex. The following method was suggested by X. Lu~\cite{XLuComm}. The position of the vertex, $\vec{P}$, is defined such that the sum of the squares of the distances of each track to $\vec{P}$ is minimised. An example setup of this is shown in Fig.~\ref{fig:VertexVectorDiagram} for three constituent tracks. By defining the square of the distance of a line, $l_i$, to $\vec{P}$ as $|\vec{r}_i|^2$, the function to minimise is \begin{figure}[!t] \centering \includegraphics[width=9cm]{images/selection/vertex_recon/vertex_vector_diagram} @@ -836,7 +836,7 @@ \subsection{Performance of the selection} \newline The selection efficiencies and purities for each prong topology are shown in table~\ref{table:SelEfficiency} and table~\ref{table:SelPurity} respectively. The final purities and efficiencies are generally good. %The only concern is the 1 prong topology efficiency in the barrel ECal which is notably lower than all other efficiencies. The main reason for the low efficiency is the strict fiducial volumes defined for each ECal module. Generally speaking, this low efficiency may cause a problem if there is a significant amount of signal migration from the 2 prong topology which is more efficient. It is important to address this when assessing systematic uncertainties for the analysis. -\begin{table}[b!] +\begin{table}[t!]
\begin{tabular}{ c c c c c } ECal & 1 prong topology & 2 prong topology & 3 prong topology & 4+ prong topology \\ module & efficiency ($\%$)& efficiency ($\%$)& efficiency ($\%$)& efficiency ($\%$) \\ \hline \hline @@ -857,7 +857,6 @@ \subsection{Performance of the selection} \label{table:SelPurity} \end{table} \newline -\newline The selection efficiency is defined to be 100$\%$ when no cuts have been made. There are inevitably signal events which are not reconstructed, primarily because the energy is below reconstruction threshold. It is non-trivial to include these events in an efficiency calculation for a specific prong topology, but it is also unnecessary, as the prong topologies are to be summed for the CC-inclusive cross-section measurement. So, the topology-combined efficiency and purity are presented in table~\ref{table:FinalEffPur}. It is these numbers, along with the sample itself, which are the final output of the Monte Carlo selection. \begin{table} \begin{tabular}{ c c c} diff --git a/chap8.tex b/chap8.tex index caa1c7a..22e3178 100644 --- a/chap8.tex +++ b/chap8.tex @@ -4,7 +4,7 @@ \chapter{Measurement of the $\nu_\mu$ charged current inclusive cross-section on \section{Measurement method} \label{sec:MeasurementMethod} -The chosen method fits a prediction to measured data using multiple data samples~\cite{PhysRevD.78.032003, PhysRevD.83.012005}\Yoshi{}{ADDRESSED - I put the references at the end of the sentence because the text doesn't flow well if the first sentence in the chapter is cut off after just ``The chosen method''.}. \Yoshi{Here, a ``sample'' refers to, for example, selected events in a particular ECal module.}{ADDRESSED - because ``sample'' could mean anything (e.g., different run periods or something), and without some idea of what it refers to, it is hard to follow the rest.} The core of the analysis method is a $\chi^2$ fit which tries to minimise the difference between the prediction and the data. The $\chi^2$ is defined as +The chosen method fits a prediction to measured data using multiple data samples~\cite{PhysRevD.78.032003, PhysRevD.83.012005, MScottThesis}\Yoshi{}{ADDRESSED - I put the references at the end of the sentence because the text doesn't flow well if the first sentence in the chapter is cut off after just ``The chosen method''.}. \Yoshi{Here, a ``sample'' refers to, for example, selected events in a particular ECal module.}{ADDRESSED - because ``sample'' could mean anything (e.g., different run periods or something), and without some idea of what it refers to, it is hard to follow the rest.} The core of the analysis method is a $\chi^2$ fit which tries to minimise the difference between the prediction and the data. The $\chi^2$ is defined as \begin{equation} \chi^2 = \Delta \vec{N}^{\textrm{T}} \left(\underline{\underline{V}}^{\textrm{syst}} + \underline{\underline{V}}^{\textrm{stat}} \right)^{-1} \Delta \vec{N}, \label{eqn:Chi2Def} @@ -335,10 +335,10 @@ \subsubsection{The charge resolution of the ECal scintillator bars} \label{fig:ECalChargeCovarianceMatrices} \end{figure} \subsubsection{Inherent noise in the ECal} -While the hit efficiency and charge systematic assessments are fairly all-encompassing in terms of ECal detector uncertainties, they can not address how the inherent noise in the ECal can affect the reconstruction and, by extension, the selected number of events in this analysis.
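Referring back to the $\chi^2$ definition above, the following toy sketch evaluates $\chi^2 = \Delta\vec{N}^{\textrm{T}}(\underline{\underline{V}}^{\textrm{syst}} + \underline{\underline{V}}^{\textrm{stat}})^{-1}\Delta\vec{N}$ for a hypothetical three-sample case, with $\Delta\vec{N}$ taken as the data-minus-prediction vector. The sample vectors and both covariance forms (Poisson-like statistical, fully correlated 5$\%$ systematic) are illustrative assumptions, not the analysis inputs.
\begin{verbatim}
import numpy as np

def chi_squared(n_data, n_pred, v_syst, v_stat):
    d_n = n_data - n_pred
    return d_n @ np.linalg.inv(v_syst + v_stat) @ d_n

n_data = np.array([105.0, 98.0, 52.0])    # events per sample (toy)
n_pred = np.array([100.0, 100.0, 50.0])
v_stat = np.diag(n_pred)                  # Poisson-like, assumed
v_syst = (0.05 * n_pred)[:, None] * (0.05 * n_pred)[None, :]  # 5%, assumed

print(chi_squared(n_data, n_pred, v_syst, v_stat))
\end{verbatim}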
There is only a need to address a noise systematic uncertainty if the simulated noise rate in the Monte Carlo is different to what actually happens in the ECal. +While the hit efficiency and charge systematic assessments are fairly all-encompassing in terms of ECal detector uncertainties, they cannot address how the inherent noise in the ECal can affect the reconstruction and, by extension, the selected number of events in this analysis. There is only a need to address a noise systematic uncertainty if the simulated noise rate in the Monte Carlo is different to what actually happens in the ECal. \DomTC{The method chosen for this assessment is taken from~\cite{MScottThesis}.} \newline \newline -To measure the noise rate, a control sample of cosmic rays (for the barrel) and through-going muons (for the DS ECal) were passed through the reconstruction. Before the clustering stages were initiated, the number of hits in the relevant ECal module were recorded. Then, after the reconstruction chain was completed, the number of hits which formed the final reconstructed objects were also counted. The difference between these two numbers forms the noise hit estimate. To test this estimator, the true number of noise hits in the Monte Carlo were also counted. +To measure the noise rate, control samples of cosmic rays (for the barrel) and through-going muons (for the DS ECal) were passed through the reconstruction. Before the clustering stages were initiated, the number of hits in the relevant ECal module was recorded. Then, after the reconstruction chain was completed, the number of hits which formed the final reconstructed objects were also counted. The difference between these two numbers forms the noise hit estimate. To test this estimator, the true number of noise hits in the Monte Carlo was also counted. \begin{figure}% \centering \subfloat[Cosmic control sample in the barrel ECals.]{\includegraphics[width=7.5cm]{images/measurement/systematics/detector/noise/noise_mc_estimate_mc_true_barrel.eps} \label{fig:ECalNoiseMCEstimateMCTrueBarrel}} diff --git a/chap9.tex b/chap9.tex index c7d6cf7..7a53dba 100644 --- a/chap9.tex +++ b/chap9.tex @@ -29,7 +29,7 @@ \section{Conclusions} \section{The future} \label{sec:TheFuture} -The highest priority for future iterations of this analysis should be an in depth analysis of the problematic background events shown in Fig.~\ref{fig:ClusterNHitsCutLevelGT0}. \DomTC{The ideal approach would be to study this unforeseen background in great detail. It should then be possible to improve the ND280 simulation to more accurately model this background or enhance the selection cuts to remove the background from the analysis}. +The highest priority for future iterations of this analysis should be an in-depth analysis of the problematic background events shown in Fig.~\ref{fig:ClusterNHitsCutLevelGT0}. \DomTC{The ideal approach would be to study this unforeseen background in great detail. It should then be possible to improve the ND280 simulation to more accurately model this background and enhance the selection cuts to remove the background from the analysis}. \newline \newline There is an immense amount of scope for improving the reconstruction algorithms. While the Hough transform has shown itself to be a very powerful method of reconstruction, analysis of the generated parameter space is actually very rudimentary. As a reminder, the current method looks for the maxima in the parameter space, which represent the 2D lines that pass through the most hits. This analysis method needs improvement.
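Returning briefly to the noise estimator described above, its logic reduces to a simple difference of counts; the sketch below records it, with hypothetical event and object attribute names (the actual data structures of the ND280 software are not assumed here).
\begin{verbatim}
# Minimal sketch of the noise-hit estimator: hits present before
# clustering minus hits used in the final reconstructed objects.
# All attribute names (ecal_hits, reconstructed_objects, is_noise)
# are hypothetical.

def noise_hit_estimate(event):
    hits_before_clustering = len(event.ecal_hits)
    hits_in_objects = sum(len(obj.hits) for obj in event.reconstructed_objects)
    return hits_before_clustering - hits_in_objects

def true_noise_count(event):
    # MC-truth cross-check: count hits flagged as noise in the simulation.
    return sum(1 for h in event.ecal_hits if h.is_noise)
\end{verbatim}
The future-work discussion of the parameter-space analysis continues below, with a sketch of the current hit-masking approach at the end of this section.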
For example, the hits which are selected as constituents of the track are removed from the parameter space, which means any subsequent track candidate does not have that information available. A more advanced method of hit masking would allow hit sharing between tracks to take place. diff --git a/mythesis.bib b/mythesis.bib index ec6c861..b3ccb9d 100644 --- a/mythesis.bib +++ b/mythesis.bib @@ -1348,3 +1348,22 @@ @article{ND280CalibTN volume = {130}, year = {2012}, } + +@phdthesis{MScottThesis, + author = {Mark Scott}, + title = {Measuring charged current neutrino interactions in the electromagnetic calorimeters of the ND280 detector}, + school = {Imperial College London}, + year = {2013} +} + +@misc{LPickeringComm, + author = {Luke Pickering}, + year = {2014}, + howpublished = {Private communication} +} + +@misc{XLuComm, + author = {Xianguo Lu}, + year = {2014}, + howpublished = {Private communication} +} diff --git a/thesis.tex b/thesis.tex index 719db67..9457184 100644 --- a/thesis.tex +++ b/thesis.tex @@ -33,7 +33,9 @@ \DeclareRobustCommand{\YoshiFinal}[2]{{#1}} % believe everything I say version %\DeclareRobustCommand{\YoshiFinal}[2]{{\color[RGB]{111,0,158}{{#1}\footnote{\color[RGB]{111,0,158}#2}}}} % full words of wisdom version %\DeclareRobustCommand{\DomTC}[1]{{\color[RGB]{111,0,158}{{#1}}}} % believe everything I say version -\DeclareRobustCommand{\DomTC}[1]{\textcolor{red}{#1}} % believe everything I say version +%\DeclareRobustCommand{\DomTC}[1]{\textcolor{red}{#1}} % believe everything I say version +\DeclareRobustCommand{\DomTC}[1]{{#1}} % believe everything I say version + %\DeclareRobustCommand{\DomTC}[1]{{#1}} % believe everything I say version @@ -82,14 +84,14 @@ \setcounter{secnumdepth}{5} %% Actually, more semantic chapter filenames are better, like "chap-bgtheory.tex" \input{chap1} - %\input{chap2} - %\input{chap3} - %\input{chap4} - %\input{chap5} - %\input{chap6} + \input{chap2} + \input{chap3} + \input{chap4} + \input{chap5} + \input{chap6} \input{chap7} - %\input{chap8} - %\input{chap9} + \input{chap8} + \input{chap9} %% To ignore a specific chapter while working on another, making the build faster, comment it out: %\input{chap4}
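Finally, as promised above, the current parameter-space analysis criticised in the future-work discussion can be sketched as iterative peak extraction with vote subtraction: find the accumulator maximum, assign nearby hits to the track, then subtract those hits' votes (the masking step) before searching for the next peak. The $(\theta, r)$ conventions match the earlier Hough sketch, and the hit-assignment tolerance is an assumed parameter.
\begin{verbatim}
import numpy as np

N_THETA, N_R, R_MAX = 180, 400, 2000.0
THETAS = np.linspace(0.0, np.pi, N_THETA, endpoint=False)

def votes(point):
    # Accumulator contribution of a single hit point.
    x, y = point
    acc = np.zeros((N_THETA, N_R))
    r = x * np.cos(THETAS) + y * np.sin(THETAS)
    r_bin = np.floor((r + R_MAX) * N_R / (2 * R_MAX)).astype(int)
    ok = (r_bin >= 0) & (r_bin < N_R)
    acc[np.nonzero(ok)[0], r_bin[ok]] = 1.0
    return acc

def extract_tracks(points, n_tracks, tol_mm=25.0):
    acc = sum(votes(p) for p in points)
    remaining, tracks = list(points), []
    for _ in range(n_tracks):
        t_bin, r_bin = np.unravel_index(acc.argmax(), acc.shape)
        theta = THETAS[t_bin]
        r_line = (r_bin + 0.5) * (2 * R_MAX) / N_R - R_MAX
        # Hits whose normal distance to the peak line is within tolerance:
        used = [p for p in remaining
                if abs(p[0] * np.cos(theta) + p[1] * np.sin(theta) - r_line)
                < tol_mm]
        for p in used:
            acc -= votes(p)        # mask this hit's votes out
            remaining.remove(p)
        tracks.append((theta, r_line, used))
    return tracks
\end{verbatim}
Because the subtraction removes each used hit's votes entirely, later peaks never see those hits; a softer masking (down-weighting rather than removing) is one route to the hit sharing proposed above.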