From 799a6aba8dd1d85382a40b9e50a8a07c8bc7472e Mon Sep 17 00:00:00 2001 From: mdoucet Date: Thu, 9 Jan 2025 13:54:48 +0000 Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20@=20neutrons?= =?UTF-8?q?/LiquidsReflectometer@b379cb212f17e5c0fd8003f9bb2b5c4b8da32077?= =?UTF-8?q?=20=F0=9F=9A=80?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- docs/index.html | 2 +- docs/search/search_index.json | 2 +- docs/sitemap.xml.gz | Bin 127 -> 127 bytes docs/user/event_processing/index.html | 12 ++++++------ 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/index.html b/docs/index.html index 1b87272..bb493b9 100644 --- a/docs/index.html +++ b/docs/index.html @@ -204,5 +204,5 @@

Developer Guide

diff --git a/docs/search/search_index.json b/docs/search/search_index.json index 2d029aa..7ae79cf 100644 --- a/docs/search/search_index.json +++ b/docs/search/search_index.json @@ -1 +1 @@ -{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Liquids Reflectometer Reduction User Guide Workflow overview Event processing Conda Environments Releases Contacting the Team The best mechanism for a user to request a change or report a bug is to contact the SANS CIS. Please email Mathieu Doucet with your request. A change needs to be in the form of a: Story for any enhancement request Defect for any bug fix request. API lr_reduction Developer Guide Contributing Guide Developer Documentation","title":"Liquids Reflectometer Reduction"},{"location":"#liquids-reflectometer-reduction","text":"","title":"Liquids Reflectometer Reduction"},{"location":"#user-guide","text":"Workflow overview Event processing Conda Environments Releases","title":"User Guide"},{"location":"#contacting-the-team","text":"The best mechanism for a user to request a change or report a bug is to contact the SANS CIS. Please email Mathieu Doucet with your request. A change needs to be in the form of a: Story for any enhancement request Defect for any bug fix request.","title":"Contacting the Team"},{"location":"#api","text":"lr_reduction","title":"API"},{"location":"#developer-guide","text":"Contributing Guide Developer Documentation","title":"Developer Guide"},{"location":"releases/","text":"Release Notes Notes for major or minor releases. Notes for patch releases are deferred. Release notes are written in reverse chronological order, with the most recent release at the top, using the following format: ## (date of release, format YYYY-MM-DD) **Of interest to the User**: - PR #XYZ one-liner description **Of interest to the Developer:** - PR #XYZ one-liner description 2.1.0 Of interest to the User : PR #33 enable dead time correction for runs with skipped pulses PR #26 add dead time correction to the computation of scaling factors PR #23 add dead time correction PR #19 Functionality to use two backgrounds PR #15 Ability to fit a background with a polynomial function Of interest to the Developer: PR #40 documentation to create a patch release PR #37 documentation conforming to that of the python project template PR #36 versioning with versioningit PR #25 Read in error events when computing correction PR #21 switch dependency from mantidworkbench to mantid PR #20 allow runtime initialization of new attributes for ReductionParameters PR #14 add first GitHub actions PR #12 switch from mantid to mantidworkbench conda package","title":"Release Notes"},{"location":"releases/#release-notes","text":"Notes for major or minor releases. Notes for patch releases are deferred. 
Release notes are written in reverse chronological order, with the most recent release at the top, using the following format: ## (date of release, format YYYY-MM-DD) **Of interest to the User**: - PR #XYZ one-liner description **Of interest to the Developer:** - PR #XYZ one-liner description","title":"Release Notes"},{"location":"releases/#210","text":"Of interest to the User : PR #33 enable dead time correction for runs with skipped pulses PR #26 add dead time correction to the computation of scaling factors PR #23 add dead time correction PR #19 Functionality to use two backgrounds PR #15 Ability to fit a background with a polynomial function Of interest to the Developer: PR #40 documentation to create a patch release PR #37 documentation conforming to that of the python project template PR #36 versioning with versioningit PR #25 Read in error events when computing correction PR #21 switch dependency from mantidworkbench to mantid PR #20 allow runtime initialization of new attributes for ReductionParameters PR #14 add first GitHub actions PR #12 switch from mantid to mantidworkbench conda package","title":"2.1.0"},{"location":"api/","text":"Overview lr_reduction.background lr_reduction.event_reduction lr_reduction.output lr_reduction.peak_finding lr_reduction.reduction_template_reader lr_reduction.template lr_reduction.time_resolved lr_reduction.utils lr_reduction.workflow","title":"Overview"},{"location":"api/#overview","text":"lr_reduction.background lr_reduction.event_reduction lr_reduction.output lr_reduction.peak_finding lr_reduction.reduction_template_reader lr_reduction.template lr_reduction.time_resolved lr_reduction.utils lr_reduction.workflow","title":"Overview"},{"location":"api/background/","text":"find_ranges_without_overlap Returns the part of r1 that does not contain r2 When summing pixels for reflectivity, include the full range, which means that for a range [a, b], b is included. The range that we return must always exclude the pixels included in r2. Parameters: r1 ( list ) \u2013 Range of pixels to consider r2 ( list ) \u2013 Range of pixels to exclude Returns: list \u2013 List of ranges that do not overlap with r2 functional_background Estimate background using a linear function over a background range that may include the specular peak. In the case where the peak is included in the background range, the peak is excluded from the background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background. It contains 4 pixels, defining up to two ranges. 
low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error side_background Original background subtraction done using two pixels defining the area next to the specular peak that are considered background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"Background"},{"location":"api/background/#lr_reduction.background.find_ranges_without_overlap","text":"Returns the part of r1 that does not contain r2 When summing pixels for reflectivity, include the full range, which means that for a range [a, b], b is included. The range that we return must always exclude the pixels included in r2. Parameters: r1 ( list ) \u2013 Range of pixels to consider r2 ( list ) \u2013 Range of pixels to exclude Returns: list \u2013 List of ranges that do not overlap with r2","title":"find_ranges_without_overlap"},{"location":"api/background/#lr_reduction.background.functional_background","text":"Estimate background using a linear function over a background range that may include the specular peak. In the case where the peak is included in the background range, the peak is excluded from the background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background. It contains 4 pixels, defining up to two ranges. 
low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"functional_background"},{"location":"api/background/#lr_reduction.background.side_background","text":"Original background subtraction done using two pixels defining the area next to the specular peak that are considered background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"side_background"},{"location":"api/event_reduction/","text":"Event based reduction for the Liquids Reflectometer EventReflectivity Data reduction for the Liquids Reflectometer. List of items to be taken care of outside this class: Edge points cropping Angle offset Putting runs together in one R(q) curve Scaling factors Pixel ranges include the min and max pixels. Parameters: scattering_workspace \u2013 Mantid workspace containing the reflected data direct_workspace \u2013 Mantid workspace containing the direct beam data [if None, normalization won't be applied] signal_peak ( list ) \u2013 Pixel min and max for the specular peak signal_bck ( list ) \u2013 Pixel range of the background [if None, the background won't be subtracted] norm_peak ( list ) \u2013 Pixel range of the direct beam peak norm_bck ( list ) \u2013 Direct background subtraction is not used [deprecated] specular_pixel ( float ) \u2013 Pixel of the specular peak signal_low_res ( list ) \u2013 Pixel range of the specular peak out of the scattering plane norm_low_res ( list ) \u2013 Pixel range of the direct beam out of the scattering plane q_min ( float , default: None ) \u2013 Value of lowest q point q_step ( float , default: -0.02 ) \u2013 Step size in Q. 
Enter a negative value to get a log scale q_max ( float , default: None ) \u2013 Value of largest q point tof_range ( ( list , None) , default: None ) \u2013 TOF range, or None theta ( float , default: 1.0 ) \u2013 Theta scattering angle in radians dead_time ( float , default: False ) \u2013 If not zero, dead time correction will be used paralyzable ( bool , default: True ) \u2013 If True, the dead time calculation will use the paralyzable approach dead_time_value ( float , default: 4.2 ) \u2013 Value of the dead time in microseconds dead_time_tof_step ( float , default: 100 ) \u2013 TOF bin size in microseconds use_emmission_time ( bool ) \u2013 If True, the emission time delay will be computed __repr__ Generate a string representation of the reduction settings. Returns: str \u2013 String representation of the reduction settings bck_subtraction Perform background subtraction on the signal. This method provides a higher-level call for background subtraction, hiding the ranges needed to define the Region of Interest (ROI). Parameters: normalize_to_single_pixel ( bool , default: False ) \u2013 If True, normalize the background to a single pixel. q_bins \u2013 Array of bins for the momentum transfer (q) values. wl_dist \u2013 Array of wavelength (wl) values. wl_bins \u2013 Array of bins for the wavelength (wl) values. q_summing ( bool , default: False ) \u2013 If True, sum the q values. Returns: Workspace \u2013 The workspace with the background subtracted. emission_time_correction Correct TOF for emission time delay in the moderator. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from tofs ( ndarray ) \u2013 Array of uncorrected TOF values Returns: ndarray \u2013 Array of corrected TOF values extract_meta_data Extract meta data from the loaded data file. extract_meta_data_4A 4A-specific meta data extract_meta_data_4B 4B-specific meta data Distance from source to sample was 13.63 meters prior to the source to detector distance being determined with Bragg edges to be 15.75 m. gravity_correction Gravity correction for each event Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. wl_list ( ndarray ) \u2013 Array of wavelengths for each event. Returns: ndarray \u2013 Array of gravity-corrected theta values for each event, in radians. norm_bck_subtraction Higher-level call for background subtraction for the normalization run. off_specular Compute off-specular Parameters: x_axis ( int , default: None ) \u2013 Axis selection from QX_VS_QZ, KZI_VS_KZF, DELTA_KZ_VS_QZ x_min ( float , default: -0.015 ) \u2013 Min value on x-axis x_max ( float , default: 0.015 ) \u2013 Max value on x-axis x_npts ( int , default: 50 ) \u2013 Number of points in x (negative will produce a log scale) z_min ( float , default: None ) \u2013 Min value on z-axis (if none, default Qz will be used) z_max ( float , default: None ) \u2013 Max value on z-axis (if none, default Qz will be used) z_npts ( int , default: -120 ) \u2013 Number of points in z (negative will produce a log scale) slice Retrieve a slice from the off-specular data. specular Compute specular reflectivity. For constant-Q binning, it's preferred to use tof_weighted=True. 
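As a concrete illustration of the class described above, the sketch below reduces one reflected-beam run against a direct-beam run. This is a minimal, hedged sketch: the run numbers, pixel ranges, and angle are hypothetical placeholders, and routine reduction normally goes through a RefRed template rather than calling this class directly. The specular() method it calls is documented next.

```python
# Hedged sketch of a direct EventReflectivity call; the run numbers and
# pixel ranges below are made-up examples, not values from this documentation.
from mantid.simpleapi import LoadEventNexus
from lr_reduction.event_reduction import EventReflectivity

ws_sc = LoadEventNexus(Filename="REF_L_201234")  # reflected-beam run (example)
ws_db = LoadEventNexus(Filename="REF_L_201235")  # direct-beam run (example)

event_refl = EventReflectivity(
    ws_sc, ws_db,
    signal_peak=[140, 150], signal_bck=[130, 160],
    norm_peak=[140, 150], norm_bck=None,
    specular_pixel=145.0,
    signal_low_res=[80, 170], norm_low_res=[80, 170],
    q_step=-0.02,   # negative step: logarithmic Q binning
    tof_range=None,
    theta=0.01,     # scattering angle in radians (example value)
)
# specular() (documented below) returns the Q bin boundaries, R, and dR
q_bins, refl, d_refl = event_refl.specular(q_summing=False)
```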
Parameters: q_summing ( bool , default: False ) \u2013 Turns on constant-Q binning tof_weighted ( bool , default: False ) \u2013 If True, binning will be done by weighting each event to the DB distribution bck_in_q ( bool , default: False ) \u2013 If True, the background will be estimated in Q space using the constant-Q binning approach clean ( bool , default: False ) \u2013 If True, and Q summing is True, then the leading artifact will be removed normalize ( bool , default: True ) \u2013 If True, and tof_weighted is False, normalization will be applied Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values specular_unweighted Simple specular reflectivity calculation. This is the same approach as the original LR reduction, which sums up pixels without constant-Q binning. The original approach bins in TOF, then rebins the final results after transformation to Q. This approach bins directly to Q. Parameters: q_summing ( bool , default: False ) \u2013 If True, sum the data in Q-space. normalize ( bool , default: True ) \u2013 If True, normalize the reflectivity by the direct beam. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values specular_weighted Compute reflectivity by weighting each event by flux. This allows for summing in Q and estimating the background in either Q or pixels next to the peak. Parameters: q_summing ( bool , default: True ) \u2013 If True, sum the data in Q-space. bck_in_q ( bool , default: False ) \u2013 If True, subtract background along Q lines. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values to_dict Returns meta-data to be used/stored. Returns: dict \u2013 Dictionary with meta-data apply_dead_time_correction Apply dead time correction, and ensure that it is done only once per workspace. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction applied compute_resolution Compute the Q resolution from the meta data. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. theta ( float , default: None ) \u2013 Scattering angle in radians q_summing ( bool , default: False ) \u2013 If True, the pixel size will be used for the resolution Returns: float \u2013 The dQ/Q resolution (FWHM) get_attenuation_info Retrieve information about attenuation from a Mantid workspace. This function calculates the total thickness of all attenuators that are in the path of the beam by summing up the thicknesses of the attenuators specified in the global variable CD_ATTENUATORS . Parameters: ws \u2013 Mantid workspace from which to retrieve the attenuation information. Returns: float \u2013 The total thickness of the attenuators in the path of the beam. get_dead_time_correction Compute dead time correction to be applied to the reflectivity curve. The method will also try to load the error events from each of the data files to ensure that we properly estimate the dead time correction. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction to apply get_q_binning Determine Q binning. 
This function calculates the binning for Q values based on the provided minimum, maximum, and step values. If the step value is positive, it generates a linear binning. If the step value is negative, it generates a logarithmic binning. Parameters: q_min ( float , default: 0.001 ) \u2013 The minimum Q value. q_max ( float , default: 0.15 ) \u2013 The maximum Q value. q_step ( float , default: -0.02 ) \u2013 The step size for Q binning. If positive, linear binning is used. If negative, logarithmic binning is used. Returns: ndarray \u2013 A numpy array of Q values based on the specified binning. get_wl_range Determine TOF range from the data Parameters: ws \u2013 Mantid workspace to work with Returns: list \u2013 [min, max] wavelength range process_attenuation Correct for absorption by assigning weight to each neutron event Parameters: ws \u2013 Mantid workspace to correct thickness \u2013 Attenuator thickness in cm (default is 0). Returns: Mantid workspace \u2013 Corrected Mantid workspace read_settings Read settings file and return values for the given timestamp Parameters: ws \u2013 Mantid workspace Returns: dict \u2013 Dictionary with settings","title":"Event reduction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity","text":"Data reduction for the Liquids Reflectometer. List of items to be taken care of outside this class: Edge points cropping Angle offset Putting runs together in one R(q) curve Scaling factors Pixel ranges include the min and max pixels. Parameters: scattering_workspace \u2013 Mantid workspace containing the reflected data direct_workspace \u2013 Mantid workspace containing the direct beam data [if None, normalization won't be applied] signal_peak ( list ) \u2013 Pixel min and max for the specular peak signal_bck ( list ) \u2013 Pixel range of the background [if None, the background won't be subtracted] norm_peak ( list ) \u2013 Pixel range of the direct beam peak norm_bck ( list ) \u2013 Direct background subtraction is not used [deprecated] specular_pixel ( float ) \u2013 Pixel of the specular peak signal_low_res ( list ) \u2013 Pixel range of the specular peak out of the scattering plane norm_low_res ( list ) \u2013 Pixel range of the direct beam out of the scattering plane q_min ( float , default: None ) \u2013 Value of lowest q point q_step ( float , default: -0.02 ) \u2013 Step size in Q. Enter a negative value to get a log scale q_max ( float , default: None ) \u2013 Value of largest q point tof_range ( ( list , None) , default: None ) \u2013 TOF range, or None theta ( float , default: 1.0 ) \u2013 Theta scattering angle in radians dead_time ( float , default: False ) \u2013 If not zero, dead time correction will be used paralyzable ( bool , default: True ) \u2013 If True, the dead time calculation will use the paralyzable approach dead_time_value ( float , default: 4.2 ) \u2013 Value of the dead time in microseconds dead_time_tof_step ( float , default: 100 ) \u2013 TOF bin size in microseconds use_emmission_time ( bool ) \u2013 If True, the emission time delay will be computed","title":"EventReflectivity"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.__repr__","text":"Generate a string representation of the reduction settings. Returns: str \u2013 String representation of the reduction settings","title":"__repr__"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.bck_subtraction","text":"Perform background subtraction on the signal. 
This method provides a higher-level call for background subtraction, hiding the ranges needed to define the Region of Interest (ROI). Parameters: normalize_to_single_pixel ( bool , default: False ) \u2013 If True, normalize the background to a single pixel. q_bins \u2013 Array of bins for the momentum transfer (q) values. wl_dist \u2013 Array of wavelength (wl) values. wl_bins \u2013 Array of bins for the wavelength (wl) values. q_summing ( bool , default: False ) \u2013 If True, sum the q values. Returns: Workspace \u2013 The workspace with the background subtracted.","title":"bck_subtraction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.emission_time_correction","text":"Correct TOF for emission time delay in the moderator. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from tofs ( ndarray ) \u2013 Array of uncorrected TOF values Returns: ndarray \u2013 Array of corrected TOF values","title":"emission_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data","text":"Extract meta data from the loaded data file.","title":"extract_meta_data"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data_4A","text":"4A-specific meta data","title":"extract_meta_data_4A"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data_4B","text":"4B-specific meta data Distance from source to sample was 13.63 meters prior to the source to detector distance being determined with Bragg edges to be 15.75 m.","title":"extract_meta_data_4B"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.gravity_correction","text":"Gravity correction for each event Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. wl_list ( ndarray ) \u2013 Array of wavelengths for each event. Returns: ndarray \u2013 Array of gravity-corrected theta values for each event, in radians.","title":"gravity_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.norm_bck_subtraction","text":"Higher-level call for background subtraction for the normalization run.","title":"norm_bck_subtraction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.off_specular","text":"Compute off-specular Parameters: x_axis ( int , default: None ) \u2013 Axis selection from QX_VS_QZ, KZI_VS_KZF, DELTA_KZ_VS_QZ x_min ( float , default: -0.015 ) \u2013 Min value on x-axis x_max ( float , default: 0.015 ) \u2013 Max value on x-axis x_npts ( int , default: 50 ) \u2013 Number of points in x (negative will produce a log scale) z_min ( float , default: None ) \u2013 Min value on z-axis (if none, default Qz will be used) z_max ( float , default: None ) \u2013 Max value on z-axis (if none, default Qz will be used) z_npts ( int , default: -120 ) \u2013 Number of points in z (negative will produce a log scale)","title":"off_specular"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.slice","text":"Retrieve a slice from the off-specular data.","title":"slice"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular","text":"Compute specular reflectivity. For constant-Q binning, it's preferred to use tof_weighted=True. 
Parameters: q_summing ( bool , default: False ) \u2013 Turns on constant-Q binning tof_weighted ( bool , default: False ) \u2013 If True, binning will be done by weighting each event to the DB distribution bck_in_q ( bool , default: False ) \u2013 If True, the background will be estimated in Q space using the constant-Q binning approach clean ( bool , default: False ) \u2013 If True, and Q summing is True, then the leading artifact will be removed normalize ( bool , default: True ) \u2013 If True, and tof_weighted is False, normalization will be applied Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular_unweighted","text":"Simple specular reflectivity calculation. This is the same approach as the original LR reduction, which sums up pixels without constant-Q binning. The original approach bins in TOF, then rebins the final results after transformation to Q. This approach bins directly to Q. Parameters: q_summing ( bool , default: False ) \u2013 If True, sum the data in Q-space. normalize ( bool , default: True ) \u2013 If True, normalize the reflectivity by the direct beam. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular_unweighted"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular_weighted","text":"Compute reflectivity by weighting each event by flux. This allows for summing in Q and estimating the background in either Q or pixels next to the peak. Parameters: q_summing ( bool , default: True ) \u2013 If True, sum the data in Q-space. bck_in_q ( bool , default: False ) \u2013 If True, subtract background along Q lines. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular_weighted"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.to_dict","text":"Returns meta-data to be used/stored. Returns: dict \u2013 Dictionary with meta-data","title":"to_dict"},{"location":"api/event_reduction/#lr_reduction.event_reduction.apply_dead_time_correction","text":"Apply dead time correction, and ensure that it is done only once per workspace. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction applied","title":"apply_dead_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.compute_resolution","text":"Compute the Q resolution from the meta data. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. theta ( float , default: None ) \u2013 Scattering angle in radians q_summing ( bool , default: False ) \u2013 If True, the pixel size will be used for the resolution Returns: float \u2013 The dQ/Q resolution (FWHM)","title":"compute_resolution"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_attenuation_info","text":"Retrieve information about attenuation from a Mantid workspace. This function calculates the total thickness of all attenuators that are in the path of the beam by summing up the thicknesses of the attenuators specified in the global variable CD_ATTENUATORS . 
Parameters: ws \u2013 Mantid workspace from which to retrieve the attenuation information. Returns: float \u2013 The total thickness of the attenuators in the path of the beam.","title":"get_attenuation_info"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_dead_time_correction","text":"Compute dead time correction to be applied to the reflectivity curve. The method will also try to load the error events from each of the data files to ensure that we properly estimate the dead time correction. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction to apply","title":"get_dead_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_q_binning","text":"Determine Q binning. This function calculates the binning for Q values based on the provided minimum, maximum, and step values. If the step value is positive, it generates a linear binning. If the step value is negative, it generates a logarithmic binning. Parameters: q_min ( float , default: 0.001 ) \u2013 The minimum Q value. q_max ( float , default: 0.15 ) \u2013 The maximum Q value. q_step ( float , default: -0.02 ) \u2013 The step size for Q binning. If positive, linear binning is used. If negative, logarithmic binning is used. Returns: ndarray \u2013 A numpy array of Q values based on the specified binning.","title":"get_q_binning"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_wl_range","text":"Determine TOF range from the data Parameters: ws \u2013 Mantid workspace to work with Returns: list \u2013 [min, max] wavelength range","title":"get_wl_range"},{"location":"api/event_reduction/#lr_reduction.event_reduction.process_attenuation","text":"Correct for absorption by assigning weight to each neutron event Parameters: ws \u2013 Mantid workspace to correct thickness \u2013 Attenuator thickness in cm (default is 0). Returns: Mantid workspace \u2013 Corrected Mantid workspace","title":"process_attenuation"},{"location":"api/event_reduction/#lr_reduction.event_reduction.read_settings","text":"Read settings file and return values for the given timestamp Parameters: ws \u2013 Mantid workspace Returns: dict \u2013 Dictionary with settings","title":"read_settings"},{"location":"api/output/","text":"Write R(q) output RunCollection A collection of runs to assemble into a single R(Q) add Add a partial R(q) to the collection Parameters: q ( array ) \u2013 Q values r ( array ) \u2013 R values dr ( array ) \u2013 Error in R values meta_data ( dict ) \u2013 Meta data for the run dq ( array , default: None ) \u2013 Q resolution add_from_file Read a partial result file and add it to the collection Parameters: file_path ( str ) \u2013 The path to the file to be read merge Merge the collection of runs save_ascii Save R(Q) in ASCII format. This function merges the data before saving. It writes metadata and R(Q) data to the specified file in ASCII format. The metadata includes experiment details, reduction version, run title, start time, reduction time, and other optional parameters. The R(Q) data includes Q, R, dR, and dQ values. Parameters: file_path ( str ) \u2013 The path to the file where the ASCII data will be saved. meta_as_json ( bool , default: False ) \u2013 If True, metadata will be written in JSON format. Default is False. 
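The output entries above combine naturally into a short script; here is a minimal sketch (the partial-result file names are hypothetical, and a no-argument RunCollection constructor is assumed):

```python
# Assemble one R(Q) curve from partial results with the RunCollection API
# documented above; the file names are made-up examples.
from lr_reduction.output import RunCollection

collection = RunCollection()
for partial in ["REFL_201234_partial.txt", "REFL_201235_partial.txt"]:
    collection.add_from_file(partial)  # read a partial R(q) and its meta data

# save_ascii() merges the collection before writing Q, R, dR, dQ and meta data
collection.save_ascii("REFL_201234_combined_data_auto.txt", meta_as_json=False)
```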
read_file Read a data file and extract meta data Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"Output"},{"location":"api/output/#lr_reduction.output.RunCollection","text":"A collection of runs to assemble into a single R(Q)","title":"RunCollection"},{"location":"api/output/#lr_reduction.output.RunCollection.add","text":"Add a partial R(q) to the collection Parameters: q ( array ) \u2013 Q values r ( array ) \u2013 R values dr ( array ) \u2013 Error in R values meta_data ( dict ) \u2013 Meta data for the run dq ( array , default: None ) \u2013 Q resolution","title":"add"},{"location":"api/output/#lr_reduction.output.RunCollection.add_from_file","text":"Read a partial result file and add it to the collection Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"add_from_file"},{"location":"api/output/#lr_reduction.output.RunCollection.merge","text":"Merge the collection of runs","title":"merge"},{"location":"api/output/#lr_reduction.output.RunCollection.save_ascii","text":"Save R(Q) in ASCII format. This function merges the data before saving. It writes metadata and R(Q) data to the specified file in ASCII format. The metadata includes experiment details, reduction version, run title, start time, reduction time, and other optional parameters. The R(Q) data includes Q, R, dR, and dQ values. Parameters: file_path ( str ) \u2013 The path to the file where the ASCII data will be saved. meta_as_json ( bool , default: False ) \u2013 If True, metadata will be written in JSON format. Default is False.","title":"save_ascii"},{"location":"api/output/#lr_reduction.output.read_file","text":"Read a data file and extract meta data Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"read_file"},{"location":"api/peak_finding/","text":"fit_signal_flat_bck Fit a Gaussian peak. Parameters: x ( list ) \u2013 List of x values. y ( list ) \u2013 List of y values. x_min ( int , default: 110 ) \u2013 Start index of the list of points, by default 110. x_max ( int , default: 170 ) \u2013 End index of the list of points, by default 170. center ( float , default: None ) \u2013 Estimated center position, by default None. sigma ( float , default: None ) \u2013 If provided, the sigma will be fixed to the given value, by default None. background ( float , default: None ) \u2013 If provided, the value will be subtracted from y, by default None. Returns: c ( float ) \u2013 Fitted center position of the Gaussian peak. width ( float ) \u2013 Fitted width (sigma) of the Gaussian peak. fit ( ModelResult ) \u2013 The result of the fit. process_data Process a Mantid workspace to extract counts vs pixel. Parameters: workspace ( Mantid workspace ) \u2013 The Mantid workspace to process. summed ( bool , default: True ) \u2013 If True, the x pixels will be summed (default is True). tof_step ( int , default: 200 ) \u2013 The TOF bin size (default is 200). Returns: tuple \u2013 A tuple containing: - tof : numpy.ndarray The time-of-flight values. - _x : numpy.ndarray The pixel indices. - _y : numpy.ndarray The summed counts for each pixel.","title":"Peak finding"},{"location":"api/peak_finding/#lr_reduction.peak_finding.fit_signal_flat_bck","text":"Fit a Gaussian peak. Parameters: x ( list ) \u2013 List of x values. y ( list ) \u2013 List of y values. x_min ( int , default: 110 ) \u2013 Start index of the list of points, by default 110. x_max ( int , default: 170 ) \u2013 End index of the list of points, by default 170. 
center ( float , default: None ) \u2013 Estimated center position, by default None. sigma ( float , default: None ) \u2013 If provided, the sigma will be fixed to the given value, by default None. background ( float , default: None ) \u2013 If provided, the value will be subtracted from y, by default None. Returns: c ( float ) \u2013 Fitted center position of the Gaussian peak. width ( float ) \u2013 Fitted width (sigma) of the Gaussian peak. fit ( ModelResult ) \u2013 The result of the fit.","title":"fit_signal_flat_bck"},{"location":"api/peak_finding/#lr_reduction.peak_finding.process_data","text":"Process a Mantid workspace to extract counts vs pixel. Parameters: workspace ( Mantid workspace ) \u2013 The Mantid workspace to process. summed ( bool , default: True ) \u2013 If True, the x pixels will be summed (default is True). tof_step ( int , default: 200 ) \u2013 The TOF bin size (default is 200). Returns: tuple \u2013 A tuple containing: - tof : numpy.ndarray The time-of-flight values. - _x : numpy.ndarray The pixel indices. - _y : numpy.ndarray The summed counts for each pixel.","title":"process_data"},{"location":"api/reduction_template_reader/","text":"RefRed template reader. Adapted from Mantid code. ReductionParameters Class that holds the parameters for the reduction of a single data set. from_dict Update object's attributes from a dictionary with entries of the type attribute_name: attribute_value. Parameters: permissible ( bool , default: True ) \u2013 allow keys in data_dict that are not attribute names of ReductionParameters instances. Reading from data_dict will result in this instance having new attributes not defined in __init__() Raises: ValueError \u2013 when permissible=False and one entry (or more) of the dictionary is not an attribute of this object from_xml_element Read in data from XML Parameters: instrument_dom ( Document ) \u2013 to_xml Create XML from the current data. from_xml Read in data from XML string Parameters: xml_str ( str ) \u2013 String representation of a list of ReductionParameters instances Returns: list \u2013 List of ReductionParameters instances getBoolElement Parse a boolean element from the dom object getContent Returns the content of a tag within a dom object getFloatElement Parse a float element from the dom object getFloatList Parse a list of floats from the dom object getIntElement Parse an integer element from the dom object getIntList Parse a list of integers from the dom object getStringElement Parse a string element from the dom object getStringList Parse a list of strings from the dom object getText Utility method to extract text out of an XML node to_xml Create XML from the current data. Parameters: data_sets ( list ) \u2013 List of ReductionParameters instances Returns: str \u2013 XML string","title":"Reduction template reader"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters","text":"Class that holds the parameters for the reduction of a single data set.","title":"ReductionParameters"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.from_dict","text":"Update object's attributes from a dictionary with entries of the type attribute_name: attribute_value. Parameters: permissible ( bool , default: True ) \u2013 allow keys in data_dict that are not attribute names of ReductionParameters instances. 
Reading from data_dict will result in this instance having new attributes not defined in __init__() Raises: ValueError \u2013 when permissible=False and one entry (or more) of the dictionary is not an attribute of this object","title":"from_dict"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.from_xml_element","text":"Read in data from XML Parameters: instrument_dom ( Document ) \u2013","title":"from_xml_element"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.to_xml","text":"Create XML from the current data.","title":"to_xml"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.from_xml","text":"Read in data from XML string Parameters: xml_str ( str ) \u2013 String representation of a list of ReductionParameters instances Returns: list \u2013 List of ReductionParameters instances","title":"from_xml"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getBoolElement","text":"Parse a boolean element from the dom object","title":"getBoolElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getContent","text":"Returns the content of a tag within a dom object","title":"getContent"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getFloatElement","text":"Parse a float element from the dom object","title":"getFloatElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getFloatList","text":"Parse a list of floats from the dom object","title":"getFloatList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getIntElement","text":"Parse an integer element from the dom object","title":"getIntElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getIntList","text":"Parse a list of integers from the dom object","title":"getIntList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getStringElement","text":"Parse a string element from the dom object","title":"getStringElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getStringList","text":"Parse a list of strings from the dom object","title":"getStringList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getText","text":"Utility method to extract text out of an XML node","title":"getText"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.to_xml","text":"Create XML from the current data. Parameters: data_sets ( list ) \u2013 List of ReductionParameters instances Returns: str \u2013 XML string","title":"to_xml"},{"location":"api/template/","text":"Reduce a data run using a template generated by RefRed process_from_template The clean option removes leading zeros and the drop when doing q-summing read_template Read template from file. @param sequence_number: the ID of the data set within the sequence of runs scaling_factor Apply scaling factor from reference scaling data @param workspace: Mantid workspace","title":"Template"},{"location":"api/template/#lr_reduction.template.process_from_template","text":"The clean option removes leading zeros and the drop when doing q-summing","title":"process_from_template"},{"location":"api/template/#lr_reduction.template.read_template","text":"Read template from file. 
@param sequence_number: the ID of the data set within the sequence of runs","title":"read_template"},{"location":"api/template/#lr_reduction.template.scaling_factor","text":"Apply scaling factor from reference scaling data @param workspace: Mantid workspace","title":"scaling_factor"},{"location":"api/time_resolved/","text":"Time-resolved data reduction reduce_30Hz_from_ws Perform 30Hz reduction @param meas_ws_30Hz: Mantid workspace of the data we want to reduce @param ref_ws_30Hz: Mantid workspace of the reference data, taken with the same config @param data_60Hz: reduced reference data at 60Hz @param template_data: template data object (for 30Hz) @param scan_index: scan index to use within the template. reduce_30Hz_slices_ws Perform 30Hz reduction @param meas_ws_30Hz: workspace of the data we want to reduce @param ref_ws_30Hz: workspace of the reference data, taken with the same config @param ref_data_60Hz: file path of the reduced data file at 60Hz @param template_30Hz: file path of the template file for 30Hz @param time_interval: time step in seconds @param scan_index: scan index to use within the template. reduce_slices_ws Perform time-resolved reduction :param meas_ws: workspace of the data we want to reduce :param template_file: autoreduction template file :param time_interval: time step in seconds :param scan_index: scan index to use within the template. :param theta_value: force theta value :param theta_offset: add a theta offset, defaults to zero","title":"Time resolved"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_30Hz_from_ws","text":"Perform 30Hz reduction @param meas_ws_30Hz: Mantid workspace of the data we want to reduce @param ref_ws_30Hz: Mantid workspace of the reference data, taken with the same config @param data_60Hz: reduced reference data at 60Hz @param template_data: template data object (for 30Hz) @param scan_index: scan index to use within the template.","title":"reduce_30Hz_from_ws"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_30Hz_slices_ws","text":"Perform 30Hz reduction @param meas_ws_30Hz: workspace of the data we want to reduce @param ref_ws_30Hz: workspace of the reference data, taken with the same config @param ref_data_60Hz: file path of the reduced data file at 60Hz @param template_30Hz: file path of the template file for 30Hz @param time_interval: time step in seconds @param scan_index: scan index to use within the template.","title":"reduce_30Hz_slices_ws"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_slices_ws","text":"Perform time-resolved reduction :param meas_ws: workspace of the data we want to reduce :param template_file: autoreduction template file :param time_interval: time step in seconds :param scan_index: scan index to use within the template. :param theta_value: force theta value :param theta_offset: add a theta offset, defaults to zero","title":"reduce_slices_ws"},{"location":"api/utils/","text":"amend_config Context manager to safely modify the Mantid Configuration Service while the function is executed. Parameters: new_config ( dict , default: None ) \u2013 (key, value) pairs to substitute in the configuration service data_dir ( Union [ str , list ] , default: None ) \u2013 prepend one (when passing a string) or more (when passing a list) directories to the list of data search directories. Alternatively, replace instead of prepend. data_dir_insert_mode ( str , default: 'prepend' ) \u2013 How to insert the data directories. 
Options are: \"prepend\" (default) and \"replace\".","title":"Utils"},{"location":"api/utils/#lr_reduction.utils.amend_config","text":"Context manager to safely modify the Mantid Configuration Service while the function is executed. Parameters: new_config ( dict , default: None ) \u2013 (key, value) pairs to substitute in the configuration service data_dir ( Union [ str , list ] , default: None ) \u2013 prepend one (when passing a string) or more (when passing a list) directories to the list of data search directories. Alternatively, replace instead of prepend. data_dir_insert_mode ( str , default: 'prepend' ) \u2013 How to insert the data directories. Options are: \"prepend\" (default) and \"replace\".","title":"amend_config"},{"location":"api/workflow/","text":"Autoreduction process for the Liquids Reflectometer assemble_results Find related runs and assemble them in one R(q) data set Parameters: first_run ( int ) \u2013 The first run number in the sequence output_dir ( str ) \u2013 Directory where the output files are saved average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction Returns: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers offset_from_first_run Find a theta offset by comparing the peak locations of the reflected and direct beams with the theta value in the meta data. When processing the first run of a set, store that offset in a file so it can be used for later runs. Parameters: ws ( Mantid workspace ) \u2013 The workspace to process template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved Returns: float \u2013 The theta offset reduce Function called by reduce_REFL.py, which lives in /SNS/REF_L/shared/autoreduce and is called by the automated reduction workflow. If average_overlap is used, overlapping points will be averaged, otherwise they will be left in the final data file. Parameters: average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged q_summing ( bool , default: False ) \u2013 If True, constant-Q binning will be used bck_in_q ( bool , default: False ) \u2013 If True, and constant-Q binning is used, the background will be estimated along constant-Q lines rather than along TOF/pixel boundaries. theta_offset ( float , default: 0 ) \u2013 Theta offset to apply. If None, the template value will be used. is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction output_dir ( str ) \u2013 Directory where the output files will be saved template_file ( str ) \u2013 Path to the template file containing the reduction parameters Returns: int \u2013 The sequence identifier for the run sequence reduce_explorer Very simple rough reduction for when playing around. 
Parameters: ws ( Mantid workspace ) \u2013 The workspace to process ws_db ( Mantid workspace ) \u2013 The workspace with the direct beam data theta_pv ( str , default: None ) \u2013 The PV name for the theta value center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the reflected beam db_center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the direct beam peak_width ( int , default: 10 ) \u2013 The width of the peak to use for the reflected beam Returns: qz_mid ( ndarray ) \u2013 The Q values refl ( ndarray ) \u2013 The reflectivity values d_refl ( ndarray ) \u2013 The uncertainty in the reflectivity write_template Read the appropriate entry in a template file and save an updated copy with the updated run number. Parameters: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved","title":"Workflow"},{"location":"api/workflow/#lr_reduction.workflow.assemble_results","text":"Find related runs and assemble them in one R(q) data set Parameters: first_run ( int ) \u2013 The first run number in the sequence output_dir ( str ) \u2013 Directory where the output files are saved average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction Returns: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers","title":"assemble_results"},{"location":"api/workflow/#lr_reduction.workflow.offset_from_first_run","text":"Find a theta offset by comparing the peak locations of the reflected and direct beams with the theta value in the meta data. When processing the first run of a set, store that offset in a file so it can be used for later runs. Parameters: ws ( Mantid workspace ) \u2013 The workspace to process template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved Returns: float \u2013 The theta offset","title":"offset_from_first_run"},{"location":"api/workflow/#lr_reduction.workflow.reduce","text":"Function called by reduce_REFL.py, which lives in /SNS/REF_L/shared/autoreduce and is called by the automated reduction workflow. If average_overlap is used, overlapping points will be averaged, otherwise they will be left in the final data file. Parameters: average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged q_summing ( bool , default: False ) \u2013 If True, constant-Q binning will be used bck_in_q ( bool , default: False ) \u2013 If True, and constant-Q binning is used, the background will be estimated along constant-Q lines rather than along TOF/pixel boundaries. theta_offset ( float , default: 0 ) \u2013 Theta offset to apply. If None, the template value will be used. is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction output_dir ( str ) \u2013 Directory where the output files will be saved template_file ( str ) \u2013 Path to the template file containing the reduction parameters Returns: int \u2013 The sequence identifier for the run sequence","title":"reduce"},{"location":"api/workflow/#lr_reduction.workflow.reduce_explorer","text":"Very simple rough reduction for when playing around. 
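For a quick look at a single run, the exploratory helper just described can be driven directly; a minimal sketch using the documented defaults follows (the run numbers are hypothetical placeholders, and the parameters are listed next):

```python
# Rough exploratory reduction with reduce_explorer; run numbers are examples.
from mantid.simpleapi import LoadEventNexus
from lr_reduction.workflow import reduce_explorer

ws = LoadEventNexus(Filename="REF_L_201234")     # reflected-beam run (example)
ws_db = LoadEventNexus(Filename="REF_L_201235")  # direct-beam run (example)

qz_mid, refl, d_refl = reduce_explorer(
    ws, ws_db, center_pixel=145, db_center_pixel=145, peak_width=10
)
```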
Parameters: ws ( Mantid workspace ) \u2013 The workspace to process ws_db ( Mantid workspace ) \u2013 The workspace with the direct beam data theta_pv ( str , default: None ) \u2013 The PV name for the theta value center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the reflected beam db_center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the direct beam peak_width ( int , default: 10 ) \u2013 The width of the peak to use for the reflected beam Returns: qz_mid ( ndarray ) \u2013 The Q values refl ( ndarray ) \u2013 The reflectivity values d_refl ( ndarray ) \u2013 The uncertainty in the reflectivity","title":"reduce_explorer"},{"location":"api/workflow/#lr_reduction.workflow.write_template","text":"Read the appropriate entry in a template file and save an updated copy with the updated run number. Parameters: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved","title":"write_template"},{"location":"developer/contributing/","text":"Contributing Guide Contributions to this project are welcome. All contributors agree to the following: It is assumed that the contributor is an ORNL employee and belongs to the development team. Thus the following instructions are specific to the ORNL development team's process. You have permission and any required rights to submit your contribution. Your contribution is provided under the license of this project and may be redistributed as such. All contributions to this project are public. All contributions must be \"signed off\" in the commit log and by doing so you agree to the above. Getting access to the main project Direct commit access to the project is currently restricted to core developers. All other contributions should be done through pull requests.","title":"Contributing Guide"},{"location":"developer/contributing/#contributing-guide","text":"Contributions to this project are welcome. All contributors agree to the following: It is assumed that the contributor is an ORNL employee and belongs to the development team. Thus the following instructions are specific to the ORNL development team's process. You have permission and any required rights to submit your contribution. Your contribution is provided under the license of this project and may be redistributed as such. All contributions to this project are public. All contributions must be \"signed off\" in the commit log and by doing so you agree to the above.","title":"Contributing Guide"},{"location":"developer/contributing/#getting-access-to-the-main-project","text":"Direct commit access to the project is currently restricted to core developers. 
All other contributions should be done through pull requests.","title":"Getting access to the main project"},{"location":"developer/developer/","text":"Developer Documentation Local Environment pre-commit Hooks Development procedure Updating mantid dependency Using the Data Repository Coverage reports Building the documentation Creating a stable release Local Environment For purposes of development, create conda environment lr_reduction with file environment.yml , and then install the package in development mode with pip : $ cd /path/to/lr_reduction/ $ conda env create --solver libmamba --file ./environment.yml $ conda activate lr_reduction (lr_reduction)$ pip install -e ./ By installing the package in development mode, one doesn't need to re-install package lr_reduction in conda environment lr_reduction after every change to the source code. pre-commit Hooks Activate the hooks by typing in the terminal: $ cd /path/to/lr_reduction/ $ conda activate lr_reduction (lr_reduction)$ pre-commit install Development procedure A developer is assigned a task during the neutron status meeting and changes the task's status to In Progress . The developer creates a branch off next and completes the task in this branch. The developer creates a pull request (PR) off next . Any new features or bugfixes must be covered by new and/or refactored automated tests. The developer asks for another developer as a reviewer to review the PR. A PR can only be approved and merged by the reviewer. The developer changes the task\u2019s status to Complete and closes the associated issue. Updating mantid dependency The mantid version and the mantid conda channel ( mantid/label/main or mantid/label/nightly ) must be synchronized across these files: environment.yml conda.recipe/meta.yml .github/workflows/package.yml Using the Data Repository To run the integration tests in your local environment, it is necessary first to download the data files. Because of their size, the files are stored in the Git LFS repository lr_reduction-data. It is necessary to have package git-lfs installed on your machine. $ sudo apt install git-lfs After this step, initialize or update the data repository: $ cd /path/to/lr_reduction $ git submodule update --init This will either clone liquidsreflectometer-data into /path/to/lr_reduction/tests/liquidsreflectometer-data or bring the liquidsreflectometer-data 's refspec in sync with the refspec listed within file /path/to/liquidsreflectometer/.gitmodules . An intro to Git LFS in the context of the Neutron Data Project is found in the Confluence pages (login required). Coverage reports GitHub Actions create reports for unit and integration tests, then combine them into one report and upload it to Codecov. Building the documentation A repository webhook is set up to automatically trigger the latest documentation build by GitHub Actions. To manually build the documentation: $ conda activate lr_reduction (lr_reduction)$ make docs After this, point your browser to file:///path/to/lr_reduction/docs/build/html/index.html Creating a stable release For a patch release, it may be allowed to bypass the creation of a candidate release. Still, we must update branch qa first, then create the release tag in branch main . 
For instance, to create patch version \"v2.1.1\": VERSION=\"v2.1.2\" # update the local repository git fetch --all --prune git fetch --prune --prune-tags origin # update branch qa from next, possibly bringing work done in qa missing in next git switch next git rebase -v origin/next git merge --no-edit origin/qa # commit message is automatically generated git push origin next # required to \"link\" qa to next, for future fast-forward git switch qa git rebase -v origin/qa git merge --ff-only origin/next # update branch main from qa git merge --no-edit origin/main # commit message is automatically generated git push origin qa # required to \"link\" main to qa, for future fast-forward git switch main git rebase -v origin/main git merge --ff-only origin/qa git tag $VERSION git push origin --tags main minor or major release, we create a stable release after we have created a Candidate release. For this customary procedure, follow: The Software Maturity Model for continous versioning as well as creating release candidates and stable releases. Update the :ref: Release Notes with major fixes, updates and additions since last stable release.","title":"Developer Documentation"},{"location":"developer/developer/#developer-documentation","text":"Local Environment pre-commit Hooks Development procedure Updating mantid dependency Using the Data Repository Coverage reports Building the documentation Creating a stable release","title":"Developer Documentation"},{"location":"developer/developer/#local-environment","text":"For purposes of development, create conda environment lr_reduction with file environment.yml , and then install the package in development mode with pip : $ cd /path/to/lr_reduction/ $ conda create env --solver libmamba --file ./environment.yml $ conda activate lr_reduction (lr_reduction)$ pip install -e ./ By installing the package in development mode, one doesn't need to re-install package lr_reduction in conda environment lr_reduction after every change to the source code.","title":"Local Environment"},{"location":"developer/developer/#pre-commit-hooks","text":"Activate the hooks by typing in the terminal: $ cd /path/to/mr_reduction/ $ conda activate mr_reduction (mr_reduction)$ pre-commit install","title":"pre-commit Hooks"},{"location":"developer/developer/#development-procedure","text":"A developer is assigned with a task during neutron status meeting and changes the task's status to In Progress . The developer creates a branch off next and completes the task in this branch. The developer creates a pull request (PR) off next . Any new features or bugfixes must be covered by new and/or refactored automated tests. The developer asks for another developer as a reviewer to review the PR. A PR can only be approved and merged by the reviewer. The developer changes the task\u2019s status to Complete and closes the associated issue.","title":"Development procedure"},{"location":"developer/developer/#updating-mantid-dependency","text":"The mantid version and the mantid conda channel ( mantid/label/main or mantid/label/nightly ) must be synchronized across these files: environment.yml conda.recipe/meta.yml .github/workflows/package.yml","title":"Updating mantid dependency"},{"location":"developer/developer/#using-the-data-repository","text":"To run the integration tests in your local environment, it is necessary first to download the data files. Because of their size, the files are stored in the Git LFS repository lr_reduction-data _. 
It is necessary to have package git-lfs installed on your machine. $ sudo apt install git-lfs After this step, initialize or update the data repository: $ cd /path/to/lr_reduction $ git submodule update --init This will either clone liquidsreflectometer-data into /path/to/lr_reduction/tests/liquidsreflectometer-data or bring the liquidsreflectometer-data 's refspec in sync with the refspec listed within file /path/to/liquidsreflectometer/.gitmodules . An intro to Git LFS in the context of the Neutron Data Project is found in the Confluence pages (login required).","title":"Using the Data Repository"},{"location":"developer/developer/#coverage-reports","text":"GitHub actions create reports for unit and integration tests, then combine them into one report and upload it to Codecov .","title":"Coverage reports"},{"location":"developer/developer/#building-the-documentation","text":"A repository webhook is set up to automatically trigger the latest documentation build by GitHub actions. To manually build the documentation: $ conda activate lr_reduction (lr_reduction)$ make docs After this, point your browser to file:///path/to/lr_reduction/docs/build/html/index.html","title":"Building the documentation"},{"location":"developer/developer/#creating-a-stable-release","text":"For a patch release, it may be allowed to bypass the creation of a candidate release. Still, we must update branch qa first, then create the release tag in branch main . For instance, to create patch version \"v2.1.1\": VERSION=\"v2.1.1\" # update the local repository git fetch --all --prune git fetch --prune --prune-tags origin # update branch qa from next, possibly bringing work done in qa missing in next git switch next git rebase -v origin/next git merge --no-edit origin/qa # commit message is automatically generated git push origin next # required to \"link\" qa to next, for future fast-forward git switch qa git rebase -v origin/qa git merge --ff-only origin/next # update branch main from qa git merge --no-edit origin/main # commit message is automatically generated git push origin qa # required to \"link\" main to qa, for future fast-forward git switch main git rebase -v origin/main git merge --ff-only origin/qa git tag $VERSION git push origin --tags main For a minor or major release, we create a stable release after we have created a candidate release. For this customary procedure, follow the Software Maturity Model for continuous versioning, as well as for creating release candidates and stable releases. Update the Release Notes with major fixes, updates and additions since the last stable release.","title":"Creating a stable release"},{"location":"user/conda_environments/","text":"Conda Environments Three conda environments are available in the analysis nodes, beamline machines, as well as the jupyter notebook servers. On a terminal: $ conda activate <environment> where <environment> is one of lr_reduction , lr_reduction-qa , and lr_reduction-dev lr_reduction Environment Activates the latest stable release of lr_reduction . Typically users will reduce their data in this environment. lr_reduction-qa Environment Activates a release-candidate environment. Instrument scientists and computational instrument scientists will carry out testing on this environment to prevent bugs from being introduced in the next stable release. lr_reduction-dev Environment Activates the environment corresponding to the latest changes in the source code.
Instrument scientists and computational instrument scientists will test the latest changes to lr_reduction in this environment.","title":"Conda Environments"},{"location":"user/conda_environments/#conda-environments","text":"Three conda environments are available in the analysis nodes, beamline machines, as well as the jupyter notebook servers. On a terminal: $ conda activate <environment> where <environment> is one of lr_reduction , lr_reduction-qa , and lr_reduction-dev","title":"Conda Environments"},{"location":"user/conda_environments/#lr_reduction-environment","text":"Activates the latest stable release of lr_reduction . Typically users will reduce their data in this environment.","title":"lr_reduction Environment"},{"location":"user/conda_environments/#lr_reduction-qa-environment","text":"Activates a release-candidate environment. Instrument scientists and computational instrument scientists will carry out testing on this environment to prevent bugs from being introduced in the next stable release.","title":"lr_reduction-qa Environment"},{"location":"user/conda_environments/#lr_reduction-dev-environment","text":"Activates the environment corresponding to the latest changes in the source code. Instrument scientists and computational instrument scientists will test the latest changes to lr_reduction in this environment.","title":"lr_reduction-dev Environment"},{"location":"user/event_processing/","text":"Event processing The BL4B instrument leverages the concept of weighted events for several aspects of the reduction process. Following this approach, each event is treated separately and is assigned a weight \\(w\\) to account for various corrections. Summing events then becomes the sum of the weights for all events. Loading events and dead time correction A dead time correction is available for rates above around 2000 counts/sec. Both paralyzing and non-paralyzing implementations are available. Paralyzing refers to a detector that extends its dead time period when events occur while the detector is already unavailable to process events, while non-paralyzing refers to a detector that always becomes available after the dead time period [1]. The dead time correction to be multiplied by the measured detector counts is given by the following for the paralyzing case: $$ C_{par} = -{\\cal Re}W_0(-R\\tau/\\Delta_{TOF}) \\, \\Delta_{TOF}/(R\\tau) $$ where \\(R\\) is the number of triggers per accelerator pulse within a time-of-flight bin \\(\\Delta_{TOF}\\) . The dead time for the current BL4B detector is \\(\\tau=4.2\\) \\(\\mu s\\) . In the equation above, \\({\\cal Re}W_0\\) refers to the real part of the principal branch of the Lambert W function. The following is used for the non-paralyzing case: $$ C_{non-par} = 1/(1-R\\tau/\\Delta_{TOF}) $$ By default, we use a paralyzing dead time correction with \\(\\Delta_{TOF}=100\\) \\(\\mu s\\) . These parameters can be changed. The BL4B detector is a wire chamber with a detector readout that includes digitization of the position of each event. For a number of reasons, like event pileup, it is possible for the electronics to be unable to assign a coordinate to a particular trigger event. These events are labelled as error events and stored along with the good events. While only good events are used to compute reflectivity, error events are included in the \\(R\\) value defined above. For clarity, we chose to define \\(R\\) in terms of the number of triggers as opposed to events.
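To make the two correction factors above concrete, here is a minimal numerical sketch (an illustration only, not the lr_reduction implementation) that evaluates both expressions with numpy and scipy; the values of TAU and DELTA_TOF are the defaults quoted above, and the rates array is hypothetical:

import numpy as np
from scipy.special import lambertw

TAU = 4.2          # dead time of the current BL4B detector, in microseconds
DELTA_TOF = 100.0  # TOF bin width, in microseconds

def dead_time_correction(rate, paralyzing=True):
    # rate: number of triggers per accelerator pulse in each TOF bin
    x = rate * TAU / DELTA_TOF
    if paralyzing:
        # C_par = -Re W_0(-x) / x, with x = R tau / Delta_TOF
        return -np.real(lambertw(-x)) / x
    # C_non-par = 1 / (1 - x)
    return 1.0 / (1.0 - x)

rates = np.array([0.5, 1.0, 2.0])  # hypothetical trigger rates per pulse
print(dead_time_correction(rates))                    # paralyzing case
print(dead_time_correction(rates, paralyzing=False))  # non-paralyzing case

Both factors are slightly above 1 at low rates and grow as the trigger rate approaches the dead time limit.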
Once the dead time correction as a function of time-of-flight is computed, each event in the run being processed is assigned a weight according to the correction: \\(w_i = C(t_i)\\) where \\(t_i\\) is the time-of-flight of event \\(i\\) . The value of \\(C\\) is interpolated from the computed dead time correction distribution. [1] V. B\u00e9cares, J. Bl\u00e1zquez, Detector Dead Time Determination and Optimal Counting Rate for a Detector Near a Spallation Source or a Subcritical Multiplying System, Science and Technology of Nuclear Installations, 2012, 240693, https://doi.org/10.1155/2012/240693 Correct for emission time Since neutrons of different wavelengths spend different amounts of time on average within the moderator, a linear approximation is used by the data acquisition system to account for emission time when phasing choppers. The time of flight for each event \\(i\\) is corrected by a small value given by \\(\\Delta t_i = -t_{off} + \\frac{h L}{m_n} A t_i\\) where \\(h\\) is Planck's constant, \\(m_n\\) is the mass of the neutron, and \\(L\\) is the distance between the moderator and the detector. The \\(t_{off}\\) , \\(A\\) , and \\(L\\) parameters are process variables that are stored in the data file and can be changed in the data acquisition system. Gravity correction The reflected angle of each neutron is corrected for the effect of gravity according to Campbell et al [2]. This correction is done individually for each neutron event according to its wavelength. [2] R.A. Campbell et al, Eur. Phys. J. Plus (2011) 126: 107. https://doi.org/10.1140/epjp/i2011-11107-8 Event selection Following the corrections described above, we are left with a list of events, each having a detector position ( \\(p_x, p_y\\) ) and a wavelength \\(\\lambda\\) . As necessary, regions of interest can be defined to identify events to include in the specular reflectivity calculation, and which will be used to estimate and subtract background. Event selection is performed before computing the reflectivity as described in the following sections. Q calculation The reflectivity \\(R(q)\\) is computed by calculating the \\(q\\) value for each event and histogramming in a predefined binning of the user's choice. This approach is slightly different from the traditional approach of binning events in TOF, and then converting the TOF axis to \\(q\\) . The event-based approach allows us to bin directly into a \\(q\\) binning of our choice and avoid the need for a final rebinning. The standard way of computing the reflected signal is simply to compute \\(q\\) for each event \\(i\\) using the following equation: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta - \\delta_{g,i})\\) where \\(\\delta_{g,i}\\) refers to the angular offset caused by gravity. Once \\(q\\) is computed for each neutron, they can be histogrammed, taking into account the weight assigned to each event: \\(S(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} w_i\\) where the sum is over all events falling in the \\(q_z\\) bin of width \\(\\Delta q_z\\) , and \\(w_i\\) is the weight of the \\(i^{th}\\) event. At this point we have an unnormalized \\(S(q_z)\\) , which remains to be corrected for the neutron flux. The value of \\(Q\\) is the integrated proton charge for the run. Constant-Q binning When using a divergent beam, or when measuring a warped sample, it may be beneficial to take into account where a neutron landed on the detector in order to recalculate its angle, and its \\(q\\) value.
In this case, the \\(q_{z, i}\\) equation above becomes: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta + \\delta_{f,i} - \\delta_{g,i})\\) where \\(\\delta_{f,i}\\) is the angular offset between where the specular peak appears on the detector and where the neutron was detected: \\(\\delta_{f,i} = \\mathrm{sgn}(\\theta)\\arctan(d(p_i-p_{spec})/L_{det})/2\\) where \\(d\\) is the size of a pixel, \\(p_i\\) is the pixel where event \\(i\\) was detected, \\(p_{spec}\\) is the pixel at the center of the peak distribution, and \\(L_{det}\\) is the distance between the sample and the detector. Care should be taken to assign the correct sign to the angle offset. For this reason, we add the sign of the scattering angle \\(\\mathrm{sgn}(\\theta)\\) in front of the previous equation to account for when we reflect up or down. Normalization options The scattering signal computed above needs to be normalized by the incoming flux in order to produce \\(R(q_z)\\) . For the simplest case, we follow the same procedure as above for the relevant direct beam run, and simply compute \\(S_1(q_z)\\) using the standard procedure above, using the same \\(q_z\\) binning, and replacing \\(\\theta\\) by the value at which the reflected beam was measured. We are then effectively computing what the measured signal would be if all neutrons from the beam reflected with a probability of 1. We refer to this distribution as \\(S_1(q_z)\\) . The measured reflectivity then becomes \\[ R(q_z) = S(q_z) / S_1(q_z) \\] This approach is equivalent to predetermining the TOF binning that would be needed to produce the \\(q_z\\) binning we actually want, summing counts in TOF for both scattered and direct beam, taking the ratio of the two, and finally converting TOF to \\(q_z\\) . The only difference is that we don't bother with the TOF bins and assign events directly into the \\(q_z\\) bins we know they will contribute to in the denominator of the normalization. Normalization using weighted events An alternative approach to the normalization described above is also implemented at BL4B. It leverages the weighted event approach. Using this approach, we can simply histogram the direct beam events in a wavelength distribution. In such a histogram, each bin in wavelength will have a flux \\[\\phi(\\lambda) = N_{\\lambda} / Q / \\Delta_{\\lambda}\\] where \\(N_{\\lambda}\\) is the number of neutrons in the bin of center \\(\\lambda\\) , \\(Q\\) is the integrated proton charge, and \\(\\Delta_{\\lambda}\\) is the wavelength bin width for the distribution. Coming back to the calculation of the reflected signal above, we can now add a new weight for each event according to the flux for its particular wavelength: \\[ w_i \\rightarrow \\frac{w_i}{\\phi(\\lambda_i)} \\frac{q_{z,i}}{\\lambda_i} \\] where \\(\\phi(\\lambda)\\) is interpolated from the distribution we measured above. The \\(q_z/\\lambda\\) term is the Jacobian to account for the transformation of wavelength to \\(q\\) . With this new weight, we can compute reflectivity directly from the \\(S(q_z)\\) equation above: \\[ R(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} \\frac{w_i}{\\phi(\\lambda_i)} \\frac{q_{z,i}}{\\lambda_i} \\]","title":"Event processing"},{"location":"user/event_processing/#event-processing","text":"The BL4B instrument leverages the concept of weighted events for several aspects of the reduction process. Following this approach, each event is treated separately and is assigned a weight \\(w\\) to account for various corrections, as illustrated in the sketch below.
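As an illustration of this weighted-event bookkeeping, the binning step amounts to a weighted histogram. A minimal numpy sketch follows (hypothetical event arrays, not the package API; the gravity offset and the division by the proton charge are omitted for brevity):

import numpy as np

def sum_weighted_events(wl, weights, theta, q_bins):
    # q_z for each event; the gravity offset delta_g is omitted here
    qz = 4.0 * np.pi / wl * np.sin(theta)
    # S(q_z): sum of the event weights falling in each q_z bin
    # (dividing by the integrated proton charge Q is left out)
    s_q, _ = np.histogram(qz, bins=q_bins, weights=weights)
    return s_q

q_bins = np.geomspace(0.008, 0.1, 60)     # log-spaced q_z bin boundaries
wl = np.random.uniform(2.5, 17.0, 10000)  # hypothetical wavelengths, in Angstrom
weights = np.ones_like(wl)                # per-event weights w_i, e.g. from dead time
s_q = sum_weighted_events(wl, weights, theta=0.01, q_bins=q_bins)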
Summing events then becomes the sum of the weights for all events.","title":"Event processing"},{"location":"user/event_processing/#loading-events-and-dead-time-correction","text":"A dead time correction is available for rates above around 2000 counts/sec. Both paralyzing and non-paralyzing implementations are available. Paralyzing refers to a detector that extends its dead time period when events occur while the detector is already unavailable to process events, while non-paralyzing refers to a detector that always becomes available after the dead time period [1]. The dead time correction to be multiplied by the measured detector counts is given by the following for the paralyzing case: $$ C_{par} = -{\\cal Re}W_0(-R\\tau/\\Delta_{TOF}) \\, \\Delta_{TOF}/(R\\tau) $$ where \\(R\\) is the number of triggers per accelerator pulse within a time-of-flight bin \\(\\Delta_{TOF}\\) . The dead time for the current BL4B detector is \\(\\tau=4.2\\) \\(\\mu s\\) . In the equation above, \\({\\cal Re}W_0\\) refers to the real part of the principal branch of the Lambert W function. The following is used for the non-paralyzing case: $$ C_{non-par} = 1/(1-R\\tau/\\Delta_{TOF}) $$ By default, we use a paralyzing dead time correction with \\(\\Delta_{TOF}=100\\) \\(\\mu s\\) . These parameters can be changed. The BL4B detector is a wire chamber with a detector readout that includes digitization of the position of each event. For a number of reasons, like event pileup, it is possible for the electronics to be unable to assign a coordinate to a particular trigger event. These events are labelled as error events and stored along with the good events. While only good events are used to compute reflectivity, error events are included in the \\(R\\) value defined above. For clarity, we chose to define \\(R\\) in terms of the number of triggers as opposed to events. Once the dead time correction as a function of time-of-flight is computed, each event in the run being processed is assigned a weight according to the correction: \\(w_i = C(t_i)\\) where \\(t_i\\) is the time-of-flight of event \\(i\\) . The value of \\(C\\) is interpolated from the computed dead time correction distribution. [1] V. B\u00e9cares, J. Bl\u00e1zquez, Detector Dead Time Determination and Optimal Counting Rate for a Detector Near a Spallation Source or a Subcritical Multiplying System, Science and Technology of Nuclear Installations, 2012, 240693, https://doi.org/10.1155/2012/240693","title":"Loading events and dead time correction"},{"location":"user/event_processing/#correct-for-emission-time","text":"Since neutrons of different wavelengths spend different amounts of time on average within the moderator, a linear approximation is used by the data acquisition system to account for emission time when phasing choppers. The time of flight for each event \\(i\\) is corrected by a small value given by \\(\\Delta t_i = -t_{off} + \\frac{h L}{m_n} A t_i\\) where \\(h\\) is Planck's constant, \\(m_n\\) is the mass of the neutron, and \\(L\\) is the distance between the moderator and the detector. The \\(t_{off}\\) , \\(A\\) , and \\(L\\) parameters are process variables that are stored in the data file and can be changed in the data acquisition system.","title":"Correct for emission time"},{"location":"user/event_processing/#gravity-correction","text":"The reflected angle of each neutron is corrected for the effect of gravity according to Campbell et al [2]. This correction is done individually for each neutron event according to its wavelength. [2] R.A.
Campbell et al, Eur. Phys. J. Plus (2011) 126: 107. https://doi.org/10.1140/epjp/i2011-11107-8","title":"Gravity correction"},{"location":"user/event_processing/#event-selection","text":"Following the corrections described above, we are left with a list of events, each having a detector position ( \\(p_x, p_y\\) ) and a wavelength \\(\\lambda\\) . As necessary, regions of interest can be defined to identify events to include in the specular reflectivity calculation, and which will be used to estimate and subtract background. Event selection is performed before computing the reflectivity as described in the following sections.","title":"Event selection"},{"location":"user/event_processing/#q-calculation","text":"The reflectivity \\(R(q)\\) is computed by calculating the \\(q\\) value for each event and histogramming in a predefined binning of the user's choice. This approach is slightly different from the traditional approach of binning events in TOF, and then converting the TOF axis to \\(q\\) . The event-based approach allows us to bin directly into a \\(q\\) binning of our choice and avoid the need for a final rebinning. The standard way of computing the reflected signal is simply to compute \\(q\\) for each event \\(i\\) using the following equation: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta - \\delta_{g,i})\\) where \\(\\delta_{g,i}\\) refers to the angular offset caused by gravity. Once \\(q\\) is computed for each neutron, they can be histogrammed, taking into account the weight assigned to each event: \\(S(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} w_i\\) where the sum is over all events falling in the \\(q_z\\) bin of width \\(\\Delta q_z\\) , and \\(w_i\\) is the weight of the \\(i^{th}\\) event. At this point we have an unnormalized \\(S(q_z)\\) , which remains to be corrected for the neutron flux. The value of \\(Q\\) is the integrated proton charge for the run.","title":"Q calculation"},{"location":"user/event_processing/#constant-q-binning","text":"When using a divergent beam, or when measuring a warped sample, it may be beneficial to take into account where a neutron landed on the detector in order to recalculate its angle, and its \\(q\\) value. In this case, the \\(q_{z, i}\\) equation above becomes: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta + \\delta_{f,i} - \\delta_{g,i})\\) where \\(\\delta_{f,i}\\) is the angular offset between where the specular peak appears on the detector and where the neutron was detected: \\(\\delta_{f,i} = \\mathrm{sgn}(\\theta)\\arctan(d(p_i-p_{spec})/L_{det})/2\\) where \\(d\\) is the size of a pixel, \\(p_i\\) is the pixel where event \\(i\\) was detected, \\(p_{spec}\\) is the pixel at the center of the peak distribution, and \\(L_{det}\\) is the distance between the sample and the detector. Care should be taken to assign the correct sign to the angle offset. For this reason, we add the sign of the scattering angle \\(\\mathrm{sgn}(\\theta)\\) in front of the previous equation to account for when we reflect up or down.","title":"Constant-Q binning"},{"location":"user/event_processing/#normalization-options","text":"The scattering signal computed above needs to be normalized by the incoming flux in order to produce \\(R(q_z)\\) . For the simplest case, we follow the same procedure as above for the relevant direct beam run, and simply compute \\(S_1(q_z)\\) using the standard procedure above, using the same \\(q_z\\) binning, and replacing \\(\\theta\\) by the value at which the reflected beam was measured. The division itself is sketched below.
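Assuming \\(S(q_z)\\) and \\(S_1(q_z)\\) have been histogrammed on the same \\(q_z\\) binning, the normalization amounts to a bin-wise division. A minimal sketch with standard error propagation for a ratio (illustrative only, not the package code):

import numpy as np

def normalize_reflectivity(s, d_s, s1, d_s1):
    # R = S / S1, with relative uncertainties added in quadrature
    r = s / s1
    d_r = r * np.sqrt((d_s / s) ** 2 + (d_s1 / s1) ** 2)
    return r, d_r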
We are then effectively computing what the measured signal would be if all neutrons from the beam reflected with a probability of 1. We refer to this distribution as \\(S_1(q_z)\\) . The measured reflectivity then becomes \\[ R(q_z) = S(q_z) / S_1(q_z) \\] This approach is equivalent to predetermining the TOF binning that would be needed to produce the \\(q_z\\) binning we actually want, summing counts in TOF for both scattered and direct beam, taking the ratio of the two, and finally converting TOF to \\(q_z\\) . The only difference is that we don't bother with the TOF bins and assign events directly into the \\(q_z\\) bins we know they will contribute to in the denominator of the normalization.","title":"Normalization options"},{"location":"user/event_processing/#normalization-using-weighted-events","text":"An alternative approach to the normalization described above is also implemented at BL4B. It leverages the weighted event approach. Using this approach, we can simply histogram the direct beam events in a wavelength distribution. In such a histogram, each bin in wavelength will have a flux \\[\\phi(\\lambda) = N_{\\lambda} / Q / \\Delta_{\\lambda}\\] where \\(N_{\\lambda}\\) is the number of neutrons in the bin of center \\(\\lambda\\) , \\(Q\\) is the integrated proton charge, and \\(\\Delta_{\\lambda}\\) is the wavelength bin width for the distribution. Coming back to the calculation of the reflected signal above, we can now add a new weight for each event according to the flux for its particular wavelength: \\[ w_i \\rightarrow \\frac{w_i}{\\phi(\\lambda_i)} \\frac{q_{z,i}}{\\lambda_i} \\] where \\(\\phi(\\lambda)\\) is interpolated from the distribution we measured above. The \\(q_z/\\lambda\\) term is the Jacobian to account for the transformation of wavelength to \\(q\\) . With this new weight, we can compute reflectivity directly from the \\(S(q_z)\\) equation above: \\[ R(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} \\frac{w_i}{\\phi(\\lambda_i)} \\frac{q_{z,i}}{\\lambda_i} \\]","title":"Normalization using weighted events"},{"location":"user/workflow/","text":"Specular reflectivity reduction workflow The specular reflectivity data reduction is built around the event_reduction.EventReflectivity class, which performs the reduction. A number of useful modules are available to handle parts of the workflow around the actual reduction. Data sets Specular reflectivity measurements at BL4B are done by combining several runs, taken at different scattering angles and wavelength bands. To allow for the automation of the reduction process, several meta data entries are stored in the data files. To be able to know which data files belong together in a single reflectivity measurement, two important log entries are used: sequence_id : The sequence ID identifies a unique reflectivity curve. All data runs with a matching sequence_id are put together to create a single reflectivity curve. sequence_number : The sequence number identifies the location of a given run in the list of runs that define a full sequence. All sequences start at 1. For instance, a sequence number of 3 means that this run is the third of the complete set. This becomes important for storing reduction parameters. Reduction parameters and templates The reduction parameters are managed using the reduction_template_reader.ReductionParameters class. This class allows users to define and store the parameters required for the reduction process.
By using this class, you can easily save, load, and modify the parameters, ensuring consistency and reproducibility in your data reduction workflow. Compatibility with RefRed RefRed is the user interface that helps users define reduction parameters by selecting the data to process, peak and background regions, etc. A complete reflectivity curve is generally composed of multiple runs, and RefRed allows one to save a so-called template file that contains all the information needed to reduce each run in the set. The reduction backend (this package) has utilities to read and write such templates, which are stored in XML format. A template consists of an ordered list of ReductionParameters objects, each corresponding to a specific sequence_number . To read a template and obtain a list of ReductionParameters objects: from lr_reduction import reduction_template_reader with open(template_file, \"r\") as fd: xml_str = fd.read() data_sets = reduction_template_reader.from_xml(xml_str) To write a template from a list of ReductionParameters objects: import os xml_str = reduction_template_reader.to_xml(data_sets) with open(os.path.join(output_dir, \"template.xml\"), \"w\") as fd: fd.write(xml_str) Reduction workflow The main reduction workflow, which will extract specular reflectivity from a data file given a reduction template, is found in the workflow module. This workflow is the one performed by the automated reduction system at BL4B: It will extract the correct reduction parameters from the provided template Perform the reduction and compute the reflectivity curve for that data Combine the reflectivity curve segment with other runs belonging to the same set Write out the complete reflectivity curve in an output file Write out a copy of the template by replacing the run numbers in the template with those that were used Once you have a template, you can simply do: from lr_reduction import workflow from mantid.simpleapi import LoadEventNexus # Load the data from disk ws = LoadEventNexus(Filename='/SNS/REF_L/IPTS-XXXX/nexus/REFL_YYYY.h5') # The template file you want to use template_file = '/SNS/REF_L/IPTS-XXXX/autoreduce/template.xml' # The folder where you want your output output_dir = '/tmp' workflow.reduce(ws, template_file, output_dir) This will produce output files in the specified output directory.","title":"Specular reflectivity reduction workflow"},{"location":"user/workflow/#specular-reflectivity-reduction-workflow","text":"The specular reflectivity data reduction is built around the event_reduction.EventReflectivity class, which performs the reduction. A number of useful modules are available to handle parts of the workflow around the actual reduction.","title":"Specular reflectivity reduction workflow"},{"location":"user/workflow/#data-sets","text":"Specular reflectivity measurements at BL4B are done by combining several runs, taken at different scattering angles and wavelength bands. To allow for the automation of the reduction process, several meta data entries are stored in the data files. To be able to know which data files belong together in a single reflectivity measurement, two important log entries are used: sequence_id : The sequence ID identifies a unique reflectivity curve. All data runs with a matching sequence_id are put together to create a single reflectivity curve. sequence_number : The sequence number identifies the location of a given run in the list of runs that define a full sequence. All sequences start at 1.
For instance, a sequence number of 3 means that this run is the third of the complete set. This becomes important for storing reduction parameters.","title":"Data sets"},{"location":"user/workflow/#reduction-parameters-and-templates","text":"The reduction parameters are managed using the reduction_template_reader.ReductionParameters class. This class allows users to define and store the parameters required for the reduction process. By using this class, you can easily save, load, and modify the parameters, ensuring consistency and reproducibility in your data reduction workflow.","title":"Reduction parameters and templates"},{"location":"user/workflow/#compatibility-with-refred","text":"RefRed is the user interface that helps users define reduction parameters by selecting the data to process, peak and background regions, etc. A complete reflectivity curve is generally composed of multiple runs, and RefRed allows one to save a so-called template file that contains all the information needed to reduce each run in the set. The reduction backend (this package) has utilities to read and write such templates, which are stored in XML format. A template consists of an ordered list of ReductionParameters objects, each corresponding to a specific sequence_number . To read a template and obtain a list of ReductionParameters objects: from lr_reduction import reduction_template_reader with open(template_file, \"r\") as fd: xml_str = fd.read() data_sets = reduction_template_reader.from_xml(xml_str) To write a template from a list of ReductionParameters objects: import os xml_str = reduction_template_reader.to_xml(data_sets) with open(os.path.join(output_dir, \"template.xml\"), \"w\") as fd: fd.write(xml_str)","title":"Compatibility with RefRed"},{"location":"user/workflow/#reduction-workflow","text":"The main reduction workflow, which will extract specular reflectivity from a data file given a reduction template, is found in the workflow module. This workflow is the one performed by the automated reduction system at BL4B: It will extract the correct reduction parameters from the provided template Perform the reduction and compute the reflectivity curve for that data Combine the reflectivity curve segment with other runs belonging to the same set Write out the complete reflectivity curve in an output file Write out a copy of the template by replacing the run numbers in the template with those that were used Once you have a template, you can simply do: from lr_reduction import workflow from mantid.simpleapi import LoadEventNexus # Load the data from disk ws = LoadEventNexus(Filename='/SNS/REF_L/IPTS-XXXX/nexus/REFL_YYYY.h5') # The template file you want to use template_file = '/SNS/REF_L/IPTS-XXXX/autoreduce/template.xml' # The folder where you want your output output_dir = '/tmp' workflow.reduce(ws, template_file, output_dir) This will produce output files in the specified output directory.","title":"Reduction workflow"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Liquids Reflectometer Reduction User Guide Workflow overview Event processing Conda Environments Releases Contacting the Team The best mechanism for a user to request a change or report a bug is to contact the SANS CIS. Please email Mathieu Doucet with your request. A change needs to be in the form of a: Story for any enhancement request Defect for any bug fix request.
API lr_reduction Developer Guide Contributing Guide Developer Documentation","title":"Liquids Reflectometer Reduction"},{"location":"#liquids-reflectometer-reduction","text":"","title":"Liquids Reflectometer Reduction"},{"location":"#user-guide","text":"Workflow overview Event processing Conda Environments Releases","title":"User Guide"},{"location":"#contacting-the-team","text":"The best mechanism for a user to request a change or report a bug is to contact the SANS CIS. Please email Mathieu Doucet with your request. A change needs to be in the form of a: Story for any enhancement request Defect for any bug fix request.","title":"Contacting the Team"},{"location":"#api","text":"lr_reduction","title":"API"},{"location":"#developer-guide","text":"Contributing Guide Developer Documentation","title":"Developer Guide"},{"location":"releases/","text":"Release Notes Notes for major or minor releases. Notes for patch releases are deferred. Release notes are written in reverse chronological order, with the most recent release at the top, using the following format: ## (date of release, format YYYY-MM-DD) **Of interest to the User**: - PR #XYZ one-liner description **Of interest to the Developer:** - PR #XYZ one-liner description 2.1.0 Of interest to the User : PR #33 enable dead time correction for runs with skipped pulses PR #26 add dead time correction to the computation of scaling factors PR #23 add dead time correction PR #19 Functionality to use two backgrounds PR #15 Ability to fit a background with a polynomial function Of interest to the Developer: PR #40 documentation to create a patch release PR #37 documentation conforming to that of the python project template PR #36 versioning with versioningit PR #25 Read in error events when computing correction PR #21 switch dependency from mantidworkbench to mantid PR #20 allow runtime initialization of new attributes for ReductionParameters PR #14 add first GitHub actions PR #12 switch from mantid to mantidworkbench conda package","title":"Release Notes"},{"location":"releases/#release-notes","text":"Notes for major or minor releases. Notes for patch releases are deferred. 
Release notes are written in reverse chronological order, with the most recent release at the top, using the following format: ## (date of release, format YYYY-MM-DD) **Of interest to the User**: - PR #XYZ one-liner description **Of interest to the Developer:** - PR #XYZ one-liner description","title":"Release Notes"},{"location":"releases/#210","text":"Of interest to the User : PR #33 enable dead time correction for runs with skipped pulses PR #26 add dead time correction to the computation of scaling factors PR #23 add dead time correction PR #19 Functionality to use two backgrounds PR #15 Ability to fit a background with a polynomial function Of interest to the Developer: PR #40 documentation to create a patch release PR #37 documentation conforming to that of the python project template PR #36 versioning with versioningit PR #25 Read in error events when computing correction PR #21 switch dependency from mantidworkbench to mantid PR #20 allow runtime initialization of new attributes for ReductionParameters PR #14 add first GitHub actions PR #12 switch from mantid to mantidworkbench conda package","title":"2.1.0"},{"location":"api/","text":"Overview lr_reduction.background lr_reduction.event_reduction lr_reduction.output lr_reduction.peak_finding lr_reduction.reduction_template_reader lr_reduction.template lr_reduction.time_resolved lr_reduction.utils lr_reduction.workflow","title":"Overview"},{"location":"api/#overview","text":"lr_reduction.background lr_reduction.event_reduction lr_reduction.output lr_reduction.peak_finding lr_reduction.reduction_template_reader lr_reduction.template lr_reduction.time_resolved lr_reduction.utils lr_reduction.workflow","title":"Overview"},{"location":"api/background/","text":"find_ranges_without_overlap Returns the part of r1 that does not contain r2 When summing pixels for reflectivity, include the full range, which means that for a range [a, b], b is included. The range that we return must always exclude the pixels included in r2. Parameters: r1 ( list ) \u2013 Range of pixels to consider r2 ( list ) \u2013 Range of pixels to exclude Returns: list \u2013 List of ranges that do not overlap with r2 functional_background Estimate background using a linear function over a background range that may include the specular peak. In the case where the peak is included in the background range, the peak is excluded from the background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background. It contains 4 pixels, defining up to two ranges. 
low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error side_background Original background subtraction done using two pixels defining the area next to the specular peak that are considered background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"Background"},{"location":"api/background/#lr_reduction.background.find_ranges_without_overlap","text":"Returns the part of r1 that does not contain r2 When summing pixels for reflectivity, include the full range, which means that for a range [a, b], b is included. The range that we return must always exclude the pixels included in r2. Parameters: r1 ( list ) \u2013 Range of pixels to consider r2 ( list ) \u2013 Range of pixels to exclude Returns: list \u2013 List of ranges that do not overlap with r2","title":"find_ranges_without_overlap"},{"location":"api/background/#lr_reduction.background.functional_background","text":"Estimate background using a linear function over a background range that may include the specular peak. In the case where the peak is included in the background range, the peak is excluded from the background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background. It contains 4 pixels, defining up to two ranges.
low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"functional_background"},{"location":"api/background/#lr_reduction.background.side_background","text":"Original background subtraction done using two pixels defining the area next to the specular peak that are considered background. Parameters: ws ( Mantid workspace ) \u2013 Workspace containing the data event_reflectivity ( EventReflectivity ) \u2013 EventReflectivity object peak ( list ) \u2013 Range of pixels that define the peak bck ( list ) \u2013 Range of pixels that define the background low_res ( list ) \u2013 Range in the x direction on the detector normalize_to_single_pixel ( bool , default: False ) \u2013 If True, the background is normalized to the number of pixels used to integrate the signal q_bins ( ndarray , default: None ) \u2013 Array of Q bins wl_dist ( ndarray , default: None ) \u2013 Wavelength distribution for the case where we use weighted events for normalization wl_bins ( ndarray , default: None ) \u2013 Array of wavelength bins for the case where we use weighted events for normalization q_summing ( bool , default: False ) \u2013 If True, sum the counts in Q bins Returns: ndarray \u2013 Reflectivity background ndarray \u2013 Reflectivity background error","title":"side_background"},{"location":"api/event_reduction/","text":"Event based reduction for the Liquids Reflectometer EventReflectivity Data reduction for the Liquids Reflectometer. List of items to be taken care of outside this class: Edge points cropping Angle offset Putting runs together in one R(q) curve Scaling factors Pixel ranges include the min and max pixels. Parameters: scattering_workspace \u2013 Mantid workspace containing the reflected data direct_workspace \u2013 Mantid workspace containing the direct beam data [if None, normalization won't be applied] signal_peak ( list ) \u2013 Pixel min and max for the specular peak signal_bck ( list ) \u2013 Pixel range of the background [if None, the background won't be subtracted] norm_peak ( list ) \u2013 Pixel range of the direct beam peak norm_bck ( list ) \u2013 Direct background subtraction is not used [deprecated] specular_pixel ( float ) \u2013 Pixel of the specular peak signal_low_res ( list ) \u2013 Pixel range of the specular peak out of the scattering plane norm_low_res ( list ) \u2013 Pixel range of the direct beam out of the scattering plane q_min ( float , default: None ) \u2013 Value of lowest q point q_step ( float , default: -0.02 ) \u2013 Step size in Q.
Enter a negative value to get a log scale q_max ( float , default: None ) \u2013 Value of largest q point tof_range ( ( list , None) , default: None ) \u2013 TOF range, or None theta ( float , default: 1.0 ) \u2013 Theta scattering angle in radians dead_time ( float , default: False ) \u2013 If not zero, dead time correction will be used paralyzable ( bool , default: True ) \u2013 If True, the dead time calculation will use the paralyzable approach dead_time_value ( float , default: 4.2 ) \u2013 Value of the dead time in microseconds dead_time_tof_step ( float , default: 100 ) \u2013 TOF bin size in microseconds use_emmission_time ( bool ) \u2013 If True, the emission time delay will be computed __repr__ Generate a string representation of the reduction settings. Returns: str \u2013 String representation of the reduction settings bck_subtraction Perform background subtraction on the signal. This method provides a higher-level call for background subtraction, hiding the ranges needed to define the Region of Interest (ROI). Parameters: normalize_to_single_pixel ( bool , default: False ) \u2013 If True, normalize the background to a single pixel. q_bins \u2013 Array of bins for the momentum transfer (q) values. wl_dist \u2013 Array of wavelength (wl) values. wl_bins \u2013 Array of bins for the wavelength (wl) values. q_summing ( bool , default: False ) \u2013 If True, sum the q values. Returns: Workspace \u2013 The workspace with the background subtracted. emission_time_correction Correct TOF for emission time delay in the moderator. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from tofs ( ndarray ) \u2013 Array of uncorrected TOF values Returns: ndarray \u2013 Array of corrected TOF values extract_meta_data Extract meta data from the loaded data file. extract_meta_data_4A 4A-specific meta data extract_meta_data_4B 4B-specific meta data The distance from source to sample was 13.63 meters prior to the source-to-detector distance being determined, using Bragg edges, to be 15.75 m. gravity_correction Gravity correction for each event Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. wl_list ( ndarray ) \u2013 Array of wavelengths for each event. Returns: ndarray \u2013 Array of gravity-corrected theta values for each event, in radians. norm_bck_subtraction Higher-level call for background subtraction for the normalization run. off_specular Compute off-specular Parameters: x_axis ( int , default: None ) \u2013 Axis selection from QX_VS_QZ, KZI_VS_KZF, DELTA_KZ_VS_QZ x_min ( float , default: -0.015 ) \u2013 Min value on x-axis x_max ( float , default: 0.015 ) \u2013 Max value on x-axis x_npts ( int , default: 50 ) \u2013 Number of points in x (negative will produce a log scale) z_min ( float , default: None ) \u2013 Min value on z-axis (if none, default Qz will be used) z_max ( float , default: None ) \u2013 Max value on z-axis (if none, default Qz will be used) z_npts ( int , default: -120 ) \u2013 Number of points in z (negative will produce a log scale) slice Retrieve a slice from the off-specular data. specular Compute specular reflectivity. For constant-Q binning, it's preferred to use tof_weighted=True.
Parameters: q_summing ( bool , default: False ) \u2013 Turns on constant-Q binning tof_weighted ( bool , default: False ) \u2013 If True, binning will be done by weighting each event to the DB distribution bck_in_q ( bool , default: False ) \u2013 If True, the background will be estimated in Q space using the constant-Q binning approach clean ( bool , default: False ) \u2013 If True, and Q summing is True, then the leading artifact will be removed normalize ( bool , default: True ) \u2013 If False, and tof_weighted is False, normalization will be skipped Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values specular_unweighted Simple specular reflectivity calculation. This is the same approach as the original LR reduction, which sums up pixels without constant-Q binning. The original approach bins in TOF, then rebins the final results after transformation to Q. This approach bins directly to Q. Parameters: q_summing ( bool , default: False ) \u2013 If True, sum the data in Q-space. normalize ( bool , default: True ) \u2013 If True, normalize the reflectivity by the direct beam. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values specular_weighted Compute reflectivity by weighting each event by flux. This allows for summing in Q and estimating the background in either Q or pixels next to the peak. Parameters: q_summing ( bool , default: True ) \u2013 If True, sum the data in Q-space. bck_in_q ( bool , default: False ) \u2013 If True, subtract background along Q lines. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values to_dict Returns meta-data to be used/stored. Returns: dict \u2013 Dictionary with meta-data apply_dead_time_correction Apply dead time correction, and ensure that it is done only once per workspace. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction applied compute_resolution Compute the Q resolution from the meta data. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. theta ( float , default: None ) \u2013 Scattering angle in radians q_summing ( bool , default: False ) \u2013 If True, the pixel size will be used for the resolution Returns: float \u2013 The dQ/Q resolution (FWHM) get_attenuation_info Retrieve information about attenuation from a Mantid workspace. This function calculates the total thickness of all attenuators that are in the path of the beam by summing up the thicknesses of the attenuators specified in the global variable CD_ATTENUATORS . Parameters: ws \u2013 Mantid workspace from which to retrieve the attenuation information. Returns: float \u2013 The total thickness of the attenuators in the path of the beam. get_dead_time_correction Compute dead time correction to be applied to the reflectivity curve. The method will also try to load the error events from each of the data files to ensure that we properly estimate the dead time correction. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction to apply get_q_binning Determine Q binning.
This function calculates the binning for Q values based on the provided minimum, maximum, and step values. If the step value is positive, it generates a linear binning. If the step value is negative, it generates a logarithmic binning. Parameters: q_min ( float , default: 0.001 ) \u2013 The minimum Q value. q_max ( float , default: 0.15 ) \u2013 The maximum Q value. q_step ( float , default: -0.02 ) \u2013 The step size for Q binning. If positive, linear binning is used. If negative, logarithmic binning is used. Returns: ndarray \u2013 A numpy array of Q values based on the specified binning. get_wl_range Determine wavelength range from the data Parameters: ws \u2013 Mantid workspace to work with Returns: list \u2013 [min, max] wavelength range process_attenuation Correct for absorption by assigning weight to each neutron event Parameters: ws \u2013 Mantid workspace to correct thickness \u2013 Attenuator thickness in cm (default is 0). Returns: Mantid workspace \u2013 Corrected Mantid workspace read_settings Read settings file and return values for the given timestamp Parameters: ws \u2013 Mantid workspace Returns: dict \u2013 Dictionary with settings","title":"Event reduction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity","text":"Data reduction for the Liquids Reflectometer. List of items to be taken care of outside this class: Edge points cropping Angle offset Putting runs together in one R(q) curve Scaling factors Pixel ranges include the min and max pixels. Parameters: scattering_workspace \u2013 Mantid workspace containing the reflected data direct_workspace \u2013 Mantid workspace containing the direct beam data [if None, normalization won't be applied] signal_peak ( list ) \u2013 Pixel min and max for the specular peak signal_bck ( list ) \u2013 Pixel range of the background [if None, the background won't be subtracted] norm_peak ( list ) \u2013 Pixel range of the direct beam peak norm_bck ( list ) \u2013 Direct background subtraction is not used [deprecated] specular_pixel ( float ) \u2013 Pixel of the specular peak signal_low_res ( list ) \u2013 Pixel range of the specular peak out of the scattering plane norm_low_res ( list ) \u2013 Pixel range of the direct beam out of the scattering plane q_min ( float , default: None ) \u2013 Value of lowest q point q_step ( float , default: -0.02 ) \u2013 Step size in Q. Enter a negative value to get a log scale q_max ( float , default: None ) \u2013 Value of largest q point tof_range ( ( list , None) , default: None ) \u2013 TOF range, or None theta ( float , default: 1.0 ) \u2013 Theta scattering angle in radians dead_time ( float , default: False ) \u2013 If not zero, dead time correction will be used paralyzable ( bool , default: True ) \u2013 If True, the dead time calculation will use the paralyzable approach dead_time_value ( float , default: 4.2 ) \u2013 Value of the dead time in microseconds dead_time_tof_step ( float , default: 100 ) \u2013 TOF bin size in microseconds use_emmission_time ( bool ) \u2013 If True, the emission time delay will be computed","title":"EventReflectivity"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.__repr__","text":"Generate a string representation of the reduction settings. Returns: str \u2013 String representation of the reduction settings","title":"__repr__"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.bck_subtraction","text":"Perform background subtraction on the signal.
This method provides a higher-level call for background subtraction, hiding the ranges needed to define the Region of Interest (ROI). Parameters: normalize_to_single_pixel ( bool , default: False ) \u2013 If True, normalize the background to a single pixel. q_bins \u2013 Array of bins for the momentum transfer (q) values. wl_dist \u2013 Array of wavelength (wl) values. wl_bins \u2013 Array of bins for the wavelength (wl) values. q_summing ( bool , default: False ) \u2013 If True, sum the q values. Returns: Workspace \u2013 The workspace with the background subtracted.","title":"bck_subtraction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.emission_time_correction","text":"Correct TOF for emission time delay in the moderator. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from tofs ( ndarray ) \u2013 Array of uncorrected TOF values Returns: ndarray \u2013 Array of corrected TOF values","title":"emission_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data","text":"Extract meta data from the loaded data file.","title":"extract_meta_data"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data_4A","text":"4A-specific meta data","title":"extract_meta_data_4A"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.extract_meta_data_4B","text":"4B-specific meta data The distance from source to sample was 13.63 meters prior to the source-to-detector distance being determined, using Bragg edges, to be 15.75 m.","title":"extract_meta_data_4B"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.gravity_correction","text":"Gravity correction for each event Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. wl_list ( ndarray ) \u2013 Array of wavelengths for each event. Returns: ndarray \u2013 Array of gravity-corrected theta values for each event, in radians.","title":"gravity_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.norm_bck_subtraction","text":"Higher-level call for background subtraction for the normalization run.","title":"norm_bck_subtraction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.off_specular","text":"Compute off-specular Parameters: x_axis ( int , default: None ) \u2013 Axis selection from QX_VS_QZ, KZI_VS_KZF, DELTA_KZ_VS_QZ x_min ( float , default: -0.015 ) \u2013 Min value on x-axis x_max ( float , default: 0.015 ) \u2013 Max value on x-axis x_npts ( int , default: 50 ) \u2013 Number of points in x (negative will produce a log scale) z_min ( float , default: None ) \u2013 Min value on z-axis (if none, default Qz will be used) z_max ( float , default: None ) \u2013 Max value on z-axis (if none, default Qz will be used) z_npts ( int , default: -120 ) \u2013 Number of points in z (negative will produce a log scale)","title":"off_specular"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.slice","text":"Retrieve a slice from the off-specular data.","title":"slice"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular","text":"Compute specular reflectivity. For constant-Q binning, it's preferred to use tof_weighted=True.
Parameters: q_summing ( bool , default: False ) \u2013 Turns on constant-Q binning tof_weighted ( bool , default: False ) \u2013 If True, binning will be done by weighting each event to the DB distribution bck_in_q ( bool , default: False ) \u2013 If True, the background will be estimated in Q space using the constant-Q binning approach clean ( bool , default: False ) \u2013 If True, and Q summing is True, the leading artifact will be removed normalize ( bool , default: True ) \u2013 If True, and tof_weighted is False, the reflectivity will be normalized by the direct beam Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular_unweighted","text":"Simple specular reflectivity calculation. This is the same approach as the original LR reduction, which sums up pixels without constant-Q binning. The original approach bins in TOF, then rebins the final results after transformation to Q. This approach bins directly to Q. Parameters: q_summing ( bool , default: False ) \u2013 If True, sum the data in Q-space. normalize ( bool , default: True ) \u2013 If True, normalize the reflectivity by the direct beam. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular_unweighted"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.specular_weighted","text":"Compute reflectivity by weighting each event by flux. This allows for summing in Q and estimating the background either in Q or in pixels next to the peak. Parameters: q_summing ( bool , default: True ) \u2013 If True, sum the data in Q-space. bck_in_q ( bool , default: False ) \u2013 If True, subtract background along Q lines. Returns: q_bins \u2013 The Q bin boundaries refl \u2013 The reflectivity values d_refl \u2013 The uncertainties in the reflectivity values","title":"specular_weighted"},{"location":"api/event_reduction/#lr_reduction.event_reduction.EventReflectivity.to_dict","text":"Returns meta-data to be used/stored. Returns: dict \u2013 Dictionary with meta-data","title":"to_dict"},{"location":"api/event_reduction/#lr_reduction.event_reduction.apply_dead_time_correction","text":"Apply dead time correction, and ensure that it is done only once per workspace. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction applied","title":"apply_dead_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.compute_resolution","text":"Compute the Q resolution from the meta data. Parameters: ws ( Workspace ) \u2013 Mantid workspace to extract correction meta-data from. theta ( float , default: None ) \u2013 Scattering angle in radians q_summing ( bool , default: False ) \u2013 If True, the pixel size will be used for the resolution Returns: float \u2013 The dQ/Q resolution (FWHM)","title":"compute_resolution"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_attenuation_info","text":"Retrieve information about attenuation from a Mantid workspace. This function calculates the total thickness of all attenuators that are in the path of the beam by summing up the thicknesses of the attenuators specified in the global variable CD_ATTENUATORS .
Parameters: ws \u2013 Mantid workspace from which to retrieve the attenuation information. Returns: float \u2013 The total thickness of the attenuators in the path of the beam.","title":"get_attenuation_info"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_dead_time_correction","text":"Compute dead time correction to be applied to the reflectivity curve. The method will also try to load the error events from each of the data files to ensure that we properly estimate the dead time correction. Parameters: ws \u2013 Workspace with raw data to compute correction for template_data ( ReductionParameters ) \u2013 Reduction parameters Returns: Workspace \u2013 Workspace with dead time correction to apply","title":"get_dead_time_correction"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_q_binning","text":"Determine Q binning. This function calculates the binning for Q values based on the provided minimum, maximum, and step values. If the step value is positive, it generates a linear binning. If the step value is negative, it generates a logarithmic binning. Parameters: q_min ( float , default: 0.001 ) \u2013 The minimum Q value. q_max ( float , default: 0.15 ) \u2013 The maximum Q value. q_step ( float , default: -0.02 ) \u2013 The step size for Q binning. If positive, linear binning is used. If negative, logarithmic binning is used. Returns: ndarray \u2013 A numpy array of Q values based on the specified binning.","title":"get_q_binning"},{"location":"api/event_reduction/#lr_reduction.event_reduction.get_wl_range","text":"Determine the wavelength range from the data Parameters: ws \u2013 Mantid workspace to work with Returns: list \u2013 [min, max] wavelength range","title":"get_wl_range"},{"location":"api/event_reduction/#lr_reduction.event_reduction.process_attenuation","text":"Correct for absorption by assigning a weight to each neutron event Parameters: ws \u2013 Mantid workspace to correct thickness \u2013 Attenuator thickness in cm (default is 0). Returns: Mantid workspace \u2013 Corrected Mantid workspace","title":"process_attenuation"},{"location":"api/event_reduction/#lr_reduction.event_reduction.read_settings","text":"Read the settings file and return values for the given timestamp Parameters: ws \u2013 Mantid workspace Returns: dict \u2013 Dictionary with settings","title":"read_settings"},{"location":"api/output/","text":"Write R(q) output RunCollection A collection of runs to assemble into a single R(Q) add Add a partial R(q) to the collection Parameters: q ( array ) \u2013 Q values r ( array ) \u2013 R values dr ( array ) \u2013 Error in R values meta_data ( dict ) \u2013 Meta data for the run dq ( array , default: None ) \u2013 Q resolution add_from_file Read a partial result file and add it to the collection Parameters: file_path ( str ) \u2013 The path to the file to be read merge Merge the collection of runs save_ascii Save R(Q) in ASCII format. This function merges the data before saving. It writes metadata and R(Q) data to the specified file in ASCII format. The metadata includes experiment details, reduction version, run title, start time, reduction time, and other optional parameters. The R(Q) data includes Q, R, dR, and dQ values. Parameters: file_path ( str ) \u2013 The path to the file where the ASCII data will be saved. meta_as_json ( bool , default: False ) \u2013 If True, metadata will be written in JSON format. Default is False.
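Taken together, the RunCollection methods above suggest a usage pattern along these lines (a hedged sketch: the module path matches this page, but the constructor arguments, file names, and data values are illustrative assumptions):

    import numpy as np
    from lr_reduction.output import RunCollection

    # Illustrative partial R(q) curve for a single run
    q = np.linspace(0.01, 0.1, 50)
    r = np.exp(-40.0 * q)
    dr = 0.05 * r

    collection = RunCollection()  # assuming a no-argument constructor
    collection.add(q, r, dr, meta_data={'run_number': 12345})  # dq is optional
    collection.add_from_file('REFL_12346_partial.txt')  # hypothetical partial result file
    collection.save_ascii('REFL_combined.txt')  # merges the runs before writing
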
read_file Read a data file and extract meta data Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"Output"},{"location":"api/output/#lr_reduction.output.RunCollection","text":"A collection of runs to assemble into a single R(Q)","title":"RunCollection"},{"location":"api/output/#lr_reduction.output.RunCollection.add","text":"Add a partial R(q) to the collection Parameters: q ( array ) \u2013 Q values r ( array ) \u2013 R values dr ( array ) \u2013 Error in R values meta_data ( dict ) \u2013 Meta data for the run dq ( array , default: None ) \u2013 Q resolution","title":"add"},{"location":"api/output/#lr_reduction.output.RunCollection.add_from_file","text":"Read a partial result file and add it to the collection Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"add_from_file"},{"location":"api/output/#lr_reduction.output.RunCollection.merge","text":"Merge the collection of runs","title":"merge"},{"location":"api/output/#lr_reduction.output.RunCollection.save_ascii","text":"Save R(Q) in ASCII format. This function merges the data before saving. It writes metadata and R(Q) data to the specified file in ASCII format. The metadata includes experiment details, reduction version, run title, start time, reduction time, and other optional parameters. The R(Q) data includes Q, R, dR, and dQ values. Parameters: file_path ( str ) \u2013 The path to the file where the ASCII data will be saved. meta_as_json ( bool , default: False ) \u2013 If True, metadata will be written in JSON format. Default is False.","title":"save_ascii"},{"location":"api/output/#lr_reduction.output.read_file","text":"Read a data file and extract meta data Parameters: file_path ( str ) \u2013 The path to the file to be read","title":"read_file"},{"location":"api/peak_finding/","text":"fit_signal_flat_bck Fit a Gaussian peak. Parameters: x ( list ) \u2013 List of x values. y ( list ) \u2013 List of y values. x_min ( int , default: 110 ) \u2013 Start index of the list of points, by default 110. x_max ( int , default: 170 ) \u2013 End index of the list of points, by default 170. center ( float , default: None ) \u2013 Estimated center position, by default None. sigma ( float , default: None ) \u2013 If provided, the sigma will be fixed to the given value, by default None. background ( float , default: None ) \u2013 If provided, the value will be subtracted from y, by default None. Returns: c ( float ) \u2013 Fitted center position of the Gaussian peak. width ( float ) \u2013 Fitted width (sigma) of the Gaussian peak. fit ( ModelResult ) \u2013 The result of the fit. process_data Process a Mantid workspace to extract counts vs pixel. Parameters: workspace ( Mantid workspace ) \u2013 The Mantid workspace to process. summed ( bool , default: True ) \u2013 If True, the x pixels will be summed (default is True). tof_step ( int , default: 200 ) \u2013 The TOF bin size (default is 200). Returns: tuple \u2013 A tuple containing: - tof : numpy.ndarray The time-of-flight values. - _x : numpy.ndarray The pixel indices. - _y : numpy.ndarray The summed counts for each pixel.","title":"Peak finding"},{"location":"api/peak_finding/#lr_reduction.peak_finding.fit_signal_flat_bck","text":"Fit a Gaussian peak. Parameters: x ( list ) \u2013 List of x values. y ( list ) \u2013 List of y values. x_min ( int , default: 110 ) \u2013 Start index of the list of points, by default 110. x_max ( int , default: 170 ) \u2013 End index of the list of points, by default 170. 
center ( float , default: None ) \u2013 Estimated center position, by default None. sigma ( float , default: None ) \u2013 If provided, the sigma will be fixed to the given value, by default None. background ( float , default: None ) \u2013 If provided, the value will be subtracted from y, by default None. Returns: c ( float ) \u2013 Fitted center position of the Gaussian peak. width ( float ) \u2013 Fitted width (sigma) of the Gaussian peak. fit ( ModelResult ) \u2013 The result of the fit.","title":"fit_signal_flat_bck"},{"location":"api/peak_finding/#lr_reduction.peak_finding.process_data","text":"Process a Mantid workspace to extract counts vs pixel. Parameters: workspace ( Mantid workspace ) \u2013 The Mantid workspace to process. summed ( bool , default: True ) \u2013 If True, the x pixels will be summed (default is True). tof_step ( int , default: 200 ) \u2013 The TOF bin size (default is 200). Returns: tuple \u2013 A tuple containing: - tof : numpy.ndarray The time-of-flight values. - _x : numpy.ndarray The pixel indices. - _y : numpy.ndarray The summed counts for each pixel.","title":"process_data"},{"location":"api/reduction_template_reader/","text":"RefRed template reader. Adapted from Mantid code. ReductionParameters Class that holds the parameters for the reduction of a single data set. from_dict Update the object's attributes with a dictionary with entries of the type attribute_name: attribute_value. Parameters: permissible ( bool , default: True ) \u2013 Allow keys in data_dict that are not attribute names of ReductionParameters instances. Reading from data_dict will result in this instance having new attributes not defined in __init__() Raises: ValueError \u2013 when permissible=False and one entry (or more) of the dictionary is not an attribute of this object from_xml_element Read in data from XML Parameters: instrument_dom ( Document ) \u2013 to_xml Create XML from the current data. from_xml Read in data from XML string Parameters: xml_str ( str ) \u2013 String representation of a list of ReductionParameters instances Returns: list \u2013 List of ReductionParameters instances getBoolElement Parse a boolean element from the dom object getContent Returns the content of a tag within a dom object getFloatElement Parse a float element from the dom object getFloatList Parse a list of floats from the dom object getIntElement Parse an integer element from the dom object getIntList Parse a list of integers from the dom object getStringElement Parse a string element from the dom object getStringList Parse a list of strings from the dom object getText Utility method to extract text out of an XML node to_xml Create XML from the current data. Parameters: data_sets ( list ) \u2013 List of ReductionParameters instances Returns: str \u2013 XML string","title":"Reduction template reader"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters","text":"Class that holds the parameters for the reduction of a single data set.","title":"ReductionParameters"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.from_dict","text":"Update the object's attributes with a dictionary with entries of the type attribute_name: attribute_value. Parameters: permissible ( bool , default: True ) \u2013 Allow keys in data_dict that are not attribute names of ReductionParameters instances.
Reading from data_dict will result in this instance having new attributes not defined in __init__() Raises: ValueError \u2013 when permissible=False and one entry (or more) of the dictionary is not an attribute of this object","title":"from_dict"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.from_xml_element","text":"Read in data from XML Parameters: instrument_dom ( Document ) \u2013","title":"from_xml_element"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.ReductionParameters.to_xml","text":"Create XML from the current data.","title":"to_xml"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.from_xml","text":"Read in data from XML string Parameters: xml_str ( str ) \u2013 String representation of a list of ReductionParameters instances Returns: list \u2013 List of ReductionParameters instances","title":"from_xml"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getBoolElement","text":"Parse a boolean element from the dom object","title":"getBoolElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getContent","text":"Returns the content of a tag within a dom object","title":"getContent"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getFloatElement","text":"Parse a float element from the dom object","title":"getFloatElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getFloatList","text":"Parse a list of floats from the dom object","title":"getFloatList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getIntElement","text":"Parse an integer element from the dom object","title":"getIntElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getIntList","text":"Parse a list of integers from the dom object","title":"getIntList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getStringElement","text":"Parse a string element from the dom object","title":"getStringElement"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getStringList","text":"Parse a list of strings from the dom object","title":"getStringList"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.getText","text":"Utility method to extract text out of an XML node","title":"getText"},{"location":"api/reduction_template_reader/#lr_reduction.reduction_template_reader.to_xml","text":"Create XML from the current data. Parameters: data_sets ( list ) \u2013 List of ReductionParameters instances Returns: str \u2013 XML string","title":"to_xml"},{"location":"api/template/","text":"Reduce a data run using a template generated by RefRed process_from_template The clean option removes leading zeros and the drop when doing q-summing read_template Read template from file. @param sequence_number: the ID of the data set within the sequence of runs scaling_factor Apply scaling factor from reference scaling data @param workspace: Mantid workspace","title":"Template"},{"location":"api/template/#lr_reduction.template.process_from_template","text":"The clean option removes leading zeros and the drop when doing q-summing","title":"process_from_template"},{"location":"api/template/#lr_reduction.template.read_template","text":"Read template from file. 
@param sequence_number: the ID of the data set within the sequence of runs","title":"read_template"},{"location":"api/template/#lr_reduction.template.scaling_factor","text":"Apply scaling factor from reference scaling data @param workspace: Mantid workspace","title":"scaling_factor"},{"location":"api/time_resolved/","text":"Time-resolved data reduction reduce_30Hz_from_ws Perform 30Hz reduction @param meas_ws_30Hz: Mantid workspace of the data we want to reduce @param ref_ws_30Hz: Mantid workspace of the reference data, taken with the same config @param data_60Hz: reduced reference data at 60Hz @param template_data: template data object (for 30Hz) @param scan_index: scan index to use within the template. reduce_30Hz_slices_ws Perform 30Hz reduction @param meas_ws_30Hz: workspace of the data we want to reduce @param ref_ws_30Hz: workspace of the reference data, taken with the same config @param ref_data_60Hz: file path of the reduced data file at 60Hz @param template_30Hz: file path of the template file for 30Hz @param time_interval: time step in seconds @param scan_index: scan index to use within the template. reduce_slices_ws Perform time-resolved reduction :param meas_ws: workspace of the data we want to reduce :param template_file: autoreduction template file :param time_interval: time step in seconds :param scan_index: scan index to use within the template. :param theta_value: force theta value :param theta_offset: add a theta offset, defaults to zero","title":"Time resolved"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_30Hz_from_ws","text":"Perform 30Hz reduction @param meas_ws_30Hz: Mantid workspace of the data we want to reduce @param ref_ws_30Hz: Mantid workspace of the reference data, taken with the same config @param data_60Hz: reduced reference data at 60Hz @param template_data: template data object (for 30Hz) @param scan_index: scan index to use within the template.","title":"reduce_30Hz_from_ws"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_30Hz_slices_ws","text":"Perform 30Hz reduction @param meas_ws_30Hz: workspace of the data we want to reduce @param ref_ws_30Hz: workspace of the reference data, taken with the same config @param ref_data_60Hz: file path of the reduced data file at 60Hz @param template_30Hz: file path of the template file for 30Hz @param time_interval: time step in seconds @param scan_index: scan index to use within the template.","title":"reduce_30Hz_slices_ws"},{"location":"api/time_resolved/#lr_reduction.time_resolved.reduce_slices_ws","text":"Perform time-resolved reduction :param meas_ws: workspace of the data we want to reduce :param template_file: autoreduction template file :param time_interval: time step in seconds :param scan_index: scan index to use within the template. :param theta_value: force theta value :param theta_offset: add a theta offset, defaults to zero","title":"reduce_slices_ws"},{"location":"api/utils/","text":"amend_config Context manager to safely modify the Mantid Configuration Service while the function is executed. Parameters: new_config ( dict , default: None ) \u2013 (key, value) pairs to substitute in the configuration service data_dir ( Union [ str , list ] , default: None ) \u2013 prepend one (when passing a string) or more (when passing a list) directories to the list of data search directories. Alternatively, replace instead of prepend. data_dir_insert_mode ( str , default: 'prepend' ) \u2013 How to insert the data directories.
Options are: \"prepend\" (default) and \"replace\".","title":"Utils"},{"location":"api/utils/#lr_reduction.utils.amend_config","text":"Context manager to safely modify Mantid Configuration Service while the function is executed. Parameters: new_config ( dict , default: None ) \u2013 (key, value) pairs to substitute in the configuration service data_dir ( Union [ str , list ] , default: None ) \u2013 prepend one (when passing a string) or more (when passing a list) directories to the list of data search directories. Alternatively, replace instead of prepend. data_dir_insert_mode ( str , default: 'prepend' ) \u2013 How to insert the data directories. Options are: \"prepend\" (default) and \"replace\".","title":"amend_config"},{"location":"api/workflow/","text":"Autoreduction process for the Liquids Reflectometer assemble_results Find related runs and assemble them in one R(q) data set Parameters: first_run ( int ) \u2013 The first run number in the sequence output_dir ( str ) \u2013 Directory where the output files are saved average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction Returns: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers offset_from_first_run Find a theta offset by comparing the peak location of the reflected and direct beam compared to the theta value in the meta data. When processing the first run of a set, store that offset in a file so it can be used for later runs. Parameters: ws ( Mantid workspace ) \u2013 The workspace to process template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved Returns: float \u2013 The theta offset reduce Function called by reduce_REFL.py, which lives in /SNS/REF_L/shared/autoreduce and is called by the automated reduction workflow. If average_overlap is used, overlapping points will be averaged, otherwise they will be left in the final data file. Parameters: average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged q_summing ( bool , default: False ) \u2013 If True, constant-Q binning will be used bck_in_q ( bool , default: False ) \u2013 If True, and constant-Q binning is used, the background will be estimated along constant-Q lines rather than along TOF/pixel boundaries. theta_offset ( float , default: 0 ) \u2013 Theta offset to apply. If None, the template value will be used. is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction output_dir ( str ) \u2013 Directory where the output files will be saved template_file ( str ) \u2013 Path to the template file containing the reduction parameters Returns: int \u2013 The sequence identifier for the run sequence reduce_explorer Very simple rough reduction for when playing around. 
Parameters: ws ( Mantid workspace ) \u2013 The workspace to process ws_db ( Mantid workspace ) \u2013 The workspace with the direct beam data theta_pv ( str , default: None ) \u2013 The PV name for the theta value center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the reflected beam db_center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the direct beam peak_width ( int , default: 10 ) \u2013 The width of the peak to use for the reflected beam Returns: qz_mid ( ndarray ) \u2013 The Q values refl ( ndarray ) \u2013 The reflectivity values d_refl ( ndarray ) \u2013 The uncertainty in the reflectivity write_template Read the appropriate entry in a template file and save an updated copy with the updated run number. Parameters: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved","title":"Workflow"},{"location":"api/workflow/#lr_reduction.workflow.assemble_results","text":"Find related runs and assemble them in one R(q) data set Parameters: first_run ( int ) \u2013 The first run number in the sequence output_dir ( str ) \u2013 Directory where the output files are saved average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction Returns: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers","title":"assemble_results"},{"location":"api/workflow/#lr_reduction.workflow.offset_from_first_run","text":"Find a theta offset by comparing the peak locations of the reflected and direct beams to the theta value in the meta data. When processing the first run of a set, store that offset in a file so it can be used for later runs. Parameters: ws ( Mantid workspace ) \u2013 The workspace to process template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved Returns: float \u2013 The theta offset","title":"offset_from_first_run"},{"location":"api/workflow/#lr_reduction.workflow.reduce","text":"Function called by reduce_REFL.py, which lives in /SNS/REF_L/shared/autoreduce and is called by the automated reduction workflow. If average_overlap is used, overlapping points will be averaged, otherwise they will be left in the final data file. Parameters: average_overlap ( bool , default: False ) \u2013 If True, the overlapping points will be averaged q_summing ( bool , default: False ) \u2013 If True, constant-Q binning will be used bck_in_q ( bool , default: False ) \u2013 If True, and constant-Q binning is used, the background will be estimated along constant-Q lines rather than along TOF/pixel boundaries. theta_offset ( float , default: 0 ) \u2013 Theta offset to apply. If None, the template value will be used. is_live ( bool , default: False ) \u2013 If True, the data is live and will be saved in a separate file to avoid conflict with auto-reduction output_dir ( str ) \u2013 Directory where the output files will be saved template_file ( str ) \u2013 Path to the template file containing the reduction parameters Returns: int \u2013 The sequence identifier for the run sequence","title":"reduce"},{"location":"api/workflow/#lr_reduction.workflow.reduce_explorer","text":"Very simple rough reduction for when playing around.
Parameters: ws ( Mantid workspace ) \u2013 The workspace to process ws_db ( Mantid workspace ) \u2013 The workspace with the direct beam data theta_pv ( str , default: None ) \u2013 The PV name for the theta value center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the reflected beam db_center_pixel ( int , default: 145 ) \u2013 The pixel number for the center of the direct beam peak_width ( int , default: 10 ) \u2013 The width of the peak to use for the reflected beam Returns: qz_mid ( ndarray ) \u2013 The Q values refl ( ndarray ) \u2013 The reflectivity values d_refl ( ndarray ) \u2013 The uncertainty in the reflectivity","title":"reduce_explorer"},{"location":"api/workflow/#lr_reduction.workflow.write_template","text":"Read the appropriate entry in a template file and save an updated copy with the updated run number. Parameters: seq_list ( list ) \u2013 The sequence identifiers run_list ( list ) \u2013 The run numbers template_file ( str ) \u2013 Path to the template file output_dir ( str ) \u2013 Directory where the output files are saved","title":"write_template"},{"location":"developer/contributing/","text":"Contributing Guide Contributions to this project are welcome. All contributors agree to the following: It is assumed that the contributor is an ORNL employee and belongs to the development team. Thus the following instructions are specific to the ORNL development team's process. You have permission and any required rights to submit your contribution. Your contribution is provided under the license of this project and may be redistributed as such. All contributions to this project are public. All contributions must be \"signed off\" in the commit log and by doing so you agree to the above. Getting access to the main project Direct commit access to the project is currently restricted to core developers. All other contributions should be done through pull requests.","title":"Contributing Guide"},{"location":"developer/contributing/#contributing-guide","text":"Contributions to this project are welcome. All contributors agree to the following: It is assumed that the contributor is an ORNL employee and belongs to the development team. Thus the following instructions are specific to the ORNL development team's process. You have permission and any required rights to submit your contribution. Your contribution is provided under the license of this project and may be redistributed as such. All contributions to this project are public. All contributions must be \"signed off\" in the commit log and by doing so you agree to the above.","title":"Contributing Guide"},{"location":"developer/contributing/#getting-access-to-the-main-project","text":"Direct commit access to the project is currently restricted to core developers.
All other contributions should be done through pull requests.","title":"Getting access to the main project"},{"location":"developer/developer/","text":"Developer Documentation Local Environment pre-commit Hooks Development procedure Updating mantid dependency Using the Data Repository Coverage reports Building the documentation Creating a stable release Local Environment For purposes of development, create conda environment lr_reduction with file environment.yml , and then install the package in development mode with pip : $ cd /path/to/lr_reduction/ $ conda env create --solver libmamba --file ./environment.yml $ conda activate lr_reduction (lr_reduction)$ pip install -e ./ By installing the package in development mode, one doesn't need to re-install package lr_reduction in conda environment lr_reduction after every change to the source code. pre-commit Hooks Activate the hooks by typing in the terminal: $ cd /path/to/lr_reduction/ $ conda activate lr_reduction (lr_reduction)$ pre-commit install Development procedure A developer is assigned a task during the neutron status meeting and changes the task's status to In Progress . The developer creates a branch off next and completes the task in this branch. The developer creates a pull request (PR) off next . Any new features or bugfixes must be covered by new and/or refactored automated tests. The developer asks another developer to review the PR. A PR can only be approved and merged by the reviewer. The developer changes the task\u2019s status to Complete and closes the associated issue. Updating mantid dependency The mantid version and the mantid conda channel ( mantid/label/main or mantid/label/nightly ) must be synchronized across these files: environment.yml conda.recipe/meta.yml .github/workflows/package.yml Using the Data Repository To run the integration tests in your local environment, it is necessary first to download the data files. Because of their size, the files are stored in the Git LFS repository lr_reduction-data . It is necessary to have package git-lfs installed on your machine. $ sudo apt install git-lfs After this step, initialize or update the data repository: $ cd /path/to/lr_reduction $ git submodule update --init This will either clone liquidsreflectometer-data into /path/to/lr_reduction/tests/liquidsreflectometer-data or bring the liquidsreflectometer-data 's refspec in sync with the refspec listed within file /path/to/liquidsreflectometer/.gitmodules . An intro to Git LFS in the context of the Neutron Data Project is found in the Confluence pages (login required). Coverage reports GitHub actions create reports for unit and integration tests, then combine them into one report and upload it to Codecov . Building the documentation A repository webhook is set up to automatically trigger the latest documentation build by GitHub actions. To manually build the documentation: $ conda activate lr_reduction (lr_reduction)$ make docs After this, point your browser to file:///path/to/lr_reduction/docs/build/html/index.html Creating a stable release For a patch release, it may be allowed to bypass the creation of a candidate release. Still, we must update branch qa first, then create the release tag in branch main .
For instance, to create patch version \"v2.1.1\": VERSION=\"v2.1.1\" # update the local repository git fetch --all --prune git fetch --prune --prune-tags origin # update branch qa from next, possibly bringing work done in qa missing in next git switch next git rebase -v origin/next git merge --no-edit origin/qa # commit message is automatically generated git push origin next # required to \"link\" qa to next, for future fast-forward git switch qa git rebase -v origin/qa git merge --ff-only origin/next # update branch main from qa git merge --no-edit origin/main # commit message is automatically generated git push origin qa # required to \"link\" main to qa, for future fast-forward git switch main git rebase -v origin/main git merge --ff-only origin/qa git tag $VERSION git push origin --tags main For a minor or major release, we create a stable release after we have created a candidate release. For this customary procedure, follow: The Software Maturity Model for continuous versioning as well as creating release candidates and stable releases. Update the Release Notes with major fixes, updates, and additions since the last stable release.","title":"Developer Documentation"},{"location":"developer/developer/#developer-documentation","text":"Local Environment pre-commit Hooks Development procedure Updating mantid dependency Using the Data Repository Coverage reports Building the documentation Creating a stable release","title":"Developer Documentation"},{"location":"developer/developer/#local-environment","text":"For purposes of development, create conda environment lr_reduction with file environment.yml , and then install the package in development mode with pip : $ cd /path/to/lr_reduction/ $ conda env create --solver libmamba --file ./environment.yml $ conda activate lr_reduction (lr_reduction)$ pip install -e ./ By installing the package in development mode, one doesn't need to re-install package lr_reduction in conda environment lr_reduction after every change to the source code.","title":"Local Environment"},{"location":"developer/developer/#pre-commit-hooks","text":"Activate the hooks by typing in the terminal: $ cd /path/to/lr_reduction/ $ conda activate lr_reduction (lr_reduction)$ pre-commit install","title":"pre-commit Hooks"},{"location":"developer/developer/#development-procedure","text":"A developer is assigned a task during the neutron status meeting and changes the task's status to In Progress . The developer creates a branch off next and completes the task in this branch. The developer creates a pull request (PR) off next . Any new features or bugfixes must be covered by new and/or refactored automated tests. The developer asks another developer to review the PR. A PR can only be approved and merged by the reviewer. The developer changes the task\u2019s status to Complete and closes the associated issue.","title":"Development procedure"},{"location":"developer/developer/#updating-mantid-dependency","text":"The mantid version and the mantid conda channel ( mantid/label/main or mantid/label/nightly ) must be synchronized across these files: environment.yml conda.recipe/meta.yml .github/workflows/package.yml","title":"Updating mantid dependency"},{"location":"developer/developer/#using-the-data-repository","text":"To run the integration tests in your local environment, it is necessary first to download the data files. Because of their size, the files are stored in the Git LFS repository lr_reduction-data .
It is necessary to have package git-lfs installed on your machine. $ sudo apt install git-lfs After this step, initialize or update the data repository: $ cd /path/to/lr_reduction $ git submodule update --init This will either clone liquidsreflectometer-data into /path/to/lr_reduction/tests/liquidsreflectometer-data or bring the liquidsreflectometer-data 's refspec in sync with the refspec listed within file /path/to/liquidsreflectometer/.gitmodules . An intro to Git LFS in the context of the Neutron Data Project is found in the Confluence pages (login required).","title":"Using the Data Repository"},{"location":"developer/developer/#coverage-reports","text":"GitHub actions create reports for unit and integration tests, then combine them into one report and upload it to Codecov .","title":"Coverage reports"},{"location":"developer/developer/#building-the-documentation","text":"A repository webhook is set up to automatically trigger the latest documentation build by GitHub actions. To manually build the documentation: $ conda activate lr_reduction (lr_reduction)$ make docs After this, point your browser to file:///path/to/lr_reduction/docs/build/html/index.html","title":"Building the documentation"},{"location":"developer/developer/#creating-a-stable-release","text":"For a patch release, it may be allowed to bypass the creation of a candidate release. Still, we must update branch qa first, then create the release tag in branch main . For instance, to create patch version \"v2.1.1\": VERSION=\"v2.1.1\" # update the local repository git fetch --all --prune git fetch --prune --prune-tags origin # update branch qa from next, possibly bringing work done in qa missing in next git switch next git rebase -v origin/next git merge --no-edit origin/qa # commit message is automatically generated git push origin next # required to \"link\" qa to next, for future fast-forward git switch qa git rebase -v origin/qa git merge --ff-only origin/next # update branch main from qa git merge --no-edit origin/main # commit message is automatically generated git push origin qa # required to \"link\" main to qa, for future fast-forward git switch main git rebase -v origin/main git merge --ff-only origin/qa git tag $VERSION git push origin --tags main For a minor or major release, we create a stable release after we have created a candidate release. For this customary procedure, follow: The Software Maturity Model for continuous versioning as well as creating release candidates and stable releases. Update the Release Notes with major fixes, updates, and additions since the last stable release.","title":"Creating a stable release"},{"location":"user/conda_environments/","text":"Conda Environments Three conda environments are available on the analysis nodes, beamline machines, as well as the jupyter notebook servers. On a terminal: $ conda activate <environment> where <environment> is one of lr_reduction , lr_reduction-qa , and lr_reduction-dev lr_reduction Environment Activates the latest stable release of lr_reduction . Typically users will reduce their data in this environment. lr_reduction-qa Environment Activates a release-candidate environment. Instrument scientists and computational instrument scientists will carry out testing on this environment to prevent bugs from being introduced in the next stable release. lr_reduction-dev Environment Activates the environment corresponding to the latest changes in the source code.
Instrument scientists and computational instrument scientists will test the latest changes to lr_reduction in this environment.","title":"Conda Environments"},{"location":"user/conda_environments/#conda-environments","text":"Three conda environments are available on the analysis nodes, beamline machines, as well as the jupyter notebook servers. On a terminal: $ conda activate <environment> where <environment> is one of lr_reduction , lr_reduction-qa , and lr_reduction-dev","title":"Conda Environments"},{"location":"user/conda_environments/#lr_reduction-environment","text":"Activates the latest stable release of lr_reduction . Typically users will reduce their data in this environment.","title":"lr_reduction Environment"},{"location":"user/conda_environments/#lr_reduction-qa-environment","text":"Activates a release-candidate environment. Instrument scientists and computational instrument scientists will carry out testing on this environment to prevent bugs from being introduced in the next stable release.","title":"lr_reduction-qa Environment"},{"location":"user/conda_environments/#lr_reduction-dev-environment","text":"Activates the environment corresponding to the latest changes in the source code. Instrument scientists and computational instrument scientists will test the latest changes to lr_reduction in this environment.","title":"lr_reduction-dev Environment"},{"location":"user/event_processing/","text":"Event processing The BL4B instrument leverages the concept of weighted events for several aspects of the reduction process. Following this approach, each event is treated separately and is assigned a weight \\(w\\) to account for various corrections. Summing events then becomes the sum of the weights for all events. Loading events and dead time correction A dead time correction is available for rates above around 2000 counts/sec. Both paralyzing and non-paralyzing implementations are available. Paralyzing refers to a detector that extends its dead time period when events occur while the detector is already unavailable to process events, while non-paralyzing refers to a detector that always becomes available after the dead time period [1]. The dead time correction to be multiplied by the measured detector counts is given by the following for the paralyzing case: \\[ C_{par} = -{\\cal Re}W_0(-R\\tau/\\Delta_{TOF}) \\Delta_{TOF}/(R\\tau) \\] where \\(R\\) is the number of triggers per accelerator pulse within a time-of-flight bin \\(\\Delta_{TOF}\\) . The dead time for the current BL4B detector is \\(\\tau=4.2\\) \\(\\mu s\\) . In the equation above, \\({\\cal Re}W_0\\) refers to the principal branch of the Lambert W function. The following is used for the non-paralyzing case: \\[ C_{non-par} = 1/(1-R\\tau/\\Delta_{TOF}) \\] By default, we use a paralyzing dead time correction with \\(\\Delta_{TOF}=100\\) \\(\\mu s\\) . These parameters can be changed. The BL4B detector is a wire chamber with a detector readout that includes digitization of the position of each event. For a number of reasons, like event pileup, it is possible for the electronics to be unable to assign a coordinate to a particular trigger event. These events are labelled as error events and stored along with the good events. While only good events are used to compute reflectivity, error events are included in the \\(R\\) value defined above. For clarity, we chose to define \\(R\\) in terms of number of triggers as opposed to events.
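As an illustration, the two correction factors above can be evaluated with numpy and scipy (a minimal sketch with illustrative names, not the BL4B implementation; bins with zero rate are left uncorrected):

    import numpy as np
    from scipy.special import lambertw

    def dead_time_factor(rate, tau=4.2, tof_step=100.0, paralyzable=True):
        # rate: triggers per accelerator pulse in each TOF bin
        # tau: detector dead time [microseconds]; tof_step: TOF bin width [microseconds]
        rate = np.asarray(rate, dtype=float)
        x = rate * tau / tof_step
        with np.errstate(divide='ignore', invalid='ignore'):
            if paralyzable:
                # principal branch of the Lambert W function
                corr = -np.real(lambertw(-x)) * tof_step / (rate * tau)
            else:
                corr = 1.0 / (1.0 - x)
        return np.where(rate > 0, corr, 1.0)
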
Once the dead time correction as a function of time-of-flight is computed, each event in the run being processed is assigned a weight according to the correction, \\(w_i = C(t_i)\\) where \\(t_i\\) is the time-of-flight of event \\(i\\) . The value of \\(C\\) is interpolated from the computed dead time correction distribution. [1] V. B\u00e9cares, J. Bl\u00e1zquez, Detector Dead Time Determination and Optimal Counting Rate for a Detector Near a Spallation Source or a Subcritical Multiplying System, Science and Technology of Nuclear Installations, 2012, 240693, https://doi.org/10.1155/2012/240693 Correct for emission time Since neutrons of different wavelengths will spend different amounts of time on average within the moderator, a linear approximation is used by the data acquisition system to account for emission time when phasing choppers. The time of flight for each event \\(i\\) is corrected by a small value given by \\(\\Delta t_i = -t_{off} + \\frac{h L}{m_n} A t_i\\) where \\(h\\) is Planck's constant, \\(m_n\\) is the mass of the neutron, and \\(L\\) is the distance between the moderator and the detector. The \\(t_{off}\\) , \\(A\\) , and \\(L\\) parameters are process variables that are stored in the data file and can be changed in the data acquisition system. Gravity correction The reflected angle of each neutron is corrected for the effect of gravity according to reference Campbell et al [2]. This correction is done individually for each neutron event according to its wavelength. [2] R.A. Campbell et al, Eur. Phys. J. Plus (2011) 126: 107. https://doi.org/10.1140/epjp/i2011-11107-8 Event selection Following the corrections described above, we are left with a list of events, each having a detector position ( \\(p_x, p_y\\) ) and a wavelength \\(\\lambda\\) . As necessary, regions of interest can be defined to identify events to include in the specular reflectivity calculation, and which will be used to estimate and subtract background. Event selection is performed before computing the reflectivity as described in the following sections. Q calculation The reflectivity \\(R(q)\\) is computed by calculating the \\(q\\) value for each event and histogramming in a predefined binning of the user's choice. This approach is slightly different from the traditional approach of binning events in TOF, and then converting the TOF axis to \\(q\\) . The event-based approach allows us to bin directly into a \\(q\\) binning of our choice and avoid the need for a final rebinning. The standard way of computing the reflected signal is simply to compute \\(q\\) for each event \\(i\\) using the following equation: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta - \\delta_{g,i})\\) where the \\(\\delta_{g,i}\\) refers to the angular offset caused by gravity. Once \\(q\\) is computed for each neutron, they can be histogrammed, taking into account the weight assigned to each event: \\(S(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} w_i\\) where the sum is over all events falling in the \\(q_z\\) bin of width \\(\\Delta q_z\\) , and \\(w_i\\) is the weight of the \\(i^{th}\\) event. At this point we have an unnormalized \\(S(q_z)\\) , which remains to be corrected for the neutron flux. The value of \\(Q\\) is the integrated proton charge for the run. Constant-Q binning When using a divergent beam, or when measuring a warped sample, it may be beneficial to take into account where a neutron landed on the detector in order to recalculate its angle, and its \\(q\\) value.
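Before giving the modified equation for that case, here is a minimal sketch of the simple event-based histogramming just described (illustrative numpy code with assumed variable names, not the package source):

    import numpy as np

    def histogram_s_qz(wl, weights, theta, delta_g, q_bins, charge):
        # wl: per-event wavelengths; weights: per-event weights (e.g. dead time correction)
        # theta: scattering angle [rad]; delta_g: per-event gravity offsets [rad]
        # q_bins: q_z bin boundaries; charge: integrated proton charge of the run
        qz = 4.0 * np.pi / wl * np.sin(theta - delta_g)
        s_qz, _ = np.histogram(qz, bins=q_bins, weights=weights)
        return s_qz / charge
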
In this case, the \\(q_{z, i}\\) equation above becomes: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta + \\delta_{f,i} - \\delta_{g,i})\\) where \\(\\delta_{f,i}\\) is the angular offset between where the specular peak appears on the detector and where the neutron was detected: \\(\\delta_{f,i} = \\mathrm{sgn}(\\theta)\\arctan(d(p_i-p_{spec})/L_{det})/2\\) where \\(d\\) is the size of a pixel, \\(p_i\\) is the pixel where event \\(i\\) was detected, \\(p_{spec}\\) is the pixel at the center of the peak distribution, and \\(L_{det}\\) is the distance between the sample and the detector. Care should be taken to assign the correct sign to the angle offset. For this reason, we add the sign of the scattering angle, \\(\\mathrm{sgn}(\\theta)\\) , in front of the previous equation to account for whether we reflect up or down. Normalization options The scattering signal computed above needs to be normalized by the incoming flux in order to produce \\(R(q_z)\\) . For the simplest case, we follow the same procedure as above for the relevant direct beam run, and simply compute \\(S_1(q_z)\\) using the standard procedure above, using the same \\(q_z\\) binning, and replacing \\(\\theta\\) by the value at which the reflected beam was measured. We are then effectively computing what the measured signal would be if all neutrons from the beam reflected with a probability of 1. We refer to this distribution as \\(S_1(q_z)\\) . The measured reflectivity then becomes \\[ R(q_z) = S(q_z) / S_1(q_z) \\] This approach is equivalent to predetermining the TOF binning that would be needed to produce the \\(q_z\\) binning we actually want, summing counts in TOF for both scattered and direct beam, taking the ratio of the two, and finally converting TOF to \\(q_z\\) . The only difference is that we don't bother with the TOF bins and assign events directly to the \\(q_z\\) bins they will contribute to in the denominator of the normalization. Normalization using weighted events An alternative approach to the normalization described above is also implemented at BL4B. It leverages the weighted event approach. Using this approach, we can simply histogram the direct beam events in a wavelength distribution. In such a histogram, each bin in wavelength will have a flux \\[\\phi(\\lambda) = N_{\\lambda} / Q / \\Delta_{\\lambda}\\] where \\(N_{\\lambda}\\) is the number of neutrons in the bin of center \\(\\lambda\\) , \\(Q\\) is the integrated proton charge, and \\(\\Delta_{\\lambda}\\) is the wavelength bin width for the distribution. Coming back to the calculation of the reflected signal above, we can now add a new weight for each event according to the flux for its particular wavelength: \\[ w_i \\rightarrow (w_i / \\phi(\\lambda_i)) \\cdot (q_{z,i} / \\lambda_i) \\] where \\(\\phi(\\lambda)\\) is interpolated from the distribution we measured above. The \\(q_z/\\lambda\\) term is the Jacobian to account for the transformation of wavelength to \\(q\\) . With this new weight, we can compute the reflectivity directly from the \\(S(q_z)\\) equation above: \\[ R(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} (w_i / \\phi(\\lambda_i)) \\cdot (q_{z,i} / \\lambda_i) \\]","title":"Event processing"},{"location":"user/event_processing/#event-processing","text":"The BL4B instrument leverages the concept of weighted events for several aspects of the reduction process. Following this approach, each event is treated separately and is assigned a weight \\(w\\) to account for various corrections.
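As a concrete illustration of how such weighted events can be combined, the flux-weighted normalization given earlier on this page might be sketched as follows (illustrative numpy code; all names are assumptions, not the package API):

    import numpy as np

    def weighted_reflectivity(wl, qz, weights, q_bins, wl_centers, flux, charge):
        # wl, qz: per-event wavelengths and q_z values
        # weights: per-event weights from the earlier corrections
        # wl_centers, flux: measured direct-beam flux distribution phi(lambda)
        # charge: integrated proton charge of the scattering run
        phi = np.interp(wl, wl_centers, flux)  # interpolate the flux at each event wavelength
        w = weights / phi * qz / wl            # apply the flux weight and the q_z/lambda Jacobian
        refl, _ = np.histogram(qz, bins=q_bins, weights=w)
        return refl / charge
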
Summing events then becomes the sum of the weights for all events.","title":"Event processing"},{"location":"user/event_processing/#loading-events-and-dead-time-correction","text":"A dead time correction is available for rates above around 2000 counts/sec. Both paralyzing and non-paralyzing implementations are available. Paralyzing refers to a detector that extends its dead time period when events occur while the detector is already unavailable to process events, while non-paralyzing refers to a detector that always becomes available after the dead time period [1]. The dead time correction to be multiplied by the measured detector counts is given by the following for the paralyzing case: \\[ C_{par} = -{\\cal Re}W_0(-R\\tau/\\Delta_{TOF}) \\Delta_{TOF}/(R\\tau) \\] where \\(R\\) is the number of triggers per accelerator pulse within a time-of-flight bin \\(\\Delta_{TOF}\\) . The dead time for the current BL4B detector is \\(\\tau=4.2\\) \\(\\mu s\\) . In the equation above, \\({\\cal Re}W_0\\) refers to the principal branch of the Lambert W function. The following is used for the non-paralyzing case: \\[ C_{non-par} = 1/(1-R\\tau/\\Delta_{TOF}) \\] By default, we use a paralyzing dead time correction with \\(\\Delta_{TOF}=100\\) \\(\\mu s\\) . These parameters can be changed. The BL4B detector is a wire chamber with a detector readout that includes digitization of the position of each event. For a number of reasons, like event pileup, it is possible for the electronics to be unable to assign a coordinate to a particular trigger event. These events are labelled as error events and stored along with the good events. While only good events are used to compute reflectivity, error events are included in the \\(R\\) value defined above. For clarity, we chose to define \\(R\\) in terms of number of triggers as opposed to events. Once the dead time correction as a function of time-of-flight is computed, each event in the run being processed is assigned a weight according to the correction, \\(w_i = C(t_i)\\) where \\(t_i\\) is the time-of-flight of event \\(i\\) . The value of \\(C\\) is interpolated from the computed dead time correction distribution. [1] V. B\u00e9cares, J. Bl\u00e1zquez, Detector Dead Time Determination and Optimal Counting Rate for a Detector Near a Spallation Source or a Subcritical Multiplying System, Science and Technology of Nuclear Installations, 2012, 240693, https://doi.org/10.1155/2012/240693","title":"Loading events and dead time correction"},{"location":"user/event_processing/#correct-for-emission-time","text":"Since neutrons of different wavelengths will spend different amounts of time on average within the moderator, a linear approximation is used by the data acquisition system to account for emission time when phasing choppers. The time of flight for each event \\(i\\) is corrected by a small value given by \\(\\Delta t_i = -t_{off} + \\frac{h L}{m_n} A t_i\\) where \\(h\\) is Planck's constant, \\(m_n\\) is the mass of the neutron, and \\(L\\) is the distance between the moderator and the detector. The \\(t_{off}\\) , \\(A\\) , and \\(L\\) parameters are process variables that are stored in the data file and can be changed in the data acquisition system.","title":"Correct for emission time"},{"location":"user/event_processing/#gravity-correction","text":"The reflected angle of each neutron is corrected for the effect of gravity according to reference Campbell et al [2]. This correction is done individually for each neutron event according to its wavelength. [2] R.A.
Campbell et al, Eur. Phys. J. Plus (2011) 126: 107. https://doi.org/10.1140/epjp/i2011-11107-8","title":"Gravity correction"},{"location":"user/event_processing/#event-selection","text":"Following the corrections described above, we are left with a list of events, each having a detector position ( \\(p_x, p_y\\) ) and a wavelength \\(\\lambda\\) . As necessary, regions of interest can be defined to identify events to include in the specular reflectivity calculation, and which will be used to estimate and subtract background. Event selection is performed before computing the reflectivity as described in the following sections.","title":"Event selection"},{"location":"user/event_processing/#q-calculation","text":"The reflectivity \\(R(q)\\) is computed by calculating the \\(q\\) value for each event and histogramming in a predefined binning of the user's choice. This approach is slightly different from the traditional approach of binning events in TOF, and then converting the TOF axis to \\(q\\) . The event-based approach allows us to bin directly into a \\(q\\) binning of our choice and avoid the need for a final rebinning. The standard way of computing the reflected signal is simply to compute \\(q\\) for each event \\(i\\) using the following equation: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta - \\delta_{g,i})\\) where the \\(\\delta_{g,i}\\) refers to the angular offset caused by gravity. Once \\(q\\) is computed for each neutron, they can be histogrammed, taking into account the weight assigned to each event: \\(S(q_z) = \\frac{1}{Q} \\sum_{i \\in q_z \\pm \\Delta{q_z}/2} w_i\\) where the sum is over all events falling in the \\(q_z\\) bin of width \\(\\Delta q_z\\) , and \\(w_i\\) is the weight of the \\(i^{th}\\) event. At this point we have an unnormalized \\(S(q_z)\\) , which remains to be corrected for the neutron flux. The value of \\(Q\\) is the integrated proton charge for the run.","title":"Q calculation"},{"location":"user/event_processing/#constant-q-binning","text":"When using a divergent beam, or when measuring a warped sample, it may be beneficial to take into account where a neutron landed on the detector in order to recalculate its angle, and its \\(q\\) value. In this case, the \\(q_{z, i}\\) equation above becomes: \\(q_{z, i} = \\frac{4\\pi}{\\lambda_i}\\sin(\\theta + \\delta_{f,i} - \\delta_{g,i})\\) where \\(\\delta_{f,i}\\) is the angular offset between where the specular peak appears on the detector and where the neutron was detected: \\(\\delta_{f,i} = \\mathrm{sgn}(\\theta)\\arctan(d(p_i-p_{spec})/L_{det})/2\\) where \\(d\\) is the size of a pixel, \\(p_i\\) is the pixel where event \\(i\\) was detected, \\(p_{spec}\\) is the pixel at the center of the peak distribution, and \\(L_{det}\\) is the distance between the sample and the detector. Care should be taken to assign the correct sign to the angle offset. For this reason, we add the sign of the scattering angle, \\(\\mathrm{sgn}(\\theta)\\) , in front of the previous equation to account for whether we reflect up or down.","title":"Constant-Q binning"},{"location":"user/event_processing/#normalization-options","text":"The scattering signal computed above needs to be normalized by the incoming flux in order to produce \\(R(q_z)\\) . For the simplest case, we follow the same procedure as above for the relevant direct beam run, and simply compute \\(S_1(q_z)\\) using the standard procedure above, using the same \\(q_z\\) binning, and replacing \\(\\theta\\) by the value at which the reflected beam was measured.
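Schematically, once \\(S(q_z)\\) and \\(S_1(q_z)\\) have been histogrammed on the same grid, the normalization amounts to a bin-wise ratio (a trivial sketch; the production code also propagates uncertainties):

    import numpy as np

    def normalize(s_qz, s1_qz):
        # R(q_z) = S(q_z) / S_1(q_z); bins with no direct-beam counts are set to zero
        return np.divide(s_qz, s1_qz, out=np.zeros_like(s_qz, dtype=float), where=s1_qz > 0)
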
We are then effectively computing what the measured signal would be if all neutrons from the beam reflected with a probability of 1. We refer to this distribution as \(S_1(q_z)\). The measured reflectivity then becomes \[ R(q_z) = S(q_z) / S_1(q_z) \] This approach is equivalent to predetermining the TOF binning that would be needed to produce the \(q_z\) binning we actually want, summing counts in TOF for both scattered and direct beam, taking the ratio of the two, and finally converting TOF to \(q_z\). The only difference is that we don't bother with the TOF bins and instead assign events directly to the \(q_z\) bin they will contribute to in the normalization denominator.","title":"Normalization options"},{"location":"user/event_processing/#normalization-using-weighted-events","text":"An alternative approach to the normalization described above is also implemented at BL4B. It leverages the weighted event approach. Using this approach, we can simply histogram the direct beam events in a wavelength distribution. In such a histogram, each bin in wavelength will have a flux \[\phi(\lambda) = N_{\lambda} / Q / \Delta_{\lambda}\] where \(N_{\lambda}\) is the number of neutrons in the bin of center \(\lambda\), \(Q\) is the integrated proton charge, and \(\Delta_{\lambda}\) is the wavelength bin width for the distribution. Coming back to the calculation of the reflected signal above, we can now add a new weight for each event according to the flux for its particular wavelength: \[ w_i \rightarrow \frac{w_i}{\phi(\lambda_i)} \frac{q_{z,i}}{\lambda_i} \] where \(\phi(\lambda)\) is interpolated from the distribution we measured above. The \(q_z/\lambda\) term is the Jacobian that accounts for the transformation from wavelength to \(q\). With this new weight, we can compute the reflectivity directly from the \(S(q_z)\) equation above: \[ R(q_z) = \frac{1}{Q} \sum_{i \in q_z \pm \Delta{q_z}/2} \frac{w_i}{\phi(\lambda_i)} \frac{q_{z,i}}{\lambda_i} \]","title":"Normalization using weighted events"},{"location":"user/workflow/","text":"Specular reflectivity reduction workflow The specular reflectivity data reduction is built around the event_reduction.EventReflectivity class, which performs the reduction. A number of useful modules are available to handle parts of the workflow around the actual reduction. Data sets Specular reflectivity measurements at BL4B are done by combining several runs, taken at different scattering angles and wavelength bands. To allow for the automation of the reduction process, several metadata entries are stored in the data files. To be able to know which data files belong together in a single reflectivity measurement, two important log entries are used: sequence_id : The sequence ID identifies a unique reflectivity curve. All data runs with a matching sequence_id are put together to create a single reflectivity curve. sequence_number : The sequence number identifies the location of a given run in the list of runs that define a full sequence. All sequences start at 1. For instance, a sequence number of 3 means that this run is the third of the complete set. This becomes important for storing reduction parameters. Reduction parameters and templates The reduction parameters are managed using the reduction_template_reader.ReductionParameters class. This class allows users to define and store the parameters required for the reduction process.
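Returning to the weighted-event normalization described above, the flux interpolation and reweighting can be sketched in a few lines of numpy. This is an illustration under stated assumptions, not the package's API; all names are invented for the example, and nonzero flux is assumed in every wavelength bin that contains events:

```python
import numpy as np

def reflectivity_weighted(wl, qz, weights, q_edges, charge,
                          wl_db, charge_db, wl_edges):
    """R(q_z) from weighted events, normalized by the direct beam flux.

    wl, qz, weights:   per-event wavelength, q_z and weight w_i (reflected run)
    wl_db:             per-event wavelengths of the direct beam run
    charge, charge_db: integrated proton charge Q of each run
    """
    # Flux per wavelength bin: phi(lambda) = N_lambda / Q / Delta_lambda
    counts, edges = np.histogram(wl_db, bins=wl_edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    phi = counts / charge_db / np.diff(edges)

    # Interpolate phi at each event's wavelength and reweight the event,
    # including the q_z / lambda Jacobian
    w = weights / np.interp(wl, centers, phi) * (qz / wl)

    r_q, _ = np.histogram(qz, bins=q_edges, weights=w)
    return r_q / charge
```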
By using this class, you can easily save, load, and modify the parameters, ensuring consistency and reproducibility in your data reduction workflow. Compatibility with RefRed RefRed is the user interface that helps users define reduction parameters by selecting the data to process, peak and background regions, etc. A complete reflectivity curve is generally composed of multiple runs, and RefRed allows one to save a so-called template file that contains all the information needed to reduce each run in the set. The reduction backend (this package) has utilities to read and write such templates, which are stored in XML format. A template consists of an ordered list of ReductionParameters objects, each corresponding to a specific sequence_number. To read a template and obtain a list of ReductionParameters objects: from lr_reduction import reduction_template_reader with open(template_file, \"r\") as fd: xml_str = fd.read() data_sets = reduction_template_reader.from_xml(xml_str) To write a template from a list of ReductionParameters objects: import os xml_str = reduction_template_reader.to_xml(data_sets) with open(os.path.join(output_dir, \"template.xml\"), \"w\") as fd: fd.write(xml_str) Reduction workflow The main reduction workflow, which will extract the specular reflectivity from a data file given a reduction template, is found in the workflow module. This workflow is the one performed by the automated reduction system at BL4B: It will extract the correct reduction parameters from the provided template Perform the reduction and compute the reflectivity curve for that data Combine the reflectivity curve segment with other runs belonging to the same set Write out the complete reflectivity curve in an output file Write out a copy of the template, replacing the run numbers in the template with those that were used Once you have a template, you can simply do: from lr_reduction import workflow from mantid.simpleapi import LoadEventNexus # Load the data from disk ws = LoadEventNexus(Filename='/SNS/REF_L/IPTS-XXXX/nexus/REFL_YYYY.h5') # The template file you want to use template_file = '/SNS/REF_L/IPTS-XXXX/autoreduce/template.xml' # The folder where you want your output output_dir = '/tmp' workflow.reduce(ws, template_file, output_dir) This will produce output files in the specified output directory.
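The sequence_id and sequence_number logs described above can also be inspected directly on a loaded run using Mantid's standard run/log accessors. A short sketch (the file path reuses the placeholders above; depending on how the process variables are logged, the returned value may be a scalar or a time series):

```python
from mantid.simpleapi import LoadEventNexus

ws = LoadEventNexus(Filename='/SNS/REF_L/IPTS-XXXX/nexus/REFL_YYYY.h5')
run = ws.getRun()

# All runs sharing the same sequence_id form one reflectivity curve;
# sequence_number is the position of this run within that curve
sequence_id = run.getProperty('sequence_id').value
sequence_number = run.getProperty('sequence_number').value
print(sequence_id, sequence_number)
```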
For instance, a sequence number of 3 means that this run is the third of the complete set. This becomes important for storing reduction parameters.","title":"Data sets"},{"location":"user/workflow/#reduction-parameters-and-templates","text":"The reduction parameters are managed using the reduction_template_reader.ReductionParameters class. This class allows users to define and store the parameters required for the reduction process. By using this class, you can easily save, load, and modify the parameters, ensuring consistency and reproducibility in your data reduction workflow.","title":"Reduction parameters and templates"},{"location":"user/workflow/#compatibility-with-refred","text":"RefRed is the user interface that helps users define reduction parameters by selecting the data to process, peak and background regions, etc. A complete reflectivity curve is generally composed of multiple runs, and RefRed allows one to save a so-called template file that contains all the information needed to reduce each run in the set. The reduction backend (this package) has utilities to read and write such templates, which are stored in XML format. A template consists of an ordered list of ReductionParameters objects, each corresponding to a specific sequence_number. To read a template and obtain a list of ReductionParameters objects: from lr_reduction import reduction_template_reader with open(template_file, \"r\") as fd: xml_str = fd.read() data_sets = reduction_template_reader.from_xml(xml_str) To write a template from a list of ReductionParameters objects: import os xml_str = reduction_template_reader.to_xml(data_sets) with open(os.path.join(output_dir, \"template.xml\"), \"w\") as fd: fd.write(xml_str)","title":"Compatibility with RefRed"},{"location":"user/workflow/#reduction-workflow","text":"The main reduction workflow, which will extract the specular reflectivity from a data file given a reduction template, is found in the workflow module. This workflow is the one performed by the automated reduction system at BL4B: It will extract the correct reduction parameters from the provided template Perform the reduction and compute the reflectivity curve for that data Combine the reflectivity curve segment with other runs belonging to the same set Write out the complete reflectivity curve in an output file Write out a copy of the template, replacing the run numbers in the template with those that were used Once you have a template, you can simply do: from lr_reduction import workflow from mantid.simpleapi import LoadEventNexus # Load the data from disk ws = LoadEventNexus(Filename='/SNS/REF_L/IPTS-XXXX/nexus/REFL_YYYY.h5') # The template file you want to use template_file = '/SNS/REF_L/IPTS-XXXX/autoreduce/template.xml' # The folder where you want your output output_dir = '/tmp' workflow.reduce(ws, template_file, output_dir) This will produce output files in the specified output directory.","title":"Reduction workflow"}]} \ No newline at end of file diff --git a/docs/sitemap.xml.gz b/docs/sitemap.xml.gz index bb4ca69319c3dba256efda3f49bcd28727c77d60..35655cecd1ee611e21268b271bb0dcc80ed4359d 100644 GIT binary patch delta 13 Ucmb=gXP58h;Ajx6pU7ST02Loading events and dead time co after the dead time period [1].

The dead time correction to be multiplied by the measured detector counts is given by the following for the paralyzing case:

-

$$ +

\[ C_{par} = -{\cal Re}W_0(-R\tau/\Delta_{TOF}) \Delta_{TOF}/(R\tau) -$$ -where \(R\) is the number of triggers per accelerator pulse within a time-of-flight bin \(\Delta_{TOF}\). +\]
+

where \(R\) is the number of triggers per accelerator pulse within a time-of-flight bin \(\Delta_{TOF}\). The dead time for the current BL4B detector is \(\tau=4.2\) \(\mu s\). In the equation above, \({\cal Re}W_0\) refers to the real part of the principal branch of the Lambert W function.

-

The following is used for the non-paralyzing case: -$$ +

The following is used for the non-paralyzing case:

+
\[ C_{non-par} = 1/(1-R\tau/\Delta_{TOF}) -$$

+\]

By default, we use a paralyzing dead time correction with \(\Delta_{TOF}=100\) \(\mu s\). These parameters can be changed.
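For illustration, both correction factors can be evaluated with scipy's Lambert W function. This is a minimal sketch under the definitions above, not the reduction code itself; the function name and defaults are assumptions, and a nonzero rate is assumed:

```python
import numpy as np
from scipy.special import lambertw

def deadtime_correction(rate, tau=4.2e-6, tof_bin=100e-6, paralyzing=True):
    """Correction factor C to multiply the measured counts by.

    rate:    triggers per accelerator pulse within the TOF bin (R)
    tau:     detector dead time in seconds
    tof_bin: time-of-flight bin width Delta_TOF in seconds
    """
    x = np.asarray(rate, dtype=float) * tau / tof_bin  # R*tau/Delta_TOF
    if paralyzing:
        # Real part of the principal branch of the Lambert W function,
        # i.e. C = -Re W_0(-x)/x with x = R*tau/Delta_TOF
        return -np.real(lambertw(-x)) / x
    return 1.0 / (1.0 - x)
```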

The BL4B detector is a wire chamber with a detector readout that includes digitization of the position of each event. For a number of reasons, like event pileup, it is possible for the