FENDL-3.2b Retrofitting #42
Draft
eitan-weinstein wants to merge 59 commits into svalinn:main from eitan-weinstein:fendl_updates
Changes from 10 commits
Commits (59 total)
dbef6ca
First commit for FENDL3.2B retrofitting
cdd7bcd
Replacing NJOY Bash script with subprocess execution in Python
f3d010f
Simplifying mass number formatting in tendl_download() function
6c5ac24
Simplifying endf_specs() function
03e3af3
Remove now-obsolete ENDFtk warning suppression
fa4f29e
Simplify tendl_download() function using data structures
0190096
Switching tendl_download() function over to urllib dependence
413ae46
Moving card deck formatting from Pandas DataFrame to dictionary
2eb9ffd
Separating out a write function for the GROUPR input from the input c…
1247db3
Removing now-obsolete Pandas dependence
1d35b79
Simplifying card writing for groupr_input_file_writer()
de3cbb4
Fixing indexing on groupr_input_file_writer()
d20eed8
Storing elements in a single dictionary to be referenced across both …
58a4ede
Removing now-obsolete ENDFtk warning suppression from gend_tools.py an…
b83be55
Updating gendf_download() function -- notably switching away from wge…
e135311
Switching CSV reading from Pandas DataFrame to dictionary
29528dd
Moving away from direct input to argparse input/options
0abb51b
Expanding argparse usage
582424b
Moving away from print statements towards logging
77e9a65
Removed unnecessary file from file cleanup list
493a35c
Expanding logger to capture 'No bottleneck testing available' message
a29bd66
Improving readability of NJOY run message for logger
69fe5f0
Updating the logging to redirect ENDFtk messages to the logger and re…
d0f7d3b
Removing stand-alone groupr script -- unnecessary and not called indi…
4318ae4
Reorganizing folder structure -- separate GROUPR folder no longer see…
b1b63f9
Finalizing move out of GROUPR/
1edd251
Moving the rest of fendl3_gendf.py to the main() function
fb2d548
Forgot to include mt_table in main()
4065d00
Streamlining endf_specs usage and placement.
4250c44
Removing direct GENDF download function -- all downloads need to be p…
93d469f
Moving GROUPR parameters to global constants.
98dcc93
Logging error if NJOY run is unsuccessful.
2460d72
Cleaning up package imports
6fcf5e5
Removing unnecessary package imports on fendl3_gendf.py
5ec6bbf
Fixing KZA formatting.
f490d38
Addressing low-level comments from most recent review.
45df27f
Improving readability
b76634f
Beginning high-level overhaul and restructuring
121e57a
Improving readability for nuclear_decay()
fb1b796
Increasing readability of argparser
c8e6cea
Major overhaul of modularity and including functionality for iteratin…
4d99f41
Removing time package.
e0529dc
Removing specific example file from GENDF files.
cc064b6
Making the file saving more versatile.
14c5730
Responding to a majority of the high-level comments from Tuesday's re…
95815b2
Fixing docstring for ensure_gendf_markers() function.
a5997b5
Improving isotope identification methods.
f83a646
Improving isotope identification methods.
98f23c3
Simplifying logging method and usage.
498c824
One more logging fix.
a807f1e
Completing response to last review and making arg processing more mod…
76b9aa1
Improving ability to iterate over all elements.
eebeea3
Fixing minor bug in execution of handle_TENDL_downloads().
912530f
Small formatting change to fit in max line length.
c374494
More minor formatting adjustments and simplifying the line length set…
f50b617
Allowing for fendle_retrofit.py to be executed from DataLib.
4ba725e
Removing unnecessary print statement.
6257033
Ensuring that NJOY output is properly handled when program is execute…
4921dba
Small formatting changes before moving over to project individual PRs.
@@ -0,0 +1,20 @@
import groupr_tools as grpt
import pandas as pd

# Call TENDL download function by user CLI input
element = input('Select element: ')
A = input('Select mass number: A = ')
endf_path = grpt.tendl_download(element, A, 'endf')
pendf_path = grpt.tendl_download(element, A, 'pendf')
print(f'ENDF file can be found at ./{endf_path}')
print(f'PENDF file can be found at ./{pendf_path}')

# Extract necessary MT and MAT data from the ENDF file
matb, MTs = grpt.endf_specs(endf_path)
# Write out the GROUPR input file
mt_table = pd.read_csv('./mt_table.csv')
card_deck = grpt.groupr_input_file_format(matb, MTs, element, A, mt_table)
grpt.groupr_input_file_writer(card_deck, MTs)

# Run NJOY on the groupr.inp deck written above
grpt.run_njoy(card_deck, element, A)
@@ -0,0 +1,180 @@
# Import packages
import ENDFtk
import urllib.request
import urllib.error
import subprocess

# List of elements in the Periodic Table
elements = [
    'H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne',
    'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca',
    'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn',
    'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr',
    'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn',
    'Sb', 'Te', 'I', 'Xe', 'Cs', 'Ba', 'La', 'Ce', 'Pr', 'Nd',
    'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb',
    'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg',
    'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn', 'Fr', 'Ra', 'Ac', 'Th',
    'Pa', 'U', 'Np', 'Pu', 'Am', 'Cm', 'Bk', 'Cf', 'Es', 'Fm',
    'Md', 'No', 'Lr', 'Rf', 'Db', 'Sg', 'Bh', 'Hs', 'Mt', 'Ds',
    'Rg', 'Cn', 'Nh', 'Fl', 'Mc', 'Lv', 'Ts', 'Og'
]

# Define a function to download the .tendl file for a given element and mass number
def tendl_download(element, A, filetype, save_path=None):
    # Ensure that A is properly formatted (three digits, plus the 'm' flag for isomers)
    A = str(A).zfill(3 + ('m' in str(A)))

    # Define general URL format for files in the TENDL database
    tendl_gen_url = 'https://tendl.web.psi.ch/tendl_2017/neutron_file/'

    # Create a dictionary to generalize formatting for both ENDF and PENDF files
    file_handling = {'endf' : {'ext': 'tendl', 'tape_num': 20},
                     'pendf': {'ext': 'pendf', 'tape_num': 21}}

    # Construct the filetype- and isotope-specific URL
    isotope_component = f'{element}/{element}{A}/lib/endf/n-{element}{A}.'
    ext = file_handling[filetype.lower()]['ext']
    download_url = tendl_gen_url + isotope_component + ext

    # Define a save path for the file if there is not one already specified
    if save_path is None:
        save_path = f'tape{file_handling[filetype.lower()]["tape_num"]}'

    # Check that the file exists on the server
    try:
        urllib.request.urlopen(download_url)
    except urllib.error.URLError as e:
        file_not_found_code = 404
        if str(file_not_found_code) in str(e):
            raise FileNotFoundError()

    # Download the file using urllib
    with urllib.request.urlopen(download_url) as f:
        temp_file = f.read().decode('utf-8')

    # Write out the file to the save_path
    with open(save_path, 'w') as f:
        f.write(temp_file)

    return save_path

# Define a function to extract MT and MAT data from an ENDF file
def endf_specs(endf_path):
    # Read in ENDF tape using ENDFtk
    tape = ENDFtk.tree.Tape.from_file(endf_path)

    # Determine the material ID
    mat_ids = tape.material_numbers
    matb = mat_ids[0]

    # Set MF for cross sections
    xs_MF = 3

    # Extract out the file
    file = tape.material(matb).file(xs_MF).parse()

    # Extract the MT numbers that are present in the file
    MTs = [MT.MT for MT in file.sections.to_list()]

    return matb, MTs

# Define a function to format GROUPR input cards
def format_card(card_name, card_content, MTs):
    card_str = ''
    gen_str = ' ' + ' '.join(map(str, card_content))
    if card_name == 9:
        # Card 9 entries are written one reaction per line, each terminated by a slash
        card_str = ' ' + '/\n '.join(card_content) + '/\n'
    elif card_name == 4:
        # Card 4 is the only card written without a terminating slash
        card_str += gen_str + '\n'
    else:
        card_str += gen_str + '/\n'
    return card_str

# Define a function to create the GROUPR input file
def groupr_input_file_format(matb, MTs, element, A, mt_table):

    cards = {}

    # Set Card 1
    nendf = 20   # unit for endf tape
    npend = 21   # unit for pendf tape
    ngout1 = 0   # unit for input gout tape (default=0)
    ngout2 = 31  # unit for output gout tape (default=0)

    cards[1] = [nendf, npend, ngout1, ngout2]

    # Set Card 2
    # matb -- (already defined) -- material to be processed
    ign = 17     # neutron group structure option
    igg = 0      # gamma group structure option
    iwt = 11     # weight function option
    lord = 0     # Legendre order
    ntemp = 1    # number of temperatures (default=1)
    nsigz = 1    # number of sigma zeroes (default=1)
    iprint = 1   # long print option (0/1=minimum/maximum) -- (default=1)
    ismooth = 1  # switch on/off smoother operation (1/0, default=1=on)

    cards[2] = [matb, ign, igg, iwt, lord, ntemp, nsigz, iprint]

    # Set Card 3
    Z = str(elements.index(element) + 1).zfill(2)
    title = f'"{Z}-{element}-{A} for TENDL 2017"'
    cards[3] = [title]

    # Set Card 4
    temp = 293.16  # temperature in Kelvin
    cards[4] = [temp]

    # Set Card 5
    sigz = 0  # sigma zero values (including infinity)
    cards[5] = [sigz]

    # Set Card 9
    mfd = 3    # file to be processed
    mtd = MTs  # sections to be processed
    cards[9] = []
    for mt in MTs:
        # Description of the section to be processed, looked up from the MT table
        mtname = mt_table[mt_table['MT'] == mt]['Reaction'].values[0]
        card9_line = f'{mfd} {mt} "{mtname}"'
        cards[9].append(card9_line)

    # Set Card 10
    matd = 0  # next mat number to be processed
    cards[10] = [matd]

    return cards

# Define a function to write out the GROUPR input file
def groupr_input_file_writer(cards, MTs):
    # Write the input deck to the groupr.inp file
    with open('groupr.inp', 'w') as f:
        f.write('groupr\n')
        max_card_index = 10
        for i in range(max_card_index + 1):
            try:
                f.write(format_card(i, cards[i], MTs))
            except KeyError:
                continue

        f.write(' 0/\nstop')

# Define a function to run NJOY on the prepared GROUPR input
def run_njoy(cards, element, A):
    # Define the input and output files
    INPUT = 'groupr.inp'
    OUTPUT = 'groupr.out'

    # Run NJOY
    result = subprocess.run(['njoy'], input=open(INPUT).read(), text=True, capture_output=True)
    with open(OUTPUT, 'w') as output_file:
        output_file.write(result.stdout)

    # If the run is successful, print out the output and make a copy of the file as a .GENDF file
    if result.stderr == '':
        output = subprocess.run(['cat', 'output'], capture_output=True, text=True)
        title = cards[3][0][1:-1]
        title_index = output.stdout.find(title)
        print(output.stdout[:title_index + len(title)])

        gendf_path = f'tendl_2017_{element}{A}.gendf'
        subprocess.run(['cp', 'tape31', gendf_path])
        return gendf_path
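For orientation, the deck that groupr_input_file_format and groupr_input_file_writer produce looks roughly like the following. This is illustrative only: it assumes Fe with A = 56, a MAT number of 2631 read from the TENDL file, and two MT entries whose reaction labels stand in for whatever mt_table.csv actually contains.

groupr
 20 21 0 31/
 2631 17 0 11 0 1 1 1/
 "26-Fe-56 for TENDL 2017"/
 293.16
 0/
 3 1 "(n,total)"/
 3 2 "(n,elastic)"/
 0/
 0/
stop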
Reviewer comment: Separation of concerns - make this more modular with each function performing a specific task.
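One way that split could look (a sketch only; the helper names run_njoy_process, print_njoy_output, and save_gendf are hypothetical and not part of this PR):

import shutil
import subprocess

def run_njoy_process(input_path='groupr.inp', output_path='groupr.out'):
    # Run NJOY on the prepared GROUPR input deck and save its stdout
    with open(input_path) as f:
        result = subprocess.run(['njoy'], input=f.read(), text=True, capture_output=True)
    with open(output_path, 'w') as f:
        f.write(result.stdout)
    return result

def print_njoy_output(cards):
    # Echo the NJOY 'output' file up through the GROUPR title line
    with open('output') as f:
        text = f.read()
    title = cards[3][0][1:-1]
    print(text[:text.find(title) + len(title)])

def save_gendf(element, A, tape='tape31'):
    # Copy the GROUPR output tape to a descriptively named GENDF file
    gendf_path = f'tendl_2017_{element}{A}.gendf'
    shutil.copy(tape, gendf_path)
    return gendf_path

def run_njoy(cards, element, A):
    # Thin coordinator tying the three steps together
    result = run_njoy_process()
    if result.stderr == '':
        print_njoy_output(cards)
        return save_gendf(element, A)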
@@ -0,0 +1,104 @@
import ENDFtk
import gendf_tools as GENDFtk
import pandas as pd
import sys
import subprocess

# Load MT table
# Data for MT table collected from
# https://www.oecd-nea.org/dbdata/data/manual-endf/endf102_MT.pdf
mt_table = pd.read_csv('mt_table.csv')

# Set user parameters
print('Input GENDF file or download from FENDL 3.2b')
usr_selection = input('For local input, type I. For download, type D. (I, D): ')
if usr_selection == 'I':
    gendf_path = input('Type in path of GENDF file: ')
    pKZA = GENDFtk.gendf_pkza_extract(gendf_path)
elif usr_selection == 'D':
    element = input('Select target element: ')
    A = input('Select mass number (A): ')
    # Check isomeric state
    if 'm' not in A:
        gendf_path, pKZA = GENDFtk.gendf_download(element, A)
    else:
        # Use NJOY GROUPR to convert the isomer's TENDL 2017 data to a GENDF file
        sys.path.append('./GROUPR')
        import groupr_tools as GRPRtk

        # Download ENDF and PENDF files for the isomer
        endf_path = GRPRtk.tendl_download(element, A, 'endf')
        pendf_path = GRPRtk.tendl_download(element, A, 'pendf')

        # Extract necessary MT and MAT data from the ENDF file
        matb, MTs = GRPRtk.endf_specs(endf_path)

        # Write out the GROUPR input file
        card_deck = GRPRtk.groupr_input_file_format(matb, MTs, element, A, mt_table)
        GRPRtk.groupr_input_file_writer(card_deck, MTs)

        # Run NJOY with GROUPR to create a GENDF file for the isomer
        gendf_path = GRPRtk.run_njoy(card_deck, element, A)

        # Save pKZA value
        pKZA = GENDFtk.gendf_pkza_extract(gendf_path, M=1)

        # Clean up repository from unnecessary intermediate files from GROUPR run
        groupr_files = ['groupr.inp', 'groupr.out', 'run_njoy.sh', 'tape20',
                        'tape21', 'tape31', f'fendl3_{element}{A[:-1]}']
        for file in groupr_files:
            subprocess.run(['rm', file])

print(f"GENDF file path: {gendf_path}")
print(f"Parent KZA (pKZA): {pKZA}")

# Read in data with ENDFtk
Reviewer comment: Move the rest of this script into the main function
tape = ENDFtk.tree.Tape.from_file(gendf_path)
mat_ids = tape.material_numbers
mat = mat_ids[0]
xs_MF = 3
file = tape.material(mat).file(xs_MF)

# Extract MT values
MTs = []
for i in range(1000):
    with GENDFtk.suppress_output():
        try:
            file.section(i)
            MTs.append(i)
        except Exception:
            continue

# Initialize lists
cross_sections_by_MT = []
emitted_particles_list = []
dKZAs = []

# Extract data for each MT
for MT in MTs:
    try:
        sigma_list = GENDFtk.extract_cross_sections(file, MT)
        if not sigma_list:
            continue
        dKZA, emitted_particles = GENDFtk.reaction_calculator(MT, mt_table, pKZA)
        if dKZA is None:
            continue
        cross_sections_by_MT.append(sigma_list)
        dKZAs.append(dKZA)
        emitted_particles_list.append(emitted_particles)
    except Exception as e:
        print(f"Error processing MT {MT}: {e}")
        continue

# Store data in a Pandas DataFrame
gendf_data = pd.DataFrame({
    'Parent KZA': [pKZA] * len(dKZAs),
    'Daughter KZA': dKZAs,
    'Emitted Particles': emitted_particles_list,
    'Cross Sections': cross_sections_by_MT
})

# Save to CSV
gendf_data.to_csv('gendf_data.csv', index=False)
print("Saved gendf_data.csv")
print(gendf_data.head())
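Per the inline reviewer comment above (and the later commit "Moving the rest of fendl3_gendf.py to the main() function"), the module-level statements would be wrapped in an entry point so that importing the file has no side effects. A minimal sketch, with the body elided:

def main():
    # Everything currently at module level in this script -- the user prompts,
    # GENDF retrieval, MT extraction, and CSV export -- would move in here.
    ...


if __name__ == '__main__':
    main()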
Reviewer comment: can you read this into a dictionary instead of a dataframe? see: https://docs.python.org/3/library/csv.html#csv.DictReader
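A minimal sketch of that change, assuming mt_table.csv keeps the 'MT' and 'Reaction' columns used above (the load_mt_table helper name is hypothetical):

import csv

def load_mt_table(path='mt_table.csv'):
    # Map each MT number to its reaction description, e.g. {16: '(n,2n)', ...}
    with open(path, newline='') as f:
        return {int(row['MT']): row['Reaction'] for row in csv.DictReader(f)}

mt_table = load_mt_table()
# The card 9 lookup in groupr_input_file_format then becomes a plain dict access:
# mtname = mt_table[mt]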