FENDL-3.2b Retrofitting #42

Draft · wants to merge 59 commits into base: main
Changes from 10 commits
Commits (59)
dbef6ca
First commit for FENDL3.2B retrofitting
Jun 14, 2024
cdd7bcd
Replacing NJOY Bash script with subprocess execution in Python
Jun 17, 2024
f3d010f
Simplifying mass number formatting in tendl_download() function
Jun 17, 2024
6c5ac24
Simplifying endf_specs() function
Jun 17, 2024
03e3af3
Remove now-obsolete ENDFtk warning suppression
Jun 17, 2024
fa4f29e
Simplify tendl_download() function using data structures
Jun 17, 2024
0190096
Switching tendl_download() function over to urllib dependence
Jun 17, 2024
413ae46
Moving card deck formatting from Pandas DataFrame to dictionary
Jun 17, 2024
2eb9ffd
Separating out a write function for the GROUPR input from the input c…
Jun 17, 2024
1247db3
Removing now-obsolete Pandas dependence
Jun 17, 2024
1d35b79
Simplifying card writing for groupr_input_file_writer()
eitan-weinstein Jun 18, 2024
de3cbb4
Fixing indexing on groupr_input_file_writer()
Jun 18, 2024
d20eed8
Storing elements in a single dictionary to be referenced across both …
Jun 18, 2024
58a4ede
Removing now-obsolete ENDFtk warning suppression from gend_tools.py an…
Jun 18, 2024
b83be55
Updating gendf_download() function -- notably switching away from wge…
Jun 18, 2024
e135311
Switching CSV reading from Pandas DataFrame to dictionary
Jun 18, 2024
29528dd
Moving away from direct input to argparse input/options
Jun 18, 2024
0abb51b
Expanding argparse usage
Jun 18, 2024
582424b
Moving away from print statements towards logging
Jun 18, 2024
77e9a65
Removed unnecessary file from file cleanup list
Jun 19, 2024
493a35c
Expanding logger to capture 'No bottleneck testing available' message
Jun 19, 2024
a29bd66
Improving readability of NJOY run message for logger
Jun 19, 2024
69fe5f0
Updating the logging to redirect ENDFtk messages to the logger and re…
Jun 21, 2024
d0f7d3b
Removing stand-alone groupr script -- unnecessary and not called indi…
Jun 21, 2024
4318ae4
Reorganizing folder structure -- separate GROUPR folder no longer see…
Jun 21, 2024
b1b63f9
Finalizing move out of GROUPR/
Jun 21, 2024
1edd251
Moving the rest of fendl3_gendf.py to the main() function
Jun 21, 2024
fb2d548
Forgot to include mt_table in main()
Jun 21, 2024
4065d00
Streamlining endf_specs usage and placement.
Jun 24, 2024
4250c44
Removing direct GENDF download function -- all downloads need to be p…
Jun 24, 2024
93d469f
Moving GROUPR parameters to global constants.
Jun 24, 2024
98dcc93
Logging error if NJOY run is unsuccessful.
Jun 24, 2024
2460d72
Cleaning up package imports
Jun 24, 2024
6fcf5e5
Removing unnecessary package imports on fendl3_gendf.py
Jun 24, 2024
5ec6bbf
Fixing KZA formatting.
Jun 26, 2024
f490d38
Addressing low-level comments from most recent review.
Jul 1, 2024
45df27f
Improving readability
Jul 1, 2024
b76634f
Beginning high-level overhaul and restructuring
Jul 1, 2024
121e57a
Improving readability for nuclear_decay()
Jul 1, 2024
fb1b796
Increasing readability of argparser
Jul 1, 2024
c8e6cea
Major overhaul of modularity and including functionality for iteratin…
Jul 9, 2024
4d99f41
Removing time package.
Jul 9, 2024
e0529dc
Removing specific example file from GENDF files.
Jul 9, 2024
cc064b6
Making the file saving more versatile.
Jul 9, 2024
14c5730
Responding to a majority of the high-level comments from Tuesday's re…
Jul 11, 2024
95815b2
Fixing docstring for ensure_gendf_markers() function.
Jul 11, 2024
a5997b5
Improving isotope identification methods.
Jul 12, 2024
f83a646
Improving isotope identification methods.
Jul 12, 2024
98f23c3
Simplifying logging method and usage.
Jul 12, 2024
498c824
One more logging fix.
Jul 12, 2024
a807f1e
Completing response to last review and making arg processing more mod…
Jul 16, 2024
76b9aa1
Improving ability to iterate over all elements.
Jul 16, 2024
eebeea3
Fixing minor bug in execution of handle_TENDL_downloads().
Jul 16, 2024
912530f
Small formatting change to fit in max line length.
Jul 16, 2024
c374494
More minor formatting adjustments and simplifying the line length set…
Jul 16, 2024
f50b617
Allowing for fendle_retrofit.py to be executed from DataLib.
Jul 16, 2024
4ba725e
Removing unnecessary print statement.
Jul 17, 2024
6257033
Ensuring that NJOY output is properly handled when program is execute…
Jul 17, 2024
4921dba
Small formatting changes before moving over to project individual PRs.
Jul 18, 2024
20 changes: 20 additions & 0 deletions src/DataLib/fendl32B_retrofit/GROUPR/groupr.py
@@ -0,0 +1,20 @@
import groupr_tools as grpt
import pandas as pd

# Call TENDL download function by user CLI input
element = input('Select element: ')
A = input('Select mass number: A = ')
endf_path = grpt.tendl_download(element, A, 'endf')
pendf_path = grpt.tendl_download(element, A, 'pendf')
print(f'ENDF file can be found at ./{endf_path}')
print(f'PENDF file can be found at ./{pendf_path}')

# Extract necessary MT and MAT data from the ENDF file
matb, MTs = grpt.endf_specs(endf_path)

# Write out the GROUPR input file
mt_table = pd.read_csv('./mt_table.csv')
Review comment (Member):
can you read this into a dictionary instead of a dataframe?

see: https://docs.python.org/3/library/csv.html#csv.DictReader
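
A minimal sketch of the suggested change, assuming mt_table.csv has 'MT' and 'Reaction' columns (the names used by the DataFrame lookup in groupr_tools.py); the load_mt_table helper name is illustrative, not part of the PR:

import csv

def load_mt_table(csv_path):
    # Read mt_table.csv into a plain dict mapping MT number -> reaction label.
    with open(csv_path, newline='') as f:
        return {int(row['MT']): row['Reaction'] for row in csv.DictReader(f)}

# Usage, replacing the pd.read_csv() call above:
#   mt_table = load_mt_table('./mt_table.csv')
# The Card 9 lookup in groupr_tools.py then becomes: mtname = mt_table[mt]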

card_deck = grpt.groupr_input(matb, MTs, element, A, mt_table)

# Run NJOY
grpt.run_njoy(endf_path, pendf_path, card_deck, element, A)
180 changes: 180 additions & 0 deletions src/DataLib/fendl32B_retrofit/GROUPR/groupr_tools.py
@@ -0,0 +1,180 @@
# Import packages
import ENDFtk
import urllib
import subprocess

# List of elements in the Periodic Table
elements = [
'H', 'He', 'Li', 'Be', 'B', 'C', 'N', 'O', 'F', 'Ne',
'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'Cl', 'Ar', 'K', 'Ca',
'Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn',
'Ga', 'Ge', 'As', 'Se', 'Br', 'Kr', 'Rb', 'Sr', 'Y', 'Zr',
'Nb', 'Mo', 'Tc', 'Ru', 'Rh', 'Pd', 'Ag', 'Cd', 'In', 'Sn',
'Sb', 'Te', 'I', 'Xe', 'Cs', 'Ba', 'La', 'Ce', 'Pr', 'Nd',
'Pm', 'Sm', 'Eu', 'Gd', 'Tb', 'Dy', 'Ho', 'Er', 'Tm', 'Yb',
'Lu', 'Hf', 'Ta', 'W', 'Re', 'Os', 'Ir', 'Pt', 'Au', 'Hg',
'Tl', 'Pb', 'Bi', 'Po', 'At', 'Rn', 'Fr', 'Ra', 'Ac', 'Th',
'Pa', 'U', 'Np', 'Pu', 'Am', 'Cm', 'Bk', 'Cf', 'Es', 'Fm',
'Md', 'No', 'Lr', 'Rf', 'Db', 'Sg', 'Bh', 'Hs', 'Mt', 'Ds',
'Rg', 'Cn', 'Nh', 'Fl', 'Mc', 'Lv', 'Ts', 'Og'
]

# Define a function to download the .tendl file given user inputs for element and mass number
def tendl_download(element, A, filetype, save_path = None):
# Ensure that A is properly formatted
A = str(A).zfill(3 + ('m' in A))

# Define general URL format for files in the TENDL database
tendl_gen_url = 'https://tendl.web.psi.ch/tendl_2017/neutron_file/'

# Create a dictionary to generalize formatting for both ENDF and PENDF files
file_handling = {'endf' : {'ext': 'tendl', 'tape_num': 20},
'pendf' : {'ext': 'pendf', 'tape_num': 21}}

# Construct the filetype and isotope specific URL
isotope_component = f'{element}/{element}{A}/lib/endf/n-{element}{A}.'
ext = file_handling[filetype.lower()]['ext']
download_url = tendl_gen_url + isotope_component + ext

# Define a save path for the file if there is not one already specified
if save_path is None:
save_path = f'tape{file_handling[filetype.lower()]["tape_num"]}'

# Check if the file exists
try:
urllib.request.urlopen(download_url)
except urllib.error.URLError as e:
file_not_found_code = 404
if str(file_not_found_code) in str(e):
raise FileNotFoundError()

# Download the file using urllib
with urllib.request.urlopen(download_url) as f:
temp_file = f.read().decode('utf-8')

# Write out the file to the save_path
with open(save_path, 'w') as f:
f.write(temp_file)

return save_path

# Define a function to extract MT and MAT data from an ENDF file
def endf_specs(endf_path):
# Read in ENDF tape using ENDFtk
tape = ENDFtk.tree.Tape.from_file(endf_path)

# Determine the material ID
mat_ids = tape.material_numbers
matb = mat_ids[0]

# Set MF for cross sections
xs_MF = 3

# Extract out the file
file = tape.material(matb).file(xs_MF).parse()

# Extract the MT numbers that are present in the file
MTs = [MT.MT for MT in file.sections.to_list()]

return matb, MTs

# Define a function to format GROUPR input cards
def format_card(card_name, card_content, MTs):
card_str = ''
gen_str = ' ' + ' '.join(map(str, card_content))
if card_name == 9:
card_str = ' ' + '/\n '.join(card_content) + '/\n'
elif card_name == 4:
card_str += gen_str + '\n'
else:
card_str += gen_str + '/\n'
return card_str

# Define a function to create the GROUPR input file
def groupr_input_file_format(matb, MTs, element, A, mt_table):

cards = {}

# Set Card 1
nendf = 20 # unit for endf tape
npend = 21 # unit for pendf tape
ngout1 = 0 # unit for input gout tape (default=0)
ngout2 = 31 # unit for output gout tape (default=0)

cards[1] = [nendf, npend, ngout1, ngout2]

# Set Card 2
# matb -- (already defined) -- material to be processed
ign = 17 # neutron group structure option
igg = 0 # gamma group structure option
iwt = 11 # weight function option
lord = 0 # Legendre order
ntemp = 1 # number of temperatures (default = 1)
nsigz = 1 # number of sigma zeroes (default = 1)
iprint = 1 # long print option (0/1=minimum/maximum) -- (default=1)
ismooth = 1 # switch smoother operation on/off (1/0, default=1=on)

cards[2] = [matb, ign, igg, iwt, lord, ntemp, nsigz, iprint]

# Set Card 3
Z = str(elements.index(element) + 1).zfill(2)
title = f'"{Z}-{element}-{A} for TENDL 2017"'
cards[3] = [title]

# Set Card 4
temp = 293.16 # temperature in Kelvin
cards[4] = [temp]

# Set Card 5
sigz = 0 # sigma zero values (including infinity)
cards[5] = [sigz]

# Set Card 9
mfd = 3 # file to be processed
mtd = MTs # sections to be processed
cards[9] = []
for mt in MTs:
mtname = mt_table[mt_table['MT'] == mt]['Reaction'].values[0] # description of section to be processed
card9_line = f'{mfd} {mt} "{mtname}"'
cards[9].append(card9_line)

# Set Card 10
matd = 0 # next mat number to be processed
cards[10] = [matd]

return cards

# Define a function to write out the GROUPR input file
def groupr_input_file_writer(cards, MTs):
# Write the input deck to the groupr.inp file
with open('groupr.inp', 'w') as f:
f.write('groupr\n')
max_card_index = 10
for i in range(max_card_index + 1):
try:
f.write(format_card(i, cards[i], MTs))
except KeyError:
continue
f.write(' 0/\nstop')

# Define a function to run NJOY via subprocess
def run_njoy(cards, element, A):
# Define the input files
INPUT = 'groupr.inp'
OUTPUT = 'groupr.out'

# Run NJOY
result = subprocess.run(['njoy'], input=open(INPUT).read(), text= True, capture_output=True)
with open(OUTPUT, 'w') as output_file:
output_file.write(result.stdout)

# If the run is successful, print out the output and make a copy of the file as a .GENDF file
if result.stderr == '':
output = subprocess.run(['cat', 'output'], capture_output=True, text = True)
title = cards[3][0][1:-1]
title_index = output.stdout.find(title)
print(output.stdout[:title_index + len(title)])

gendf_path = f'tendl_2017_{element}{A}.gendf'
subprocess.run(['cp', 'tape31', gendf_path])
return gendf_path
104 changes: 104 additions & 0 deletions src/DataLib/fendl32B_retrofit/fendl3_gendf.py
Review comment (Member):
Separation of concerns - make this more modular with each function performing a specific task.

  • argparse for command-line input/options
  • read JSON or YAML for more complicated input
  • separate methods for reading data, processing data, outputting data
  • logging for user output
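
A minimal skeleton of the structure the reviewer describes; the function and argument names are illustrative only, not part of the PR:

import argparse
import logging

logger = logging.getLogger(__name__)

def parse_args():
    # Command-line input/options, per the first bullet.
    parser = argparse.ArgumentParser(
        description='Retrofit FENDL 3.2b activation data (illustrative skeleton).')
    parser.add_argument('--element', help='Target element symbol, e.g. Fe')
    parser.add_argument('--A', help='Mass number; append "m" for an isomer, e.g. 58m')
    return parser.parse_args()

def read_data(args):
    # Download or open the ENDF/PENDF/GENDF input here.
    return None

def process_data(raw):
    # Extract MTs, cross sections, and KZA values here.
    return None

def write_output(processed):
    # Save the CSV and report file paths through the logger.
    logger.info('Output written.')

def main():
    logging.basicConfig(level=logging.INFO)
    write_output(process_data(read_data(parse_args())))

if __name__ == '__main__':
    main()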

@@ -0,0 +1,104 @@
import ENDFtk
import gendf_tools as GENDFtk
import pandas as pd
import sys
import subprocess

# Load MT table
# Data for MT table collected from
# https://www.oecd-nea.org/dbdata/data/manual-endf/endf102_MT.pdf
mt_table = pd.read_csv('mt_table.csv')

# Set user parameters
print('Input GENDF file or download from FENDL 3.2b')
usr_selection = input('For local input, type I. For download, type D. (I, D): ')
if usr_selection == 'I':
gendf_path = input('Type in path of GENDF file: ')
pKZA = GENDFtk.gendf_pkza_extract(gendf_path)
elif usr_selection == 'D':
element = input('Select target element: ')
A = input('Select mass number (A): ')
# Check isomeric state
if 'm' not in A:
gendf_path, pKZA = GENDFtk.gendf_download(element, A)
else:
# Use NJOY GROUPR to convert the isomer's TENDL 2017 data to a GENDF file
sys.path.append('./GROUPR')
import groupr_tools as GRPRtk

# Download ENDF and PENDF files for the isomer
endf_path = GRPRtk.tendl_download(element, A, 'endf')
pendf_path = GRPRtk.tendl_download(element, A, 'pendf')

# Extract necessary MT and MAT data from the ENDF file
matb, MTs = GRPRtk.endf_specs(endf_path)

# Write out the GROUPR input file
card_deck = GRPRtk.groupr_input_file_format(matb, MTs, element, A, mt_table)
GRPRtk.groupr_input_file_writer(card_deck, MTs)

# Run NJOY with GROUPR to create a GENDF file for the isomer
gendf_path = GRPRtk.run_njoy(card_deck, element, A)

# Save pKZA value
pKZA = GENDFtk.gendf_pkza_extract(gendf_path, M = 1)

# Clean up unnecessary intermediate files from the GROUPR run
groupr_files = ['groupr.inp', 'groupr.out', 'run_njoy.sh', 'tape20',
'tape21', 'tape31', f'fendl3_{element}{A[:-1]}']
for file in groupr_files:
subprocess.run(['rm', file])

print(f"GENDF file path: {gendf_path}")
print(f"Parent KZA (pKZA): {pKZA}")

# Read in data with ENDFtk
Review comment (Contributor Author):
Move the rest of this script into the main function
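
A minimal sketch of that refactor; the remaining top-level code would simply move inside a main() entry point (no new logic is implied):

def main():
    # The rest of the script would live here: read the GENDF tape with ENDFtk,
    # extract the MT data, build the DataFrame, and write gendf_data.csv.
    pass

if __name__ == '__main__':
    main()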

tape = ENDFtk.tree.Tape.from_file(gendf_path)
mat_ids = tape.material_numbers
mat = mat_ids[0]
xs_MF = 3
file = tape.material(mat).file(xs_MF)

# Extract MT values
MTs = []
for i in range(1000):
with GENDFtk.suppress_output():
try:
file.section(i)
MTs.append(i)
except:
continue

# Initialize lists
cross_sections_by_MT = []
emitted_particles_list = []
dKZAs = []

# Extract data for each MT
for MT in MTs:
try:
sigma_list = GENDFtk.extract_cross_sections(file, MT)
if not sigma_list:
continue
dKZA, emitted_particles = GENDFtk.reaction_calculator(MT, mt_table, pKZA)
if dKZA is None:
continue
cross_sections_by_MT.append(sigma_list)
dKZAs.append(dKZA)
emitted_particles_list.append(emitted_particles)
except Exception as e:
print(f"Error processing MT {MT}: {e}")
continue

# Store data in a Pandas DataFrame
gendf_data = pd.DataFrame({
'Parent KZA': [pKZA] * len(dKZAs),
'Daughter KZA': dKZAs,
'Emitted Particles': emitted_particles_list,
'Cross Sections': cross_sections_by_MT
})

# Save to CSV
gendf_data.to_csv('gendf_data.csv', index=False)
print("Saved gendf_data.csv")
print(gendf_data.head())