
Notebook update with new transaction, SWU, and energy functions #57

Merged
merged 13 commits
Jul 26, 2024
12 changes: 6 additions & 6 deletions CHANGELOG.md
@@ -14,15 +14,15 @@ Includes New Features, Enhancements, and Bug Fixes.

* Changelog initialization (#26, #39, #40, #46, #47, #48, #50, #51, #52, #54)
* .gitignore (#21, #34, #38)
- * Repo readme (#22)
+ * Repo readme (#22, #57)
* Abstract (#12)
* Contributing document (#55)

### Example
Includes analysis notebooks.

- * Analysis scripts (#31, #37, #56)
- * Recipe update analysis (#41, #37)
- * Baseline recycle scenario analysis (#20, #37)
- * Preference analysis (#6, #15, #37)
- * Baseline cycamore example (#2, #37)
+ * Analysis scripts (#31, #37, #56, #57)
+ * Recipe update analysis (#41, #37, #57)
+ * Baseline recycle scenario analysis (#20, #37, #57)
+ * Preference analysis (#6, #15, #37, #57)
+ * Baseline cycamore example (#2, #37, #57)
6,190 changes: 95 additions & 6,095 deletions EVER/update/ever_update_test_analysis.ipynb

Large diffs are not rendered by default.

6,233 changes: 115 additions & 6,118 deletions EVER/update/one_analysis.ipynb

Large diffs are not rendered by default.

Binary file added EVER/update/one_ever_update_out.sqlite
Binary file not shown.
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
# NEAR
- ![Changelog CI Status](https://github.com/nsryan2/NEAR/workflows/Changelog%20CI/badge.svg)
+ [![Changelog Check](https://github.com/nsryan2/NEAR/actions/workflows/changelog_test.yml/badge.svg)](https://github.com/nsryan2/NEAR/actions/workflows/changelog_test.yml)

NEAR (Non-Equilibrium Archetypes of Reactors): Houses cyclus archetypes for non-equilibrium reactors with core-loading and enrichment variability.

8,880 changes: 2,929 additions & 5,951 deletions baseline/baseline_analysis.ipynb

Large diffs are not rendered by default.

898 changes: 221 additions & 677 deletions baseline/baseline_recycle_analysis.ipynb

Large diffs are not rendered by default.

5,904 changes: 327 additions & 5,577 deletions baseline/cycamore_me_test_analysis.ipynb

Large diffs are not rendered by default.

955 changes: 158 additions & 797 deletions baseline/mock_clover/mock_clover_analysis.ipynb

Large diffs are not rendered by default.

100 changes: 100 additions & 0 deletions scripts/fuel_transactions.py
@@ -1,3 +1,5 @@
from cymetric import timeseries


def used_fuel_transactions(transactions, fuels):
"""
@@ -67,3 +69,101 @@ def total_used_fr_fuel(transactions, fuels):
transactions[f'used_{fuel}_total']

return transactions


def fuel_received(evaler, fuels, receivers):
"""
    Creates a DataFrame of the total fuel received by each receiver.
    Unlike used_fuel_transactions(), fresh_fuel_transactions(), and
    total_used_fr_fuel(), this function does not operate on a
    transactions DataFrame; it queries the transactions of specific
    receivers directly.

Parameters
----------
evaler: cymetric.evaluator.Evaluator
The object that takes in the sqlite database and can create DataFrames.
fuels: list of strs
The types of fuel traded.
receivers: list of strs
The archetypes that receive fuel.

Returns
-------
received: pd.DataFrame
The DataFrame of the total fuel received by each receiver.
"""

    received = None
    for receiver in receivers:
        # Query each fuel's transactions for this receiver and record
        # its mass in a column named for the fuel and receiver.
        for fuel in fuels:
            received_next = timeseries.transactions(
                evaler=evaler,
                receivers=[receiver],
                commodities=[fuel])

            if received is None:
                # The first query establishes the DataFrame, so that
                # columns for every receiver accumulate in one frame
                # instead of being overwritten each iteration.
                received = received_next.copy()

            received[f'{fuel}_{receiver}'] = received_next['Mass']

        # Create the cumulative fuel received columns.
        for fuel in fuels:
            received[f'{fuel}_{receiver}_total'] = \
                received[f'{fuel}_{receiver}'].cumsum()

    return received
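
Outside of cymetric, the cumulative-total pattern that fuel_received() relies on can be sketched with a toy DataFrame; the receiver name `lwr`, the fuel names, and all masses below are hypothetical:

```python
import pandas as pd

# Hypothetical per-time-step masses received by one receiver ('lwr')
# for two fuel commodities; columns follow the f'{fuel}_{receiver}' naming.
received = pd.DataFrame({
    'Time': [0, 1, 2],
    'uox_lwr': [10.0, 0.0, 10.0],
    'mox_lwr': [0.0, 5.0, 5.0],
})

# Running totals, built with cumsum() as in fuel_received().
for fuel in ['uox', 'mox']:
    received[f'{fuel}_lwr_total'] = received[f'{fuel}_lwr'].cumsum()
```

After this, `received['uox_lwr_total']` holds [10.0, 10.0, 20.0].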


def fuel_sent(evaler, fuels, senders):
"""
    Creates a DataFrame of the total fuel sent by each sender.
    Unlike used_fuel_transactions(), fresh_fuel_transactions(), and
    total_used_fr_fuel(), this function does not operate on a
    transactions DataFrame; it queries the transactions of specific
    senders directly.

Parameters
----------
evaler: cymetric.evaluator.Evaluator
The object that takes in the sqlite database and can create DataFrames.
fuels: list of strs
The types of fuel traded.
senders: list of strs
The archetypes that send fuel.

Returns
-------
sent: pd.DataFrame
The DataFrame of the total fuel sent by each sender.
"""

    sent = None
    for sender in senders:
        # Query each fuel's transactions for this sender and record
        # its mass in a column named for the fuel and sender.
        for fuel in fuels:
            sent_next = timeseries.transactions(
                evaler=evaler,
                senders=[sender],
                commodities=[fuel])

            if sent is None:
                # The first query establishes the DataFrame, so that
                # columns for every sender accumulate in one frame
                # instead of being overwritten each iteration.
                sent = sent_next.copy()

            sent[f'{fuel}_{sender}'] = sent_next['Mass']

        # Create the cumulative fuel sent columns.
        for fuel in fuels:
            sent[f'{fuel}_{sender}_total'] = \
                sent[f'{fuel}_{sender}'].cumsum()

    return sent
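
One way the frames from fuel_sent() and fuel_received() might be combined is a per-time-step net flow between a sender and a receiver. This is only a sketch: the prototype names `enr` and `lwr` and every mass value are invented for illustration.

```python
import pandas as pd

# Illustrative mass columns, named as fuel_sent()/fuel_received() would.
sent = pd.DataFrame({'uox_enr': [10.0, 10.0, 10.0]})
received = pd.DataFrame({'uox_lwr': [8.0, 12.0, 10.0]})

# Mass sent by 'enr' but not received by 'lwr' at each time step.
in_transit = sent['uox_enr'] - received['uox_lwr']
```

Here `in_transit` comes out as [2.0, -2.0, 0.0], with the negative entry marking a step where more arrived than was sent.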
99 changes: 99 additions & 0 deletions scripts/products.py
@@ -0,0 +1,99 @@
import pandas as pd


def energy_supply(cursor):
"""
    This function pulls the energy supply data from the database
    and returns it as a pandas DataFrame.

Parameters
----------
cursor: sqlite3.cursor
The cursor for the sqlite database.

Returns
-------
switch_energy_supply: pd.DataFrame
The energy supply data for the simulation.
"""
# Now we will pull the supplied power to get the amount of
# power from each reactor at every time step.
cursor.execute("SELECT * FROM TimeSeriessupplyPOWER")
supply_rows = cursor.fetchall()

    # Create an empty dictionary that mirrors the format of
    # the TimeSeriessupplyPOWER table.
energy_supply = {
'id': [],
'time': [],
'energy': []
}

# Next we will pull the power at each time step for each reactor.
    for row in supply_rows:
        energy_supply['id'].append(str(row[1]))
        energy_supply['time'].append(row[2])
        energy_supply['energy'].append(row[3])

    # Make the dictionary into a pandas DataFrame to match the type of
    # the other data we've been working with.
energy_supply_df = pd.DataFrame.from_dict(energy_supply)

# We will turn the ids into columns of energy and make the index time
switch_energy_supply = energy_supply_df.pivot_table(
index='time', columns='id', values='energy', fill_value=0)

# Now we will add a total_energy column.
    switch_energy_supply['total_energy'] = \
        switch_energy_supply.sum(axis=1)

return switch_energy_supply
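
The fetch-then-pivot shape of energy_supply() can be exercised end to end against a throwaway in-memory SQLite table. The `(SimId, AgentId, Time, Value)` layout and every value below are assumptions for illustration, not the exact cyclus schema:

```python
import sqlite3
import pandas as pd

# Toy table standing in for the power time series.
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute(
    "CREATE TABLE power (SimId TEXT, AgentId INT, Time INT, Value REAL)")
cursor.executemany(
    "INSERT INTO power VALUES (?, ?, ?, ?)",
    [('sim', 12, 0, 900.0), ('sim', 12, 1, 900.0),
     ('sim', 13, 1, 300.0), ('sim', 13, 2, 300.0)])

cursor.execute("SELECT * FROM power")
supply = {'id': [], 'time': [], 'energy': []}
for row in cursor.fetchall():
    supply['id'].append(str(row[1]))
    supply['time'].append(row[2])
    supply['energy'].append(row[3])

# Agents become columns, time becomes the index, and gaps fill with 0.
wide = pd.DataFrame.from_dict(supply).pivot_table(
    index='time', columns='id', values='energy', fill_value=0)
wide['total_energy'] = wide.sum(axis=1)
```

With these inputs, `wide['total_energy']` is [900.0, 1200.0, 300.0] for times 0 through 2.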


def swu_supply(cursor):
"""
    This function pulls the SWU supply data from the database
    and returns it as a pandas DataFrame.

Parameters
----------
cursor: sqlite3.cursor
The cursor for the sqlite database.

Returns
-------
switch_swu_supply: pd.DataFrame
        The SWU supply data for the simulation.
"""
    # Now we will pull the supplied SWU to get the amount of SWU from
    # each facility at every time step.
cursor.execute("SELECT * FROM TimeSeriesEnrichmentSWU")
swu_rows = cursor.fetchall()

# Create an empty dictionary that mirrors the format of the
# TimeSeriesEnrichmentSWU table.
swu_supply = {
'id': [],
'Time': [],
'SWU': []
}

    # Next we will pull the SWU at each time step for each facility.
    for row in swu_rows:
        swu_supply['id'].append(str(row[1]))
        swu_supply['Time'].append(row[2])
        swu_supply['SWU'].append(row[3])

# Make the dictionary into a pandas DataFrame to match the type of the
# other data we've been working with.
swu_supply_df = pd.DataFrame.from_dict(swu_supply)

    # We will turn the ids into columns of SWU and make the index time.
switch_swu_supply = swu_supply_df.pivot_table(
index='Time', columns='id', values='SWU', fill_value=0)

    # Now we will add a total_swu column.
    switch_swu_supply['total_swu'] = \
        switch_swu_supply.sum(axis=1)

return switch_swu_supply
60 changes: 56 additions & 4 deletions scripts/waste.py
@@ -1,5 +1,6 @@
from cyclus import nucname
from cymetric.tools import reduce, merge
import pandas as pd

# These functions are almost directly copied from cymetric, but they use the
# version of PyNE built in cyclus to do the isotopics instead of the package
@@ -54,7 +55,7 @@ def transactions_built_in(evaler, senders=(), receivers=(), commodities=()):
    commodities : list of the commodities exchanged
"""

- # initiate evaluation
+ # Initiate evaluation.
trans = evaler.eval('Transactions')
agents = evaler.eval('AgentEntry')

@@ -66,7 +67,7 @@ def transactions_built_in(evaler, senders=(), receivers=(), commodities=()):
if len(senders) != 0:
send_agent = send_agent[send_agent['Prototype'].isin(senders)]

- # Clean Transation PDF
+ # Clean Transaction PDF.
rdc_table = []
rdc_table.append(['ReceiverId', rec_agent['ReceiverId'].tolist()])
rdc_table.append(['SenderId', send_agent['SenderId'].tolist()])
@@ -75,14 +76,14 @@ def transactions_built_in(evaler, senders=(), receivers=(), commodities=()):

trans = reduce(trans, rdc_table)

- # Merge Sender to Transaction PDF
+ # Merge Sender to Transaction PDF.
base_col = ['SimId', 'SenderId']
added_col = base_col + ['Prototype']
trans = merge(trans, base_col, send_agent, added_col)
trans = trans.rename(index=str, columns={
'Prototype': 'SenderPrototype'})

- # Merge Receiver to Transaction PDF
+ # Merge Receiver to Transaction PDF.
base_col = ['SimId', 'ReceiverId']
added_col = base_col + ['Prototype']
trans = merge(trans, base_col, rec_agent, added_col)
@@ -103,3 +104,54 @@ def format_nucs_built_in(nucs):
of nuclides
"""
return [nucname.id(nuc) for nuc in nucs]


def isotope_database(evaler, receivers, isotopes, commodities):
"""
    This function makes a DataFrame of transaction information
    for each given isotope in each of the given commodities.

Parameters
----------
evaler: cymetric.evaluator.Evaluator
The object that takes in the sqlite database and can create DataFrames.
receivers: list of strs
The archetypes that receive the fuel.
isotopes: list of strs
The isotopes that are in the fuel.
commodities: list of strs
The commodities that are traded.

Returns
-------
isotope_df: pd.DataFrame
The database of the transactions for each isotope in each commodity at
each time step.
"""

isotope_db = transactions_nuc_built_in(
evaler=evaler,
receivers=receivers,
commodities=commodities,
nucs=isotopes)

# Create the dataframe and populate the columns for each
# isotope with zeros.
isotope_df = evaler.eval('TimeList')
    for nucid in isotopes:
        isotope_df[nucid] = 0.0
        isotope_df[f'{nucid}_total'] = 0.0

    # Track the mass of each isotope stored at each time step,
    # accumulating transactions that share a time step instead of
    # overwriting earlier ones.
    for transaction in range(len(isotope_db)):
        for nucid in isotopes:
            if isotope_db.loc[transaction, 'NucId'] == int(nucid):
                mass = isotope_db.loc[transaction, 'Mass']
                time_step = isotope_db.loc[transaction, 'Time']
                isotope_df.loc[str(time_step), str(nucid)] += mass

# Add up the totals for each isotope over time.
for nucid in isotopes:
isotope_df[f'{nucid}_total'] = isotope_df[nucid].cumsum()

return isotope_df
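
The accumulate-then-cumsum shape of isotope_database() can be checked without cymetric on a toy frame. The NucId/Time/Mass column names mirror the code above, while every value is invented, and a plain RangeIndex stands in for the TimeList frame:

```python
import pandas as pd

# Invented transactions: two U-235 deliveries and one U-238 delivery.
isotope_db = pd.DataFrame({
    'NucId': [922350000, 922380000, 922350000],
    'Time':  [0, 0, 1],
    'Mass':  [1.0, 9.0, 2.0],
})

isotopes = ['922350000', '922380000']
isotope_df = pd.DataFrame({'Time': [0, 1, 2]})
for nucid in isotopes:
    isotope_df[nucid] = 0.0

# Accumulate each transaction's mass into its time-step row.
for t in range(len(isotope_db)):
    for nucid in isotopes:
        if isotope_db.loc[t, 'NucId'] == int(nucid):
            isotope_df.loc[isotope_db.loc[t, 'Time'], nucid] += \
                isotope_db.loc[t, 'Mass']

# Cumulative mass of each isotope over time.
for nucid in isotopes:
    isotope_df[f'{nucid}_total'] = isotope_df[nucid].cumsum()
```

For these inputs the U-235 running total is [1.0, 3.0, 3.0] and the U-238 total stays at 9.0 from time 0 onward.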