
Pass tests #45

Open · akataba wants to merge 35 commits into base: main

Changes from 23 commits (35 commits total)
7fd7c05  making sure tests pass (akataba, Feb 6, 2024)
b7ae180  getting saving to work (akataba, Feb 6, 2024)
69c53ce  added separately tests for noisy and noiseless environments (akataba, Feb 7, 2024)
08cd5dc  change to X gate for testing purposes (akataba, Feb 7, 2024)
8034a3e  running inferencing multiple times and modified tests (akataba, Mar 6, 2024)
850e395  merging new changes from main (akataba, Mar 6, 2024)
e9a9cb0  added a python-test yaml file in order to run remote tests (akataba, Mar 6, 2024)
b0e1398  merging changes from main (akataba, Mar 6, 2024)
9b02c28  working on getting github to install package for remote testing (akataba, Mar 6, 2024)
5d7da26  Create 2023-11-08_11-09-45 (akataba, Mar 6, 2024)
e58267c  Delete results/2023-11-08_11-09-45 (akataba, Mar 6, 2024)
988b8ea  removing a test that requires uploading a huge file (akataba, Mar 6, 2024)
d7d8ae5  adding line so that installation adds json files (akataba, Mar 6, 2024)
df9f219  making sure github installs a certain version of ray 2.4.0 to avoid d… (akataba, Mar 6, 2024)
7f5a667  added -v to show output (akataba, Mar 6, 2024)
df726a3  removing workflow for remote testing that seems to be installing ray … (akataba, Mar 8, 2024)
e28c8dd  remove the old gate environments and fixing the issue of circular imp… (akataba, Mar 8, 2024)
b49e16e  fixing imports (akataba, Mar 8, 2024)
8f25d02  fixing environments to get tests to pass (akataba, Mar 12, 2024)
de19842  fixing the detunning error (akataba, Mar 13, 2024)
0b80bd5  merging changes from the main branch (akataba, Mar 19, 2024)
8371cf2  reverting to changes from the main branch. This is to remove code not… (akataba, Mar 19, 2024)
ef36ba1  removed packages that don't need to be installed during testing (akataba, Mar 27, 2024)
cfb2baf  respsonding to pr comments (akataba, Apr 17, 2024)
b2c82b9  changing versions of qutip (akataba, Apr 17, 2024)
705ec11  adding test folder (akataba, Apr 17, 2024)
19ff62d  dealing with circuilar imports (akataba, Apr 17, 2024)
0f56bea  changing version of qutip (akataba, Apr 17, 2024)
154af64  changing version of qutip (akataba, Apr 17, 2024)
744938f  changing version of qutip (akataba, Apr 17, 2024)
5824ae5  changing version of qutip (akataba, Apr 17, 2024)
1ecfffb  changing version of qutip (akataba, Apr 17, 2024)
c327517  changing version of qutip (akataba, Apr 17, 2024)
a39697f  changing version of qutip (akataba, Apr 17, 2024)
9d126bd  changing to new version of ddpg (akataba, Apr 17, 2024)
42 changes: 0 additions & 42 deletions .github/workflows/python-package.yml

This file was deleted.

33 changes: 33 additions & 0 deletions .github/workflows/python-test.yml
@@ -0,0 +1,33 @@
name: Python Tests

on: [push, pull_request]

jobs:
  test:

    runs-on: ubuntu-latest

    strategy:
      matrix:
        python-version: [3.8, 3.9]

    steps:
    - uses: actions/checkout@v2
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v2
      with:
        python-version: ${{ matrix.python-version }}

    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

    - name: Install the package
      run: |
        pip install .

    - name: Run tests
      run: |
        pytest tests/relaqs -v
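The workflow's final step collects everything under `tests/relaqs`. A minimal sketch of the kind of test file that job would pick up, assuming hypothetical test names; the fidelity helper below is an illustrative stand-in, not the repo's own `gate_fidelity`:

```python
# Illustrative test module in the style the workflow would collect,
# e.g. tests/relaqs/test_fidelity.py. The helper is a stand-in.
import numpy as np

def gate_fidelity(U, V):
    # |Tr(U† V)| / d equals 1.0 when U and V are the same unitary (up to phase)
    d = U.shape[0]
    return abs(np.trace(U.conj().T @ V)) / d

def test_identity_has_unit_fidelity():
    I = np.eye(2, dtype=complex)
    assert np.isclose(gate_fidelity(I, I), 1.0)

def test_x_gate_orthogonal_to_identity():
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    assert np.isclose(gate_fidelity(I, X), 0.0)
```

Because the package itself is installed in the previous step (`pip install .`), tests can import it the same way downstream users would.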
2 changes: 1 addition & 1 deletion analysis/load_env_data.py
@@ -3,7 +3,7 @@
"""

from relaqs import RESULTS_DIR
-from relaqs.api import load_pickled_env_data
+from relaqs.api.utils import load_pickled_env_data

data_path = RESULTS_DIR + '2024-01-24_11-37-15_X/env_data.pkl'
4 changes: 4 additions & 0 deletions requirements.txt
@@ -0,0 +1,4 @@
ray[rllib]==2.4.0
ray[tune]
qutip
torch
1 change: 1 addition & 0 deletions scripts/best_fidelities.txt
@@ -0,0 +1 @@
(0.01754923417485625, 0.9963428890697784)
4 changes: 1 addition & 3 deletions scripts/deterministic_agent.py
@@ -52,6 +52,4 @@
    print(f"Episode done: Total reward = {episode_reward}")
    obs, info = env.reset()
    num_episodes += 1
-    episode_reward = 0.0
-
-
+    episode_reward = 0.0
3 changes: 3 additions & 0 deletions setup.cfg
@@ -17,6 +17,9 @@ install_requires =

[options.packages.find]
where = src
+
+[options.package_data]
+* = *.json

[bdist_wheel]
universal = 1
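The new `[options.package_data]` entry (`* = *.json`) is what makes the installed wheel carry the JSON noise files, matching the commit "adding line so that installation adds json files". A sketch of how such a bundled file can be read at runtime; the package and file names here are placeholders, not necessarily relaqs' real layout:

```python
# Sketch: reading a JSON file shipped via [options.package_data].
# "package" and "filename" below are supplied by the caller; the names
# used in any example are hypothetical.
import json
import importlib.resources as resources

def load_packaged_json(package: str, filename: str) -> dict:
    """Load a JSON resource bundled inside an installed package."""
    text = resources.files(package).joinpath(filename).read_text()
    return json.loads(text)
```

Using `importlib.resources` rather than paths relative to `__file__` keeps the lookup correct whether the package is installed from a wheel or in editable mode.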
2 changes: 1 addition & 1 deletion src/relaqs/api/__init__.py
@@ -1,4 +1,4 @@
from .training import TrainRLLib
from .callbacks import GateSynthesisCallbacks
from .gates import Gate
from .utils import gate_fidelity, dm_fidelity, load_pickled_env_data

120 changes: 106 additions & 14 deletions src/relaqs/api/utils.py
@@ -1,16 +1,20 @@
import ray
import numpy as np
from numpy.linalg import eigvalsh
import pandas as pd
from scipy.linalg import sqrtm
from ray.rllib.algorithms.algorithm import Algorithm
from ray.rllib.algorithms.ddpg import DDPGConfig
from relaqs import RESULTS_DIR
import ast
from ray.tune.registry import register_env
from relaqs.environments.single_qubit_env import SingleQubitEnv
from relaqs.environments.noisy_single_qubit_env import NoisySingleQubitEnv
from relaqs.quantum_noise_data.get_data import (get_month_of_all_qubit_data, get_single_qubit_detuning)
from relaqs.api.callbacks import GateSynthesisCallbacks
from relaqs import QUANTUM_NOISE_DATA_DIR
from qutip.operators import *


def load_pickled_env_data(data_path):
    df = pd.read_pickle(data_path)
    return df
@@ -76,63 +80,151 @@ def load_model(path):
    return loaded_model

def get_best_episode_information(filename):
-    df = pd.read_csv(filename, names=['Fidelity', 'Reward', 'Actions', 'Flattened U', 'Episode Id'], header=0)
-    fidelity = df.iloc[:, 0]
+    data = load_pickled_env_data(filename)
+    fidelity = data["Fidelity"]
    max_fidelity_idx = fidelity.argmax()
-    fidelity = df.iloc[max_fidelity_idx, 0]
-    episode = df.iloc[max_fidelity_idx, 4]
-    best_episodes = df[df["Episode Id"] == episode]
+    fidelity = data.iloc[max_fidelity_idx, 0]
+    episode = data.iloc[max_fidelity_idx, 4]
+    best_episodes = data[data["Episode Id"] == episode]
    return best_episodes
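The rewritten helper locates the row with the maximum fidelity, then returns every row of the episode containing it. That selection logic can be checked on a toy DataFrame (the column names come from the diff; the data values are invented):

```python
# Self-contained check of the best-episode selection logic from
# get_best_episode_information, with a toy DataFrame standing in for the
# pickled environment data.
import pandas as pd

def best_episode_rows(data):
    """Return all rows of the episode containing the max-fidelity step."""
    max_fidelity_idx = data["Fidelity"].argmax()      # positional index
    episode = data.iloc[max_fidelity_idx, 4]          # column 4 = "Episode Id"
    return data[data["Episode Id"] == episode]

data = pd.DataFrame({
    "Fidelity":    [0.90, 0.99, 0.95],
    "Reward":      [1.0, 2.0, 1.5],
    "Actions":     ["a", "b", "c"],
    "Flattened U": ["u1", "u2", "u3"],
    "Episode Id":  [0, 1, 1],
})
best = best_episode_rows(data)   # episode 1 holds the 0.99-fidelity step
```

Note the hard-coded column position `4` only works while "Episode Id" stays the fifth column; indexing by name would be more robust.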

def noisy_env_creator(config):
    return NoisySingleQubitEnv(config)

def noiseless_env_creator(config):
    return SingleQubitEnv(config)

def run(env_class, gate, n_training_iterations=1, noise_file=""):
    """Args
        env_class (gym.Env subclass): environment to train in
        gate (Gate): target gate
        n_training_iterations (int)
        noise_file (str)
    Returns
        alg (rllib.algorithms.Algorithm)
        list_of_results (list)
    """
    ray.init()

    env_config = env_class.get_default_env_config()
    env_config["U_target"] = gate.get_matrix()

    # ---------------------> Get quantum noise data <-------------------------
    t1_list, t2_list, detuning_list = sample_noise_parameters(noise_file)

    env_config["relaxation_rates_list"] = [np.reciprocal(t1_list).tolist(), np.reciprocal(t2_list).tolist()]  # using real T1 data
    env_config["delta"] = detuning_list
    env_config["relaxation_ops"] = [sigmam(), sigmaz()]
    env_config["observation_space_size"] = 2*16 + 1 + 2 + 1  # 2*16 = (complex number)*(density matrix elements = 4)^2, + 1 for fidelity + 2 for relaxation rates + 1 for detuning
    env_config["verbose"] = True

    # ---------------------> Configure algorithm and environment <-------------------------
    alg_config = DDPGConfig()
    alg_config.framework("torch")
    alg_config.environment(env_class, env_config=env_config)
    alg_config.rollouts(batch_mode="complete_episodes")
    alg_config.callbacks(GateSynthesisCallbacks)
    alg_config.train_batch_size = env_class.get_default_env_config()["steps_per_Haar"]
    alg_config.actor_lr = 4e-5
    alg_config.critic_lr = 5e-4

    alg_config.actor_hidden_activation = "relu"
    alg_config.critic_hidden_activation = "relu"
    alg_config.num_steps_sampled_before_learning_starts = 1000
    alg_config.actor_hiddens = [30, 30, 30, 30]
    alg_config.exploration_config["scale_timesteps"] = 10000

    alg = alg_config.build()
    list_of_results = []
    for _ in range(n_training_iterations):
        result = alg.train()
        list_of_results.append(result['hist_stats'])
    return alg, list_of_results
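The `2*16 + 1 + 2 + 1` expression recurs in both noisy configs, so it is worth auditing once spelled out; the interpretation of each term is taken from the diff's own inline comment:

```python
# Breakdown of the observation_space_size arithmetic from the env config:
# a 4x4 matrix representation has 16 complex entries, each split into
# real and imaginary parts, plus a few scalar observables.
matrix_reals = 2 * 16      # 16 complex entries -> 32 real numbers
fidelity = 1               # current gate fidelity
relaxation_rates = 2       # sampled 1/T1 and 1/T2
detuning = 1               # sampled qubit detuning
observation_space_size = matrix_reals + fidelity + relaxation_rates + detuning
```

Deriving the constant this way, rather than repeating the literal in each function, would also keep the two configs from drifting apart.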

def run_noisless_one_qubit_experiment(gate, n_training_iterations=1):
    """Args
        gate (Gate): target gate
        n_training_iterations (int)
    Returns
        alg (rllib.algorithms.Algorithm)
        list_of_results (list)
    """
    register_env("my_env", noiseless_env_creator)
    env_config = SingleQubitEnv.get_default_env_config()
    env_config["U_target"] = gate.get_matrix()
    # env_config["observation_space_size"] = 2*16 + 1  # 2*16 = (complex number)*(density matrix elements = 4)^2, + 1 for fidelity
    env_config["verbose"] = True

    # ---------------------> Configure algorithm and environment <-------------------------
    alg_config = DDPGConfig()
    alg_config.framework("torch")
    alg_config.environment("my_env", env_config=env_config)
    alg_config.rollouts(batch_mode="complete_episodes")
    alg_config.callbacks(GateSynthesisCallbacks)
    alg_config.train_batch_size = SingleQubitEnv.get_default_env_config()["steps_per_Haar"]

    ### working 1-3 sets
    alg_config.actor_lr = 4e-5
    alg_config.critic_lr = 5e-4

    alg_config.actor_hidden_activation = "relu"
    alg_config.critic_hidden_activation = "relu"
    alg_config.num_steps_sampled_before_learning_starts = 1000
-    alg_config.actor_hiddens = [30, 30, 30]
+    alg_config.actor_hiddens = [30, 30, 30, 30]
    alg_config.exploration_config["scale_timesteps"] = 10000

    alg = alg_config.build()
    list_of_results = []
    for _ in range(n_training_iterations):
        result = alg.train()
        list_of_results.append(result['hist_stats'])
    return alg, list_of_results

-    ray.shutdown()
-
-    return alg
def run_noisy_one_qubit_experiment(gate, n_training_iterations=1, noise_file=" "):
    """Args
[Review thread on run_noisy_one_qubit_experiment]

Collaborator: Since we already have a run function in utils.py that takes an environment variable, do you think it would be better to move run_noisy_one_qubit_experiment and run_noiseless_one_qubit_experiment to the scripts folder?

Owner Author: The reason I defined these different functions was that building the configuration for the different environments is different and the observation dimensions are different.

Collaborator: Two things:

  1. The environments are written such that you don't need to change the configuration if you don't want to; one function can handle both environments (e.g., https://github.com/akataba/rl-repo/blob/main/scripts/run_and_save_v3.py).
  2. If there is something about these particular configurations that makes you want to keep these functions, I think we should move them from the api folder to the scripts folder.

        gate (Gate): target gate
        n_training_iterations (int)
        noise_file (str)
    Returns
        alg (rllib.algorithms.Algorithm)
        list_of_results (list)
    """
    register_env("my_env", noisy_env_creator)
    env_config = NoisySingleQubitEnv.get_default_env_config()
    env_config["U_target"] = gate.get_matrix()

    # ---------------------> Get quantum noise data <-------------------------
    t1_list, t2_list, detuning_list = sample_noise_parameters(noise_file)

    env_config["relaxation_rates_list"] = [np.reciprocal(t1_list).tolist(), np.reciprocal(t2_list).tolist()]  # using real T1 data
    env_config["delta"] = detuning_list
    env_config["relaxation_ops"] = [sigmam(), sigmaz()]
    env_config["observation_space_size"] = 2*16 + 1 + 2 + 1  # 2*16 = (complex number)*(density matrix elements = 4)^2, + 1 for fidelity + 2 for relaxation rates + 1 for detuning

    # ---------------------> Configure algorithm and environment <-------------------------
    alg_config = DDPGConfig()
    alg_config.framework("torch")
    alg_config.environment(NoisySingleQubitEnv, env_config=env_config)
    alg_config.rollouts(batch_mode="complete_episodes")
    alg_config.callbacks(GateSynthesisCallbacks)
    alg_config.train_batch_size = NoisySingleQubitEnv.get_default_env_config()["steps_per_Haar"]

    alg_config.actor_lr = 4e-5
    alg_config.critic_lr = 5e-4

    alg_config.actor_hidden_activation = "relu"
    alg_config.critic_hidden_activation = "relu"
    alg_config.num_steps_sampled_before_learning_starts = 1000
    alg_config.actor_hiddens = [30, 30, 30, 30]
    alg_config.exploration_config["scale_timesteps"] = 10000

    alg = alg_config.build()
    list_of_results = []
    for _ in range(n_training_iterations):
        result = alg.train()
        list_of_results.append(result['hist_stats'])
    return alg, list_of_results
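The review thread argues that one entry point can serve both environments, with per-environment differences coming from each class's own defaults. A minimal sketch of that dispatch, using stand-in classes in place of the real SingleQubitEnv / NoisySingleQubitEnv:

```python
# Sketch of the reviewer's suggestion: one config builder parameterized by
# the environment class. The two classes below are simplified stand-ins,
# not the PR's real environments.

class SingleQubitEnv:
    @staticmethod
    def get_default_env_config():
        return {"observation_space_size": 2 * 16 + 1}

class NoisySingleQubitEnv:
    @staticmethod
    def get_default_env_config():
        return {"observation_space_size": 2 * 16 + 1 + 2 + 1}

def build_env_config(env_class, u_target):
    """Build a training config for any environment class: shared keys are
    set here; size differences come from the class's own defaults."""
    config = env_class.get_default_env_config()
    config["U_target"] = u_target
    return config
```

With this shape, `run_noisy_one_qubit_experiment` and `run_noisless_one_qubit_experiment` collapse into calls like `build_env_config(NoisySingleQubitEnv, gate.get_matrix())`, which is essentially the collaborator's point.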


def return_env_from_alg(alg):
    env = alg.workers.local_worker().env