
multiple updates #5

Merged 58 commits on Nov 12, 2024
c63ee6d
properly writing auxiliary signals
vedina Aug 21, 2024
a187e3b
optional names for protocolapplication and effect record
vedina Aug 24, 2024
68936b5
example ploomber pipeline
vedina Aug 24, 2024
feee422
instructions for running the workflow
vedina Aug 24, 2024
5511e5e
make use of effect.nx_name
vedina Aug 25, 2024
71f0dd1
refactored valuearrays
vedina Aug 25, 2024
0f5f5ad
value array tests
vedina Aug 25, 2024
e95fe6b
proper de/serialization of the updated ValueArray classes
vedina Aug 25, 2024
3614a9d
make use of BaseValueArray
vedina Aug 25, 2024
56ad8c1
metavalue array for enabling attributes of auxiliary signals
vedina Aug 25, 2024
98374a0
conditions in ValueArray
vedina Aug 26, 2024
5979055
type check
vedina Aug 26, 2024
03010f0
multidimensional argument
vedina Aug 26, 2024
6a5fb3d
endpoint type
vedina Aug 26, 2024
43ab12f
use endpoint as signal name as before
vedina Aug 26, 2024
bdddb1e
pipeline updates
vedina Aug 26, 2024
985510c
metadata for the multidimensional case
vedina Aug 26, 2024
9a363fc
fixed metadata handling
vedina Aug 26, 2024
109f347
aligning with NxRaman spec
vedina Aug 26, 2024
59f4a48
trying to accommodate both ambit and nxraman definitions
vedina Aug 26, 2024
ace4830
add generation of solr index
vedina Aug 27, 2024
a7f6de4
small updates
vedina Aug 28, 2024
96026fc
convert into separate files following folder structure
vedina Aug 30, 2024
e044f33
NeXus parser
vedina Aug 31, 2024
6dc31f7
remove redundant info
vedina Aug 31, 2024
abc9cba
prefix for solr writer
vedina Aug 31, 2024
374d07a
pipeline update
vedina Aug 31, 2024
911b53e
fixed misplaced indent
vedina Sep 1, 2024
35e4933
fix
vedina Sep 1, 2024
623ff0a
should be flat array
vedina Sep 1, 2024
2302786
method for effectrecord parse
vedina Sep 1, 2024
61345e6
program_name moved to the nxroot
vedina Sep 1, 2024
72db5f0
effectresult index
vedina Sep 1, 2024
787b535
hack for embeddings
vedina Sep 1, 2024
da27ec0
object oriented solr writer
vedina Sep 1, 2024
e9be5dc
write method
vedina Sep 1, 2024
377c36e
enable context manager
vedina Sep 1, 2024
7ce0745
ndarray json serialization
vedina Sep 1, 2024
bacb463
context manager
vedina Sep 2, 2024
588490e
linter happiness 1
vedina Sep 3, 2024
48943c7
linter happiness 2
vedina Sep 3, 2024
0a8ad87
linter happiness 3
vedina Sep 3, 2024
c4b223d
fixed parameter lookup
vedina Oct 20, 2024
751119f
typo fixed
vedina Oct 20, 2024
da4d0eb
EffectArray representation
vedina Oct 20, 2024
c807106
fixed nexus unsupported array of U1 string type
vedina Nov 11, 2024
8d790e2
flake8 happiness
vedina Nov 11, 2024
ae8313a
more flake8 happiness
vedina Nov 11, 2024
87ce883
remove nan values from conditions
vedina Nov 11, 2024
688fb59
flake8
vedina Nov 11, 2024
2c6a13a
nan cols
vedina Nov 12, 2024
f79f8fa
parameters
vedina Nov 12, 2024
9705536
ops: pre-commit: add Jupyter notebook cleanup
kerberizer Nov 12, 2024
b2c7e14
ops: pre-commit: update versions
kerberizer Nov 12, 2024
fa2df7a
ops: update poetry version
kerberizer Nov 12, 2024
b11989e
examples: remove ploomber
kerberizer Nov 12, 2024
8bd5a4e
fix: 'dict' object has no attribute 'get_all_keys'
kerberizer Nov 12, 2024
c255bca
fix pre-commit tools problems
kerberizer Nov 12, 2024
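Several commits above concern making numpy arrays JSON-serializable for the Solr writer (`ndarray json serialization`). A minimal sketch of the usual approach is below: a `json.JSONEncoder` subclass that falls back to `.tolist()` for array-like objects. `NDArrayEncoder` and `FakeArray` are illustrative names, not the PR's actual code, and the stand-in class keeps the sketch stdlib-only rather than depending on numpy.

```python
import json


class NDArrayEncoder(json.JSONEncoder):
    """Encoder that converts array-like objects (e.g. numpy ndarrays) via .tolist()."""

    def default(self, obj):
        if hasattr(obj, "tolist"):
            return obj.tolist()  # ndarray -> nested Python lists, which json can handle
        return super().default(obj)


class FakeArray:
    """Stand-in for numpy.ndarray so the sketch has no third-party dependency."""

    def __init__(self, data):
        self._data = data

    def tolist(self):
        return list(self._data)


payload = {"signal": FakeArray([1.0, 2.5, 3.0])}
print(json.dumps(payload, cls=NDArrayEncoder))
```

A real numpy array would take the same `.tolist()` path, so swapping `FakeArray` for `numpy.ndarray` requires no change to the encoder.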
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -13,7 +13,7 @@ on:  # yamllint disable-line rule:truthy
workflow_dispatch:

env:
-  POETRY_VERSION: 1.8.3
+  POETRY_VERSION: 1.8.4
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}

13 changes: 9 additions & 4 deletions .pre-commit-config.yaml
@@ -1,11 +1,11 @@
---
repos:
- repo: https://github.com/python-poetry/poetry
-  rev: 1.8.3
+  rev: 1.8.4
hooks:
- id: poetry-check
- repo: https://github.com/pre-commit/pre-commit-hooks
-  rev: v4.6.0
+  rev: v5.0.0
hooks:
- id: check-docstring-first
- id: check-json
@@ -16,13 +16,18 @@ repos:
- id: name-tests-test
- id: pretty-format-json
args: [--autofix, --no-ensure-ascii]
+  exclude: \.ipynb$
- id: trailing-whitespace
+  - repo: https://github.com/srstevenson/nb-clean
+    rev: 4.0.1
+    hooks:
+      - id: nb-clean
- repo: https://github.com/facebook/usort
rev: v1.0.8
hooks:
- id: usort
- repo: https://github.com/psf/black-pre-commit-mirror
-  rev: 24.8.0
+  rev: 24.10.0
hooks:
- id: black
args: [--preview]
@@ -33,7 +38,7 @@ repos:
args: [--exit-zero]
verbose: true
additional_dependencies:
-  - flake8-bugbear == 24.4.26
+  - flake8-bugbear == 24.10.31
- repo: https://github.com/adrienverge/yamllint
rev: v1.35.1
hooks:
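The nb-clean hook added above strips transient notebook state (outputs, execution counts) before notebooks reach the repository, which keeps diffs small and reproducible. For illustration only, the effect can be approximated in a few lines of stdlib Python; `clean_notebook` is a hypothetical sketch, not nb-clean's actual implementation.

```python
import json


def clean_notebook(nb: dict) -> dict:
    """Strip outputs and execution counts from code cells, as a notebook cleaner would."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb


# A tiny notebook document with one executed code cell.
nb = {
    "cells": [
        {
            "cell_type": "code",
            "execution_count": 7,
            "outputs": [{"output_type": "stream", "text": "noise"}],
            "source": ["print('hi')\n"],
        }
    ],
    "nbformat": 4,
    "nbformat_minor": 2,
}

# Round-trip through JSON to get a deep copy, then clean it.
cleaned = clean_notebook(json.loads(json.dumps(nb)))
```

Running this as a pre-commit hook means the cleaned form is what gets committed, so re-running a notebook no longer produces spurious diffs.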
226 changes: 131 additions & 95 deletions examples/demo.ipynb
@@ -1,98 +1,134 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from pyambit.datamodel import Substances, Study \n",
"import nexusformat.nexus.tree as nx\n",
"import os.path\n",
"import tempfile\n",
"# to_nexus is not added without this import\n",
"from pyambit import nexus_writer\n",
"import json"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def query(url = \"https://apps.ideaconsult.net/gracious/substance/\" ,params = {\"max\" : 1}):\n",
" substances = None\n",
" headers = {'Accept': 'application/json'}\n",
" result = requests.get(url,params=params,headers=headers)\n",
" if result.status_code==200:\n",
" response = result.json()\n",
" substances = Substances.model_construct(**response)\n",
" for substance in substances.substance:\n",
" url_study = \"{}/study\".format(substance.URI)\n",
" study = requests.get(url_study,headers=headers)\n",
" if study.status_code==200:\n",
" response_study = study.json()\n",
" substance.study = Study.model_construct(**response_study).study\n",
"\n",
" return substances\n",
"\n",
"def write_studies_nexus(substances):\n",
" for substance in substances.substance:\n",
" for study in substance.study:\n",
" file = os.path.join(tempfile.gettempdir(), \"study_{}.nxs\".format(study.uuid))\n",
" nxroot = nx.NXroot()\n",
" try:\n",
" study.to_nexus(nxroot)\n",
" nxroot.save(file, mode=\"w\")\n",
" except Exception as err:\n",
" #print(\"error\",file,str(err))\n",
" print(file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"try:\n",
" substances = query(params = {\"max\" : 10}) \n",
" _json = substances.model_dump(exclude_none=True)\n",
" new_substances = Substances.model_construct(**_json)\n",
" #test roundtrip\n",
" assert substances == new_substances\n",
"\n",
" file = os.path.join(tempfile.gettempdir(), \"remote.json\")\n",
" print(file)\n",
" with open(file, 'w', encoding='utf-8') as file:\n",
" file.write(substances.model_dump_json(exclude_none=True))\n",
" write_studies_nexus(substances)\n",
"except Exception as x:\n",
" print(x)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.12.5"
}
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from pyambit.datamodel import Substances, Study \n",
"import nexusformat.nexus.tree as nx\n",
"import os.path\n",
"import tempfile\n",
"# to_nexus is not added without this import\n",
"from pyambit import nexus_writer\n",
"import json\n",
"from IPython.display import display, HTML"
]
},
"nbformat": 4,
"nbformat_minor": 2
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def query(url = \"https://apps.ideaconsult.net/gracious/substance/\" ,params = {\"max\" : 1}):\n",
" substances = None\n",
" headers = {'Accept': 'application/json'}\n",
" result = requests.get(url,params=params,headers=headers)\n",
" if result.status_code==200:\n",
" response = result.json()\n",
" substances = Substances.model_construct(**response)\n",
" for substance in substances.substance:\n",
" url_study = \"{}/study?max=10000\".format(substance.URI)\n",
" study = requests.get(url_study,headers=headers)\n",
" if study.status_code==200:\n",
" response_study = study.json()\n",
" substance.study = Study.model_construct(**response_study).study\n",
" #break\n",
"\n",
" return substances\n",
"\n",
"def write_studies_nexus(substances, single_file=True):\n",
" if single_file:\n",
" nxroot = nx.NXroot()\n",
" substances.to_nexus(nxroot)\n",
" file = os.path.join(tempfile.gettempdir(), \"remote.nxs\")\n",
" print(file)\n",
" nxroot.save(file, mode=\"w\")\n",
" else: \n",
" for substance in substances.substance:\n",
" for study in substance.study:\n",
" file = os.path.join(tempfile.gettempdir(), \"study_{}.nxs\".format(study.uuid))\n",
" print(file)\n",
" nxroot = nx.NXroot()\n",
" try:\n",
" study.to_nexus(nxroot)\n",
" nxroot.save(file, mode=\"w\")\n",
" except Exception as err:\n",
" #print(\"error\",file,str(err))\n",
" print(file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import traceback"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"url = \"https://apps.ideaconsult.net/gracious/substance/\"\n",
"#url = \"http://localhost:9090/ambit2/substance/\"\n",
"#url = \"http://localhost:9090/ambit2/substance/POLY-e02442cc-8f7c-3a71-82cf-7df5888a4bfa\"\n",
"#url = \"http://localhost:9090/ambit2/substance/POLY-25d13fa6-c18b-35c8-b0f6-7325f5f3e505\"\n",
"try:\n",
" substances = query(url=url,params = {\"max\" : 1}) \n",
" _json = substances.model_dump(exclude_none=True)\n",
" new_substances = Substances.model_construct(**_json)\n",
" #test roundtrip\n",
" assert substances == new_substances\n",
"\n",
" file = os.path.join(tempfile.gettempdir(), \"remote.json\")\n",
" print(file)\n",
" with open(file, 'w', encoding='utf-8') as file:\n",
" file.write(substances.model_dump_json(exclude_none=True))\n",
" \n",
" for s in substances.substance:\n",
" for pa in s.study:\n",
" effectarrays_only, df = pa.convert_effectrecords2array()\n",
" display(df.dropna(axis=1,how=\"all\"))\n",
" print(effectarrays_only)\n",
" #break\n",
" #write_studies_nexus(substances, single_file=False)\n",
"except Exception as x:\n",
" traceback.print_exc()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
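The notebook's roundtrip check (`model_dump`, then `model_construct`, then an equality assertion) is a useful pattern for validating serialization. A stdlib-only sketch of the same idea follows, with hypothetical `Substance` and `Effect` dataclasses standing in for pyambit's pydantic models; the names and fields are illustrative, not pyambit's actual schema.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Effect:
    """Hypothetical stand-in for a pyambit effect record."""

    endpoint: str
    value: float


@dataclass
class Substance:
    """Hypothetical stand-in for a pyambit substance with attached studies."""

    name: str
    study: list


original = Substance("NM-101", [Effect("TOTAL CELL COUNT", 0.42)])

# Serialize to JSON text, then rebuild the object graph from the parsed data.
payload = json.dumps(asdict(original))
data = json.loads(payload)
restored = Substance(name=data["name"], study=[Effect(**e) for e in data["study"]])

# The roundtrip must be lossless: dataclass equality compares all fields.
assert restored == original
```

The assertion plays the same role as `assert substances == new_substances` in the notebook: any field dropped or mangled by serialization fails loudly instead of silently.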
3 changes: 0 additions & 3 deletions examples/test.py

This file was deleted.
