Use hdf5 or nexus file in XRD #113

Open · wants to merge 48 commits into base: main
Conversation

@ka-sarthak (Collaborator) commented Aug 28, 2024

When array data from XRD measurements is added to the archives, the loading time increases as the archives become heavier (especially in the case of RSM, which stores multiple 2D arrays). One solution is to offload the heavy data into an auxiliary file and save only references to the auxiliary file in the archives.

To implement this, we can use .h5 files to store the data and make references to the offloaded datasets using HDF5Reference. Alternatively, we can generate a NeXus .nxs file instead of a plain .h5 file. A NeXus file uses HDF5 as its base file type and validates the data against the data models built by the NeXus community.

The current plots are generated using Plotly, and the .json files containing the plot data are also stored in the archive. These also need to be offloaded to make the archives lighter. Using NOMAD's H5WebAnnotations, we can leverage H5Web to generate the plots directly from the .h5 or .nxs files; a minimal example of such annotations follows.
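As an illustration, H5Web discovers what to plot via the NeXus NXdata convention; a minimal sketch with h5py (the file, group, and dataset names here are made up for illustration):

```python
import h5py
import numpy as np

with h5py.File('xrd_1d_scan.h5', 'w') as f:
    plot = f.create_group('plot')
    plot.create_dataset('intensity', data=np.random.rand(100))
    plot.create_dataset('two_theta', data=np.linspace(10.0, 90.0, 100))
    # attributes following the NeXus NXdata convention, which H5Web
    # uses to decide what to plot and how to label the axes
    plot.attrs['NX_class'] = 'NXdata'
    plot.attrs['signal'] = 'intensity'   # dataset plotted on the y-axis
    plot.attrs['axes'] = 'two_theta'     # dataset used as the x-axis
    plot['two_theta'].attrs['units'] = 'deg'
    f.attrs['default'] = 'plot'          # default entry point for the viewer
```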

To this end, the following steps are needed:

  • Use HDF5Reference as the type of the Quantity for array data: intensity, two_theta, q_parallel, q_perpendicular, q_norm, omega, phi, chi (see the sketch after this list).
  • Implement a utility class HDF5Handler (or utility functions) to create the auxiliary files from the schema's normalizers.
  • Generate a .h5 file to store the data and save references to its datasets in the HDF5Reference quantities.
  • Generate a .nxs file based on the archive. This happens in the HDF5Handler and uses pynxtools.
  • Add annotations to the auxiliary files so that the H5Web viewer can generate plots.
  • Add backward compatibility.
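A minimal sketch of the first and third steps, assuming NOMAD's HDF5Reference type (the section class, the helper function, and the reference-string format are illustrative assumptions, not this PR's exact code):

```python
import h5py
from nomad.datamodel.data import ArchiveSection
from nomad.datamodel.hdf5 import HDF5Reference
from nomad.metainfo import Quantity

class XRDResult(ArchiveSection):
    # reference strings into the auxiliary file replace in-archive arrays
    intensity = Quantity(type=HDF5Reference)
    two_theta = Quantity(type=HDF5Reference)

def offload_dataset(upload_id: str, filename: str, path: str, data) -> str:
    """Write `data` into the auxiliary .h5 file and return a reference string."""
    with h5py.File(filename, 'a') as f:
        if path in f:
            del f[path]  # allow overwriting on renormalization
        f.create_dataset(path, data=data)
    # the reference-string format below is an assumption for illustration
    return f'/uploads/{upload_id}/raw/{filename}#{path}'
```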

Summary by Sourcery

Implement support for storing XRD array data in external HDF5 or Nexus files, and generate plots using H5WebAnnotations.

New Features:

  • Visualize XRD data using H5Web plots.

Tests:

  • Updated tests to accommodate changes in data handling.

@ka-sarthak ka-sarthak self-assigned this Aug 28, 2024
@ka-sarthak ka-sarthak force-pushed the write-nexus-section branch 2 times, most recently from f2bef40 to d583974 on September 3, 2024 08:55
@ka-sarthak ka-sarthak changed the title Use nexus section in XRD Use hdf5/nexus file in XRD Dec 19, 2024
@ka-sarthak ka-sarthak changed the title Use hdf5/nexus file in XRD Use hdf5 or nexus file in XRD Dec 19, 2024
@ka-sarthak ka-sarthak marked this pull request as draft December 19, 2024 15:16
@ka-sarthak (Collaborator, Author) commented Dec 19, 2024

@hampusnasstrom @aalbino2 I merged the implementation of the HDF5Handler and the support for .h5 files as auxiliary files.

The Plotly plots are removed in favor of the plots from H5Web. @budschi's current viewpoint is that the Plotly plots have better visualizations, and it might be a good idea to preserve them for 1D scans. This can be a point of discussion when we review this PR after the vacations.

@RubelMozumder will soon merge his implementation from #147, which will allow using a .nxs file as an auxiliary file.

@ka-sarthak (Collaborator, Author) commented:
@RubelMozumder I have combined the common functionality of walk_through_object and _set_hdf5_ref into a single utility function, resolve_path.
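For illustration only, a resolver of this kind typically walks attribute and index accesses along a dotted path; this sketch is hypothetical and not the PR's code:

```python
def resolve_path(obj, path: str):
    """Walk `obj` along a dotted path such as 'results[0].intensity'."""
    parts = path.replace('[', '.').replace(']', '').split('.')
    for part in parts:
        if part.isdigit():
            obj = obj[int(part)]      # step into a repeated subsection
        else:
            obj = getattr(obj, part)  # step into an attribute
    return obj

# usage: resolve_path(archive.data, 'results[0].intensity')
```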

@ka-sarthak (Collaborator, Author) commented Dec 20, 2024

TODO

  • Consolidate the mapping in nx.py that is ingested by the handler as an argument.
  • Try to overwrite the .nxs file without deleting the mainfile. As per @TLCFEM, we should avoid deleting the mainfile.

@TLCFEM commented Dec 20, 2024

Have you checked what the root cause of the issue is?
Is the file still occupied when it is read by something else?

@ka-sarthak (Collaborator, Author) commented:

@TLCFEM I haven't been able to investigate it yet, but this will be among the first things I do in the new year, and I will reach out to you with my findings. Happy Holidays!

@TLCFEM commented Dec 20, 2024

If that is not the case, then none of this discussion applies anymore.
So check the access pattern first.
HDF5 has quite a few caveats and requires some knowledge of how things work internally.

@RubelMozumder (Contributor) commented:

> If that is not the case, then none of this discussion applies anymore. So check the access pattern first. […]

Let me explain the situation, which may lay bare the scenario.
We have an ELN that takes an input file. While processing the ELN object (archive.json), it generates an output file of type .h5 or .nxs. If it is a .nxs file, NOMAD indexes it as an entry.
So, on the first ELN processing attempt, there is no error and everything looks good.

Issue: On the second attempt, reprocessing the entire upload (archive.json, .nxs, and so on), NOMAD starts processing both archive.json and the .nxs file (a NOMAD entry). Reprocessing archive.json also recreates the .nxs file, and that is where the issue arises. As far as I understand, two worker processes end up working on the same .nxs file object concurrently.

Temporary solution:
In each processing of archive.json, we delete the .nxs file (the NOMAD entry) if it exists and regenerate it. This might not be the right approach to handle this case.

@aalbino2 (Contributor) commented Jan 7, 2025

> [quoting @RubelMozumder above] Temporary solution: In each processing of archive.json, we delete the .nxs file (the NOMAD entry) if it exists and regenerate it […]

@RubelMozumder what prevents you from checking for the existence of the .nxs file and creating a new one only if it doesn't exist yet?

@ka-sarthak (Collaborator, Author) commented:

After discussing with @TLCFEM, we found the following:

  • There is a resource contention issue, where multiple processes try to access the generated nexus file in different modes (read and write). Generating the nexus file is not the problem; triggering a reprocess with m_context.process_updated_raw_file(filename, allow_modify=True) from the ELN normalizer is what can lead to contention, because a new worker is assigned to this reprocess in parallel with the worker handling the normalization. The ELN normalization worker might have the nexus file open in write mode while the reprocess worker tries to open it in read mode to process the nexus entry.
  • The behavior is unpredictable: sometimes the entry normalization completes without a resource contention error, and other times it hits one.

Some directions for resolving this:

  • Use sleep timers in the nexus processing triggered by the nexus parser. This allows the ELN process to complete (and the file to be closed) before the nexus entry is processed. However, this isn't a real solution, as no single timer value fits all cases.
  • Delete the nexus file, if it exists, before triggering the nexus file writing from the ELN. This ensures that no nexus entry is being processed while the nexus file is written.
  • Do not trigger the reprocess m_context.process_updated_raw_file(filename, allow_modify=True) from the normalizer at all, which avoids the resource contention entirely; instead, the user triggers a reprocess of the upload from the GUI. Drawback: user inconvenience.
  • Enforce that the reprocess m_context.process_updated_raw_file(filename, allow_modify=True) uses the current process rather than creating a new worker. This can be done by using entry.process_entry_local() instead of entry.process_entry() (look here); see the sketch below.
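A rough sketch of that last option, using only the method names mentioned above (the entry lookup is a hypothetical placeholder, not NOMAD's actual API):

```python
# illustrative only: run the reprocess of the regenerated .nxs file
# in the current worker instead of dispatching a new one
def reprocess_in_current_worker(upload, nexus_filename: str):
    entry = upload.get_entry(nexus_filename)  # hypothetical lookup helper
    if entry is not None:
        # process_entry_local() keeps everything in this process, so the
        # .nxs file is never opened for reading by a second worker while
        # the ELN normalization still holds it open for writing
        entry.process_entry_local()
```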

@ka-sarthak (Collaborator, Author) commented:

Currently, the handler exposes the write_file method, which can be called any number of times during normalization. We should limit this so that the resource contention problems become more tractable. One file write per normalization also ensures that the nexus entry contains the latest changes to the nexus file.
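One way to enforce a single write per normalization, as a hedged sketch (this is not the PR's actual handler):

```python
import h5py

class HDF5Handler:
    """Illustrative handler that buffers datasets and flushes them once."""

    def __init__(self, filename: str):
        self.filename = filename
        self._pending = {}  # dataset path -> array

    def add_dataset(self, path, data):
        # no file I/O here; normalizers only stage their data
        self._pending[path] = data

    def write_file(self):
        # called exactly once at the end of normalization, so only one
        # process touches the file and the entry sees the latest state
        with h5py.File(self.filename, 'w') as f:
            for path, data in self._pending.items():
                f.create_dataset(path, data=data)
        self._pending.clear()
```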

@RubelMozumder (Contributor) commented:

> [quoting @ka-sarthak above] After discussing with @TLCFEM, we found the following […] Some directions for resolving this […]

This may resolve the read/write race condition on the same file, but there is another issue.
Suppose the first raw-file processing succeeds and the nexus writer creates a nexus entry. On the second attempt, the nexus process fails for some reason, but the entry from the first process is still there. We then need to delete the nexus file and its entry as well, and write an hdf5 file instead.

I think this needs a fix from Area D: deleting a (corrupted) entry and its related files from within the single process thread running the normalizer.

The PR #157 can help; you can see there that the test fails completely.

@RubelMozumder (Contributor) commented:

@lauri-codes, is there any functionality that deletes an entry, its associated mainfile, and the residue of that deleted entry (if there is any, e.g. ES data)? This deletion must happen inside the ELN normalization process.

Just a quick overview of the implementation:

```python
try:
    # create a NeXus file, which ends up as a NeXus entry
    write_nexus_file()
except Exception:
    # delete the NeXus mainfile, its entry, and residual metadata
    delete_entry_file()
    # create an hdf5 file instead (an .h5 file is not a NOMAD entry)
    write_hdf5_file()
```

Then we reference the concepts in the nexus or hdf5 file from the entry quantities.

Currently, we are using os.remove to delete the mainfile (which we believe is not the correct way to do it), and even then the mainfile deletion does not delete the entry and its metadata.

You may want to take a quick look at the code in the function write_file here:

I have created a small function to delete the mainfile, the entry, and the ES data (here):

```python
def delete_entry_file(archive, mainfile, delete_entry=False):
```

(This raises an error from a different process, and I cannot trace back from where in my code the error comes. It also fails the ELN entry normalization process.)

If you could, please suggest any functionality that is available in NOMAD.

@lauri-codes commented:

@RubelMozumder: There is no such functionality, and I doubt there ever will be. Deleting entries during processing is not something we can really endorse: there are too many ways to screw it up. What happens if the entry is deleted and then an exception occurs before the new data is stored? What happens when some other processed entry tries to read the deleted entry simultaneously? What happens if the file is open in another process, with a lock on it, when someone tries to delete it?

I would instead like to understand what goal you are trying to achieve with this normalizer. It is reasonable to create temporary files during normalization, and also reasonable to create new entries at the end of normalization (assuming there are no circular processing steps or parallel processes that might cause issues).

@ka-sarthak (Collaborator, Author) commented:

First processing:

  • ELN normalization opens the nexus file in write mode to generate it → the nexus parser then opens it in read mode to create the nexus entry.

Reprocessing the upload:

  • ELN normalization opens the nexus file in write mode while the nexus entry tries to open it in read mode, leading to resource contention.

One way to avoid this is to control access to the nexus file with an "overwrite nexus file" switch (a BoolEditQuantity) in the ELN, as sketched below. In the first processing, the ELN generates the nexus file and sets the switch to False. When the upload is reprocessed, the ELN does not open the nexus file in write mode because the switch is not set. When users want to update the nexus file, they open the entry, set the switch, and reprocess the entry; this overwrites the nexus file and then resets the switch to False. In this scheme, only the ELN ever accesses the nexus file: no resource contention.
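A minimal sketch of such a switch, assuming NOMAD's ELN annotations (the class body and the write_nexus_file helper are illustrative, not this PR's code):

```python
from nomad.datamodel.data import EntryData
from nomad.datamodel.metainfo.annotations import ELNAnnotation, ELNComponentEnum
from nomad.metainfo import Quantity

class ELNXRayDiffraction(EntryData):
    overwrite_nexus_file = Quantity(
        type=bool,
        default=False,
        description='If set, the auxiliary nexus file is regenerated on save.',
        a_eln=ELNAnnotation(component=ELNComponentEnum.BoolEditQuantity),
    )

    def normalize(self, archive, logger):
        super().normalize(archive, logger)
        if self.overwrite_nexus_file:
            # only the ELN touches the file here, so no resource contention
            write_nexus_file(archive)          # hypothetical helper
            self.overwrite_nexus_file = False  # reset so reprocessing skips the write
```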

@ka-sarthak (Collaborator, Author) commented:

The above solution does not work as intended due to the following issue: https://gitlab.mpcdf.mpg.de/nomad-lab/nomad-FAIR/-/merge_requests/2301

@ka-sarthak (Collaborator, Author) commented:

The changes made here are backward compatible. However, Oasis admins must reprocess all ELNXRayDiffraction entries.

@ka-sarthak ka-sarthak marked this pull request as ready for review January 23, 2025 16:53
@ka-sarthak (Collaborator, Author) commented:

@hampusnasstrom Thanks for the suggestion. Retaining the older sections and then substituting them with the HDF5 ones upon reprocessing/renormalization solves the issue.
When an old entry is opened, it contains the non-HDF5 results section, which can be interacted with normally. Additionally, the ELN schema no longer inherits PlotSection, so Plotly plots will not appear in old entries. When the entry is saved, the results section is seamlessly replaced with the HDF5 one, and the HDF5 plots show up.

Labels: enhancement (New feature or request)