All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to PEP 440 and uses Semantic Versioning.
- GitHub issue templates (thanks to Scott Staniewicz)
- Ensures CLI correctly populates `apply_water_mask` and `water_mask_path` arguments.
- Tests for the CLI and the main Python interface.
- Issues with test_main.py related to where the tmp directory was being created (solution: ensure tmp is created explicitly relative to the test directory, as in `test_workflows.py`).
- All dependencies within the virtual environment are back to conda-forge from PyPI.
- Product directory parameter now correctly writes to the specified directory (fixes #37).
- Fixed the CLI test (bug): the runconfig instance has different product paths than the one created via the CLI because product paths embed the processing time, which differs depending on when the runconfig object is created in the test versus within the CLI run.
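Because product paths embed the processing time, comparing two runs requires a timestamp-insensitive comparison. A minimal sketch of the idea (the timestamp pattern and the `normalize_product_path` / `paths_equivalent` helpers are hypothetical illustrations, not the library's actual API):

```python
import re

# Hypothetical: match datetime tokens of the form 20240101T120000 (optionally
# with a trailing Z) that appear inside product file names.
_TIMESTAMP = re.compile(r"\d{8}T\d{6}Z?")

def normalize_product_path(path: str) -> str:
    """Replace timestamp tokens with a fixed placeholder so two runs
    made at different times produce comparable names."""
    return _TIMESTAMP.sub("<TIME>", path)

def paths_equivalent(a: str, b: str) -> bool:
    """True if the two product paths differ only in their timestamps."""
    return normalize_product_path(a) == normalize_product_path(b)
```

With this, `paths_equivalent("P_20240101T120000Z.tif", "P_20250202T000000Z.tif")` holds even though the raw strings differ.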
- Added an `n_workers_for_despeckling` argument to the `RunConfigData` model, CLI, and relevant processing functions.
- A test to ensure that the product directory is correctly created and used within the runconfig (added to test_main.py).
- CLI issues with bucket/prefix for S3 upload (resolves #32).
- Included `__main__.py` testing for the SAS entrypoint of the CLI; uses the cropped dataset to test the workflow.
- Includes `dist-s1 run_sas` testing and golden dataset comparison.
- Updates to the README regarding GPU environment setup.
- Minimum working example of generation of the product via `dist-s1 run`
- Integration of `dist-s1-enumerator` to identify/localize the inputs from MGRS tile ID, post-date, and track number
- Notebooks and examples to run end-to-end workflows as well as Science Application Software (SAS) workflows
- Docker image with NVIDIA compatibility (fixes the CUDA version at 11.8)
- Download and application of the water mask (a path can be specified, or the mask can be generated from UMD GLAD LCLUC data).
- Extensive instructions in the README for various use-case scenarios.
- Golden dataset test for SAS workflow
- Allow user to specify bucket/prefix for S3 upload, making the library compatible with HyP3.
- Ensure Earthdata credentials are provided in `~/.netrc` and allow them to be passed as suitable environment variables.
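  For reference, a `~/.netrc` entry for Earthdata Login typically has the following shape (the hostname is the standard Earthdata Login endpoint; the login and password values are placeholders):

  ```
  machine urs.earthdata.nasa.gov
      login <your_earthdata_username>
      password <your_earthdata_password>
  ```

  The exact environment variable names accepted as an alternative are tooling-specific; consult the README for the names this library expects.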
- Create a GPU-compatible Docker image (ongoing), using an NVIDIA base image.
- Ensures `pyyaml` is in the environment (used for serialization of the runconfig).
- Update equality testing for DIST-S1 product comparison.
- CLI issues with HyP3
- `pyproject.toml` file to handle `ruff`
- Python 3.13 support
- Updated Docker image to ensure the conda environment is activated on login
- Instructions in the README for OPERA delivery.
- A `.Dockerignore` file to remove extraneous files from the Docker image
- Allow the `/home/ops` directory in the Docker image to be open to all users
- PyPI delivery workflow
- Entrypoint for CLI to localize data via internet (the SAS workflow is assumed not to have internet access)
- Data models for output data and product naming conventions
- Ensures output products and their TIF layers follow the expected naming conventions
- Provides testing/validation of the structure (via tmp directories)
- CLI entrypoints now utilize `dist-s1 run_sas` and `dist-s1 run` rather than just `dist-s1`.
- The `dist-s1 run_sas` command is the primary entrypoint for Science Application Software (SAS) for SDS operations.
- The `dist-s1 run` command is the simplified entrypoint for external users, allowing for the localization of data from publicly available data sources.
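  A sketch of how the two entrypoints divide responsibilities (the arguments are elided here, since the actual option names are defined by the CLI itself):

  ```
  # External users: localize inputs from public data sources, then run the workflow.
  dist-s1 run ...

  # SDS operations: run the SAS against already-localized inputs (no internet access assumed).
  dist-s1 run_sas ...
  ```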
- Initial internal release of the DIST-S1 project
- Test GitHub release workflow