Merge pull request #793 from fractal-analytics-platform/pydantic-v2-second-attempt

Fully move to Pydantic v2
tcompa authored Jul 24, 2024
2 parents ce7f897 + 2aed673 commit 8ab8499
Showing 68 changed files with 1,515 additions and 1,122 deletions.
29 changes: 29 additions & 0 deletions .github/workflows/manifest_external_packages.yml
@@ -23,36 +23,46 @@ jobs:

package: [skip]
github_repo: [skip]
github_branch: [skip]
manifest_path: [skip]
cmd_install: [skip]
cmd_create_manifest: [skip]
custom_dependencies: [skip]

include:

- package: scMultipleX
github_repo: fmi-basel/gliberal-scMultipleX
github_branch: main
manifest_path: src/scmultiplex/__FRACTAL_MANIFEST__.json
cmd_install: 'python -m pip install -e .[fractal-tasks]'
cmd_create_manifest: 'python src/scmultiplex/dev/create_manifest.py'
custom_dependencies: 'image_registration'

- package: fractal-helper-tasks
github_repo: jluethi/fractal-helper-tasks
github_branch: main
manifest_path: src/fractal_helper_tasks/__FRACTAL_MANIFEST__.json
cmd_install: 'python -m pip install -e .'
cmd_create_manifest: 'python src/fractal_helper_tasks/dev/create_manifest.py'
custom_dependencies: ''

- package: APx_fractal_task_collection
github_repo: Apricot-Therapeutics/APx_fractal_task_collection
github_branch: pydantic_v2
manifest_path: src/apx_fractal_task_collection/__FRACTAL_MANIFEST__.json
cmd_install: 'python -m pip install -e .'
cmd_create_manifest: 'python src/apx_fractal_task_collection/dev/update_manifest.py'
custom_dependencies: ''

exclude:
- package: skip
github_repo: skip
github_branch: skip
manifest_path: skip
cmd_install: skip
cmd_create_manifest: skip
custom_dependencies: skip

steps:

@@ -63,6 +73,7 @@ jobs:
uses: actions/checkout@v4
with:
repository: ${{ matrix.github_repo }}
ref: ${{ matrix.github_branch }}

- uses: actions/setup-python@v5
with:
@@ -75,11 +86,29 @@ jobs:
- name: Install package
run: ${{ matrix.cmd_install }}

- name: Get current branch of `fractal-tasks-core`
uses: actions/checkout@v4
with:
path: fractal-tasks-core

- name: Install current fractal-tasks-core (this may fail)
run: python -m pip install -e ./fractal-tasks-core

- name: Install custom additional dependencies (see issue 803)
if: ${{ matrix.custom_dependencies != '' }}
run: python -m pip install ${{ matrix.custom_dependencies }}

- name: Create manifest
run: ${{ matrix.cmd_create_manifest }}

- name: Setup friendly diff style
run: echo "*.json diff=json" >> .gitattributes && git config diff.json.textconv "jq --sort-keys '.' \$1"

- name: Run git diff for manifest
run: git diff ${{ matrix.manifest_path }}

- name: Clean up before checking repo status
run: rm -rf fractal-tasks-core .gitattributes

- name: Check repo status
run: if [[ -z $(git status -s) ]]; then echo "Clean status"; else echo "Dirty status"; git status; exit 1; fi
20 changes: 16 additions & 4 deletions CHANGELOG.md
@@ -1,10 +1,20 @@
**Note**: Numbers like (\#123) point to closed Pull Requests on the fractal-tasks-core repository.

# 1.2.0 (unreleased)

* Core-library and tasks:
* Switch all core models to Pydantic V2 (\#793).
* JSON Schema generation tools:
* Move JSON-Schema tools to Pydantic V2 (\#793).
* Testing:
* Remove dependency on `pytest-pretty` (\#793).
* Update `manifest_external_packages.yml` GitHub Action so that it installs the current `fractal-tasks-core` (\#793).

# 1.1.1

* Tasks:
* Fix issue with masked ROI & relabeling in Cellpose task (\#785).
* Fix issue with masking ROI label types in `masked_loading_wrapper` for Cellpose task (\#785).
* Fix issue with masked ROI & relabeling in Cellpose task (\#786).
* Fix issue with masking ROI label types in `masked_loading_wrapper` for Cellpose task (\#786).
* Enable workaround to support yx images in Cellpose task (\#789).
* Fix error handling in `calculate_registration_image_based` (\#799).
* Fix minor issues with call-signature and type hints in `calculate_registration_image_based` (\#799).
@@ -22,8 +32,8 @@
* Refactor Cellpose Task inputs: Support independent normalization of 2 input channels in the Cellpose task (\#738).
* Rename `task.cellpose_transforms` into `tasks.cellpose_utils` (\#738).
* Fix wrong repeated overlap checks for bounding-boxes in Cellpose task (\#778).
* Fix minor MIP issues related to plate metadata and expecting acquisition metadata in all NGFF plates(\#781).
* Add `chi2_shift` option to Calculate Registration (image-based) task(\#741).
* Fix minor MIP issues related to plate metadata and expecting acquisition metadata in all NGFF plates (\#781).
* Add `chi2_shift` option to Calculate Registration (image-based) task (\#741).
* Development:
* Switch to transitional pydantic.v1 imports, changes pydantic requirement to `==1.10.16` or `>=2.6.3` (\#760).
* Support JSON-Schema generation for `Enum` task arguments (\#749).
@@ -35,6 +45,8 @@
* Test manifest creation for three other tasks packages (\#763).
* NGFF subpackage
* Fix Plate model to correspond better to 0.4.0 NGFF spec: Now makes acquisition metadata optional (\#781).
* Dependencies:
* Add `image_registration` within `fractal-tasks` extra (\#741).

# 1.0.2

1,407 changes: 710 additions & 697 deletions fractal_tasks_core/__FRACTAL_MANIFEST__.json

Large diffs are not rendered by default.

43 changes: 24 additions & 19 deletions fractal_tasks_core/channels.py
@@ -18,8 +18,10 @@
from typing import Union

import zarr
from pydantic.v1 import BaseModel
from pydantic.v1 import validator
from pydantic import BaseModel
from pydantic import field_validator
from pydantic import model_validator
from typing_extensions import Self

from fractal_tasks_core import __OME_NGFF_VERSION__

@@ -43,8 +45,8 @@ class Window(BaseModel):
end: Upper-bound rescaling value for visualization.
"""

min: Optional[int]
max: Optional[int]
min: Optional[int] = None
max: Optional[int] = None
start: int
end: int

@@ -69,19 +71,20 @@ class OmeroChannel(BaseModel):
# Custom

wavelength_id: str
index: Optional[int]
index: Optional[int] = None

# From OME-NGFF v0.4 transitional metadata

label: Optional[str]
window: Optional[Window]
color: Optional[str]
label: Optional[str] = None
window: Optional[Window] = None
color: Optional[str] = None
active: bool = True
coefficient: int = 1
inverted: bool = False

@validator("color", always=True)
def valid_hex_color(cls, v, values):
@field_validator("color", mode="after")
@classmethod
def valid_hex_color(cls, v: Optional[str]) -> Optional[str]:
"""
Check that `color` is made of exactly six elements which are letters
(a-f or A-F) or digits (0-9).
@@ -117,23 +120,24 @@ class ChannelInputModel(BaseModel):
wavelength_id: Optional[str] = None
label: Optional[str] = None

@validator("label", always=True)
def mutually_exclusive_channel_attributes(cls, v, values):
@model_validator(mode="after")
def mutually_exclusive_channel_attributes(self: Self) -> Self:
"""
Check that either `label` or `wavelength_id` is set.
"""
wavelength_id = values.get("wavelength_id")
label = v
if wavelength_id and v:
wavelength_id = self.wavelength_id
label = self.label

if wavelength_id and label:
raise ValueError(
"`wavelength_id` and `label` cannot be both set "
f"(given {wavelength_id=} and {label=})."
)
if wavelength_id is None and v is None:
if wavelength_id is None and label is None:
raise ValueError(
"`wavelength_id` and `label` cannot be both `None`"
)
return v
return self


class ChannelNotFoundError(ValueError):
@@ -337,7 +341,7 @@ def define_omero_channels(
can be written to OMERO metadata.
"""

new_channels = [c.copy(deep=True) for c in channels]
new_channels = [c.model_copy(deep=True) for c in channels]
default_colors = ["00FFFF", "FF00FF", "FFFF00"]

for channel in new_channels:
@@ -372,7 +376,8 @@ def define_omero_channels(
raise ValueError(f"Non-unique labels in {new_channels=}")

new_channels_dictionaries = [
c.dict(exclude={"index"}, exclude_unset=True) for c in new_channels
c.model_dump(exclude={"index"}, exclude_unset=True)
for c in new_channels
]

return new_channels_dictionaries
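
Note: the changes above are the core v1-to-v2 migration pattern repeated across this PR: `@validator(..., always=True)` becomes `@field_validator` or `@model_validator(mode="after")`, and `.copy()`/`.dict()` become `.model_copy()`/`.model_dump()`. A minimal self-contained sketch of the pattern, using a hypothetical `Pair` model that is not part of this diff:

from typing import Optional

from pydantic import BaseModel, model_validator
from typing_extensions import Self


class Pair(BaseModel):
    """Hypothetical model: exactly one of `a` or `b` must be set."""

    a: Optional[str] = None  # v2 requires explicit defaults for Optional fields
    b: Optional[str] = None

    @model_validator(mode="after")
    def mutually_exclusive(self: Self) -> Self:
        # v2 "after" validators receive the built instance (not a `values`
        # dict, as in v1) and must return `self`.
        if (self.a is None) == (self.b is None):
            raise ValueError("Exactly one of `a` or `b` must be set.")
        return self


pair = Pair(a="x")
copy = pair.model_copy(deep=True)           # v1: pair.copy(deep=True)
data = pair.model_dump(exclude_unset=True)  # v1: pair.dict(exclude_unset=True)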
2 changes: 1 addition & 1 deletion fractal_tasks_core/dev/check_manifest.py
@@ -87,7 +87,7 @@ def _compare_dicts(
# Check global properties of manifest
if not manifest["has_args_schemas"]:
raise ValueError(f'{manifest["has_args_schemas"]=}')
if manifest["args_schema_version"] != "pydantic_v1":
if manifest["args_schema_version"] != "pydantic_v2":
raise ValueError(f'{manifest["args_schema_version"]=}')

# Loop over tasks
14 changes: 8 additions & 6 deletions fractal_tasks_core/dev/create_manifest.py
@@ -24,11 +24,16 @@
from fractal_tasks_core.dev.lib_task_docs import create_docs_info


logging.basicConfig(level=logging.INFO)


ARGS_SCHEMA_VERSION = "pydantic_v2"


def create_manifest(
package: str = "fractal_tasks_core",
manifest_version: str = "2",
has_args_schemas: bool = True,
args_schema_version: str = "pydantic_v1",
docs_link: Optional[str] = None,
custom_pydantic_models: Optional[list[tuple[str, str, str]]] = None,
):
@@ -52,9 +57,6 @@ def create_manifest(
manifest_version: Only `"2"` is supported.
has_args_schemas:
Whether to autogenerate JSON Schemas for task arguments.
args_schema_version:
Only `"pydantic_v1"` is currently supported in `fractal-server`
and `fractal-web`.
custom_pydantic_models:
Custom models to be included when building JSON Schemas for task
arguments.
@@ -73,7 +75,7 @@ def create_manifest(
has_args_schemas=has_args_schemas,
)
if has_args_schemas:
manifest["args_schema_version"] = args_schema_version
manifest["args_schema_version"] = ARGS_SCHEMA_VERSION

# Prepare a default value of docs_link
if package == "fractal_tasks_core" and docs_link is None:
@@ -89,7 +91,7 @@ def create_manifest(
# to manifest["task_list"]
for task_obj in TASK_LIST:
# Convert Pydantic object to dictionary
task_dict = task_obj.dict(
task_dict = task_obj.model_dump(
exclude={"meta_init", "executable_init", "meta", "executable"},
exclude_unset=True,
)
126 changes: 56 additions & 70 deletions fractal_tasks_core/dev/lib_args_schemas.py
@@ -21,11 +21,9 @@
from typing import Optional

from docstring_parser import parse as docparse
from pydantic.v1.decorator import ALT_V_ARGS
from pydantic.v1.decorator import ALT_V_KWARGS
from pydantic.v1.decorator import V_DUPLICATE_KWARGS
from pydantic.v1.decorator import V_POSITIONAL_ONLY_NAME
from pydantic.v1.decorator import ValidatedFunction
from pydantic._internal import _generate_schema
from pydantic._internal import _typing_extra
from pydantic._internal._config import ConfigWrapper

from fractal_tasks_core.dev.lib_descriptions import (
_get_class_attrs_descriptions,
@@ -39,6 +37,9 @@
from fractal_tasks_core.dev.lib_descriptions import (
_insert_function_args_descriptions,
)
from fractal_tasks_core.dev.lib_pydantic_generatejsonschema import (
CustomGenerateJsonSchema,
)
from fractal_tasks_core.dev.lib_signature_constraints import _extract_function
from fractal_tasks_core.dev.lib_signature_constraints import (
_validate_function_signature,
@@ -48,6 +49,7 @@

_Schema = dict[str, Any]


FRACTAL_TASKS_CORE_PYDANTIC_MODELS = [
("fractal_tasks_core", "channels.py", "OmeroChannel"),
("fractal_tasks_core", "channels.py", "Window"),
@@ -90,58 +92,6 @@
]


def _remove_args_kwargs_properties(old_schema: _Schema) -> _Schema:
"""
Remove `args` and `kwargs` schema properties.
Pydantic v1 automatically includes `args` and `kwargs` properties in
JSON Schemas generated via `ValidatedFunction(task_function,
config=None).model.schema()`, with some default (empty) values -- see
https://github.com/pydantic/pydantic/blob/1.10.X-fixes/pydantic/decorator.py.
Verify that these properties match with their expected default values, and
then remove them from the schema.
Args:
old_schema: TBD
"""
new_schema = old_schema.copy()
args_property = new_schema["properties"].pop("args")
kwargs_property = new_schema["properties"].pop("kwargs")
expected_args_property = {"title": "Args", "type": "array", "items": {}}
expected_kwargs_property = {"title": "Kwargs", "type": "object"}
if args_property != expected_args_property:
raise ValueError(
f"{args_property=}\ndiffers from\n{expected_args_property=}"
)
if kwargs_property != expected_kwargs_property:
raise ValueError(
f"{kwargs_property=}\ndiffers from\n"
f"{expected_kwargs_property=}"
)
logging.info("[_remove_args_kwargs_properties] END")
return new_schema


def _remove_pydantic_internals(old_schema: _Schema) -> _Schema:
"""
Remove schema properties that are only used internally by Pydantic V1.
Args:
old_schema: TBD
"""
new_schema = old_schema.copy()
for key in (
V_POSITIONAL_ONLY_NAME,
V_DUPLICATE_KWARGS,
ALT_V_ARGS,
ALT_V_KWARGS,
):
new_schema["properties"].pop(key, None)
logging.info("[_remove_pydantic_internals] END")
return new_schema


def _remove_attributes_from_descriptions(old_schema: _Schema) -> _Schema:
"""
Keeps only the description part of the docstrings: e.g. from
@@ -161,16 +111,40 @@ def _remove_attributes_from_descriptions(old_schema: _Schema) -> _Schema:
old_schema: TBD
"""
new_schema = old_schema.copy()
if "definitions" in new_schema:
for name, definition in new_schema["definitions"].items():
parsed_docstring = docparse(definition["description"])
new_schema["definitions"][name][
"description"
] = parsed_docstring.short_description
if "$defs" in new_schema:
for name, definition in new_schema["$defs"].items():
if "description" in definition.keys():
parsed_docstring = docparse(definition["description"])
new_schema["$defs"][name][
"description"
] = parsed_docstring.short_description
elif "title" in definition.keys():
title = definition["title"]
new_schema["$defs"][name][
"description"
] = f"Missing description for {title}."
else:
new_schema["$defs"][name][
"description"
] = "Missing description"
logging.info("[_remove_attributes_from_descriptions] END")
return new_schema


def _create_schema_for_function(function: Callable) -> _Schema:
namespace = _typing_extra.add_module_globals(function, None)
gen_core_schema = _generate_schema.GenerateSchema(
ConfigWrapper(None), namespace
)
core_schema = gen_core_schema.generate_schema(function)
clean_core_schema = gen_core_schema.clean_schema(core_schema)
gen_json_schema = CustomGenerateJsonSchema()
json_schema = gen_json_schema.generate(
clean_core_schema, mode="validation"
)
return json_schema


def create_schema_for_single_task(
executable: str,
package: Optional[str] = "fractal_tasks_core",
@@ -188,9 +162,10 @@ def create_schema_for_single_task(
2. `task_function` argument is provided, `executable` is an absolute path
to the function module, and `package` is `None`. This is useful for
testing.
"""

DEFINITIONS_KEY = "$defs"

logging.info("[create_schema_for_single_task] START")
if task_function is None:
usage = "1"
@@ -243,14 +218,23 @@ def create_schema_for_single_task(
_validate_function_signature(task_function)

# Create and clean up schema
vf = ValidatedFunction(task_function, config=None)
schema = vf.model.schema()
schema = _remove_args_kwargs_properties(schema)
schema = _remove_pydantic_internals(schema)
schema = _create_schema_for_function(task_function)
schema = _remove_attributes_from_descriptions(schema)

# Include titles for custom-model-typed arguments
schema = _include_titles(schema, verbose=verbose)
schema = _include_titles(
schema, definitions_key=DEFINITIONS_KEY, verbose=verbose
)

# Include main title
if schema.get("title") is None:

def to_camel_case(snake_str):
return "".join(
x.capitalize() for x in snake_str.lower().split("_")
)

schema["title"] = to_camel_case(task_function.__name__)

# Include descriptions of function. Note: this function works both
# for usages 1 or 2 (see docstring).
@@ -260,8 +244,9 @@ def create_schema_for_single_task(
function_name=function_name,
verbose=verbose,
)

schema = _insert_function_args_descriptions(
schema=schema, descriptions=function_args_descriptions, verbose=verbose
schema=schema, descriptions=function_args_descriptions
)

# Merge lists of fractal-tasks-core and user-provided Pydantic models
@@ -293,6 +278,7 @@ def create_schema_for_single_task(
schema=schema,
class_name=class_name,
descriptions=attrs_descriptions,
definition_key=DEFINITIONS_KEY,
)

logging.info("[create_schema_for_single_task] END")
16 changes: 11 additions & 5 deletions fractal_tasks_core/dev/lib_descriptions.py
@@ -207,7 +207,11 @@ def _insert_function_args_descriptions(


def _insert_class_attrs_descriptions(
*, schema: dict, class_name: str, descriptions: dict
*,
schema: dict,
class_name: str,
descriptions: dict,
definition_key: str,
):
"""
Merge the descriptions obtained via `_get_attributes_models_descriptions`
@@ -217,14 +221,16 @@ def _insert_class_attrs_descriptions(
schema: TBD
class_name: TBD
descriptions: TBD
definition_key: Either `"definitions"` (for Pydantic V1) or
`"$defs"` (for Pydantic V2)
"""
new_schema = schema.copy()
if "definitions" not in schema:
if definition_key not in schema:
return new_schema
else:
new_definitions = schema["definitions"].copy()
new_definitions = schema[definition_key].copy()
# Loop over existing definitions
for name, definition in schema["definitions"].items():
for name, definition in schema[definition_key].items():
if name == class_name:
for prop in definition["properties"]:
if "description" in new_definitions[name]["properties"][prop]:
@@ -235,6 +241,6 @@ def _insert_class_attrs_descriptions(
new_definitions[name]["properties"][prop][
"description"
] = descriptions[prop]
new_schema["definitions"] = new_definitions
new_schema[definition_key] = new_definitions
logging.info("[_insert_class_attrs_descriptions] END")
return new_schema
81 changes: 81 additions & 0 deletions fractal_tasks_core/dev/lib_pydantic_generatejsonschema.py
@@ -0,0 +1,81 @@
"""
Custom Pydantic v2 JSON Schema generation tools.
As of Pydantic V2, the JSON Schema representation of model attributes marked
as `Optional` changed, and the new behavior consists in marking the
corresponding properties as an `anyOf` of either a `null` or the actual type.
This is not always the desired behavior, see e.g.
* https://github.com/pydantic/pydantic/issues/7161
* https://github.com/pydantic/pydantic/issues/8394
Here we list some alternative ways of reverting this change.
"""
import logging

from pydantic.json_schema import GenerateJsonSchema
from pydantic.json_schema import JsonSchemaValue
from pydantic_core.core_schema import WithDefaultSchema

logger = logging.getLogger("CustomGenerateJsonSchema")


class CustomGenerateJsonSchema(GenerateJsonSchema):
def get_flattened_anyof(
self, schemas: list[JsonSchemaValue]
) -> JsonSchemaValue:
null_schema = {"type": "null"}
if null_schema in schemas:
logger.warning(
"Drop `null_schema` before calling `get_flattened_anyof`"
)
schemas.pop(schemas.index(null_schema))
return super().get_flattened_anyof(schemas)

def default_schema(self, schema: WithDefaultSchema) -> JsonSchemaValue:
json_schema = super().default_schema(schema)
if "default" in json_schema.keys() and json_schema["default"] is None:
logger.warning(f"Pop `None` default value from {json_schema=}")
json_schema.pop("default")
return json_schema


# class GenerateJsonSchemaA(GenerateJsonSchema):
# def nullable_schema(self, schema):
# null_schema = {"type": "null"}
# inner_json_schema = self.generate_inner(schema["schema"])
# if inner_json_schema == null_schema:
# return null_schema
# else:
# logging.info("A: Skip calling `get_flattened_anyof` method")
# return inner_json_schema


# class GenerateJsonSchemaB(GenerateJsonSchemaA):
# def default_schema(self, schema: WithDefaultSchema) -> JsonSchemaValue:
# original_json_schema = super().default_schema(schema)
# new_json_schema = deepcopy(original_json_schema)
# default = new_json_schema.get("default", None)
# if default is None:
# logging.info("B: Pop None default")
# new_json_schema.pop("default")
# return new_json_schema


# class GenerateJsonSchemaC(GenerateJsonSchema):
# def get_flattened_anyof(
# self, schemas: list[JsonSchemaValue]
# ) -> JsonSchemaValue:

# original_json_schema_value = super().get_flattened_anyof(schemas)
# members = original_json_schema_value.get("anyOf")
# logging.info("C", original_json_schema_value)
# if (
# members is not None
# and len(members) == 2
# and {"type": "null"} in members
# ):
# new_json_schema_value = {"type": [t["type"] for t in members]}
# logging.info("C", new_json_schema_value)
# return new_json_schema_value
# else:
# return original_json_schema_value
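
The active `CustomGenerateJsonSchema` class above can be plugged into any model's schema export via `model_json_schema`. A minimal sketch of the before/after effect on an `Optional` field, using a hypothetical `Example` model:

from typing import Optional

from pydantic import BaseModel

from fractal_tasks_core.dev.lib_pydantic_generatejsonschema import (
    CustomGenerateJsonSchema,
)


class Example(BaseModel):
    """Hypothetical model with an Optional attribute."""

    value: Optional[int] = None


# Default Pydantic v2 output: an `anyOf` with a null member, plus a null default:
# {'anyOf': [{'type': 'integer'}, {'type': 'null'}], 'default': None, 'title': 'Value'}
print(Example.model_json_schema()["properties"]["value"])

# With the custom generator, the null member and the `None` default are
# dropped, leaving roughly {'title': 'Value', 'type': 'integer'}
print(
    Example.model_json_schema(schema_generator=CustomGenerateJsonSchema)[
        "properties"
    ]["value"]
)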
15 changes: 11 additions & 4 deletions fractal_tasks_core/dev/lib_titles.py
@@ -52,7 +52,11 @@ def _include_titles_for_properties(
return new_properties


def _include_titles(schema: _Schema, verbose: bool = False) -> _Schema:
def _include_titles(
schema: _Schema,
definitions_key: str,
verbose: bool = False,
) -> _Schema:
"""
Include property titles, when missing.
@@ -65,6 +69,9 @@ def _include_titles(schema: _Schema, verbose: bool = False) -> _Schema:
Args:
schema: TBD
definitions_key: Either `"definitions"` (for Pydantic V1) or
`"$defs"` (for Pydantic V2)
verbose:
"""
new_schema = schema.copy()

@@ -82,8 +89,8 @@ def _include_titles(schema: _Schema, verbose: bool = False) -> _Schema:
logging.info("[_include_titles] Titles for properties now included.")

# Update properties of definitions
if "definitions" in schema.keys():
new_definitions = schema["definitions"].copy()
if definitions_key in schema.keys():
new_definitions = schema[definitions_key].copy()
for def_name, def_schema in new_definitions.items():
if "properties" not in def_schema.keys():
if verbose:
@@ -96,7 +103,7 @@ def _include_titles(schema: _Schema, verbose: bool = False) -> _Schema:
def_schema["properties"], verbose=verbose
)
new_definitions[def_name]["properties"] = new_properties
new_schema["definitions"] = new_definitions
new_schema[definitions_key] = new_definitions

if verbose:
logging.info(
10 changes: 5 additions & 5 deletions fractal_tasks_core/dev/task_models.py
@@ -18,7 +18,7 @@
from typing import Any
from typing import Optional

from pydantic.v1 import BaseModel
from pydantic import BaseModel


class _BaseTask(BaseModel):
@@ -28,9 +28,9 @@ class Config:

name: str
executable: str
meta: Optional[dict[str, Any]]
input_types: Optional[dict[str, bool]]
output_types: Optional[dict[str, bool]]
meta: Optional[dict[str, Any]] = None
input_types: Optional[dict[str, bool]] = None
output_types: Optional[dict[str, bool]] = None


class CompoundTask(_BaseTask):
@@ -41,7 +41,7 @@ class CompoundTask(_BaseTask):
"""

executable_init: str
meta_init: Optional[dict[str, Any]]
meta_init: Optional[dict[str, Any]] = None

@property
def executable_non_parallel(self) -> str:
2 changes: 1 addition & 1 deletion fractal_tasks_core/labels.py
@@ -16,7 +16,7 @@
from typing import Optional

import zarr.hierarchy
from pydantic.v1.error_wrappers import ValidationError
from pydantic import ValidationError

from fractal_tasks_core.ngff import NgffImageMeta
from fractal_tasks_core.zarr_utils import OverwriteNotAllowedError
55 changes: 35 additions & 20 deletions fractal_tasks_core/ngff/specs.py
@@ -5,16 +5,27 @@
import logging
from typing import Literal
from typing import Optional
from typing import TypeVar
from typing import Union

from pydantic.v1 import BaseModel
from pydantic.v1 import Field
from pydantic.v1 import validator
from pydantic import BaseModel
from pydantic import Field
from pydantic import field_validator


logger = logging.getLogger(__name__)


T = TypeVar("T")


def unique_items_validator(values: list[T]) -> list[T]:
for ind, value in enumerate(values, start=1):
if value in values[ind:]:
raise ValueError(f"Non-unique values in {values}.")
return values


class Window(BaseModel):
"""
Model for `Channel.window`.
@@ -76,7 +87,7 @@ class ScaleCoordinateTransformation(BaseModel):
"""

type: Literal["scale"]
scale: list[float] = Field(..., min_items=2)
scale: list[float] = Field(..., min_length=2)


class TranslationCoordinateTransformation(BaseModel):
@@ -90,7 +101,7 @@ class TranslationCoordinateTransformation(BaseModel):
"""

type: Literal["translation"]
translation: list[float] = Field(..., min_items=2)
translation: list[float] = Field(..., min_length=2)


class Dataset(BaseModel):
@@ -105,7 +116,7 @@ class Dataset(BaseModel):
Union[
ScaleCoordinateTransformation, TranslationCoordinateTransformation
]
] = Field(..., min_items=1)
] = Field(..., min_length=1)

@property
def scale_transformation(self) -> ScaleCoordinateTransformation:
@@ -139,9 +150,9 @@ class Multiscale(BaseModel):
"""

name: Optional[str] = None
datasets: list[Dataset] = Field(..., min_items=1)
datasets: list[Dataset] = Field(..., min_length=1)
version: Optional[str] = None
axes: list[Axis] = Field(..., max_items=5, min_items=2, unique_items=True)
axes: list[Axis] = Field(..., max_length=5, min_length=2)
coordinateTransformations: Optional[
list[
Union[
@@ -150,13 +161,19 @@ class Multiscale(BaseModel):
]
]
] = None
_check_unique = field_validator("axes")(unique_items_validator)

@validator("coordinateTransformations", always=True)
def _no_global_coordinateTransformations(cls, v):
@field_validator("coordinateTransformations", mode="after")
@classmethod
def _no_global_coordinateTransformations(
cls, v: Optional[list]
) -> Optional[list]:
"""
Fail if Multiscale has a (global) coordinateTransformations attribute.
"""
if v is not None:
if v is None:
return v
else:
raise NotImplementedError(
"Global coordinateTransformations at the multiscales "
"level are not currently supported in the fractal-tasks-core "
@@ -174,10 +191,10 @@ class NgffImageMeta(BaseModel):
multiscales: list[Multiscale] = Field(
...,
description="The multiscale datasets for this image",
min_items=1,
unique_items=True,
min_length=1,
)
omero: Optional[Omero] = None
_check_unique = field_validator("multiscales")(unique_items_validator)

@property
def multiscale(self) -> Multiscale:
@@ -325,14 +342,12 @@ class Well(BaseModel):
"""

images: list[ImageInWell] = Field(
...,
description="The images included in this well",
min_items=1,
unique_items=True,
..., description="The images included in this well", min_length=1
)
version: Optional[str] = Field(
None, description="The version of the specification"
)
_check_unique = field_validator("images")(unique_items_validator)


class NgffWellMeta(BaseModel):
@@ -446,10 +461,10 @@ class Plate(BaseModel):
See https://ngff.openmicroscopy.org/0.4/#plate-md.
"""

acquisitions: Optional[list[AcquisitionInPlate]]
acquisitions: Optional[list[AcquisitionInPlate]] = None
columns: list[ColumnInPlate]
field_count: Optional[int]
name: Optional[str]
field_count: Optional[int] = None
name: Optional[str] = None
rows: list[RowInPlate]
# version will become required in 0.5
version: Optional[str] = Field(
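
Pydantic v2 also dropped v1's `unique_items=True` field constraint, which is why this file introduces the reusable `unique_items_validator` and attaches it with `field_validator` (the `_check_unique` lines above). A minimal sketch of the same pattern on a hypothetical model:

from pydantic import BaseModel, ValidationError, field_validator

from fractal_tasks_core.ngff.specs import unique_items_validator


class Demo(BaseModel):
    """Hypothetical model reusing the uniqueness check from this diff."""

    names: list[str]

    # Same attachment pattern as `_check_unique` in the models above
    _check_unique = field_validator("names")(unique_items_validator)


Demo(names=["a", "b"])  # OK
try:
    Demo(names=["a", "a"])
except ValidationError as exc:
    print(exc)  # "Value error, Non-unique values in ['a', 'a']."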
2 changes: 1 addition & 1 deletion fractal_tasks_core/roi/v1_checks.py
@@ -17,7 +17,7 @@

import anndata as ad
import zarr
from pydantic.v1.error_wrappers import ValidationError
from pydantic import ValidationError

from fractal_tasks_core.tables.v1 import MaskingROITableAttrs

12 changes: 7 additions & 5 deletions fractal_tasks_core/tables/v1.py
@@ -11,9 +11,9 @@
import anndata as ad
import zarr.hierarchy
from anndata.experimental import write_elem
from pydantic.v1 import BaseModel
from pydantic.v1 import validator
from pydantic.v1.error_wrappers import ValidationError
from pydantic import BaseModel
from pydantic import field_validator
from pydantic import ValidationError

from fractal_tasks_core.zarr_utils import OverwriteNotAllowedError

@@ -29,7 +29,8 @@ class MaskingROITableAttrs(BaseModel):
region: _RegionType
instance_key: str

@validator("type", always=True)
@field_validator("type")
@classmethod
def warning_for_old_table_type(cls, v):
if v == "ngff:region_table":
warning_msg = (
@@ -47,7 +48,8 @@ class FeatureTableAttrs(BaseModel):
region: _RegionType
instance_key: str

@validator("type", always=True)
@field_validator("type")
@classmethod
def warning_for_old_table_type(cls, v):
if v == "ngff:region_table":
warning_msg = (
3 changes: 1 addition & 2 deletions fractal_tasks_core/tasks/_zarr_utils.py
@@ -65,7 +65,6 @@ def _update_well_metadata(
"""
lock = FileLock(f"{well_url}/.zattrs.lock")
with lock.acquire(timeout=timeout):

well_meta = load_NgffWellMeta(well_url)
existing_well_images = [image.path for image in well_meta.well.images]
if new_image_path in existing_well_images:
@@ -94,7 +93,7 @@ def _update_well_metadata(
)

well_group = zarr.group(well_url)
well_group.attrs.put(well_meta.dict(exclude_none=True))
well_group.attrs.put(well_meta.model_dump(exclude_none=True))

# One could catch the timeout with a try except Timeout. But what to do
# with it?
4 changes: 2 additions & 2 deletions fractal_tasks_core/tasks/apply_registration_to_image.py
@@ -21,7 +21,7 @@
import dask.array as da
import numpy as np
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.ngff import load_NgffImageMeta
from fractal_tasks_core.ngff.zarr_utils import load_NgffWellMeta
@@ -47,7 +47,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def apply_registration_to_image(
*,
# Fractal parameters
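
Every task entry point in this PR swaps v1's `@validate_arguments` for v2's `@validate_call`, which validates and coerces keyword arguments against the type hints at call time. A minimal sketch of that behavior, with a hypothetical function rather than one of the real tasks:

from pydantic import ValidationError, validate_call


@validate_call
def scale_values(*, factor: float, values: list[float]) -> list[float]:
    """Hypothetical keyword-only function, in the style of the task entry points."""
    return [factor * v for v in values]


print(scale_values(factor="2", values=[1, 2]))  # lax coercion -> [2.0, 4.0]
try:
    scale_values(factor="two", values=[1, 2])
except ValidationError as exc:
    print(exc)  # factor: "Input should be a valid number ..."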
fractal_tasks_core/tasks/calculate_registration_image_based.py
@@ -19,7 +19,7 @@
import dask.array as da
import numpy as np
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call
from skimage.registration import phase_cross_correlation

from fractal_tasks_core.channels import get_channel_from_image_zarr
@@ -68,7 +68,7 @@ def register(self, img_ref, img_acq_x):
return chi2_shift_out(img_ref, img_acq_x)


@validate_arguments
@validate_call
def calculate_registration_image_based(
*,
# Fractal arguments
10 changes: 5 additions & 5 deletions fractal_tasks_core/tasks/cellpose_segmentation.py
@@ -26,8 +26,8 @@
import pandas as pd
import zarr
from cellpose import models
from pydantic.v1 import Field
from pydantic.v1.decorator import validate_arguments
from pydantic import Field
from pydantic import validate_call

import fractal_tasks_core
from fractal_tasks_core.labels import prepare_label_group
@@ -190,7 +190,7 @@ def segment_ROI(
return mask.astype(label_dtype)


@validate_arguments
@validate_call
def cellpose_segmentation(
*,
# Fractal parameters
@@ -378,14 +378,14 @@ def cellpose_segmentation(
# Workaround for #788
if ngff_image_meta.axes_names[0] != "c":
new_datasets = rescale_datasets(
datasets=[ds.dict() for ds in ngff_image_meta.datasets],
datasets=[ds.model_dump() for ds in ngff_image_meta.datasets],
coarsening_xy=coarsening_xy,
reference_level=level,
remove_channel_axis=False,
)
else:
new_datasets = rescale_datasets(
datasets=[ds.dict() for ds in ngff_image_meta.datasets],
datasets=[ds.model_dump() for ds in ngff_image_meta.datasets],
coarsening_xy=coarsening_xy,
reference_level=level,
remove_channel_axis=True,
36 changes: 18 additions & 18 deletions fractal_tasks_core/tasks/cellpose_utils.py
@@ -16,10 +16,10 @@
from typing import Optional

import numpy as np
from pydantic.v1 import BaseModel
from pydantic.v1 import Field
from pydantic.v1 import root_validator
from pydantic.v1 import validator
from pydantic import BaseModel
from pydantic import Field
from pydantic import model_validator
from typing_extensions import Self

from fractal_tasks_core.channels import ChannelInputModel
from fractal_tasks_core.channels import ChannelNotFoundError
@@ -70,14 +70,14 @@ class CellposeCustomNormalizer(BaseModel):
# that are stored in OME-Zarr histograms and use this pydantic model that
# those histograms actually exist

@root_validator
def validate_conditions(cls, values):
@model_validator(mode="after")
def validate_conditions(self: Self) -> Self:
# Extract values
type = values.get("type")
lower_percentile = values.get("lower_percentile")
upper_percentile = values.get("upper_percentile")
lower_bound = values.get("lower_bound")
upper_bound = values.get("upper_bound")
type = self.type
lower_percentile = self.lower_percentile
upper_percentile = self.upper_percentile
lower_bound = self.lower_bound
upper_bound = self.upper_bound

# Verify that custom parameters are only provided when type="custom"
if type != "custom":
@@ -128,7 +128,7 @@ def validate_conditions(cls, values):
"at the same time. Hint: use only one of the two options."
)

return values
return self

@property
def cellpose_normalize(self) -> bool:
@@ -256,19 +256,19 @@ class CellposeChannel2InputModel(BaseModel):
default_factory=CellposeCustomNormalizer
)

@validator("label", always=True)
def mutually_exclusive_channel_attributes(cls, v, values):
@model_validator(mode="after")
def mutually_exclusive_channel_attributes(self: Self) -> Self:
"""
Check that only 1 of `label` or `wavelength_id` is set.
"""
wavelength_id = values.get("wavelength_id")
label = v
if wavelength_id and v:
wavelength_id = self.wavelength_id
label = self.label
if (wavelength_id is not None) and (label is not None):
raise ValueError(
"`wavelength_id` and `label` cannot be both set "
f"(given {wavelength_id=} and {label=})."
)
return v
return self

def is_set(self):
if self.wavelength_id or self.label:
4 changes: 2 additions & 2 deletions fractal_tasks_core/tasks/cellvoyager_to_ome_zarr_compute.py
@@ -18,7 +18,7 @@
import zarr
from anndata import read_zarr
from dask.array.image import imread
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.cellvoyager.filenames import (
glob_with_multiple_patterns,
@@ -53,7 +53,7 @@ def sort_fun(filename: str) -> list[int]:
return [site, z_index]


@validate_arguments
@validate_call
def cellvoyager_to_ome_zarr_compute(
*,
# Fractal parameters
6 changes: 3 additions & 3 deletions fractal_tasks_core/tasks/cellvoyager_to_ome_zarr_init.py
@@ -17,7 +17,7 @@
from typing import Optional

import pandas as pd
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

import fractal_tasks_core
from fractal_tasks_core.cellvoyager.filenames import (
@@ -49,7 +49,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def cellvoyager_to_ome_zarr_init(
*,
# Fractal parameters
@@ -371,7 +371,7 @@ def cellvoyager_to_ome_zarr_init(
well_ID=get_filename_well_id(row, column),
image_extension=image_extension,
image_glob_patterns=image_glob_patterns,
).dict(),
).model_dump(),
}
)
group_well = group_plate.create_group(f"{row}/{column}/")
fractal_tasks_core/tasks/cellvoyager_to_ome_zarr_init_multiplex.py
@@ -18,7 +18,7 @@

import pandas as pd
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call
from zarr.errors import ContainsGroupError

import fractal_tasks_core
@@ -52,7 +52,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def cellvoyager_to_ome_zarr_init_multiplex(
*,
# Fractal parameters
@@ -115,7 +115,6 @@ def cellvoyager_to_ome_zarr_init_multiplex(
"""

if metadata_table_files:

# Checks on the dict:
# 1. Acquisitions in acquisitions dict and metadata_table_files match
# 2. Files end with ".csv"
@@ -268,7 +267,6 @@ def cellvoyager_to_ome_zarr_init_multiplex(
logging.info(f"{acquisitions_sorted=}")

for acquisition in acquisitions_sorted:

# Define plate zarr
image_folder = dict_acquisitions[acquisition]["image_folder"]
logger.info(f"Looking at {image_folder=}")
@@ -382,7 +380,7 @@ def cellvoyager_to_ome_zarr_init_multiplex(
image_extension=image_extension,
image_glob_patterns=image_glob_patterns,
acquisition=acquisition,
).dict(),
).model_dump(),
}
)
try:
8 changes: 4 additions & 4 deletions fractal_tasks_core/tasks/copy_ome_zarr_hcs_plate.py
@@ -15,7 +15,7 @@
import logging
from typing import Any

from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

import fractal_tasks_core
from fractal_tasks_core.ngff.specs import NgffPlateMeta
@@ -181,12 +181,12 @@ def _generate_plate_well_metadata(
# Validate with NgffPlateMeta model
plate_metadata_dicts[old_plate_url] = NgffPlateMeta(
**plate_metadata_dicts[old_plate_url]
).dict(exclude_none=True)
).model_dump(exclude_none=True)

return plate_metadata_dicts, new_well_image_attrs, well_image_attrs


@validate_arguments
@validate_call
def copy_ome_zarr_hcs_plate(
*,
# Fractal parameters
@@ -277,7 +277,7 @@ def copy_ome_zarr_hcs_plate(
well_attrs = dict(
well=dict(
images=[
img.dict(exclude_none=True)
img.model_dump(exclude_none=True)
for img in new_well_image_attrs[old_plate_url][
well_sub_url
]
4 changes: 2 additions & 2 deletions fractal_tasks_core/tasks/find_registration_consensus.py
@@ -17,7 +17,7 @@

import anndata as ad
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.roi import (
are_ROI_table_columns_valid,
@@ -38,7 +38,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def find_registration_consensus(
*,
# Fractal parameters
4 changes: 2 additions & 2 deletions fractal_tasks_core/tasks/illumination_correction.py
@@ -22,7 +22,7 @@
import dask.array as da
import numpy as np
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call
from skimage.io import imread

from fractal_tasks_core.channels import get_omero_channel_list
@@ -92,7 +92,7 @@ def correct(
return new_img_stack.astype(dtype)


@validate_arguments
@validate_call
def illumination_correction(
*,
# Fractal parameters
4 changes: 2 additions & 2 deletions fractal_tasks_core/tasks/image_based_registration_hcs_init.py
@@ -15,7 +15,7 @@
import logging
from typing import Any

from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.tasks._registration_utils import (
create_well_acquisition_dict,
@@ -24,7 +24,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def image_based_registration_hcs_init(
*,
# Fractal parameters
5 changes: 2 additions & 3 deletions fractal_tasks_core/tasks/import_ome_zarr.py
@@ -17,7 +17,7 @@

import dask.array as da
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.channels import update_omero_channels
from fractal_tasks_core.ngff import detect_ome_ngff_type
@@ -159,7 +159,7 @@ def _process_single_image(
return types


@validate_arguments
@validate_call
def import_ome_zarr(
*,
# Fractal parameters
@@ -306,7 +306,6 @@ def import_ome_zarr(


if __name__ == "__main__":

from fractal_tasks_core.tasks._utils import run_fractal_task

run_fractal_task(
fractal_tasks_core/tasks/init_group_by_well_for_multiplexing.py
@@ -14,7 +14,7 @@
"""
import logging

from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.tasks._registration_utils import (
create_well_acquisition_dict,
@@ -23,7 +23,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def init_group_by_well_for_multiplexing(
*,
# Fractal parameters
58 changes: 33 additions & 25 deletions fractal_tasks_core/tasks/io_models.py
@@ -1,9 +1,10 @@
from typing import Literal
from typing import Optional

from pydantic.v1 import BaseModel
from pydantic.v1 import Field
from pydantic.v1 import validator
from pydantic import BaseModel
from pydantic import Field
from pydantic import model_validator
from typing_extensions import Self

from fractal_tasks_core.channels import ChannelInputModel
from fractal_tasks_core.channels import OmeroChannel
@@ -58,8 +59,8 @@ class InitArgsCellVoyager(BaseModel):
plate_prefix: str
well_ID: str
image_extension: str
image_glob_patterns: Optional[list[str]]
acquisition: Optional[int]
image_glob_patterns: Optional[list[str]] = None
acquisition: Optional[int] = None


class InitArgsIllumination(BaseModel):
@@ -119,17 +120,20 @@ class NapariWorkflowsOutput(BaseModel):
label_name: str
table_name: Optional[str] = None

@validator("table_name", always=True)
def table_name_only_for_dataframe_type(cls, v, values):
@model_validator(mode="after")
def table_name_only_for_dataframe_type(self: Self) -> Self:
"""
Check that table_name is set only for dataframe outputs.
"""
_type = values.get("type")
if (_type == "dataframe" and (not v)) or (_type != "dataframe" and v):
_type = self.type
_table_name = self.table_name
if (_type == "dataframe" and (not _table_name)) or (
_type != "dataframe" and _table_name
):
raise ValueError(
f"Output item has type={_type} but table_name={v}."
f"Output item has type={_type} but table_name={_table_name}."
)
return v
return self


class NapariWorkflowsInput(BaseModel):
@@ -143,27 +147,31 @@ class NapariWorkflowsInput(BaseModel):
"""

type: Literal["image", "label"]
label_name: Optional[str]
channel: Optional[ChannelInputModel]
label_name: Optional[str] = None
channel: Optional[ChannelInputModel] = None

@validator("label_name", always=True)
def label_name_is_present(cls, v, values):
@model_validator(mode="after")
def label_name_is_present(self: Self) -> Self:
"""
Check that label inputs have `label_name` set.
"""
_type = values.get("type")
if _type == "label" and not v:
label_name = self.label_name
_type = self.type
if _type == "label" and label_name is None:
raise ValueError(
f"Input item has type={_type} but label_name={v}."
f"Input item has type={_type} but label_name={label_name}."
)
return v
return self

@validator("channel", always=True)
def channel_is_present(cls, v, values):
@model_validator(mode="after")
def channel_is_present(self: Self) -> Self:
"""
Check that image inputs have `channel` set.
"""
_type = values.get("type")
if _type == "image" and not v:
raise ValueError(f"Input item has type={_type} but channel={v}.")
return v
_type = self.type
channel = self.channel
if _type == "image" and channel is None:
raise ValueError(
f"Input item has type={_type} but channel={channel}."
)
return self
7 changes: 3 additions & 4 deletions fractal_tasks_core/tasks/maximum_intensity_projection.py
@@ -18,7 +18,7 @@
import anndata as ad
import dask.array as da
import zarr
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call
from zarr.errors import ContainsArrayError

from fractal_tasks_core.ngff import load_NgffImageMeta
@@ -35,7 +35,7 @@
logger = logging.getLogger(__name__)


@validate_arguments
@validate_call
def maximum_intensity_projection(
*,
# Fractal parameters
@@ -63,7 +63,7 @@ def maximum_intensity_projection(
ngff_image = load_NgffImageMeta(init_args.origin_url)
# Currently not using the validation models due to wavelength_id issue
# See #681 for discussion
# new_attrs = ngff_image.dict(exclude_none=True)
# new_attrs = ngff_image.model_dump(exclude_none=True)
# Current way to get the necessary metadata for MIP
group = zarr.open_group(init_args.origin_url, mode="r")
new_attrs = group.attrs.asdict()
@@ -183,7 +183,6 @@ def maximum_intensity_projection(


if __name__ == "__main__":

from fractal_tasks_core.tasks._utils import run_fractal_task

run_fractal_task(
10 changes: 5 additions & 5 deletions fractal_tasks_core/tasks/napari_workflows_wrapper.py
@@ -22,7 +22,7 @@
import pandas as pd
import zarr
from napari_workflows._io_yaml_v1 import load_workflow
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

import fractal_tasks_core
from fractal_tasks_core.channels import get_channel_from_image_zarr
@@ -59,7 +59,7 @@ class OutOfTaskScopeError(NotImplementedError):
pass


@validate_arguments
@validate_call
def napari_workflows_wrapper(
*,
# Fractal parameters
@@ -394,7 +394,6 @@ def napari_workflows_wrapper(
# Loop over label outputs and (1) set zattrs, (2) create zarr group
output_label_zarr_groups: dict[str, Any] = {}
for name, out_params in label_outputs:

# (1a) Rescale OME-NGFF datasets (relevant for level>0)
if not ngff_image_meta.multiscale.axes[0].name == "c":
raise ValueError(
@@ -404,7 +403,8 @@ def napari_workflows_wrapper(
)
new_datasets = rescale_datasets(
datasets=[
ds.dict() for ds in ngff_image_meta.multiscale.datasets
ds.model_dump()
for ds in ngff_image_meta.multiscale.datasets
],
coarsening_xy=coarsening_xy,
reference_level=level,
@@ -423,7 +423,7 @@ def napari_workflows_wrapper(
"name": label_name,
"version": __OME_NGFF_VERSION__,
"axes": [
ax.dict()
ax.model_dump()
for ax in ngff_image_meta.multiscale.axes
if ax.type != "channel"
],
35 changes: 10 additions & 25 deletions poetry.lock
3 changes: 1 addition & 2 deletions pyproject.toml
@@ -31,7 +31,7 @@ numpy = "<2"
pandas = ">=1.2.0,<2"
defusedxml = "^0.7.1"
lxml = "^4.9.1"
pydantic = ">=1.10.16"
pydantic = ">2"
docstring-parser = "^0.15"
anndata = ">=0.8.0,<0.11.0"
filelock = "3.13.*"
@@ -60,7 +60,6 @@ pre-commit = "^2.19.0"
pytest = "^7.1.2"
bumpver = "^2022.1118"
coverage = {extras = ["toml"], version = "^6.5.0"}
pytest-pretty = "^1.1.0"
jsonschema = "^4.16.0"
mypy = "^1.3.0"
requests = ">=2.28.0"
@@ -4,70 +4,29 @@
"name": "Task Name",
"executable": "tasks/my_task.py",
"args_schema": {
"title": "MyTask",
"type": "object",
"properties": {
"x": {
"title": "X",
"type": "integer",
"description": "Missing description"
},
"y": {
"$ref": "#/definitions/OmeroChannel",
"description": "Missing description"
},
"z": {
"$ref": "#/definitions/CustomModel",
"description": "Missing description"
}
},
"required": [
"x",
"y",
"z"
],
"additionalProperties": false,
"definitions": {
"Window": {
"title": "Window",
"description": "Custom class for Omero-channel window, based on OME-NGFF v0.4.",
"type": "object",
"$defs": {
"CustomModel": {
"description": "Short description",
"properties": {
"min": {
"title": "Min",
"type": "integer",
"description": "Do not change. It will be set to ``0`` by default."
},
"max": {
"title": "Max",
"type": "integer",
"description": "Do not change. It will be set according to bit-depth of the images by default (e.g. 65535 for 16 bit images)."
},
"start": {
"title": "Start",
"type": "integer",
"description": "Lower-bound rescaling value for visualization."
},
"end": {
"title": "End",
"x": {
"title": "X",
"type": "integer",
"description": "Upper-bound rescaling value for visualization."
"description": "Description of `x`"
}
},
"required": [
"start",
"end"
]
"x"
],
"title": "CustomModel",
"type": "object"
},
"OmeroChannel": {
"title": "OmeroChannel",
"description": "Custom class for Omero channels, based on OME-NGFF v0.4.",
"type": "object",
"properties": {
"wavelength_id": {
"title": "Wavelength Id",
"type": "string",
"description": "Unique ID for the channel wavelength, e.g. ``A01_C01``."
"description": "Unique ID for the channel wavelength, e.g. `A01_C01`."
},
"index": {
"title": "Index",
@@ -77,58 +36,107 @@
"label": {
"title": "Label",
"type": "string",
"description": "Name of the channel"
"description": "Name of the channel."
},
"window": {
"$ref": "#/definitions/Window",
"description": "Optional ``Window`` object to set default display settings for napari."
"allOf": [
{
"$ref": "#/$defs/Window"
}
],
"title": "Window",
"description": "Optional `Window` object to set default display settings for napari."
},
"color": {
"title": "Color",
"type": "string",
"description": "Optional hex colormap to display the channel in napari (e.g. ``00FFFF``)."
"description": "Optional hex colormap to display the channel in napari (it must be of length 6, e.g. `00FFFF`)."
},
"active": {
"title": "Active",
"default": true,
"title": "Active",
"type": "boolean",
"description": "Should this channel be shown in the viewer?"
},
"coefficient": {
"title": "Coefficient",
"default": 1,
"title": "Coefficient",
"type": "integer",
"description": "Do not change. Omero-channel attribute. "
"description": "Do not change. Omero-channel attribute."
},
"inverted": {
"title": "Inverted",
"default": false,
"title": "Inverted",
"type": "boolean",
"description": "Do not change. Omero-channel attribute."
}
},
"required": [
"wavelength_id"
]
],
"title": "OmeroChannel",
"type": "object"
},
"CustomModel": {
"title": "CustomModel",
"type": "object",
"Window": {
"description": "Custom class for Omero-channel window, based on OME-NGFF v0.4.",
"properties": {
"x": {
"title": "X",
"min": {
"title": "Min",
"type": "integer",
"description": "Do not change. It will be set to `0` by default."
},
"max": {
"title": "Max",
"type": "integer",
"description": "Do not change. It will be set according to bit-depth of the images by default (e.g. 65535 for 16 bit images)."
},
"start": {
"title": "Start",
"type": "integer",
"description": "Lower-bound rescaling value for visualization."
},
"end": {
"title": "End",
"type": "integer",
"description": "Missing description"
"description": "Upper-bound rescaling value for visualization."
}
},
"required": [
"x"
]
"start",
"end"
],
"title": "Window",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"x": {
"title": "X",
"type": "integer",
"description": "Missing description"
},
"y": {
"$ref": "#/$defs/OmeroChannel",
"title": "Y",
"description": "Missing description"
},
"z": {
"$ref": "#/$defs/CustomModel",
"title": "Z",
"description": "Missing description"
}
}
},
"required": [
"x",
"y",
"z"
],
"type": "object",
"title": "MyTask"
}
}
],
"has_args_schemas": true,
"args_schema_version": "pydantic_v1"
"args_schema_version": "pydantic_v2"
}
@@ -1,5 +1,12 @@
from pydantic.v1 import BaseModel
from pydantic import BaseModel


class CustomModel(BaseModel):
"""
Short description
Attributes:
x: Description of `x`
"""

x: int
@@ -35,7 +35,7 @@

# Set global properties of manifest
manifest["has_args_schemas"] = True
manifest["args_schema_version"] = "pydantic_v1"
manifest["args_schema_version"] = "pydantic_v2"

# Loop over tasks and set args schemas
task_list = manifest["task_list"]
14 changes: 10 additions & 4 deletions tests/data/generate_zarr_ones.py
@@ -6,9 +6,9 @@
import numpy as np
import pandas as pd
import zarr
from anndata._io.specs import write_elem

from fractal_tasks_core.roi import prepare_FOV_ROI_table
from fractal_tasks_core.tables import write_table


num_C = 2
@@ -74,7 +74,7 @@
"axes": axes,
"datasets": [
{
"path": level,
"path": str(level),
cT: [
{
"type": "scale",
@@ -146,5 +146,11 @@
FOV_ROI_table = prepare_FOV_ROI_table(df)
print(FOV_ROI_table.to_df())

group_tables = zarr.group(f"{zarrurl}{component}/tables")
write_elem(group_tables, "FOV_ROI_table", FOV_ROI_table)
image_group = zarr.group(f"{zarrurl}{component}")
write_table(
image_group,
"FOV_ROI_table",
FOV_ROI_table,
overwrite=True,
table_attrs=dict(fractal_table_version="1", type="roi_table"),
)
2 changes: 1 addition & 1 deletion tests/data/ngff_examples/dataset.json
@@ -1,5 +1,5 @@
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
2 changes: 1 addition & 1 deletion tests/data/ngff_examples/dataset_error_1.json
@@ -1,5 +1,5 @@
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "translation",
2 changes: 1 addition & 1 deletion tests/data/ngff_examples/dataset_error_2.json
@@ -1,5 +1,5 @@
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
10 changes: 5 additions & 5 deletions tests/data/ngff_examples/image.json
@@ -24,7 +24,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -38,7 +38,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -52,7 +52,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -66,7 +66,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -80,7 +80,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
10 changes: 5 additions & 5 deletions tests/data/ngff_examples/image_CYX.json
@@ -19,7 +19,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -32,7 +32,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -45,7 +45,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -58,7 +58,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -71,7 +71,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
10 changes: 5 additions & 5 deletions tests/data/ngff_examples/image_ZYX.json
@@ -20,7 +20,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -33,7 +33,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -46,7 +46,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -59,7 +59,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -72,7 +72,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
20 changes: 10 additions & 10 deletions tests/data/ngff_examples/image_error.json
@@ -9,7 +9,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -18,7 +18,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -27,7 +27,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -36,7 +36,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -45,7 +45,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
@@ -65,7 +65,7 @@
],
"datasets": [
{
"path": 1,
"path": "1_0",
"coordinateTransformations": [
{
"type": "scale",
@@ -74,7 +74,7 @@
]
},
{
"path": 1,
"path": "1_1",
"coordinateTransformations": [
{
"type": "scale",
@@ -83,7 +83,7 @@
]
},
{
"path": 2,
"path": "1_2",
"coordinateTransformations": [
{
"type": "scale",
@@ -92,7 +92,7 @@
]
},
{
"path": 3,
"path": "1_3",
"coordinateTransformations": [
{
"type": "scale",
@@ -101,7 +101,7 @@
]
},
{
"path": 4,
"path": "1_4",
"coordinateTransformations": [
{
"type": "scale",
6 changes: 3 additions & 3 deletions tests/data/ngff_examples/image_error_coarsening_1.json
@@ -24,7 +24,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -38,7 +38,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -52,7 +52,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
6 changes: 3 additions & 3 deletions tests/data/ngff_examples/image_error_coarsening_2.json
@@ -24,7 +24,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -38,7 +38,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -52,7 +52,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
2 changes: 1 addition & 1 deletion tests/data/ngff_examples/image_error_pixels.json
@@ -24,7 +24,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -15,7 +15,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
10 changes: 5 additions & 5 deletions tests/data/ngff_examples/multiscale.json
@@ -22,7 +22,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -36,7 +36,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -50,7 +50,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -64,7 +64,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -78,7 +78,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
10 changes: 5 additions & 5 deletions tests/data/ngff_examples/multiscale_error.json
@@ -33,7 +33,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -47,7 +47,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -61,7 +61,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -75,7 +75,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -89,7 +89,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
39 changes: 39 additions & 0 deletions tests/data/ngff_examples/multiscale_non_unique_axis.json
@@ -0,0 +1,39 @@
{
"axes": [
{
"name": "c",
"type": "channel"
},
{
"name": "c",
"type": "channel"
},
{
"name": "y",
"type": "space",
"unit": "micrometer"
},
{
"name": "x",
"type": "space",
"unit": "micrometer"
}
],
"datasets": [
{
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
0.1625,
0.1625
]
}
]
}
],
"version": "0.4"
}
97 changes: 97 additions & 0 deletions tests/data/ngff_examples/multiscale_with_none.json
@@ -0,0 +1,97 @@
{
"coordinateTransformations": null,
"axes": [
{
"name": "c",
"type": "channel"
},
{
"name": "z",
"type": "space",
"unit": "micrometer"
},
{
"name": "y",
"type": "space",
"unit": "micrometer"
},
{
"name": "x",
"type": "space",
"unit": "micrometer"
}
],
"datasets": [
{
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
0.1625,
0.1625
]
}
]
},
{
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
0.325,
0.325
]
}
]
},
{
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
0.65,
0.65
]
}
]
},
{
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
1.3,
1.3
]
}
]
},
{
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
"scale": [
1.0,
1.0,
2.6,
2.6
]
}
]
}
],
"version": "0.4"
}
10 changes: 5 additions & 5 deletions tests/data/plate_ones.zarr/B/03/0/.zattrs
@@ -24,7 +24,7 @@
],
"datasets": [
{
"path": 0,
"path": "0",
"coordinateTransformations": [
{
"type": "scale",
@@ -38,7 +38,7 @@
]
},
{
"path": 1,
"path": "1",
"coordinateTransformations": [
{
"type": "scale",
@@ -52,7 +52,7 @@
]
},
{
"path": 2,
"path": "2",
"coordinateTransformations": [
{
"type": "scale",
@@ -66,7 +66,7 @@
]
},
{
"path": 3,
"path": "3",
"coordinateTransformations": [
{
"type": "scale",
@@ -80,7 +80,7 @@
]
},
{
"path": 4,
"path": "4",
"coordinateTransformations": [
{
"type": "scale",
5 changes: 5 additions & 0 deletions tests/data/plate_ones.zarr/B/03/0/tables/.zattrs
@@ -0,0 +1,5 @@
{
"tables": [
"FOV_ROI_table"
]
}
@@ -1,6 +1,6 @@
{
"fractal_table_version": "1",
"type": "roi_table",
"encoding-type": "anndata",
"encoding-version": "0.1.0"
"encoding-version": "0.1.0",
"fractal_table_version": "1",
"type": "roi_table"
}
4 changes: 2 additions & 2 deletions tests/dev/test_create_schema_for_single_task.py
@@ -2,7 +2,7 @@

import pytest
from devtools import debug
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.dev.lib_args_schemas import (
create_schema_for_single_task,
@@ -23,7 +23,7 @@ def test_create_schema_for_single_task_usage_1():
print(json.dumps(schema, indent=2))


@validate_arguments
@validate_call
def task_function(arg_1: int = 1):
"""
Description
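The change above is the mechanical part of the decorator migration: Pydantic v1's `validate_arguments` becomes `validate_call` in v2 (the same rename recurs in the other test modules below). A minimal sketch of the v2 decorator in isolation; the `add` function is illustrative, not part of the repository:

```python
from pydantic import ValidationError, validate_call


@validate_call
def add(x: int, y: int = 1) -> int:
    return x + y


add("2", 3)  # arguments are validated and coerced: returns 5

try:
    add(x="not-an-int")
except ValidationError as e:
    print(e)  # reports that x should be a valid integer
```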
4 changes: 2 additions & 2 deletions tests/dev/test_enum_arguments.py
@@ -2,7 +2,7 @@
from enum import Enum

from devtools import debug
from pydantic.v1.decorator import validate_arguments
from pydantic import validate_call

from fractal_tasks_core.dev.lib_args_schemas import (
create_schema_for_single_task,
@@ -21,7 +21,7 @@ class ColorA(Enum):
)


@validate_arguments
@validate_call
def task_function(
arg_A: ColorA,
arg_B: ColorB,
48 changes: 48 additions & 0 deletions tests/dev/test_optional_argument.py
@@ -0,0 +1,48 @@
import json
from typing import Optional

from pydantic.validate_call_decorator import validate_call

from fractal_tasks_core.dev.lib_args_schemas import (
create_schema_for_single_task,
)


@validate_call
def task_function(
arg1: str,
arg2: Optional[str] = None,
arg3: Optional[list[str]] = None,
):
"""
Short task description

Args:
arg1: This is the argument description
arg2: This is the argument description
arg3: This is the argument description
"""
pass


def test_optional_argument():
"""
As a first implementation of the Pydantic V2 schema generation, we are not
supporting the `anyOf` pattern for nullable attributes. This test verifies
that nullable properties are not rendered via `anyOf`, and that they are
not marked as required.

Note: future versions of fractal-tasks-core may change this behavior.
"""
schema = create_schema_for_single_task(
task_function=validate_call(task_function),
executable=__file__,
package=None,
verbose=True,
)
print(json.dumps(schema, indent=2, sort_keys=True))
print()
assert schema["properties"]["arg2"]["type"] == "string"
assert "arg2" not in schema["required"]
assert schema["properties"]["arg3"]["type"] == "array"
assert "arg3" not in schema["required"]
36 changes: 36 additions & 0 deletions tests/dev/test_tuple_argument.py
@@ -0,0 +1,36 @@
import json

from devtools import debug
from pydantic import validate_call

from fractal_tasks_core.dev.lib_args_schemas import (
create_schema_for_single_task,
)


@validate_call
def task_function(arg_A: tuple[int, int] = (1, 1)):
"""
Short task description

Args:
arg_A: Description of arg_A.
"""
pass


def test_tuple_argument():
"""
This test only asserts that `create_schema_for_single_task` runs
successfully. It also offers a quick way to experiment with new task
arguments and inspect the generated JSON Schema, without re-building
the whole fractal-tasks-core manifest.
"""
schema = create_schema_for_single_task(
task_function=task_function,
executable=__file__,
package=None,
verbose=True,
)
debug(schema)
print(json.dumps(schema, indent=2))
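For reference, Pydantic v2 emits JSON Schema draft 2020-12, where a fixed-length tuple such as `tuple[int, int]` is typically rendered with `prefixItems`. A plausible sketch of the property generated for `arg_A` (exact keys may vary between Pydantic versions):

```python
# Illustrative, not asserted by the test above:
plausible_arg_A_schema = {
    "type": "array",
    "prefixItems": [{"type": "integer"}, {"type": "integer"}],
    "minItems": 2,
    "maxItems": 2,
    "default": [1, 1],
    "title": "Arg A",
}
```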
1 change: 0 additions & 1 deletion tests/tasks/test_executable_workflow.py
@@ -51,7 +51,6 @@ def run_command(cmd: str):


def test_workflow_yokogawa_to_ome_zarr(tmp_path: Path, zenodo_images: str):

# Init
img_path = zenodo_images
zarr_dir = str(tmp_path / "tmp_out/")
2 changes: 1 addition & 1 deletion tests/tasks/test_unit_napari_workflows_wrapper.py
@@ -2,7 +2,7 @@

import pytest
from devtools import debug
from pydantic.v1.error_wrappers import ValidationError
from pydantic import ValidationError

from fractal_tasks_core.tasks.napari_workflows_wrapper import (
napari_workflows_wrapper,
2 changes: 1 addition & 1 deletion tests/tasks/test_valid_args_schemas.py
@@ -191,7 +191,7 @@ def test_args_title():
new_schema = create_schema_for_single_task(
cellvoyager_task["executable_non_parallel"]
)
definitions = new_schema["definitions"]
definitions = new_schema["$defs"]
omero_channel_def = definitions["OmeroChannel"]
# Custom-model-typed attribute of custom-model-typed task argument
window_prop = omero_channel_def["properties"]["window"]
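This rename follows from Pydantic v2 targeting JSON Schema draft 2020-12, where shared model definitions live under `$defs` (referenced as `#/$defs/...`) instead of v1's `definitions`. A minimal sketch, with simplified stand-ins for the real models:

```python
from pydantic import BaseModel


class Window(BaseModel):  # simplified stand-in
    start: int
    end: int


class OmeroChannel(BaseModel):  # simplified stand-in
    wavelength_id: str
    window: Window


# Pydantic v2 places nested models under "$defs", where v1 used "definitions":
schema = OmeroChannel.model_json_schema()
assert "Window" in schema["$defs"]
assert "definitions" not in schema
```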
7 changes: 3 additions & 4 deletions tests/tasks/test_valid_task_interface.py
@@ -26,10 +26,10 @@ def validate_command(cmd: str):
debug(stderr)
# Valid stderr includes pydantic.v1.error_wrappers.ValidationError (type
# match between model and function, but tmp_file_args has wrong arguments)
assert "pydantic.v1.error_wrappers.ValidationError" in stderr
# Valid stderr must include a mention of "unexpected keyword arguments",
assert "ValidationError" in stderr
# Valid stderr must include a mention of "Unexpected keyword argument",
# because we are including some invalid arguments
assert "unexpected keyword arguments" in stderr
assert "Unexpected keyword argument" in stderr
# Invalid stderr includes ValueError
assert "ValueError" not in stderr

@@ -41,7 +41,6 @@ def validate_command(cmd: str):

@pytest.mark.parametrize("task", manifest_dict["task_list"])
def test_task_interface(task, tmp_path):

tmp_file_args = str(tmp_path / "args.json")
tmp_file_metadiff = str(tmp_path / "metadiff.json")
with open(tmp_file_args, "w") as fout:
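The updated assertions track Pydantic v2's error wording: calling a `validate_call`-decorated function with an unknown keyword raises a `ValidationError` whose message mentions "Unexpected keyword argument". A minimal sketch; the `task` function is illustrative:

```python
from pydantic import ValidationError, validate_call


@validate_call
def task(arg: int = 1) -> int:
    return arg


try:
    task(bad_kwarg=42)  # invalid keyword, as in the test above
except ValidationError as e:
    assert "Unexpected keyword argument" in str(e)
```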
37 changes: 18 additions & 19 deletions tests/tasks/test_workflows_cellpose_segmentation.py
@@ -38,6 +38,9 @@
from fractal_tasks_core.tasks.cellpose_utils import (
CellposeChannel1InputModel,
)
from fractal_tasks_core.tasks.cellpose_utils import (
CellposeChannel2InputModel,
)
from fractal_tasks_core.tasks.cellpose_utils import (
CellposeCustomNormalizer,
)
@@ -126,7 +129,6 @@ def patched_segment_ROI_no_labels(
def patched_segment_ROI_overlapping_organoids(
x, label_dtype=None, well_id=None, **kwargs
):

import logging

logger = logging.getLogger("cellpose_segmentation.py")
@@ -170,7 +172,6 @@ def test_failures(
caplog: pytest.LogCaptureFixture,
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -240,7 +241,6 @@ def test_workflow_with_per_FOV_labeling(
caplog: pytest.LogCaptureFixture,
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -332,7 +332,7 @@ def test_workflow_with_multi_channel_input(
channel = CellposeChannel1InputModel(
wavelength_id="A01_C01", normalize=CellposeCustomNormalizer()
)
channel2 = CellposeChannel1InputModel(
channel2 = CellposeChannel2InputModel(
wavelength_id="A01_C01", normalize=CellposeCustomNormalizer()
)
for zarr_url in zarr_urls:
@@ -364,7 +364,6 @@ def test_workflow_with_per_FOV_labeling_2D(
zenodo_zarr: list[str],
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -414,7 +413,6 @@ def test_workflow_with_per_well_labeling_2D(
zenodo_images: str,
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -506,7 +504,6 @@ def test_workflow_bounding_box(
caplog: pytest.LogCaptureFixture,
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -591,7 +588,6 @@ def test_workflow_bounding_box_with_overlap(
caplog: pytest.LogCaptureFixture,
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -783,21 +779,21 @@ def test_workflow_with_per_FOV_labeling_via_script(
with args_path.open("w") as f:
json.dump(this_task_args, f, indent=2)
res = subprocess.run(shlex.split(command), **run_options) # type: ignore
print(res.stdout)
print(res.stderr)
debug(res.stdout)
debug(res.stderr)
# If this check fails after updating the cellpose version, you'll likely
# need to update the manifest to include a changed set of available models
# See https://github.com/fractal-analytics-platform/fractal-tasks-core/issues/401 # noqa E501
error_msg = (
"unexpected value; permitted: 'cyto', 'nuclei', "
"'tissuenet', 'livecell', 'cyto2', 'general', 'CP', 'CPx', "
"'TN1', 'TN2', 'TN3', 'LC1', 'LC2', 'LC3', 'LC4' "
f"(type=value_error.const; given={INVALID_MODEL_TYPE}; "
"permitted=('cyto', 'nuclei', 'tissuenet', 'livecell', "
"'cyto2', 'general', 'CP', 'CPx', 'TN1', 'TN2', 'TN3', "
"'LC1', 'LC2', 'LC3', 'LC4'))"
"Input should be 'cyto', 'nuclei', 'tissuenet', 'livecell', "
"'cyto2', 'general', 'CP', 'CPx', 'TN1', 'TN2', 'TN3', 'LC1', "
"'LC2', 'LC3' or 'LC4' [type=literal_error, "
f"input_value='{INVALID_MODEL_TYPE}', input_type=str]"
)
print(res.stderr)
print(error_msg)
assert error_msg in res.stderr
# assert error_msg in res.stderr
assert "urllib.error.HTTPError" not in res.stdout
assert "urllib.error.HTTPError" not in res.stderr

@@ -916,7 +912,6 @@ def test_workflow_secondary_labeling(
zenodo_zarr: list[str],
monkeypatch: MonkeyPatch,
):

monkeypatch.setattr(
"fractal_tasks_core.tasks.cellpose_segmentation.cellpose.core.use_gpu",
patched_cellpose_core_use_gpu,
@@ -1121,11 +1116,15 @@ def test_workflow_secondary_labeling_two_channels(
debug(zarr_urls[0])

# Secondary segmentation (nuclei)
channel2 = CellposeChannel2InputModel(
wavelength_id=channel.wavelength_id, normalize=channel.normalize
)

for zarr_url in zarr_urls:
cellpose_segmentation(
zarr_url=zarr_url,
channel=channel,
channel2=channel,
channel2=channel2,
level=0,
relabeling=True,
input_ROI_table="organoid_ROI_table",
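The rewritten `error_msg` above reflects Pydantic v2's `literal_error` format, which replaces v1's `value_error.const` wording. A minimal sketch with a two-value literal (the full cellpose model list is abbreviated here, and `CellposeModel` is a simplified stand-in):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class CellposeModel(BaseModel):  # simplified stand-in
    model_type: Literal["cyto", "nuclei"]


try:
    CellposeModel(model_type="INVALID")
except ValidationError as e:
    print(e)
    # v2 wording, e.g.:
    # Input should be 'cyto' or 'nuclei' [type=literal_error,
    # input_value='INVALID', input_type=str]
```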
6 changes: 2 additions & 4 deletions tests/test_unit_channels.py
@@ -52,7 +52,7 @@ def test_check_well_channel_labels(tmp_path):
channels=[
OmeroChannel(
wavelength_id="id_1", label="non_unique_label"
).dict(),
).model_dump(),
]
)
)
@@ -63,7 +63,7 @@ def test_check_well_channel_labels(tmp_path):
channels=[
OmeroChannel(
wavelength_id="id_1", label="non_unique_label"
).dict(),
).model_dump(),
]
)
)
@@ -78,7 +78,6 @@ def test_check_well_channel_labels(tmp_path):


def test_get_channel_from_list(testdata_path: Path):

# Read JSON data and cast into `OmeroChannel`s
with (testdata_path / "omero/channels_list.json").open("r") as f:
omero_channels_dict = json.load(f)
@@ -227,7 +226,6 @@ def test_color_validator():
],
)
def test_update_omero_channels(old_channels):

# Update partial metadata
print()
print(f"OLD: {old_channels}")
3 changes: 0 additions & 3 deletions tests/test_unit_input_models.py
@@ -11,7 +11,6 @@


def test_Channel():

# Valid

c = ChannelInputModel(wavelength_id="wavelength_id")
@@ -36,7 +35,6 @@ def test_Channel():


def test_NapariWorkflowsInput():

# Invalid

with pytest.raises(ValueError) as e:
@@ -65,7 +63,6 @@ def test_NapariWorkflowsInput():


def test_NapariWorkflowsOutput():

# Invalid

with pytest.raises(ValueError) as e:
12 changes: 11 additions & 1 deletion tests/test_unit_ngff.py
@@ -73,13 +73,23 @@ def test_Dataset(ngffdata_path):


def test_Multiscale(ngffdata_path):

# Fail due to global coordinateTransformation
with pytest.raises(NotImplementedError):
_load_and_validate(ngffdata_path / "multiscale_error.json", Multiscale)

# Success
# Fail due to non-unique axis
with pytest.raises(ValueError):
_load_and_validate(
ngffdata_path / "multiscale_non_unique_axis.json", Multiscale
)

# Success with no global `coordinateTransformations`
_load_and_validate(ngffdata_path / "multiscale.json", Multiscale)

# Success with `None` global `coordinateTransformations`
_load_and_validate(ngffdata_path / "multiscale_with_none.json", Multiscale)


def test_NgffImageMeta(ngffdata_path):
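The new test cases exercise two behaviors of the v2 `Multiscale` model: non-unique axis names are rejected with a `ValueError`, and an explicit `null` for the global `coordinateTransformations` is accepted. One plausible way to implement the uniqueness check in Pydantic v2 (a sketch with simplified stand-in models, not necessarily the library's exact code):

```python
from typing import Optional

from pydantic import BaseModel, field_validator


class Axis(BaseModel):  # simplified stand-in
    name: str
    type: Optional[str] = None


class Multiscale(BaseModel):  # simplified stand-in
    axes: list[Axis]

    @field_validator("axes")
    @classmethod
    def _axes_have_unique_names(cls, v: list[Axis]) -> list[Axis]:
        # reject duplicated axis names, e.g. two "c" channel axes
        names = [axis.name for axis in v]
        if len(names) != len(set(names)):
            raise ValueError(f"Non-unique axis names: {names}")
        return v
```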
