
[WIP] Ports generative networks #7196

Closed · wants to merge 12 commits into from

Conversation

@marksgraham (Contributor) commented Nov 3, 2023

Partially fixes #6676.

Description

Ports the generative model networks from MONAI Generative into MONAI core (starting with AutoEncoderKL under monai/networks/nets), together with tests and documentation.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

@marksgraham (Contributor, Author)

I'm encountering torchscript errors, which I will document as I implement each network:

  • AutoEncoderKL has an error in the test_script_save module due to the option to do activation checkpointing using torch.utils.checkpoint. I noticed that other networks in MONAI that use activation checkpointing, such as SwinUNETR, do not test for torchscript saving - maybe it is OK to drop this requirement for networks that use checkpointing? A minimal sketch of the failing pattern is included below.
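For context, a minimal sketch of the pattern that trips up TorchScript (this is not the MONAI code; the module and sizes are made up): torch.utils.checkpoint is a Python-level function that torch.jit.script cannot compile, so scripting fails whenever a checkpointed branch is present in forward.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class BlockWithCheckpointing(nn.Module):
    """Toy block with optional activation checkpointing."""

    def __init__(self, use_checkpointing: bool = False) -> None:
        super().__init__()
        self.conv = nn.Conv2d(4, 4, kernel_size=3, padding=1)
        self.use_checkpointing = use_checkpointing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.use_checkpointing:
            # trades compute for memory by recomputing activations in backward
            return checkpoint(self.conv, x, use_reentrant=False)
        return self.conv(x)


# torch.jit.script(BlockWithCheckpointing())  # fails: checkpoint is not scriptable
```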

@KumoLiu (Contributor) commented Nov 9, 2023

Hi @marksgraham, yes it's fine to skip the torchscript test.
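As an illustration only (the test class name and reason string are hypothetical, not the PR's actual tests), skipping could look like this in a standard unittest-based test:

```python
import unittest


class TestAutoEncoderKLScripting(unittest.TestCase):
    @unittest.skip("activation checkpointing (torch.utils.checkpoint) is not TorchScript-compatible")
    def test_script(self):
        # intentionally skipped: scripting fails for checkpointed networks
        ...
```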

@KumoLiu (Contributor) left a comment

Thanks for the PR!
I left a few comments inline.
There appear to be some block names that duplicate existing ones; is it possible for us to combine some of these?

cc @ericspod @wyli @Nic-Ma

Resolved review comments on monai/networks/nets/autoencoderkl.py (outdated)

def __init__(self, spatial_dims: int, in_channels: int) -> None:
    super().__init__()
    # pad one element on the trailing edge of each spatial dimension
    self.pad = (0, 1) * spatial_dims
Contributor:

Is this pad value being hardcoded for a specific reason? To make this class more general, perhaps we could add a parameter or pad_kwargs? What do you think?

Contributor (Author):

This is the padding value that allows for downsampling by a factor of 2 at each level, so that the output shape of the network matches the input shape (as long as the input shape is even). I can't think of a use case for changing the padding here, specifically. I suppose we might want to find a way to support a user supplying non-even inputs, but I'm not even sure if MONAI's base AutoEncoder network supports that right now.
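For concreteness, a small hypothetical demo (2D case, made-up channel count, assuming the block applies this pad with F.pad before a stride-2, kernel-3 convolution as the discussion suggests) of how the trailing (0, 1) padding per spatial dimension halves an even input size:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)              # even spatial size
pad = (0, 1) * 2                            # (0, 1) per spatial dim, as in the snippet above
x_padded = F.pad(x, pad)                    # -> (1, 8, 17, 17)
conv = torch.nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=0)
print(conv(x_padded).shape)                 # torch.Size([1, 8, 8, 8]), i.e. half of 16
```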

Member:

We have a padding argument for the AutoEncoder which can take any of the values compatible with the PyTorch Conv classes. By default this is set to a value that ensures stride-1 convolutions produce outputs with the same shape as the inputs, but it can be set to other things.

The padding you have here is specifically for convolutions with an odd kernel size (e.g. 3x3 or 3x3x3)? Instead of supplying a padding value, just don't set padding in Convolution and let that class handle how to pad for downsampling.

I'm not sure you need this class anyway, since the downsampling you want can be done with Convolution with stride=2, kernel_size=3, conv_only=True.
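A rough sketch of that suggestion (shapes are illustrative; note the block's keyword is strides in the current monai.networks.blocks.Convolution signature): the default same-padding already halves even inputs at stride 2.

```python
import torch
from monai.networks.blocks import Convolution

downsample = Convolution(
    spatial_dims=2,
    in_channels=8,
    out_channels=8,
    strides=2,
    kernel_size=3,
    conv_only=True,  # plain convolution, no norm/activation/dropout
)
x = torch.randn(1, 8, 16, 16)
print(downsample(x).shape)  # torch.Size([1, 8, 8, 8])
```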

Resolved review comment on monai/networks/nets/autoencoderkl.py (outdated)
@marksgraham (Contributor, Author)

> Thanks for the PR! I left a few comments inline. There appear to be some block names that duplicate existing ones; is it possible for us to combine some of these?
>
> cc @ericspod @wyli @Nic-Ma

Thanks for reviewing - good point on the blocks, I'll see how many of them can be combined.

I've still got a few more networks to add, too, just adding bits to the PR as I go along.

@marksgraham (Contributor, Author)

> Thanks for the PR! I left a few comments inline. There appear to be some block names that duplicate existing ones; is it possible for us to combine some of these?
>
> cc @ericspod @wyli @Nic-Ma

Would it be OK to defer the merging of some of the duplicate blocks (e.g. Attention, Resblock) until porting the rest of MONAI Generative is done? Until it is fully ported, we will be supporting the MONAI Gen repo too, and keeping the code here tightly coupled to that codebase will make it much easier to push any bugfixes etc. from there to here. Once the port is done, MONAI Gen will be archived and users will be redirected here, and it will then be easier to devote our efforts to integrating the ported code and MONAI Core more tightly through better use of shared blocks, etc.

@KumoLiu (Contributor) commented Nov 13, 2023

> Would it be OK to defer the merging of some of the duplicate blocks (e.g. Attention, Resblock) until porting the rest of MONAI Generative is done?

Yes, that should be fine. We can also create another ticket for refactoring.

@ericspod (Member)

> Would it be OK to defer the merging of some of the duplicate blocks (e.g. Attention, Resblock) until porting the rest of MONAI Generative is done?

Duplicate code/functionality isn't good and doesn't look good either. I get your reasoning for pushing off a refactor, so I would suggest that we mark redundant definitions as private (prefix _ to their names) and state in their docstrings that they shouldn't be used and will be factored out later. What we don't want is users relying on these types directly and then having them disappear, or us having to replace them with shim types we have to maintain.
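A hypothetical illustration of that convention (class name made up, not the PR's code): keep the ported duplicate private and flag it as temporary in the docstring.

```python
from torch import nn


class _ResBlock(nn.Module):
    """Temporary private port of the MONAI Generative ResBlock.

    Not part of the public API: this definition will be merged with the
    existing MONAI blocks in a later refactor and should not be used directly.
    """
```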

@marksgraham (Contributor, Author)

Ticket for the refactoring is up here

@KumoLiu (Contributor) commented Nov 23, 2023

Hi @marksgraham, thanks for your hard work on this PR.
However, I believe the PR is too large, making it difficult to review and to ensure that the unit tests cover everything. Is it possible to split this PR into multiple PRs, perhaps one network per PR? What do you think?
cc @ericspod @Nic-Ma

@marksgraham (Contributor, Author)

Yes, that seems reasonable to me! I'll split this into a PR for each network over the coming week :)

@marksgraham closed this Dec 4, 2023
Successfully merging this pull request may close these issues: Move MONAI Generative into core.