V0.0.1 #13
Conversation
…aybe make this a separate module/repo
@afoix Fixed deps, I think.
Fix this:
Pillow=9.5.0
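Presumably the issue is the pin syntax: if this line lives in a pip requirements file (an assumption; it could also be a conda environment spec, where a single = is valid), an exact pin would be written as Pillow==9.5.0 with a double equals sign.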
@ctr26 the changes suggested above are pushed in that branch now - they were working on the beast.
Does this make a difference?
scripts/train.py (Outdated)
@@ -97,9 +97,9 @@
  # dataloader = DataLoader(train_dataset, batch_size=batch_size,
  # shuffle=True, num_workers=2**4, pin_memory=True, collate_fn=my_collate)

- model = Bio_VAE("VQ_VAE", channels=1, num_residual_layers=8, num_residual_hiddens=64)
+ model = BioimageEmbed("VQ_VAE", channels=1, num_residual_layers=8, num_residual_hiddens=64)
Here, in the parameter "VQ_VAE", would it be possible to use the model we want? If yes, where are the other models set (or where is the list of possible models)?
All the models I allow are stored in:
https://github.com/ctr26/bioimage_embed/blob/v0.0.1/bioimage_embed/models/factory.py
I think "VQ_VAE" is deprecated now.
scripts/train.py (Outdated)
- model = Bio_VAE("VQ_VAE", channels=1)
+ model = BioimageEmbed("VQ_VAE", channels=1)
Why do you overwrite model so soon?
scripts/train.py (Outdated)
Overall, I like the new names of the classes more, and I think those are the only changes so far in that file. So it looks good to me 👍🏻
# Deal with the filesystem
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
This is still pending confirmation of whether it actually makes a difference when tested on a computer other than the beast.
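One quick way to check on another machine is to print which sharing strategies PyTorch reports there and which one is active after the call above; a minimal sketch, assuming nothing beyond plain PyTorch:

import torch.multiprocessing

# Strategies available on this platform, e.g. {'file_descriptor', 'file_system'} on Linux
print(torch.multiprocessing.get_all_sharing_strategies())

torch.multiprocessing.set_sharing_strategy('file_system')
print(torch.multiprocessing.get_sharing_strategy())  # should now report 'file_system'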
# "window_size": 64*2,
"num_workers": 1,
"input_dim": (1, window_size, window_size),
# "channels": 3,
Why do you comment out channels? Isn't it a needed parameter?
Channels is (now) handled by "input_dim"; channels used to be its own parameter in my models, but that has been changed.
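In other words, the channel count travels as the first element of input_dim. A small sketch of the config above (the window_size value is an assumed placeholder, not taken from the script):

window_size = 128  # assumed placeholder; the script defines its own value

params = {
    "num_workers": 1,
    # (channels, height, width): the leading 1 replaces the old "channels" parameter
    "input_dim": (1, window_size, window_size),
}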
scripts/shapes/shape_embed.py (Outdated)
for idx in range(len(dataset)):
    try:
        image, label = dataset[idx]
        # If the transform works without errors, add the index to the list of valid indices
        valid_indices.append(idx)
    except Exception as e:
        print(f"Error occurred for image {idx}: {e}")
I really like the try/except that you use here, good practice 🎉
scripts/shapes/shape_embed.py (Outdated)
batch_size=args.batch_size,
shuffle=True,
num_workers=args.num_workers,
# transform=transform,
Why don't you use transform here anymore? Is it because you already apply it earlier?
The dataset has already been transformed from images to distance_matrix tensors.
Ordinarily I would put the transform in the dataloader, but here I also use the transform to check for valid images, since the conversion process doesn't always work (ln:231).
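A minimal sketch of that pattern, assuming a standard torch Dataset and the valid_indices list collected in the try/except loop above (names beyond those are illustrative only):

from torch.utils.data import DataLoader, Subset

# Keep only the samples whose transform succeeded during the validity check
valid_dataset = Subset(dataset, valid_indices)

dataloader = DataLoader(
    valid_dataset,
    batch_size=args.batch_size,
    shuffle=True,
    num_workers=args.num_workers,
)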
scripts/shapes/shape_embed.py (Outdated)
# dataloader = DataLoader(train_dataset, batch_size=batch_size,
# shuffle=True, num_workers=2**4, pin_memory=True, collate_fn=my_collate)
Suggested change: delete these two commented-out dataloader lines.
scripts/shapes/shape_embed.py (Outdated)
# model = Mask_VAE("VAE", 1, 64,
# # hidden_dims=[32, 64],
# image_dims=(interp_size, interp_size))

# model = Mask_VAE(VAE())
# %%
# lit_model = LitAutoEncoderTorch(model)
Suggested change: delete these commented-out Mask_VAE / lit_model lines as well.
scripts/shapes/shape_embed.py (Outdated)
# %%

# model_name = model._get_name()
Suggested change: delete this commented-out line too.
@ctr26 remove all deprecations and commented-out lines
@afoix merged
Still haven't fixed the pytest errors, but this is better than it was.