
V0.0.1 #13

Merged: 30 commits merged into master on Sep 25, 2023

Conversation

ctr26 (Collaborator) commented Aug 30, 2023

Pytest errors still aren't fixed, but this is better than it was.

afoix self-requested a review on September 1, 2023 09:18
afoix added the "good first issue" (Good for newcomers) label on Sep 1, 2023
ctr26 (Collaborator, Author) commented Sep 2, 2023

@afoix Fixed the dependencies, I think.

ctr26 marked this pull request as draft on September 2, 2023 11:49
ctr26 marked this pull request as ready for review on September 2, 2023 11:49
ctr26 (Collaborator, Author) commented Sep 6, 2023

import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')

@afoix
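Presumably this works around "too many open files" errors from PyTorch's default file-descriptor sharing strategy when a DataLoader spawns many worker processes. A minimal sketch of the call in context; the stand-in dataset and worker count here are illustrative, not from the script:

import torch
import torch.multiprocessing
from torch.utils.data import DataLoader, TensorDataset

# Back shared tensors with files (e.g. in /dev/shm) instead of file descriptors,
# so many DataLoader worker processes don't exhaust the per-process fd limit.
torch.multiprocessing.set_sharing_strategy('file_system')

dataset = TensorDataset(torch.randn(256, 1, 64, 64))  # illustrative stand-in dataset
loader = DataLoader(dataset, batch_size=32, num_workers=8)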

ctr26 (Collaborator, Author) commented Sep 6, 2023

if name=="__main__":

Fix this
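As written, the guard is missing the double underscores, so name is undefined. A minimal sketch of the standard form, with a placeholder main() for the training code:

def main():
    print("training would start here")  # placeholder for the training code

# Standard entry-point guard: only runs when the script is executed directly,
# which also keeps multiprocessing workers from re-executing the training code.
if __name__ == "__main__":
    main()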

ctr26 (Collaborator, Author) commented Sep 6, 2023

Pillow=9.5.0

afoix (Collaborator) commented Sep 15, 2023

@ctr26 the changes suggested above are now pushed to that branch; they were working on the beast.

ctr26 (Collaborator, Author) commented Sep 15, 2023

import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')

Does this make a difference?

scripts/train.py Outdated
@@ -97,9 +97,9 @@
# dataloader = DataLoader(train_dataset, batch_size=batch_size,
# shuffle=True, num_workers=2**4, pin_memory=True, collate_fn=my_collate)

model = Bio_VAE("VQ_VAE", channels=1, num_residual_layers=8, num_residual_hiddens=64)
model = BioimageEmbed("VQ_VAE", channels=1, num_residual_layers=8, num_residual_hiddens=64)
afoix (Collaborator):

Here, in the parameter "VQ_VAE", would it be possible to use whichever model we want? If yes, where are the other models set (or where is the list of possible models)?

ctr26 (Collaborator, Author):

All the models I allow are stored in:

https://github.com/ctr26/bioimage_embed/blob/v0.0.1/bioimage_embed/models/factory.py

"VQ_VAE" is deprecated now I think

scripts/train.py Outdated

model = Bio_VAE("VQ_VAE", channels=1)
model = BioimageEmbed("VQ_VAE", channels=1)
afoix (Collaborator):

Why do you overwrite model so soon?

scripts/train.py Outdated
afoix (Collaborator):

Overall, I like the new naming of the classes better, and I think those are the only changes so far in that file, so it looks good to me 👍🏻


# Deal with the filesystem
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
afoix (Collaborator):

This is still pending confirmation of whether it actually makes a difference when tested on a computer other than the beast.

# "window_size": 64*2,
"num_workers": 1,
"input_dim": (1, window_size, window_size),
# "channels": 3,
afoix (Collaborator):

Why do you comment out channels? Isn't it a needed parameter?

ctr26 (Collaborator, Author):

Channels is now handled by "input_dim"; channels used to be its own parameter in my models, but that has been changed now.
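A sketch of what that implies for the config; the values are illustrative, not the script's actual settings:

window_size = 64  # illustrative; the real value comes from the script's config
params = {
    "num_workers": 1,
    # channels are now the first element of input_dim rather than a separate "channels" key
    "input_dim": (1, window_size, window_size),  # (channels, height, width)
}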

Comment on lines 228 to 234
for idx in range(len(dataset)):
try:
image, label = dataset[idx]
# If the transform works without errors, add the index to the list of valid indices
valid_indices.append(idx)
except Exception as e:
print(f"Error occurred for image {idx}: {e}")
afoix (Collaborator):

I like the try/except that you use here a lot, good practice 🎉
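One way the collected valid_indices could then be consumed is with a standard torch.utils.data.Subset; this is a self-contained sketch with a stand-in dataset, not necessarily what the script does:

import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Illustrative stand-in dataset of (image, label) pairs.
dataset = TensorDataset(torch.randn(10, 1, 64, 64), torch.zeros(10))

valid_indices = []
for idx in range(len(dataset)):
    try:
        image, label = dataset[idx]  # the real dataset's transform may raise here
        valid_indices.append(idx)
    except Exception as e:
        print(f"Error occurred for image {idx}: {e}")

# Keep only the samples whose transform succeeded.
train_dataset = Subset(dataset, valid_indices)
loader = DataLoader(train_dataset, batch_size=4, shuffle=True)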

batch_size=args.batch_size,
shuffle=True,
num_workers=args.num_workers,
# transform=transform,
afoix (Collaborator):

Why don't you use transform here anymore? Is it because you already apply it earlier?

ctr26 (Collaborator, Author):

The dataset has already been transformed from images to distance_matrix tensors.
Ordinarily I would put the transform in the dataloader, but here I also use the transform to check for valid images, since the conversion process doesn't always work (ln:231).

Comment on lines 246 to 248

# dataloader = DataLoader(train_dataset, batch_size=batch_size,
# shuffle=True, num_workers=2**4, pin_memory=True, collate_fn=my_collate)
afoix (Collaborator):

Suggested change (remove these commented-out lines):
# dataloader = DataLoader(train_dataset, batch_size=batch_size,
# shuffle=True, num_workers=2**4, pin_memory=True, collate_fn=my_collate)

Comment on lines 255 to 261
# model = Mask_VAE("VAE", 1, 64,
# # hidden_dims=[32, 64],
# image_dims=(interp_size, interp_size))

# model = Mask_VAE(VAE())
# %%
# lit_model = LitAutoEncoderTorch(model)
afoix (Collaborator):

Suggested change (remove these commented-out lines):
# model = Mask_VAE("VAE", 1, 64,
# # hidden_dims=[32, 64],
# image_dims=(interp_size, interp_size))
# model = Mask_VAE(VAE())
# %%
# lit_model = LitAutoEncoderTorch(model)

# %%


# model_name = model._get_name()
afoix (Collaborator):

Suggested change (remove this commented-out line):
# model_name = model._get_name()

ctr26 (Collaborator, Author) commented Sep 20, 2023

@ctr26 remove all deprecations and commented-out lines

ctr26 merged commit 73c145d into master on Sep 25, 2023
ctr26 (Collaborator, Author) commented Sep 25, 2023

@afoix merged

Labels: good first issue (Good for newcomers)
Projects: none yet
2 participants