More about issue 4 #7
The image size (e.g., 64) in DAT is the input image size during training; that is, we train DAT on 64 × 64 input patches. But this is just for training convenience. DAT can support images of any size. For example, under the SR-×2 task, an input of size H × W produces an output of size 2H × 2W.
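To make "any size" concrete, here is a minimal sketch of the input/output shape contract under SR-×2. The `model` variable is an assumption standing in for a DAT ×2 network that has already been built and loaded; the sizes are kept at multiples of typical window sizes so internal padding does not come into play:

```python
import torch

# `model` is assumed to be a pretrained DAT x2 network (hypothetical setup).
model.eval()

with torch.no_grad():
    # Training used 64x64 inputs, but inference accepts other sizes too.
    for h, w in [(64, 64), (96, 128)]:
        lq = torch.rand(1, 3, h, w)  # a low-resolution input batch
        sr = model(lq)
        print(tuple(sr.shape))       # (1, 3, 2*h, 2*w) under SR-x2
```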
How important is it to set img_size, then? Also, as a sidenote, thanks for making ×2 models and models with different inference requirements.
The img_size equals the patch size, but this is not mandatory. In fact, img_size exists to simplify the calculation of the attention mask (for SW-SA) during training. Since the input size does not change during training, we precompute and cache the mask for img_size to speed up the calculation. If the input image size is not equal to img_size, the mask is recalculated (https://github.com/zhengchen1999/DAT/blob/main/basicsr/archs/dat_arch.py#L398).
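For readers who want to see what that recalculation involves, below is a simplified, Swin-style square-window version of the shifted-window attention mask. DAT's actual code at the link above uses its own rectangle-window variant, so treat this only as an illustration of why the mask depends on the spatial size:

```python
import torch

def calculate_mask(h: int, w: int, window_size: int, shift_size: int) -> torch.Tensor:
    """Swin-style attention mask for shifted windows on an h x w plane."""
    # Label regions that end up in the same window after the cyclic shift
    # but originate from different parts of the image.
    img_mask = torch.zeros(1, h, w, 1)
    slices = (slice(0, -window_size),
              slice(-window_size, -shift_size),
              slice(-shift_size, None))
    cnt = 0
    for h_s in slices:
        for w_s in slices:
            img_mask[:, h_s, w_s, :] = cnt
            cnt += 1

    # Partition the label map into windows and compare labels pairwise:
    # positions with different labels must not attend to each other.
    mask = img_mask.view(1, h // window_size, window_size,
                         w // window_size, window_size, 1)
    mask = mask.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size)
    attn_mask = mask.unsqueeze(1) - mask.unsqueeze(2)
    return attn_mask.masked_fill(attn_mask != 0, -100.0).masked_fill(attn_mask == 0, 0.0)

# The mask depends on (h, w): it is cached once for img_size during training,
# and recomputed whenever a differently sized input arrives at test time.
mask_64 = calculate_mask(64, 64, window_size=8, shift_size=4)
mask_256 = calculate_mask(256, 256, window_size=8, shift_size=4)
print(mask_64.shape, mask_256.shape)  # (64, 64, 64) vs (1024, 64, 64)
```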
Thanks for the quick reply.
I have the same question as the author of issue 4, since DAT sets the image size to 64. For example, if I have a 256×256 image as input, how should I preprocess it? Just resize?
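Based on the author's replies above, no resizing should be needed: the 256×256 image can be fed to the network directly, and only the attention mask is recomputed internally. A sketch of what that might look like follows; the `model` variable and file names are illustrative, not the repo's actual test script:

```python
import torch
import torchvision.transforms.functional as TF
from PIL import Image

# `model` is assumed to be a pretrained DAT x2 network (hypothetical setup).
model.eval()

img = Image.open('input_256x256.png').convert('RGB')
lq = TF.to_tensor(img).unsqueeze(0)  # (1, 3, 256, 256), values in [0, 1]

with torch.no_grad():
    sr = model(lq)                   # (1, 3, 512, 512) for the x2 model

TF.to_pil_image(sr.squeeze(0).clamp(0, 1)).save('output_512x512.png')
```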