Where is the 3-layer 1-D ResNet? #22
I modified num_head, which was originally 2. That said, I have to say the code has some rough edges: I used a new dataset with more channels than seq_len, the dimensions got swapped, and everything went haywire. Another point: the data augmentation uses the torch.cuda package, yet you also provide an option to run without a GPU. I guess this is a very early version, right? Could you provide the final version?
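One device-agnostic way to avoid the torch.cuda dependency in augmentation is to allocate on whatever device the input already lives on. A minimal sketch, assuming a simple noise-jitter augmentation (the function name and sigma default are illustrative, not from the repo):

```python
import torch

def jitter(x: torch.Tensor, sigma: float = 0.03) -> torch.Tensor:
    # randn_like allocates the noise on x's device and dtype,
    # so the same code runs with or without a GPU
    return x + sigma * torch.randn_like(x)

x = torch.randn(2, 3, 16)   # (N, C, L) batch of series
aug = jitter(x)             # same shape, same device as x
```

Writing augmentations against `x.device` (implicitly, via `*_like` constructors) removes the need for a separate CPU/GPU code path.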
Can your model handle multivariate time series forecasting? If not, why do you provide the option of multiple channels?
I think you can find it in the diff of 6ed4dae.
These seem to be simple conv operations stacked in sequence. Where exactly is the residual connection?
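For reference, a 1-D ResNet block is more than stacked convs: it adds the input back onto the conv path. A minimal sketch of what such a block typically looks like (this is not the repo's code; class name, kernel size, and channel counts are illustrative):

```python
import torch
import torch.nn as nn

class BasicBlock1d(nn.Module):
    """Two 1-D convs plus an identity (or 1x1-conv) shortcut."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, 7, stride=stride, padding=3, bias=False)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, 7, padding=3, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.shortcut = nn.Sequential()  # identity when shapes already match
        if stride != 1 or in_ch != out_ch:
            # 1x1 conv so the skip path matches the main path's shape
            self.shortcut = nn.Sequential(
                nn.Conv1d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm1d(out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))  # the residual addition

block = BasicBlock1d(1, 64, stride=2)
y = block(torch.randn(4, 1, 128))   # (N, C, L) -> (4, 64, 64)
```

If the repo's blocks have no `out + shortcut(x)` step anywhere, they are plain conv stacks, not residual blocks.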
In the paper, you mentioned using ResNet, but there is a Transformer in the code.
It still uses PyTorch's built-in implementation, where the input is supposed to be (seq_len, N, D), but you just pass a (N, 1, seq_len) tensor.
Is this the reason why the Transformer is not as good as the ResNet?
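If a (N, 1, seq_len) tensor really is fed straight into PyTorch's built-in `nn.TransformerEncoder`, which by default expects (seq_len, N, d_model), attention would run over the wrong axis. A hedged sketch of the usual fix: permute to sequence-first and project the channel dimension up to d_model (the dimensions and `proj` layer here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

N, C, L = 8, 1, 128        # batch, channels, sequence length
x = torch.randn(N, C, L)   # (N, 1, seq_len), as the data pipeline produces it

d_model, nhead = 64, 2
proj = nn.Linear(C, d_model)  # lift each time step from C to d_model features
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead),
    num_layers=2)

# nn.Transformer* defaults to (seq_len, N, d_model): reorder axes first
x_seq = x.permute(2, 0, 1)    # (seq_len, N, C)
out = encoder(proj(x_seq))    # (seq_len, N, d_model)
```

With the axes left as (N, 1, seq_len), the layer would treat the batch as the sequence and seq_len as the feature dimension, which could plausibly explain the Transformer underperforming the ResNet.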