Hi, greetings from across the bridge (huhu)!

First of all, congratulations on winning the NeurIPS award! I've read your paper; as a beginner it took me more than a month to understand it, and I still have a long way to go. Even so, I've implemented the paper in PyTorch (with help from ChatGPT) and tested it on several PINN cases, where it works fine.

The major difference is that, since PyTorch doesn't have random jets, I have to repeat the autograd call, treating each partial derivative in sequence (come to think of it, this probably adds computational overhead).

Will there be any PyTorch support in the future?
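For concreteness, the per-derivative loop described above might look like the minimal sketch below (the function names and shapes here are hypothetical stand-ins, not the paper's code): each second partial costs one extra `autograd.grad` call.

```python
import torch

def laplacian(u, x):
    """Sum of second partials d2u/dx_i2 via one autograd call per coordinate.

    `u` and `x` are stand-ins for a PINN and a single collocation point.
    """
    x = x.detach().requires_grad_(True)
    out = u(x)
    # First-order gradient, kept in the graph so it can be differentiated again.
    (grad,) = torch.autograd.grad(out, x, create_graph=True)
    lap = torch.zeros((), dtype=x.dtype)
    # One extra backward pass per input coordinate -- the sequential
    # overhead that Taylor-mode jets would avoid.
    for i in range(x.numel()):
        (g2,) = torch.autograd.grad(grad[i], x, retain_graph=True)
        lap = lap + g2[i]
    return lap

# Sanity check on u(x) = sum(x_i^3), whose Laplacian is 6 * sum(x_i).
x = torch.tensor([1.0, 2.0, 3.0])
print(laplacian(lambda t: (t ** 3).sum(), x))  # tensor(36.)
```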
Thanks for your interest in this work! To your question: supporting high-order forward-mode AD is non-trivial, and I'm not sure whether the PyTorch team has it on their development roadmap.
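For what it's worth, PyTorch's `torch.func` API does expose first-order forward-mode AD, and `jvp` calls can be nested to reach individual higher-order directional derivatives, just without the batched Taylor-mode jets the paper relies on. A rough sketch (toy function, not the paper's method):

```python
import torch
from torch.func import jvp

def f(x):
    return x ** 3  # toy scalar function; f'(x) = 3x^2, f''(x) = 6x

x = torch.tensor(2.0)
v = torch.tensor(1.0)  # direction for the directional derivatives

# First-order directional derivative f'(x) * v via forward-mode AD.
def df(x):
    return jvp(f, (x,), (v,))[1]

# Nesting jvp yields the second-order directional derivative f''(x) * v^2.
d1, d2 = jvp(df, (x,), (v,))
print(d1, d2)  # tensor(12.) tensor(12.)
```

Each additional order still requires another nested call, so the cost grows with the derivative order rather than being amortized as in Taylor-mode AD.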