Quick question on how this performs compared to PyTorch on M1 #72
Hi there! At the moment, I haven't done any speed comparisons between RTNeural and inference engines that run on GPUs (or other hardware devices). With real-time audio systems, there are some constraints that can make it difficult to process real-time signals on a GPU, although there has recently been some exciting progress in that area (see, e.g., this paper, or the things that GPU Audio is working on). Personally, I haven't tried experimenting with real-time processing on the GPU just yet. That said, it probably would be useful to do some performance comparison measurements against inference engines that run on a GPU, particularly for the size of networks that I usually use with RTNeural. I imagine that information might be interesting for folks outside of the audio realm as well!
Hi there, I'm also interested in the speed comparison results. Have you conducted any performance comparison tests, and would you be willing to share the results? Thank you.
Sure thing! The latest performance comparisons I've done are in this repository. The plots currently shown are from a 2018 Intel Mac. I've had a few thoughts recently about improving the performance comparisons, but haven't had time to get started on them yet. If you're interested in contributing, I'd be happy to chat!
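For anyone who wants a rough number on their own machine, here's a minimal per-sample timing sketch using RTNeural's run-time `Model` API, following the pattern from the RTNeural README. The `"model.json"` file name and the sample count are assumptions for illustration; this is not the actual benchmark code from the repository linked above.

```cpp
// Minimal RTNeural per-sample timing sketch (illustrative, not the repo's benchmark).
#include <chrono>
#include <fstream>
#include <iostream>
#include <vector>
#include <RTNeural/RTNeural.h>

int main()
{
    // "model.json" is a hypothetical model file exported from Python.
    std::ifstream jsonStream("model.json", std::ifstream::binary);
    auto model = RTNeural::json_parser::parseJson<float>(jsonStream);
    model->reset();

    constexpr int numSamples = 48000; // one second of audio at 48 kHz
    std::vector<float> input(numSamples, 0.0f);
    float out = 0.0f;

    const auto start = std::chrono::steady_clock::now();
    for (int n = 0; n < numSamples; ++n)
        out = model->forward(&input[n]); // sample-by-sample, as in real-time audio
    const auto stop = std::chrono::steady_clock::now();

    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::cout << "Processed " << numSamples << " samples in " << us
              << " us (last output: " << out << ")\n";
    return 0;
}
```

Note that sample-by-sample processing is the realistic workload for real-time audio; batching many samples into one call would favor GPU engines but doesn't match how an audio plugin actually runs.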
Hi, just wondering if you know, off the top of your head, whether RTNeural runs faster than the accelerated M1 PyTorch mode, or faster than using TensorRT on NVIDIA GPUs? I want to test whether I can make a super-fast Transformer to use with the GuitarML repos, which currently use RTNeural as the speedy inference engine. It would be cool to run whatever complicated architecture super fast on MacBooks so that neural plugins work with Logic (rough timing sketch after the links below).
Accelerated M1 PyTorch: https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/
NVIDIA TensorRT: https://docs.nvidia.com/deeplearning/tensorrt/index.html
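For the PyTorch side of such a comparison, something along these lines with LibTorch (the PyTorch C++ API) could time a TorchScript model on Apple's MPS backend. The `"model.pt"` file, the block size, and MPS availability in your LibTorch build (macOS arm64, recent versions) are all assumptions; treat this as a sketch rather than a vetted benchmark.

```cpp
// Hedged LibTorch timing sketch for a TorchScript model on the MPS backend.
#include <chrono>
#include <iostream>
#include <torch/script.h>

int main()
{
    torch::Device device("mps"); // swap for "cpu" if MPS is unavailable
    auto module = torch::jit::load("model.pt", device); // hypothetical traced model
    module.eval();

    torch::NoGradGuard noGrad;
    auto input = torch::zeros({1, 1, 4096}, device); // block of samples, not per-sample

    module.forward({input}); // warm-up run so one-time setup isn't counted

    torch::Tensor out;
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < 100; ++i)
        out = module.forward({input}).toTensor();
    out = out.to(torch::kCPU); // copying back forces pending async GPU work to finish
    const auto stop = std::chrono::steady_clock::now();

    const auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
    std::cout << "100 forward passes took " << us << " us\n";
    return 0;
}
```

Keep in mind the two sketches measure different things: the RTNeural loop runs one sample at a time under real-time constraints, while the LibTorch loop processes whole blocks, so the numbers aren't directly comparable without accounting for block size and latency.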