PyTorch includes CUDA streams, which let multiple batches of GPU work be queued and executed concurrently on the same device.
However, it appears that TorchSharp does not support CUDA streams. I searched the codebase and couldn't find an equivalent of PyTorch's torch.cuda.Stream class, or C# wrappers for methods such as wait_stream(), default_stream(), and record_stream().
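For reference, here is a rough sketch of how the stream API mentioned above is used from Python. This is only meant to illustrate the usage pattern that a TorchSharp wrapper would need to cover (the tensor shapes and the matmul are arbitrary examples), not a proposal for what the C# surface should look like:

```python
import torch

device = torch.device("cuda")

# A side stream in addition to the device's default stream.
side = torch.cuda.Stream(device=device)

# Issued on the default stream.
x = torch.randn(1024, 1024, device=device)

# Make the side stream wait for the default stream, so x is fully
# written before the matmul below reads it.
side.wait_stream(torch.cuda.default_stream(device))

with torch.cuda.stream(side):
    # Kernels launched here are enqueued on `side` and can overlap
    # with work on other streams.
    y = x @ x
    # y's memory was allocated while `side` was current; record that
    # the default stream will also use it, so the caching allocator
    # doesn't reuse the memory too early.
    y.record_stream(torch.cuda.default_stream(device))

# Back on the default stream: wait for the side stream's queued work
# to finish before consuming y.
torch.cuda.default_stream(device).wait_stream(side)
print(y.sum().item())
```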
Great, thanks for considering this. Streams can be quite important for good performance when running inference from multiple threads, so I'd be very happy to see them supported in TorchSharp.