TL;DR: Is it possible to expose a fire-and-forget-like version of the async read/write functions when a CUstream is given?
The documentation for FileHandle::read_async() (and write_async()) states that these functions return a StreamFuture which "must be kept alive until all data has been read to disk".
Ostensibly this is so that stream synchronization can be RAII-ified (StreamFuture indeed synchronizes the stream in its dtor).
But if I pass a non-null CUstream, why must I keep the StreamFuture alive? If I have given kvikio a stream, then surely kvikio may assume that I will do other work on that same stream.
For reference, I am thinking of the following possible workflow:
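A minimal sketch of such a pipeline (file name, sizes, kernel, and launch configuration are placeholders; the read_async signature is as I recall it from kvikio's headers, so treat it as approximate):

```cuda
#include <cstddef>
#include <cuda.h>
#include <kvikio/file_handle.hpp>

// Placeholder for whatever kernel consumes the freshly read bytes.
__global__ void consume_kernel(char* data, std::size_t n) { /* ... */ }

void enqueue_pipeline(CUstream stream, char* dev_buf, std::size_t nbytes)
{
  kvikio::FileHandle f("input.bin", "r");

  // I would like this to be fire-and-forget on `stream`...
  auto future = f.read_async(dev_buf, nbytes, 0, 0, stream);

  // ...so that I can immediately enqueue the consumer on the same stream:
  consume_kernel<<<256, 256, 0, stream>>>(dev_buf, nbytes);

  // But `future` (and `f`) are destroyed here; the StreamFuture dtor
  // synchronizes the stream, so the "pipeline" blocks unless I ferry
  // both objects further up the call chain.
}
```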
In this example, if I want to keep the GPU pipeline alive, I also need to ferry around the StreamFuture object...
The other overload taking pointers is also not ergonomic, because it requires that bytes_read_p is non-NULL. I know the values of all my parameters right now, and I don't care about how many bytes were read by the GPU task (because I'm passing the pointer on to the kernel).
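Concretely, using that overload looks something like the sketch below (signature recalled approximately; names are illustrative), and every pointed-to local, including the bytes_read I never inspect, has to outlive the enqueued work:

```cpp
#include <cstddef>
#include <sys/types.h>  // off_t, ssize_t
#include <cuda.h>
#include <kvikio/file_handle.hpp>

// Approximate signature of the pointer-taking overload:
//   read_async(void* devPtr_base, std::size_t* size_p, off_t* file_offset_p,
//              off_t* devPtr_offset_p, ssize_t* bytes_read_p, CUstream stream)
void enqueue_read(kvikio::FileHandle& f, void* dev_buf,
                  std::size_t nbytes, CUstream stream)
{
  std::size_t size        = nbytes;  // values already known...
  off_t       file_offset = 0;       // ...yet they must be passed as pointers
  off_t       dev_offset  = 0;
  ssize_t     bytes_read  = 0;       // must be non-NULL even if never inspected

  f.read_async(dev_buf, &size, &file_offset, &dev_offset, &bytes_read, stream);
  // And, taken literally, this sketch is broken: all five locals go out of
  // scope here while the enqueued read may still be writing through them.
}
```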
The other overload taking pointers is also not ergonomic, because it requires that bytes_read_p is non-NULL.
This is the reason why KvikIO introduced the StreamFuture object. The only async API cuFile provides takes its arguments as pointers; there is no way to provide the arguments by value :/
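For reference, the cuFile stream API is declared roughly like this (approximate; see cufile.h for the authoritative form), so KvikIO has to keep the by-value arguments alive somewhere:

```cpp
// Approximate declaration from cufile.h (consult the header for the exact form):
CUfileError_t cuFileReadAsync(CUfileHandle_t fh,
                              void*          bufPtr_base,
                              size_t*        size_p,
                              off_t*         file_offset_p,
                              off_t*         bufPtr_offset_p,
                              ssize_t*       bytes_read_p,
                              CUstream       stream);
// Every size, offset, and result argument is a pointer that has to remain
// valid until the operation completes on `stream`.
```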
Hmmm, perhaps kvikio could provide a "default" pointer to use here in that case: some kind of internal static pointer that kvikio registers appropriately with cuFile and uses whenever the caller passes NULL.
But to handle errors, we would need to have a bytes_read_p per call. We could use cuStreamAddCallback to clean up bytes_read_p et al. (sketched below), but what about the FileHandle?
In your example, the clean up of FileHandle also needs to be delayed. Is this required?
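Roughly what I have in mind for the bytes_read_p part (hypothetical types and names; note that stream callbacks run on a CUDA driver thread and must not make CUDA API calls):

```cpp
#include <cstddef>
#include <sys/types.h>  // off_t, ssize_t
#include <cuda.h>

// Hypothetical per-call state owning the pointer-backed arguments that
// cuFile requires, so each in-flight read has its own bytes_read slot.
struct AsyncReadState {
  std::size_t size;
  off_t file_offset;
  off_t dev_offset;
  ssize_t bytes_read;
};

// Runs on a CUDA driver thread once all prior work on `stream` has finished;
// no CUDA API calls are allowed in here.
static void CUDA_CB cleanup_async_read(CUstream, CUresult status, void* user_data)
{
  auto* state = static_cast<AsyncReadState*>(user_data);
  if (status != CUDA_SUCCESS ||
      state->bytes_read != static_cast<ssize_t>(state->size)) {
    // Surface the error out-of-band (flag, log, ...); throwing here is not an option.
  }
  delete state;
}

// After enqueuing the pointer-based read with &state->size, &state->file_offset,
// &state->dev_offset and &state->bytes_read on `stream`:
//   cuStreamAddCallback(stream, cleanup_async_read, state, 0);
```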