Thoughts on supporting tinygrad as a backend?

GitHub: https://github.com/tinygrad/tinygrad

Some of the reasons would include:

Highly torch-like, so implementing the backend wouldn't be difficult syntactically.
Simple to add accelerators, with support for less common ones such as WebGPU and AMD. tinygrad isn't as CUDA/cuBLAS-dependent as PyTorch or TensorFlow, which could help bring accelerated deep learning and inference to more users on consumer-grade hardware and in web apps.
Lightweight with minimal dependencies.

Some of the difficulties would potentially include:

Fewer core ops, meaning some ops that are common in torch, numpy, jax, and tensorflow would need extra implementation work to add.
Adding another supported backend inherently increases the maintenance cost of the library.

If approved, I'm volunteering to implement this.
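The "fewer core ops" difficulty usually amounts to composing higher-level ops out of a smaller primitive set. As a rough illustration of that kind of composition (using numpy rather than tinygrad — the function and op choices here are hypothetical, not taken from tinygrad's actual op set):

```python
import numpy as np

# Illustration only (not tinygrad code): when a backend exposes a small set
# of primitive ops, higher-level ops are composed from them. Here softmax is
# built from max, exp, sum, and broadcasting -- the sort of composition a
# backend shim would need for ops outside the core set.
def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
print(probs)  # each row sums to 1
```

In practice this is per-op glue code rather than anything conceptually hard, but it is extra surface area to implement and maintain.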