
Support Int4/Int8.. Type #162

Open
AntiAnimeGeneral opened this issue Oct 10, 2024 · 1 comment

Comments

AntiAnimeGeneral (Contributor) commented Oct 10, 2024

It is difficult to run an LLM with f32/f16 weights on a PC. To perform LLM inference at the edge, Q4 quantization is almost a necessity. Perhaps Int4 could be supported as a built-in type.
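For context, "Q4" here refers to 4-bit block quantization: a group of f32 weights is reduced to small integers plus a shared scale, cutting memory roughly 8x versus f32. Below is a minimal sketch of the idea in plain Rust, assuming a simple symmetric scheme over a single block; the function names are hypothetical and this is not a Burn or CubeCL API:

```rust
/// Sketch of symmetric 4-bit (Q4) quantization, illustration only.
/// All values in the block share one f32 scale; each value maps to [-8, 7].
fn quantize_q4(values: &[f32]) -> (Vec<i8>, f32) {
    // Pick the scale so the largest magnitude lands at the edge of the int4 range.
    let max_abs = values.iter().fold(0.0f32, |m, v| m.max(v.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 7.0 };
    let quants = values
        .iter()
        .map(|v| (v / scale).round().clamp(-8.0, 7.0) as i8)
        .collect();
    (quants, scale)
}

/// Dequantize back to f32: x ≈ q * scale.
fn dequantize_q4(quants: &[i8], scale: f32) -> Vec<f32> {
    quants.iter().map(|&q| q as f32 * scale).collect()
}

fn main() {
    let weights = [0.1f32, -0.52, 0.33, 0.98];
    let (q, scale) = quantize_q4(&weights);
    println!("quantized: {q:?}, scale: {scale}");
    println!("restored:  {:?}", dequantize_q4(&q, scale));
}
```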

nathanielsimard (Member) commented

We can't upload int8 or int4 buffers to the GPU directly, but @laggui is working on quantization in Burn. We will probably create abstractions that make it easier to write quantized kernels.
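The usual workaround when a GPU buffer format does not expose sub-byte (or even 8-bit) integer element types is to pack the quantized values into u32 words on the host and unpack them inside the kernel with shifts and masks. A hedged sketch of the packing side in plain Rust, assuming int8 values; this is not the Burn/CubeCL API and the names are made up for illustration:

```rust
/// Pack four int8 values into one u32 so the buffer can be uploaded
/// as a plain u32 array (illustration only, not a Burn/CubeCL API).
fn pack_i8x4(vals: [i8; 4]) -> u32 {
    vals.iter()
        .enumerate()
        // Cast through u8 first to avoid sign extension into the high bits.
        .fold(0u32, |acc, (i, &v)| acc | ((v as u8 as u32) << (8 * i)))
}

/// Inverse operation, mirroring what a kernel would do with shifts and masks.
fn unpack_i8x4(packed: u32) -> [i8; 4] {
    let mut out = [0i8; 4];
    for i in 0..4 {
        // Shift the byte down, mask it off, and reinterpret it as signed.
        out[i] = ((packed >> (8 * i)) & 0xFF) as u8 as i8;
    }
    out
}

fn main() {
    let packed = pack_i8x4([-3, 7, -128, 127]);
    assert_eq!(unpack_i8x4(packed), [-3, 7, -128, 127]);
}
```

Int4 would work the same way with eight 4-bit nibbles per u32, which is presumably the kind of detail the planned abstractions would hide from kernel authors.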
