
CUDA: compress-mode size #12029

Merged: @Green-Sky merged 1 commit into ggml-org:master on Mar 1, 2025
Conversation

@Green-Sky (Collaborator) commented Feb 22, 2025

This patch sets the CUDA binary compression mode to `size` for CUDA >= 12.8.

CUDA 12.8 added the option to specify stronger compression for binaries.
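For reference, the change amounts to CMake logic roughly along these lines (a sketch, not the PR's exact diff; `CUDA_FLAGS` stands in for whichever flags list the ggml build actually appends nvcc options to):

```cmake
# Sketch: pass nvcc's new fatbin compression flag when the toolkit supports it.
# --compress-mode is only understood by nvcc from CUDA 12.8 onwards.
find_package(CUDAToolkit REQUIRED)

if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "12.8")
    # "size" trades decompression speed for the smallest binaries;
    # nvcc's default mode is "speed".
    list(APPEND CUDA_FLAGS --compress-mode=size)
endif()
```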

I ran some tests in CI with the new Ubuntu CUDA 12.8 Docker image:

89-real arch:

In this scenario, it appears it is not compressing by default at all?

| mode            | ggml-cuda.so |
|-----------------|--------------|
| none            | 64M          |
| speed (default) | 64M          |
| balance         | 64M          |
| size            | 18M          |

60;61;70;75;80 arches:

| mode            | ggml-cuda.so |
|-----------------|--------------|
| none            | 994M         |
| speed (default) | 448M         |
| balance         | 368M         |
| size            | 127M         |

I did not test the runtime load overhead this should incur, but for most ggml-cuda use cases the processes are long(er) lived, so the trade-off seems reasonable to me.

The github-actions bot added the Nvidia GPU and ggml labels on Feb 22, 2025.
@Green-Sky marked this pull request as ready for review on Feb 24, 2025.
@slaren (Member) commented Feb 26, 2025

> 994M

That's quite a lot; I didn't realize that the build with all supported archs had gotten so large. In the Windows releases it seems to be 500M, so it's not quite that bad, but still pretty bad.

I am not exactly sure what the downsides of enabling this option may be, so it would be preferable if it were optional. Enabling it by default should be ok, though.

@Green-Sky (Collaborator, Author) commented:

> > 994M
>
> That's quite a lot; I didn't realize that the build with all supported archs had gotten so large. In the Windows releases it seems to be 500M, so it's not quite that bad, but still pretty bad.

And so it is for Linux. Even before 12.8 it was compressing by default, either with something equivalent to `speed` or with the same code; they may simply have decided to give more control over the compression algorithm. Before 12.8, the only option that existed was to disable compression, which I don't think anyone uses.

> I am not exactly sure what the downsides of enabling this option may be, so it would be preferable if it were optional. Enabling it by default should be ok, though.

They say it costs startup time, which I think would be ok for almost all ML use cases that use CUDA anyway. I just hope the cost is not paid on every kernel launch. I don't have a setup right now where I can test that myself, so if anyone can help here, that would be nice.

Ok, I will make it a ggml option and enable it by default. Or should I make the option a string and just pass that through? (none, speed, balance, size)

@slaren (Member) commented Feb 27, 2025

> Or should I make the option a string and just pass that through? (none, speed, balance, size)

Yes, that sounds good to me.
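In CMake terms, the approach agreed on above could look roughly like this (a sketch; the cache-variable name `GGML_CUDA_COMPRESSION_MODE` is illustrative, not confirmed by this thread):

```cmake
# Sketch: expose the compression mode as a string cache variable, defaulting
# to "size", and forward it to nvcc verbatim. The option name is illustrative.
set(GGML_CUDA_COMPRESSION_MODE "size" CACHE STRING
    "ggml: nvcc fatbin compression mode (none, speed, balance, size); needs CUDA 12.8+")

if (CUDAToolkit_VERSION VERSION_GREATER_EQUAL "12.8")
    list(APPEND CUDA_FLAGS --compress-mode=${GGML_CUDA_COMPRESSION_MODE})
endif()
```

A user could then override the default at configure time, e.g. with `-DGGML_CUDA_COMPRESSION_MODE=balance`.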

Commit: "cuda 12.8 added the option to specify stronger compression for binaries."
@Green-Sky merged commit 80c41dd into ggml-org:master on Mar 1, 2025
47 checks passed