Add build files #41
base: main
Conversation
The server must be launched with `docker run --gpus all`.
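As a rough sketch of what that looks like in practice (the image tag `scuda-server` and the Dockerfile name are assumptions for illustration, not taken from this PR):

```shell
# Build the server image (filename assumed; adjust to this PR's layout).
docker build -t scuda-server -f Dockerfile.server .

# --gpus all passes the host GPUs through to the container; this requires
# the NVIDIA Container Toolkit on the host. Port 14833 matches the port
# mentioned later in this thread.
docker run --gpus all -p 14833:14833 scuda-server
```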
Thanks for the config files! This looks like a good improvement towards productionization. One question on the published asset.
```dockerfile
COPY --from=builder /build/libscuda.so .
ENV LD_PRELOAD=/cuda/libscuda.so
WORKDIR /
CMD [ "nvidia-smi" ]
```
I think this client file is not quite as useful for anything outside testing, as one would likely want to embed the `.so` file into one of their own Dockerfiles instead of using this container as a base. Could we publish the `.so` file onto GitHub Releases for the client?
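For instance, a downstream Dockerfile embedding a released client library might look something like this (the Releases URL and tag are hypothetical, and the base image is only an example matching the CUDA 12.2 / Ubuntu 22.04 combination mentioned later in this thread):

```dockerfile
# Example base image; pick whatever CUDA base your project already uses.
FROM nvidia/cuda:12.2.0-base-ubuntu22.04

# Hypothetical GitHub Releases asset URL; substitute the real tag once
# the .so is published.
ADD https://github.com/kevmo314/scuda/releases/download/<tag>/libscuda.so /cuda/libscuda.so

# Preload the scuda client so CUDA calls are forwarded to the remote server.
ENV LD_PRELOAD=/cuda/libscuda.so
ENV SCUDA_SERVER=<server-address>
```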
My idea was to offer a drop-in replacement for local development: for example, if you have a dev env, instead of using CUDA as a base image you can just drop in scuda.
I see your point, but I also see no harm in releasing this.
But before we merge this, please validate that it is working for you. I ran into some issues (though they seem to be on my host), since the locally built scuda does not work either.
Okay, I got it working, but only on my local machine. My plan was to rent a GPU machine on RunPod and forward CUDA to local, but I can't get it to work; I'm not sure if it is a connection issue though (I'm still testing). On my Linux laptop it works (with Docker / my pre-built binaries from GitHub). But on my Windows/WSL machine I can't get it to work: it always returns an invalid version error, even if I host scuda on the same machine.
I tried to use the Dockerfiles to build it locally, but I needed to make some changes, e.g. install cudnn9-cuda-12, as I use CUDA 12.2 on my host system, which forced me to downgrade to Ubuntu 22.04. Modified files can be found here:
The client side shows the normal nvidia-smi screen, but without any GPU on the client side, and there is no activity on the server side. I tested the network functionality by doing a curl to the $SCUDA_SERVER on port 14833, which shows some activity on the server, which means the address is reachable. Any ideas?
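The connectivity check described above amounts to something like the following, run from the client container (`$SCUDA_SERVER` is the variable already used in this thread):

```shell
# Verify the server address is reachable on the scuda port. Any TCP-level
# response (even a protocol error from curl) shows the address works;
# a timeout or "connection refused" points at a network problem instead.
curl -v "$SCUDA_SERVER:14833"
```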
@kevmo314 Do you know if it makes any difference using hostnames instead of IP addresses on your side to target the server address?
It should not, although we have been primarily testing with IP addresses. Are you able to get hostnames working outside the docker container? |
We need debug logging for this. Also: could you maybe try the Earthly build in my fork and see if it works natively? https://github.com/K0IN/scuda/blob/main/Earthfile
I haven't used Earthly yet. As we are on GitHub, couldn't we just use GitHub workflows?
Earthly builds are repeatable and can run both in GitHub Actions (with cross-compilation) and locally, so you get a build system that works on your machine and on GitHub. I would like to see if a locally built binary works for you.
I will give it a try once I am back at work next year. Do you also have a working Earthly build for CUDA 12.2?
Add build files (Docker). As stated in #31, I would like to use Earthly to build binaries (Docker does not seem to be the correct tool for this).
We can also discuss switching to make/cmake for a standardized build system (so people can hack more easily).
I also added a Dockerfile.client so people can use it as a base image for projects (swapping out their CUDA base image for the scuda client).
Let's also talk about: