Issues: meta-llama/llama-stack

Issues list

Support AMD ROCm GPU distribution (#320, opened Oct 25, 2024 by AlexHe99)
docker image name and GPU issue (#304, opened Oct 24, 2024 by stevegrubb)
I keep getting 405 forbidden (#273, opened Oct 21, 2024 by whiteSkar)
pytorch CUDA not found in host that has CUDA with working pytorch [question] (#257, opened Oct 16, 2024 by nikolaydubina)
wrong UNIX filesystem root (#255, opened Oct 16, 2024 by nikolaydubina)
docker images are too large (#254, opened Oct 16, 2024 by nikolaydubina)
missing target image architecture [good first issue] (#253, opened Oct 16, 2024 by nikolaydubina)
Tool Registry for Agents [enhancement] (#234, opened Oct 10, 2024 by onkarbhardwaj)
Deleting a stack [good first issue] (#225, opened Oct 9, 2024 by anandhuh1234)
Add top_k output tokens w/ corresponding logprobs [enhancement] (#214, opened Oct 8, 2024 by yanxi0830)
vllm: expand configuration support [enhancement] (#208, opened Oct 7, 2024 by russellb)
vllm: improve container support [enhancement] (#200, opened Oct 6, 2024 by russellb)
vllm: test and fix tool support [enhancement] (#199, opened Oct 6, 2024 by russellb)