Exclude llama-cpp-python 0.3.6 in testcases (#1096)
The latest llama-cpp-python release (0.3.6) breaks some of our testcases.

Relevant issue: ggerganov/llama.cpp#11197 (different results are returned for the same prompt at temperature 0).

Temporarily exclude this version until the upstream issue is fixed.

Signed-off-by: Loc Huynh <[email protected]>
Co-authored-by: Loc Huynh <[email protected]>
JC1DA and lochuynh1412 authored Jan 13, 2025
1 parent 46340aa commit 71f1a68
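For context, the regression is a determinism failure: with temperature 0, repeated completions of the same prompt should be identical, and some of our testcases rely on that. Below is a minimal sketch of that kind of check using llama-cpp-python's Llama API; the model path, prompt, and token limit are placeholders for illustration, not the repository's actual testcases.

```python
# Minimal determinism check: with temperature 0 the same prompt should always
# yield the same completion. Model path and prompt are placeholders, not the
# ones used by the real test suite.
from llama_cpp import Llama

llm = Llama(model_path="models/example.gguf", seed=0, verbose=False)

def complete(prompt: str) -> str:
    # temperature=0.0 makes sampling greedy, so the output should be reproducible
    result = llm(prompt, temperature=0.0, max_tokens=32)
    return result["choices"][0]["text"]

first = complete("The capital of France is")
second = complete("The capital of France is")
assert first == second, f"non-deterministic output: {first!r} vs {second!r}"
```

Until the upstream issue is resolved, the workflow changes below simply exclude 0.3.6 via the pip version specifier.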
Showing 4 changed files with 5 additions and 5 deletions.

.github/workflows/action_gpu_basic_tests.yml: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ jobs:
pip install accelerate
echo "=============================="
pip uninstall -y llama-cpp-python
- CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84"
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84,!=0.3.6"
- name: Check GPU available
run: |
python -c "import torch; assert torch.cuda.is_available()"

.github/workflows/action_plain_basic_tests.yml: 1 addition & 1 deletion
@@ -40,7 +40,7 @@ jobs:
pip install sentencepiece
echo "=============================="
pip uninstall -y llama-cpp-python
- pip install "llama-cpp-python!=0.2.58,!=0.2.79,!=0.2.84"
+ pip install "llama-cpp-python!=0.2.58,!=0.2.79,!=0.2.84,!=0.3.6"
echo "=============================="
pip uninstall -y transformers
pip install "transformers!=4.43.0,!=4.43.1,!=4.43.2,!=4.43.3" # Issue 965

.github/workflows/ci_tests.yml: 2 additions & 2 deletions
@@ -57,7 +57,7 @@ jobs:
- name: GPU pip installs
run: |
pip install accelerate
- CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84"
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84,!=0.3.6"
- name: Check GPU available
run: |
python -c "import torch; assert torch.cuda.is_available()"
@@ -153,7 +153,7 @@ jobs:
echo "======================"
nvcc --version
echo "======================"
- CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75"
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.3.6"
- name: Check GPU available
run: |
python -c "import torch; assert torch.cuda.is_available()"

.github/workflows/notebook_tests.yml: 1 addition & 1 deletion
@@ -60,7 +60,7 @@ jobs:
- name: GPU pip installs
run: |
pip install accelerate
- CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84"
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install "llama-cpp-python!=0.2.58,!=0.2.75,!=0.2.84,!=0.3.6"
- name: Check GPU available
run: |
python -c "import torch; assert torch.cuda.is_available()"
