
llama-cpp multi server support #190

Triggered via pull request, October 21, 2024 17:07, by @cdoern
synchronize #316
Status: Cancelled
Total duration: 4m 54s
Artifacts

e2e-nvidia-t4-x1.yml
on: pull_request_target

Start external EC2 runner (2m 41s)
Stop external EC2 runner (3s)
e2e-workflow-complete (0s)
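
The start/stop job pair above is the usual layout for self-hosted GPU runners: one job launches an EC2 instance and registers it as a runner, the E2E job runs on that instance, and a final job terminates it no matter how the tests end. A rough sketch of that layout, assuming the machulav/ec2-github-runner action with placeholder AMI, subnet, secret, and script names (the actual e2e-nvidia-t4-x1.yml may differ, and the AWS credential configuration steps are omitted here):

```yaml
jobs:
  start-runner:
    name: Start external EC2 runner
    runs-on: ubuntu-latest
    outputs:
      label: ${{ steps.launch.outputs.label }}
      instance-id: ${{ steps.launch.outputs.ec2-instance-id }}
    steps:
      - name: Launch a GPU instance and register it as a runner
        id: launch
        uses: machulav/ec2-github-runner@v2
        with:
          mode: start
          github-token: ${{ secrets.GH_PAT }}       # assumed secret name
          ec2-image-id: ami-0123456789abcdef0       # placeholder AMI
          ec2-instance-type: g4dn.xlarge            # carries a single NVIDIA T4
          subnet-id: subnet-0123456789abcdef0       # placeholder
          security-group-id: sg-0123456789abcdef0   # placeholder

  e2e:
    name: E2E Test
    needs: start-runner
    runs-on: ${{ needs.start-runner.outputs.label }}  # the just-launched instance
    steps:
      - uses: actions/checkout@v4
      - name: Run the end-to-end suite
        run: ./scripts/e2e.sh                         # placeholder command

  stop-runner:
    name: Stop external EC2 runner
    needs: [start-runner, e2e]
    if: always()   # terminate the instance even if the tests fail or are canceled
    runs-on: ubuntu-latest
    steps:
      - uses: machulav/ec2-github-runner@v2
        with:
          mode: stop
          github-token: ${{ secrets.GH_PAT }}
          label: ${{ needs.start-runner.outputs.label }}
          ec2-instance-id: ${{ needs.start-runner.outputs.instance-id }}
```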

Annotations

2 errors

E2E Test: Canceling since a higher priority waiting request for 'E2E (NVIDIA Tesla T4 x1)-316' exists
E2E Test: The operation was canceled.
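
The first annotation is the standard message GitHub Actions emits when a newer run claims the same concurrency group: the group name 'E2E (NVIDIA Tesla T4 x1)-316' suggests the group is keyed on the workflow name plus the PR number, so each new push to PR #316 cancels the run already in flight. A hedged sketch of a concurrency stanza that would produce that grouping (the real workflow may use different keys):

```yaml
# Illustrative only; the actual e2e-nvidia-t4-x1.yml may be configured differently.
name: E2E (NVIDIA Tesla T4 x1)

on:
  pull_request_target:
    types: [opened, synchronize, reopened]

concurrency:
  # One group per PR: evaluates to "E2E (NVIDIA Tesla T4 x1)-316" for PR #316.
  group: ${{ github.workflow }}-${{ github.event.pull_request.number }}
  cancel-in-progress: true
```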