Actions: yisonzhu/vllm-fork

mypy

8 workflow runs

| Run | Commit | Message | Pushed by | Branch | Date | Duration |
|---|---|---|---|---|---|---|
| mypy #8 | 449a89d | Add padding to encoder_seq_lens (#610) | yisonzhu | habana_main | December 13, 2024 04:51 | 45s |
| mypy #7 | def7ac2 | Fix TP>1 in encoder-decoder models (#607) | yisonzhu | habana_main | December 11, 2024 02:45 | 44s |
| mypy #6 | 239739c | Support mllama (llama 3.2) model for HPU (#491) | yisonzhu | habana_main | December 10, 2024 11:56 | 45s |
| mypy #5 | 3473bc1 | Set vllm-hpu-extension to 4312768 (#604) | yisonzhu | habana_main | December 10, 2024 06:48 | 45s |
| mypy #4 | e0e47ed | Add multiprocessing HPU executor (#559) | yisonzhu | habana_main | December 6, 2024 11:42 | 44s |
| mypy #3 | f6865f4 | Enable DeepseekV2 Lite/Chat models (#516) | yisonzhu | habana_main | December 4, 2024 15:23 | 47s |
| mypy #2 | d83b62f | CI fix (#563) | yisonzhu | habana_main | November 28, 2024 14:41 | 43s |
| mypy #1 | 2f43ebf | Set vllm-hpu-extension to a69bb99 (#521) | yisonzhu | habana_main | November 19, 2024 08:25 | 1m 25s |