Actions: pytorch/FBGEMM

FBGEMM_GPU-CUDA CI

7,327 workflow runs
[fbgemm_gpu] Break down fbgemm_gpu_tbe_training_backward module further, pt 3
FBGEMM_GPU-CUDA CI #7981: Pull request #3694 synchronize by q10
February 15, 2025 20:10 1h 29m 45s q10:bm/cmake-breakdown-3
avoid extra copy in PackedGemmMatrixB constructor
FBGEMM_GPU-CUDA CI #7980: Pull request #3691 synchronize by helloguo
February 15, 2025 16:04 1h 22m 11s helloguo:export-D69564913
FBGEMM_GPU-CUDA CI
FBGEMM_GPU-CUDA CI #7979: Scheduled
February 15, 2025 12:52 1h 24m 38s main
Save built docs as GHA artifact (#3695)
FBGEMM_GPU-CUDA CI #7978: Commit 183c718 pushed by facebook-github-bot
February 15, 2025 05:43 1h 26m 17s main
Paged Attention Support
FBGEMM_GPU-CUDA CI #7977: Pull request #3698 opened by xw285cornell
February 15, 2025 05:27 1h 27m 3s xw285cornell:export-D68105630
avoid extra copy in PackedGemmMatrixB constructor
FBGEMM_GPU-CUDA CI #7976: Pull request #3691 synchronize by helloguo
February 15, 2025 05:22 1h 23m 22s helloguo:export-D69564913
Rename sources to avoid internal build issue (#3697)
FBGEMM_GPU-CUDA CI #7975: Commit 0c5f838 pushed by facebook-github-bot
February 15, 2025 05:04 39m 42s main
custom reduce scatter
FBGEMM_GPU-CUDA CI #7974: Pull request #3686 synchronize by xw285cornell
February 15, 2025 04:16 1h 23m 11s xw285cornell:export-D69364062
Backing out on-device TMA store. (#3688)
FBGEMM_GPU-CUDA CI #7973: Commit e024eb7 pushed by facebook-github-bot
February 15, 2025 03:16 1h 49m 3s main
adding an option to skip zeroing output tensor for f8f8bf16_rowwise_grouped_dynamic
FBGEMM_GPU-CUDA CI #7972: Pull request #3685 synchronize by mxz297
February 15, 2025 02:17 1h 23m 26s mxz297:export-D69380351
adding an option to skip zeroing output tensor for f8f8bf16_rowwise_grouped_dynamic
FBGEMM_GPU-CUDA CI #7971: Pull request #3685 synchronize by mxz297
February 15, 2025 02:15 1m 38s mxz297:export-D69380351
[fbgemm_gpu] Break down fbgemm_gpu_tbe_training_backward module further, pt 3
FBGEMM_GPU-CUDA CI #7970: Pull request #3694 synchronize by q10
February 15, 2025 00:30 1h 24m 10s q10:bm/cmake-breakdown-3
Fix clang vla warning
FBGEMM_GPU-CUDA CI #7969: Pull request #2736 synchronize by cyyever
February 15, 2025 00:23 Action required cyyever:vla
Fix clang vla warning
FBGEMM_GPU-CUDA CI #7968: Pull request #2736 synchronize by cyyever
February 15, 2025 00:19 Action required cyyever:vla
[fbgemm_gpu] Save built docs as GHA artifact
FBGEMM_GPU-CUDA CI #7967: Pull request #3695 synchronize by q10
February 15, 2025 00:05 1h 20m 32s q10:bm/docs-download
Rename sources to avoid internal build issue
FBGEMM_GPU-CUDA CI #7966: Pull request #3697 synchronize by q10
February 15, 2025 00:05 1h 23m 43s q10:export-D69675574
Rename sources to avoid internal build issue
FBGEMM_GPU-CUDA CI #7965: Pull request #3697 opened by q10
February 14, 2025 23:54 12m 58s q10:export-D69675574
GroupedGEMM interface takes m_sizes instead of m_offsets.
FBGEMM_GPU-CUDA CI #7964: Pull request #3696 synchronize by levendlee
February 14, 2025 23:36 1h 20m 51s levendlee:export-D69686252
avoid extra copy in PackedGemmMatrixB constructor
FBGEMM_GPU-CUDA CI #7963: Pull request #3691 synchronize by helloguo
February 14, 2025 23:35 1h 21m 7s helloguo:export-D69564913
GroupedGEMM interface takes m_sizes instead of m_offsets.
FBGEMM_GPU-CUDA CI #7962: Pull request #3696 opened by levendlee
February 14, 2025 23:34 3m 36s levendlee:export-D69686252
[fbgemm_gpu] Save built docs as GHA artifact
FBGEMM_GPU-CUDA CI #7961: Pull request #3695 synchronize by q10
February 14, 2025 23:28 39m 6s q10:bm/docs-download
[fbgemm_gpu] Break down fbgemm_gpu_tbe_training_backward module further, pt 3
FBGEMM_GPU-CUDA CI #7960: Pull request #3694 synchronize by q10
February 14, 2025 23:25 26m 26s q10:bm/cmake-breakdown-3
[fbgemm_gpu] Save built docs as GHA artifact
FBGEMM_GPU-CUDA CI #7959: Pull request #3695 synchronize by q10
February 14, 2025 23:06 22m 27s q10:bm/docs-download
Numerical Fix.
FBGEMM_GPU-CUDA CI #7958: Pull request #3688 synchronize by levendlee
February 14, 2025 23:03 1h 21m 44s levendlee:export-D69602533
[fbgemm_gpu] Save built docs as GHA artifact
FBGEMM_GPU-CUDA CI #7957: Pull request #3695 opened by q10
February 14, 2025 22:51 16m 46s q10:bm/docs-download