Actions: microsoft/DeepSpeed

Formatting

5,170 workflow runs

Autotp training
Formatting #16126: Pull request #6922 synchronize by delock
January 24, 2025 07:34 Action required inkcherry:autotp_training
Formatting
Formatting #16122: Scheduled
January 24, 2025 00:20 1m 22s master
Tecorigin sdaa accelerator
Formatting #16121: Pull request #6903 synchronize by loadams
January 23, 2025 23:49 Action required siqi654321:Tecorigin-SDAA-accelerator
generalize deepspeed linear and implement it for non cuda systems
Formatting #16120: Pull request #6932 synchronize by loadams
January 23, 2025 18:35 1m 18s oelayan7:linear
Formatting
Formatting #16119: Merge group checks requested
January 23, 2025 16:42 1m 31s
Autotp training
Formatting #16118: Pull request #6922 synchronize by inkcherry
January 23, 2025 07:50 1m 23s inkcherry:autotp_training
Formatting
Formatting #16117: Scheduled
January 23, 2025 00:20 1m 15s master
[DEBUG] Add diagnostics for cpu-torch-latest intermittent hang
Formatting #16116: Pull request #6942 synchronize by loadams
January 22, 2025 23:14 1m 24s loadams/cpu-runner-debug
Tecorigin sdaa accelerator
Formatting #16114: Pull request #6903 synchronize by tjruwase
January 22, 2025 22:25 Action required siqi654321:Tecorigin-SDAA-accelerator
Update sharded_moe.py to support top2 gate with Tutel
Formatting #16110: Pull request #6948 synchronize by loadams
January 22, 2025 17:16 1m 22s xenshinu:patch-1
Precisely track nvme optimizer offload
Formatting #16109: Pull request #6963 synchronize by tjruwase
January 22, 2025 15:54 1m 24s olruwase/ds_4998
Autotp training
Formatting #16108: Pull request #6922 synchronize by inkcherry
January 22, 2025 05:40 1m 31s inkcherry:autotp_training
Enabled configurable auto Tensor Parallelism (TP) for the inference of diverse models
Formatting #16107: Pull request #6553 synchronize by gyou2021
January 22, 2025 03:03 Action required gyou2021:configurable_autoTP
Formatting
Formatting #16106: Scheduled
January 22, 2025 00:20 1m 21s master
Explicitly use the linalg.vector_norm call in comm/
Formatting #16105: Pull request #6960 synchronize by loadams
January 21, 2025 22:35 1m 18s loadams/fix-torch-linalg-norm
generalize deepspeed linear and implement it for non cuda systems
Formatting #16104: Pull request #6932 synchronize by loadams
January 21, 2025 22:34 1m 22s oelayan7:linear
Update version.txt after 0.16.3 release
Formatting #16103: Pull request #6965 opened by loadams
January 21, 2025 22:31 1m 19s AutoPR/0.16.3
generalize deepspeed linear and implement it for non cuda systems
Formatting #16101: Pull request #6932 synchronize by loadams
January 21, 2025 21:54 Action required oelayan7:linear