Support for Heterogeneous Parallelism in Multimodal Training #1374
Unanswered · swiftomkar asked this question in Q&A · 0 replies
I have been using MegatronLM to train multimodal models and successfully followed the example under examples/multimodal. For efficient training, however, multimodal models often call for a different parallelism strategy per component, since the vision encoder is typically much smaller than the LLM.
Does MegatronLM support heterogeneous parallelism, where each submodel in a multimodal system (e.g., the vision encoder versus the language model) uses its own parallelization scheme? If not, are there any recommended workarounds?
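For concreteness, here is a rough sketch of the kind of process grouping I have in mind, written against plain `torch.distributed` rather than any Megatron-LM internals (the function name and the group layout are hypothetical, just to illustrate the idea): the LLM is sharded across tensor-parallel groups, while the vision encoder is small enough to replicate on every rank and reduce over a single data-parallel group.

```python
# Minimal sketch (not Megatron-LM API): build separate process groups so a
# large LLM can use tensor parallelism while a small vision encoder stays
# fully replicated. Launch with e.g. `torchrun --nproc_per_node=8 sketch.py`.
import torch.distributed as dist


def build_heterogeneous_groups(llm_tp_size: int = 4):
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    assert world_size % llm_tp_size == 0

    # LLM tensor-parallel groups: ranks [0..3], [4..7], ... each shard the LLM.
    # Every rank must call new_group for every group, even ones it isn't in.
    llm_tp_group = None
    for start in range(0, world_size, llm_tp_size):
        ranks = list(range(start, start + llm_tp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            llm_tp_group = group

    # The vision encoder is replicated (TP=1), so it only needs a
    # data-parallel group spanning all ranks for gradient all-reduce.
    vision_dp_group = dist.new_group(list(range(world_size)))

    return llm_tp_group, vision_dp_group
```

Separately, newer Megatron-LM versions appear to expose `--encoder-tensor-model-parallel-size` and `--encoder-pipeline-model-parallel-size` arguments; I have not verified whether these cover the multimodal case, but if they do, is that the recommended mechanism?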