Jakiro is an approach for enhancing speculative decoding (SD) in large language models. By integrating a Mixture of Experts (MoE), Jakiro lets independent experts generate diverse draft predictions, decoupling the correlations among candidates that limit traditional tree-based sampling. This improves prediction accuracy and inference speed, setting a new state-of-the-art (SOTA) in speculative decoding. Extensive experiments across various models demonstrate its robustness and effectiveness in real-world applications.
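For intuition, below is a minimal PyTorch-style sketch of the decoupled multi-head idea: each candidate position gets its own expert head instead of a shared projection, so the proposed candidates are less correlated. The class name, layer sizes, and top-1 candidate selection here are illustrative assumptions, not Jakiro's released implementation.

```python
import torch
import torch.nn as nn

class MoEDraftHeads(nn.Module):
    """Illustrative sketch (not Jakiro's actual architecture).

    Each draft candidate is predicted by its own expert MLP on top of the
    target model's last hidden state, so the heads do not share a single
    projection and their predictions are decoupled.
    """

    def __init__(self, hidden_size: int, vocab_size: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.SiLU(),
                nn.Linear(hidden_size, vocab_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
        # hidden: [batch, hidden_size] last hidden state of the target model.
        # Returns one logit tensor per expert; each proposes its own
        # candidate distribution for the next draft position.
        return [expert(hidden) for expert in self.experts]


if __name__ == "__main__":
    # Toy draft step: each expert proposes one candidate token; in a full
    # speculative-decoding loop the target model would then verify them.
    heads = MoEDraftHeads(hidden_size=4096, vocab_size=32000)
    hidden = torch.randn(1, 4096)
    candidates = [logits.argmax(dim=-1) for logits in heads(hidden)]
    print(candidates)
```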
The demo below compares the measured inference speed of Jakiro and EAGLE-2 on a single RTX 4090 GPU (24GB) using the Vicuna 7B model. Jakiro achieves both a faster decoding speed and a higher compression ratio.
*(Side-by-side demo: Jakiro vs. EAGLE-2 decoding speed on Vicuna 7B.)*
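Here, the compression ratio can be read as the average number of tokens accepted per forward pass of the target model. A minimal, hypothetical helper for computing both metrics is sketched below; the `generate_fn` interface (returning output token ids and the number of target-model forward passes) is an assumption for illustration, not part of this repo.

```python
import time

def measure_speedup(generate_fn, prompt, baseline_tokens_per_s):
    """Hypothetical helper: time one generation call and report tokens/s,
    speedup over a baseline, and compression ratio (tokens accepted per
    target-model forward pass). The generate_fn interface is assumed."""
    start = time.perf_counter()
    output_ids, num_forward_passes = generate_fn(prompt)
    elapsed = time.perf_counter() - start

    tokens_per_s = len(output_ids) / elapsed
    speedup = tokens_per_s / baseline_tokens_per_s
    compression_ratio = len(output_ids) / num_forward_passes
    return tokens_per_s, speedup, compression_ratio
```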
The code is currently being organized and will be released soon. Stay tuned!
For technical details and full experimental results, please refer to the Jakiro paper.
@misc{huang2025jakiroboostingspeculativedecoding,
title={Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE},
author={Haiduo Huang and Fuwei Yang and Zhenhua Liu and Yixing Xu and Jinze Li and Yang Liu and Xuanwu Yin and Dong Li and Pengju Ren and Emad Barsoum},
year={2025},
eprint={2502.06282},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.06282},
}
This project has been influenced by many excellent projects in the LLM community, including EAGLE, Medusa, and FastChat. The logo was designed by GPT-4o.