Tokenization is extremely slow--am I doing something wrong? #22
Under what circumstances is MLX supposed to provide a speedup over sentencepiece? In a naive test with the same SPM .model file, I'm able to tokenize 1000 batches in 13 seconds with sentencepiece, and it takes over 5 minutes with MLX. Hardware is an M2 MacBook Pro with 64GB unified memory. Is the CharTrie tokenization only useful when paired with key_transform? Are there plans to add a "tokenize_batch" with better parallelization/concurrency?
Code for reference:
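A minimal sketch of the kind of naive comparison described here, assuming a SentencePiece model file `tokenizer.model` and 1000 batches of 500 copies of a short document; the `read_trie_from_spm` helper and the exact `tokenize` signature are assumptions about the mlx.data API rather than the poster's actual code:

```python
import time

import sentencepiece as spm
import mlx.data as dx
from mlx.data.tokenizer_helpers import read_trie_from_spm  # assumed helper

# Placeholder inputs: a short document repeated 500 times, tokenized 1000 times.
doc = "The quick brown fox jumps over the lazy dog. " * 8
batch = [doc] * 500
model_file = "tokenizer.model"  # assumed path to an SPM model
n_batches = 1000

# --- sentencepiece baseline ---
sp = spm.SentencePieceProcessor(model_file=model_file)
start = time.time()
for _ in range(n_batches):
    sp.encode(batch)
print(f"sentencepiece: {time.time() - start:.1f}s")

# --- mlx.data CharTrie tokenization through a buffer op ---
trie, _scores = read_trie_from_spm(model_file)  # assumed to return (trie, scores)
samples = [{"text": d.encode()} for d in batch]
start = time.time()
for _ in range(n_batches):
    buf = dx.buffer_from_vector(samples).tokenize("text", trie)
    for i in range(len(buf)):
        _ = buf[i]  # force the lazy tokenize op to run
print(f"mlx.data: {time.time() - start:.1f}s")
```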
Hi @andersonbcdefg, sorry for the extremely late reply. The tokenizer in MLX Data is actually quite fast on smallish documents. It optimizes over the whole passed document, so it is quite a bit slower when given a huge text like the one above (where it obviously doesn't make sense to optimize over the whole graph). For example, the wikitext benchmark (https://github.com/ml-explore/mlx-data/blob/c1204bce12ce495add1ed68338543cb4b5c5a595/benchmarks/comparative/wikitext/mlx_data.py) on my Mac tokenizes a few million tokens per second, which should be more than enough for any use case.
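As an illustration of the per-document optimization, the trie-based tokenizer can also be driven directly on a single string; the `Tokenizer` class and `tokenize_shortest` method below follow the mlx.data tokenization utilities, but treat the exact names and signatures as assumptions for your installed version:

```python
from mlx.data.core import CharTrie, Tokenizer  # assumed core classes

# Tiny toy vocabulary; in practice the trie would be built from an SPM model.
trie = CharTrie()
for piece in ["h", "e", "l", "o", "w", "r", "d", " ", "hell", "hello", "world"]:
    trie.insert(piece)

tokenizer = Tokenizer(trie)
# Shortest tokenization solves for an optimal segmentation over the whole
# string, so the cost grows with the length of each document rather than
# with the number of documents.
print(tokenizer.tokenize_shortest("hello world"))
```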
Hmm, well the document in my example is only a few hundred characters. It's a batch of 500 copies of the same doc, but since the doc is short, I'm not sure that optimizing over a large graph would explain the disparity in speed.
Oh sorry, I kinda misunderstood the code snippet. Having said that, I wouldn't say it is significantly slower than SPM. Running your benchmark with varying document sizes on my M2 Air laptop, I get the following comparison table against SPM.
Keep in mind that this is single core, so >1M tok/s per core is, I think, pretty reasonable for almost all use cases. We would of course appreciate PRs that improve it to reach the speed of SPM, which is probably somewhere around 2M-3M tok/s per core on my machine.
Yeah, I hope it can be sped up! A 10x difference in speed makes a big difference, especially for offline data-processing workflows. (I understand 1M tok/s is fine if you're feeding an LLM in real time, but tokenization is also important for batch processing!)
Sure, I understand, and we should work on it. However, this is still single core. When using the following pipeline on my M2 Air, it is 3x slower than SPM:
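The pipeline isn't shown here, but a multi-threaded mlx.data version of the same workload might look roughly like the sketch below; the keys, prefetch parameters, and `read_trie_from_spm` helper are assumptions rather than the exact pipeline referenced above:

```python
import mlx.data as dx
from mlx.data.tokenizer_helpers import read_trie_from_spm  # assumed helper

trie, _scores = read_trie_from_spm("tokenizer.model")  # assumed to return (trie, scores)

docs = ["some short document"] * 500  # placeholder input

stream = (
    dx.buffer_from_vector([{"text": d.encode()} for d in docs])
    .to_stream()
    .tokenize("text", trie)  # CharTrie tokenization as a pipeline op
    .prefetch(8, 8)          # run the op on several background threads
)

for sample in stream:
    pass  # consume the tokenized samples
```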