The easiest way to get started is to install the `mlx-lm` package:

With `pip`:

```shell
pip install mlx-lm
```

With `conda`:

```shell
conda install -c conda-forge mlx-lm
```
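
To confirm the install, a quick sanity check from Python (this only uses the
standard library plus the package installed above):

```python
# Minimal check that mlx-lm is importable; report its installed version.
# importlib.metadata is standard library, so nothing extra is assumed here.
from importlib.metadata import version

import mlx_lm  # noqa: F401

print("mlx-lm version:", version("mlx-lm"))
```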
The `mlx-lm` package also includes LoRA fine-tuning, model merging, and an
HTTP model server; see the package documentation for details.

You can use `mlx-lm` as a module:
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Write a story about Einstein"

messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
To see a description of all the arguments you can do:

```python
>>> help(generate)
```

Check out the generation example to see how to use the API in more detail.
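
As a quick illustration of tuning generation, the sketch below caps the
response length with `max_tokens`. The exact set of keyword arguments can vary
between `mlx-lm` versions, so treat this as a sketch and rely on
`help(generate)` for the version you have installed:

```python
from mlx_lm import load, generate

# Hedged sketch: limit the response to at most 256 tokens. Other keyword
# arguments (sampling options, etc.) are listed by help(generate).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
response = generate(
    model,
    tokenizer,
    prompt="Write a haiku about the ocean",
    max_tokens=256,
    verbose=True,
)
```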
The `mlx-lm` package also comes with functionality to quantize and optionally
upload models to the Hugging Face Hub.

You can convert models in the Python API with:
```python
from mlx_lm import convert

repo = "mistralai/Mistral-7B-Instruct-v0.3"
upload_repo = "mlx-community/My-Mistral-7B-Instruct-v0.3-4bit"

convert(repo, quantize=True, upload_repo=upload_repo)
```
This will generate a 4-bit quantized Mistral 7B and upload it to the repo
`mlx-community/My-Mistral-7B-Instruct-v0.3-4bit`. It will also save the
converted model in the path `mlx_model` by default.
To see a description of all the arguments you can do:

```python
>>> help(convert)
```
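
Uploading is optional. As a sketch, omitting `upload_repo` performs a
local-only conversion, leaving the quantized weights in the default
`mlx_model` path:

```python
from mlx_lm import convert

# Local-only sketch: quantize without uploading. With no upload_repo the
# converted model is written to the default mlx_model path.
convert("mistralai/Mistral-7B-Instruct-v0.3", quantize=True)
```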
For streaming generation, use the `stream_generate` function. This returns a
generator object which streams the output text. For example,
```python
from mlx_lm import load, stream_generate

repo = "mlx-community/Mistral-7B-Instruct-v0.3-4bit"
model, tokenizer = load(repo)

prompt = "Write a story about Einstein"

messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

for t in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(t, end="", flush=True)
print()
```
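
If you also want the complete response after streaming finishes, a small
variation on the loop above (continuing from the same `model`, `tokenizer`,
and `prompt`) collects the pieces as they arrive:

```python
# Variation on the loop above: print chunks as they stream and keep them to
# build the full text at the end. Assumes stream_generate yields text pieces,
# as in the example above.
chunks = []
for t in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(t, end="", flush=True)
    chunks.append(t)
print()

full_response = "".join(chunks)
```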
You can also use `mlx-lm` from the command line with:

```shell
mlx_lm.generate --model mistralai/Mistral-7B-Instruct-v0.3 --prompt "hello"
```
This will download a Mistral 7B model from the Hugging Face Hub and generate
text using the given prompt.

For a full list of options run:

```shell
mlx_lm.generate --help
```
To quantize a model from the command line run:

```shell
mlx_lm.convert --hf-path mistralai/Mistral-7B-Instruct-v0.3 -q
```

For more options run:

```shell
mlx_lm.convert --help
```
You can upload new models to Hugging Face by specifying `--upload-repo` to
`convert`. For example, to upload a quantized Mistral-7B model to the
MLX Hugging Face community you can do:
```shell
mlx_lm.convert \
    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
    -q \
    --upload-repo mlx-community/my-4bit-mistral
```
MLX LM has some tools to scale efficiently to long prompts and generations:
- A rotating fixed-size key-value cache.
- Prompt caching.
To use the rotating key-value cache pass the argument `--max-kv-size n` where
`n` can be any integer. Smaller values like `512` will use very little RAM but
result in worse quality. Larger values like `4096` or higher will use more RAM
but have better quality.
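
The same limit can be set from the Python API in recent `mlx-lm` releases. The
`max_kv_size` keyword used below is an assumption about your installed
version; if it is not accepted, use the `--max-kv-size` command line flag
instead:

```python
from mlx_lm import load, generate

# Hedged sketch: bound the KV cache size to keep memory use roughly constant
# for long generations. max_kv_size is an assumed keyword; check help(generate).
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
response = generate(
    model,
    tokenizer,
    prompt="Write a story about Einstein",
    max_tokens=512,
    max_kv_size=512,
)
```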
Caching prompts can substantially speed up reusing the same long context with
different queries. To cache a prompt use `mlx_lm.cache_prompt`. For example:
```shell
cat prompt.txt | mlx_lm.cache_prompt \
    --model mistralai/Mistral-7B-Instruct-v0.3 \
    --prompt - \
    --kv-cache-file mistral_prompt.safetensors
```
Then use the cached prompt with `mlx_lm.generate`:
```shell
mlx_lm.generate \
    --kv-cache-file mistral_prompt.safetensors \
    --prompt "\nSummarize the above text."
```
The cached prompt is treated as a prefix to the supplied prompt. Also note
that when using a cached prompt, the model is read from the cache and need not
be supplied explicitly.
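
Prompt caching can also be used from Python. The sketch below assumes a
`make_prompt_cache` helper in `mlx_lm.models.cache` and a `prompt_cache`
argument on `generate`, which is how recent `mlx-lm` releases expose it; treat
the exact import path and argument names as assumptions and check the version
you have installed:

```python
from mlx_lm import load, generate
from mlx_lm.models.cache import make_prompt_cache  # assumed import path

# Hedged sketch of a multi-turn exchange that reuses the KV cache between
# turns, so earlier turns are not re-processed on each call.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
prompt_cache = make_prompt_cache(model)

for user_turn in ["Write a story about Einstein.", "Now make it shorter."]:
    messages = [{"role": "user", "content": user_turn}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    reply = generate(model, tokenizer, prompt=prompt, prompt_cache=prompt_cache)
    print(reply)
```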
MLX LM supports thousands of Hugging Face format LLMs. If the model you want to run is not supported, file an issue or better yet, submit a pull request.
Here are a few examples of Hugging Face models that work with `mlx-lm`:
- mistralai/Mistral-7B-v0.1
- meta-llama/Llama-2-7b-hf
- deepseek-ai/deepseek-coder-6.7b-instruct
- 01-ai/Yi-6B-Chat
- microsoft/phi-2
- mistralai/Mixtral-8x7B-Instruct-v0.1
- Qwen/Qwen-7B
- pfnet/plamo-13b
- pfnet/plamo-13b-instruct
- stabilityai/stablelm-2-zephyr-1_6b
- internlm/internlm2-7b
Most Mistral, Llama, Phi-2, and Mixtral style models should work out of the box.
For some models (such as `Qwen` and `plamo`) the tokenizer requires you to
enable the `trust_remote_code` option. You can do this by passing
`--trust-remote-code` in the command line. If you don't specify the flag
explicitly, you will be prompted to trust remote code in the terminal when
running the model.
For `Qwen` models you must also specify the `eos_token`. You can do this by
passing `--eos-token "<|endoftext|>"` in the command line.

These options can also be set in the Python API. For example:
```python
model, tokenizer = load(
    "qwen/Qwen-7B",
    tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
)
```