Running on mac computer #21

Answered by iofu728
nogaeps asked this question in Q&A
Nov 23, 2023 · 1 comment · 3 replies

Hi @nogaeps, since macOS does not support CUDA,
you can use the MPS (Metal Performance Shaders) backend instead. Please refer to the following code:

from llmlingua import PromptCompressor

llm_lingua = PromptCompressor(device_map="mps")
# or use another model, e.g. a smaller GPT-2 variant:
# llm_lingua = PromptCompressor("lgaalves/gpt2-dolly", device_map="mps")

# "prompt" is the text you want to compress
compressed_prompt = llm_lingua.compress_prompt(prompt, instruction="", question="", target_token=200)
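Before constructing the compressor, it can be worth verifying that PyTorch was built with Metal support; a minimal sketch, assuming PyTorch is installed (the fallback to CPU is an illustration, not part of the answer above):

```python
import torch

# Use "mps" only when the Metal backend is actually available,
# otherwise fall back to the CPU so the code still runs.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Selected device: {device}")
```

The resulting `device` string can then be passed as `device_map` to `PromptCompressor`.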

Replies: 1 comment · 3 replies

Answer selected by iofu728
Category: Q&A
Labels: question (Further information is requested)
3 participants