Bearer Authorization with OpenAILike #15442
Replies: 4 comments · 9 replies
-
To add Bearer Authorization to OpenAILike for a local LLM behind a LiteLLM proxy, you can modify the `_get_credential_kwargs` method. For reference, this is how the Azure variant builds its headers:

```python
def _get_credential_kwargs(self, **kwargs: Any) -> Dict[str, Any]:
    if self.use_azure_ad:
        self._azure_ad_token = refresh_openai_azuread_token(self._azure_ad_token)
        self.api_key = self._azure_ad_token.token

    headers = {
        "Authorization": f"Bearer {self.api_key}",
        **self.default_headers,
    }

    return {
        "api_key": self.api_key,
        "max_retries": self.max_retries,
        "timeout": self.timeout,
        "azure_endpoint": self.azure_endpoint,
        "azure_deployment": self.azure_deployment,
        "api_version": self.api_version,
        "default_headers": headers,
        "http_client": self._http_client,
        **kwargs,
    }
```

Additionally, if you are using the LlamaIndex framework and need to pass headers when initializing an embedding model, you can do so by specifying `default_headers` in the model initialization. Here is an example:

```python
from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(
    model=self.model_name,
    api_key=self.api_key,
    default_headers={
        "Authorization": f"Bearer {self.api_key}",
        **(self.default_headers or {}),
    },
)
```

This ensures that the Bearer token is included in the headers for authorization when making requests to the local LLM behind the LiteLLM proxy, and it also shows how to pass headers when initializing an embedding model in LlamaIndex [1][2].
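The Azure-only fields above (`azure_endpoint`, `azure_deployment`, `api_version`) don't apply to OpenAILike. As a sketch, assuming your installed version of the OpenAI base class still exposes the `_get_credential_kwargs(self, **kwargs)` hook with this signature, a minimal subclass might look like this (the model name and proxy URL at the bottom are placeholders):

```python
from typing import Any, Dict

from llama_index.llms.openai_like import OpenAILike


class BearerOpenAILike(OpenAILike):
    """OpenAILike that always sends an 'Authorization: Bearer ...' header."""

    def _get_credential_kwargs(self, **kwargs: Any) -> Dict[str, Any]:
        # Start from the base credentials (api_key, api_base, timeout, ...).
        credential_kwargs = super()._get_credential_kwargs(**kwargs)
        # Merge the Bearer header into any headers already configured.
        # Guard the unpack: default_headers may be None.
        credential_kwargs["default_headers"] = {
            "Authorization": f"Bearer {self.api_key}",
            **(self.default_headers or {}),
        }
        return credential_kwargs


# Hypothetical usage; substitute your LiteLLM proxy values.
llm = BearerOpenAILike(
    model="my-local-model",
    api_base="http://localhost:4000/v1",
    api_key="SECRETKEY",
    is_chat_model=True,
)
```

Note that the OpenAI SDK already sends `api_key` as `Authorization: Bearer <api_key>` on its own, so a subclass like this only matters if your proxy expects a token different from the configured `api_key`, or extra headers on top of it.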
-
After updating a CustomOpenAILike class with that function, running `llm.complete` gets me this error:

```
File ~/miniconda3/envs/testRequests/lib/python3.12/site-packages/llama_index/llms/openai_like/base.py:88, in OpenAILike.complete(self, prompt, formatted, **kwargs)
File ~/miniconda3/envs/testRequests/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py:260, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
TypeError: 'NoneType' object is not a mapping
```
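One likely cause, assuming the Azure function above was copied into the subclass as-is: on OpenAILike, `default_headers` defaults to `None`, and unpacking `None` with `**` raises exactly `TypeError: 'NoneType' object is not a mapping`. A guarded version of the header merge:

```python
headers = {
    "Authorization": f"Bearer {self.api_key}",
    # self.default_headers is Optional and defaults to None,
    # so fall back to an empty dict before unpacking.
    **(self.default_headers or {}),
}
```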
-
Hi @mackenziedott, thanks for using LiteLLM. Any chance we can hop on a call to learn how we can improve LiteLLM Proxy for you? We're planning our roadmap and I'd love to get your feedback.
-
I am planning to replicate other people's llama-index research agent by switching LlamaIndex `OpenAI` to LlamaIndex `OpenAILike` via the LiteLLM proxy. The original idea is from the PyQuant News blog: https://www.pyquantnews.com/the-pyquant-newsletter/build-powerful-ai-agent-makes-research-reports.

With llama-index `OpenAI`, it worked successfully (screenshot omitted). For llama-index `OpenAILike`, I could verify that the LiteLLM connection is working (screenshot omitted), but the agent is not: it produces no output and no error. I'm surprised, because the only change was switching from llama-index `OpenAI` to llama-index `OpenAILike`.

I've attached the Jupyter notebooks mentioned above. (Note: change their file format from .txt to .ipynb for these two files.)

Hope for help. Thanks a lot for the tools.
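Without the notebooks inline this is only a guess, but one common gotcha when swapping `OpenAI` for `OpenAILike`: `OpenAILike` cannot probe the model's capabilities, so `is_chat_model` and `is_function_calling_model` default to `False`, and a tool-calling agent can then fail quietly with no output. A sketch with both flags set (model name and proxy URL are placeholders):

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="gpt-4o",                       # model name as registered in the LiteLLM proxy
    api_base="http://localhost:4000/v1",  # LiteLLM proxy endpoint
    api_key="sk-1234",                    # proxy key, sent as "Authorization: Bearer <key>"
    is_chat_model=True,                   # use the chat completions endpoint
    is_function_calling_model=True,       # required for tool-/function-calling agents
)
```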
-
My local LLM is now behind a LiteLLM proxy with Bearer Authorization, i.e. the request headers include `'Authorization': 'Bearer SECRETKEY'`. How would I add this to OpenAILike?
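For reference, a minimal sketch of the simplest route (model name and URL below are placeholders): the underlying OpenAI SDK already sends `api_key` as `Authorization: Bearer <key>`, so in many setups passing the proxy key as `api_key` is enough, and `default_headers` covers anything beyond that:

```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="my-local-model",               # placeholder model name
    api_base="http://localhost:4000/v1",  # placeholder LiteLLM proxy URL
    api_key="SECRETKEY",                  # sent as "Authorization: Bearer SECRETKEY"
    # Only needed for headers beyond the standard Authorization header:
    default_headers={"X-Example-Header": "value"},  # hypothetical extra header
)
print(llm.complete("Hello"))
```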