
Support of instrumenting Anthropic models #110

Closed
samuelcolvin opened this issue May 3, 2024 · 8 comments · Fixed by #181
@samuelcolvin
Member

Originally posted by @salahaz as a discussion #88

Is there going to be support in the near future for instrumenting other AI models, such as Anthropic?

@alexmojaki
Contributor

Related to #109 since they already support Anthropic.

@willbakst
Contributor

@salahaz #98 shows how you can already do this with Mirascope in the meantime while we patiently wait for this feature.

@samuelcolvin I'd love to take this feature request on if you'd like :)

We've already implemented the majority of it in our library, following how you instrument OpenAI, so it shouldn't be too difficult to port over (and we can use the llm tag so it shows up nicely in the UI with no additional changes).
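For context, "instrumenting" a chat client in this style generally means wrapping its request method in a tracing span that records the model, arguments, and token usage. The sketch below shows that wrapping pattern in a provider-agnostic way; all names here (`span`, `instrument_chat_method`, the `RECORDED` list) are illustrative stand-ins, not Logfire's actual API.

```python
from contextlib import contextmanager
from functools import wraps

RECORDED = []  # collected span records; stand-in for a real span exporter


@contextmanager
def span(name, **attributes):
    # Stand-in for a real tracing span (e.g. an OpenTelemetry span).
    record = {"name": name, "attributes": attributes}
    RECORDED.append(record)
    yield record


def instrument_chat_method(client, method_name="create"):
    """Wrap client.<method_name> so every call is recorded in a span."""
    original = getattr(client, method_name)

    @wraps(original)
    def wrapper(*args, **kwargs):
        with span("chat", model=kwargs.get("model")) as record:
            response = original(*args, **kwargs)
            # Providers report token usage differently; capture it raw here.
            record["attributes"]["usage"] = getattr(response, "usage", None)
            return response

    setattr(client, method_name, wrapper)
    return client
```

The same wrapper works for any SDK whose client exposes a single request method, which is why porting the OpenAI approach to Anthropic is mostly a matter of mapping attribute names.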

@alexmojaki
Contributor

@samuelcolvin I'd love to take this feature request on if you'd like :)

@willbakst I think that'd be very appreciated

@willbakst
Contributor

@alexmojaki Do you have a contribution guide I can follow?

@alexmojaki
Contributor

https://github.com/pydantic/logfire/blob/main/CONTRIBUTING.md

@willbakst
Contributor

@alexmojaki I just ran through the guide:

  1. There are 7 skipped tests and 2 xfailed tests. Is this expected?
  2. Running make docs aborts with 9 warnings in strict mode. Is this expected?
    (I can set strict mode to false locally to get it to work.)

@alexmojaki
Contributor

Yes and yes. Our docs use a closed-source version of mkdocs with special features for sponsors/insiders, so external contributors can't fully build them. The contribution guide should really reflect this.

@willbakst
Contributor

Update: I have a locally working version with tests, but a lot of the code is duplicated given the similarity with OpenAI. I'm going to spend some time refactoring to reduce code/test duplication and hopefully make it easier to instrument additional providers in the future (assuming they use the same SDK generation provider).

This will take a little longer to get done, but I believe it will be well worth it.
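One common way to factor out that duplication is to keep a single wrapping routine and push the provider-specific differences (span name, how to read the model and usage off the request/response) into small per-provider config objects. A hedged sketch of that idea follows; the `ProviderConfig` type and the field names are hypothetical, not what the eventual logfire refactor actually uses.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ProviderConfig:
    """Per-provider hooks; the shared wrapping machinery stays identical."""
    span_name: str
    get_model: Callable[[dict], Any]   # extract model name from call kwargs
    get_usage: Callable[[Any], Any]    # extract token usage from the response


# Hypothetical configs: each provider only differs in these extractors.
OPENAI = ProviderConfig(
    span_name="openai.chat",
    get_model=lambda kwargs: kwargs.get("model"),
    get_usage=lambda resp: getattr(resp, "usage", None),
)

ANTHROPIC = ProviderConfig(
    span_name="anthropic.messages",
    get_model=lambda kwargs: kwargs.get("model"),
    get_usage=lambda resp: getattr(resp, "usage", None),
)


def instrument(call, config, spans):
    """Wrap a request callable so each call appends a span dict to `spans`."""
    def wrapper(**kwargs):
        record = {"name": config.span_name, "model": config.get_model(kwargs)}
        response = call(**kwargs)
        record["usage"] = config.get_usage(response)
        spans.append(record)
        return response
    return wrapper
```

With this shape, adding a third provider that uses the same SDK generation tooling means writing one more `ProviderConfig` rather than duplicating the whole instrumentation and its tests.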
