
chore(llmobs): [MLOB-1944] generalize helper for extracting token metrics #12026

Draft · wants to merge 5 commits into base: main
Conversation

@ncybul (Contributor) commented Jan 22, 2025

This PR generalizes the helper method used to extract token metrics from an APM span and attach them to an LLMObs span. Previously, the Anthropic, Bedrock, and OpenAI integration classes each implemented their own method for this. Now a single get_llmobs_metrics_tags utility function, adapted from the Google-specific get_llmobs_metrics_tags_google function, is reused across those integrations as well as Vertex AI and Gemini. The LangChain integration was excluded from this change because its logic for extracting token metrics differs significantly from the other integrations.

Testing

Anthropic

import anthropic
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)

Running the code above produced an APM span and an LLM span with the expected token metrics.

Bedrock

import boto3
import json
session = boto3.Session(profile_name='601427279990_account-admin', region_name="us-east-1")
brt = session.client(service_name='bedrock-runtime')
modelId = 'amazon.titan-text-lite-v1'
accept = 'application/json'
contentType = 'application/json'
input_text = "Explain black holes to 8th graders."
body = {
    "inputText": input_text
}
response = brt.invoke_model(body=json.dumps(body), modelId=modelId, accept=accept, contentType=contentType)

Running the code above produced an APM span and an LLM span with the expected token metrics.

Gemini

import google.generativeai as genai
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Explain how AI works",
    generation_config={"temperature": 0.5, "max_output_tokens": 100},
)

Running the code above produced an APM span and an LLM span with the expected token metrics.

OpenAI

import os
from openai import OpenAI
oai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
completion = oai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain how black holes work to an 8th grader."},
    ],
)

Running the code above produced an APM span and an LLM span with the expected token metrics.

Vertex AI

import vertexai
from vertexai.generative_models import GenerativeModel
PROJECT_ID = "datadog-sandbox"
vertexai.init(project=PROJECT_ID, location="us-central1")
model = GenerativeModel("gemini-1.5-flash-002")
response = model.generate_content(
    "Explain how a black hole works to an 8th grader.",
    generation_config={"temperature": 0.9, "max_output_tokens": 20},
)

Running the code above produced an APM span and an LLM span with the expected token metrics.

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

github-actions bot commented Jan 22, 2025

CODEOWNERS have been resolved as:

ddtrace/llmobs/_integrations/anthropic.py                               @DataDog/ml-observability
ddtrace/llmobs/_integrations/bedrock.py                                 @DataDog/ml-observability
ddtrace/llmobs/_integrations/gemini.py                                  @DataDog/ml-observability
ddtrace/llmobs/_integrations/openai.py                                  @DataDog/ml-observability
ddtrace/llmobs/_integrations/utils.py                                   @DataDog/ml-observability
ddtrace/llmobs/_integrations/vertexai.py                                @DataDog/ml-observability

pr-commenter bot commented Jan 22, 2025

Benchmarks

Benchmark execution time: 2025-01-24 20:56:12

Comparing candidate commit 52f0391 in PR branch nicole-cybul/generalize-metric-helpers with baseline commit 4611816 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 394 metrics, 2 unstable metrics.

@ncybul ncybul added the changelog/no-changelog A changelog entry is not required for this PR. label Jan 24, 2025