
Improve custom agentchat agent docs with model clients (gemini example) and serialization #5468

Merged: 13 commits merged into main on Feb 11, 2025

Conversation

victordibia (Collaborator) commented Feb 10, 2025

This PR improves the documentation on custom agents:

  • Shows how to create a custom agent that directly uses a model client, in this case a GeminiAssistantAgent that uses the Gemini SDK client directly.
  • Shows that the custom agent can be easily added to any AgentChat team.
  • Shows how the same custom agent can be made declarative by inheriting the Component interface and implementing the required methods.
import os
from typing import AsyncGenerator, Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import AgentEvent, ChatMessage, TextMessage
from autogen_core import CancellationToken, Component
from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import AssistantMessage, RequestUsage, UserMessage
from google import genai
from google.genai import types
from pydantic import BaseModel

class GeminiAssistantAgentConfig(BaseModel):
    name: str
    description: str = "An agent that provides assistance with ability to use tools."
    model: str = "gemini-1.5-flash-002"
    system_message: str | None = None


class GeminiAssistant(BaseChatAgent, Component[GeminiAssistantAgentConfig]):
    component_config_schema = GeminiAssistantAgentConfig
    # component_provider_override = "mypackage.agents.GeminiAssistant"

    def __init__(
        self,
        name: str,
        description: str = "An agent that provides assistance with ability to use tools.",
        model: str = "gemini-1.5-flash-002",
        api_key: str = os.environ["GEMINI_API_KEY"],
        system_message: str = "You are a helpful assistant that can respond to messages. Reply with TERMINATE when the task has been completed.",
    ):
        super().__init__(name=name, description=description)
        self._model_context = UnboundedChatCompletionContext()
        self._model_client = genai.Client(api_key=api_key)
        self._system_message = system_message
        self._model = model

    @property
    def produced_message_types(self) -> Sequence[type[ChatMessage]]:
        return (TextMessage,)

    async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:
        async for message in self.on_messages_stream(messages, cancellation_token):
            if isinstance(message, Response):
                return message
        raise AssertionError("The stream should have returned the final result.")

    async def on_messages_stream(
        self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken
    ) -> AsyncGenerator[AgentEvent | ChatMessage | Response, None]:
        # Add messages to the model context
        for msg in messages:
            await self._model_context.add_message(UserMessage(content=msg.content, source=msg.source))

        # Get conversation history
        history = [msg.source + ": " + msg.content + "\n" for msg in await self._model_context.get_messages()]

        # Generate response using Gemini
        response = self._model_client.models.generate_content(
            model=self._model,
            contents=f"History: {history}\nGiven the history, please provide a response",
            config=types.GenerateContentConfig(
                system_instruction=self._system_message,
                temperature=0.3,
            ),
        )

        # Create usage metadata
        usage = RequestUsage(
            prompt_tokens=response.usage_metadata.prompt_token_count,
            completion_tokens=response.usage_metadata.candidates_token_count,
        )

        # Add response to model context
        await self._model_context.add_message(AssistantMessage(content=response.text, source=self.name))

        # Yield the final response
        yield Response(
            chat_message=TextMessage(content=response.text, source=self.name, models_usage=usage),
            inner_messages=[],
        )

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        """Reset the assistant by clearing the model context."""
        await self._model_context.clear()

    @classmethod
    def _from_config(cls, config: GeminiAssistantAgentConfig) -> "GeminiAssistant":
        return cls(
            name=config.name, description=config.description, model=config.model, system_message=config.system_message
        )

    def _to_config(self) -> GeminiAssistantAgentConfig:
        return GeminiAssistantAgentConfig(
            name=self.name,
            description=self.description,
            model=self._model,
            system_message=self._system_message,
        )
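The delegation in `on_messages` above, draining `on_messages_stream` and returning the final item, is a common pattern for stream-capable agents. A minimal, dependency-free sketch of that pattern (illustrative names only, not the autogen API):

```python
# Sketch of the "drain the stream, return the final response" pattern.
# FinalResponse stands in for autogen_agentchat.base.Response here.
import asyncio
from typing import AsyncGenerator


class FinalResponse:
    def __init__(self, content: str) -> None:
        self.content = content


async def on_messages_stream(task: str) -> AsyncGenerator[object, None]:
    # Yield intermediate events first; the final response comes last.
    yield f"thinking about: {task}"
    yield FinalResponse(f"answer to: {task}")


async def on_messages(task: str) -> FinalResponse:
    # Drain the stream; the last yielded item must be the final response.
    async for item in on_messages_stream(task):
        if isinstance(item, FinalResponse):
            return item
    raise AssertionError("The stream should have returned the final result.")


result = asyncio.run(on_messages("2 + 2"))
print(result.content)  # answer to: 2 + 2
```

Implementing only `on_messages_stream` and deriving `on_messages` from it keeps the two code paths consistent, which is exactly what the GeminiAssistant class does.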
# Serialize the agent to a declarative config, then load an equivalent agent back.
gemini_assistant = GeminiAssistant("gemini_assistant")
config = gemini_assistant.dump_component()
print(config.model_dump_json(indent=2))
loaded_agent = GeminiAssistant.load_component(config)
print(loaded_agent)

# The custom agent drops into any AgentChat team; primary_agent,
# gemini_critic_agent, and text_termination are defined elsewhere in the docs.
team = RoundRobinGroupChat([primary_agent, gemini_critic_agent], termination_condition=text_termination)
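The dump/load round-trip above follows a general pattern: `_to_config` captures the constructor arguments as a serializable config, and `_from_config` rebuilds an equivalent agent from it. A stdlib-only sketch of that round-trip (class and field names are illustrative, not the autogen API):

```python
# Sketch of the declarative serialization pattern: agent -> config -> JSON
# -> config -> agent. Uses dataclasses + json instead of pydantic/Component.
import json
from dataclasses import asdict, dataclass


@dataclass
class AgentConfig:
    name: str
    model: str = "gemini-1.5-flash-002"


class SketchAgent:
    def __init__(self, name: str, model: str = "gemini-1.5-flash-002") -> None:
        self.name = name
        self.model = model

    def _to_config(self) -> AgentConfig:
        # Capture everything needed to reconstruct this agent.
        return AgentConfig(name=self.name, model=self.model)

    @classmethod
    def _from_config(cls, config: AgentConfig) -> "SketchAgent":
        return cls(name=config.name, model=config.model)


agent = SketchAgent("gemini_assistant")
payload = json.dumps(asdict(agent._to_config()))               # dump
loaded = SketchAgent._from_config(AgentConfig(**json.loads(payload)))  # load
print(loaded.name, loaded.model)  # gemini_assistant gemini-1.5-flash-002
```

Note that runtime-only values such as the API key are deliberately excluded from the config, matching how GeminiAssistantAgentConfig omits `api_key`.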

Why are these changes needed?

Related issue number

Closes #5450

Checks

codecov bot commented Feb 10, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 75.09%. Comparing base (2612796) to head (00c5560).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5468   +/-   ##
=======================================
  Coverage   75.09%   75.09%           
=======================================
  Files         167      167           
  Lines        9906     9906           
=======================================
  Hits         7439     7439           
  Misses       2467     2467           
Flag        Coverage                Δ
unittests   75.09% <100.00%>        (ø)


@victordibia victordibia changed the title Improve docs with examples on custom agentchat agents with model clients and serialization Improve custom agentchat agent docs with model clients (gemini example) and serialization Feb 10, 2025
@ekzhu ekzhu merged commit cd085e6 into main Feb 11, 2025
64 of 66 checks passed
@ekzhu ekzhu deleted the custom_agentchat_update_vd branch February 11, 2025 00:29
Successfully merging this pull request may close these issues.

[Python] Improve Custom Agent Sample in AgentChat
3 participants