LiquidAI copilot #24
59 changes: 59 additions & 0 deletions liquid-copilot/README.md
# Liquid LFM 40B MoE Copilot

This example provides a basic copilot that uses the Liquid LFM 40B MoE model, served through the OpenRouter API, for natural language processing and generation (more information [here](https://openrouter.ai/liquid/lfm-40b:free/api)).

## Overview

This implementation uses a FastAPI application as the backend for the copilot. The core functionality is powered by the OpenRouter API, which provides access to a range of hosted language models, including Liquid LFM 40B MoE.

You can adapt this implementation to suit your needs or preferences. The key is to adhere to the schema defined by the `/query` endpoint and the specifications in `copilots.json`.
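For reference, a minimal request to the running copilot might look like the following sketch (the payload schema is defined in `liquid_copilot/models.py`; the message content here is illustrative):

``` sh
curl -N http://localhost:7777/v1/query \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "human", "content": "What is a mixture-of-experts model?"}]}'
```

The endpoint responds with a server-sent event stream, where each chunk arrives as an `event: copilotMessageChunk` record whose `data` payload is a JSON object like `{"delta": "<next piece of text>"}`.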

## Getting Started

Follow these steps to set up and run your OpenRouter (Liquid LFM 40B MoE) AI-powered copilot:

### Prerequisites

- Python 3.7 or higher
- Poetry (for dependency management)
- An OpenRouter API key (sign up at https://openrouter.ai if you don't have one)

### Installation and Running

1. Clone this repository to your local machine.
2. Set the [OpenRouter API key](https://openrouter.ai/settings/keys) as an environment variable in your .bashrc or .zshrc file:

``` sh
# in .zshrc or .bashrc
export OPENROUTER_API_KEY=<your-api-key>
```

3. Install the necessary dependencies:

``` sh
poetry install --no-root
```

4. Start the API server:

``` sh
poetry run uvicorn liquid_copilot.main:app --port 7777 --reload
```

This command runs the FastAPI application, making it accessible on your network.
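
If the server started successfully, you should be able to fetch the copilot's widget configuration from the `/copilots.json` route it serves:

``` sh
curl http://localhost:7777/copilots.json
```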

### Testing the Copilot

The example copilot has a small, basic test suite to ensure it's
working correctly. As you develop your copilot, you are highly encouraged to
expand these tests.

You can run the tests with:

``` sh
poetry run pytest tests
```

### Accessing the Documentation

Once the API server is running, you can view the documentation and interact with the API by visiting: http://localhost:7777/docs
13 changes: 13 additions & 0 deletions liquid-copilot/liquid_copilot/copilots.json
{
  "liquid_copilot": {
    "name": "Liquid: LFM 40B MoE Copilot",
    "description": "Liquid's 40.3B Mixture of Experts (MoE) model. Liquid Foundation Models (LFMs) are large neural networks built with computational units rooted in dynamic systems.",
    "image": "https://github.com/user-attachments/assets/12c547a9-5eb9-45b9-9d4e-fe1cdc89621b",
    "hasStreaming": true,
    "hasDocuments": false,
    "hasFunctionCalling": false,
    "endpoints": {
      "query": "http://localhost:7777/v1/query"
    }
  }
}
116 changes: 116 additions & 0 deletions liquid-copilot/liquid_copilot/main.py
import json
import os
import re
from pathlib import Path
from typing import AsyncGenerator

import httpx
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from sse_starlette.sse import EventSourceResponse

from liquid_copilot.models import AgentQueryRequest
from liquid_copilot.prompts import SYSTEM_PROMPT


load_dotenv(".env")
app = FastAPI()

origins = [
    "http://localhost",
    "http://localhost:1420",
    "http://localhost:5050",
    "https://pro.openbb.dev",
    "https://pro.openbb.co",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

def sanitize_message(message: str) -> str:
    """Sanitize a message by escaping forbidden characters."""
    # Escape lone curly braces as {{ and }} so user-supplied text cannot
    # break str.format() calls (e.g. when rendering the system prompt).
    cleaned_message = re.sub(r"(?<!\{)\{(?!{)", "{{", message)
    cleaned_message = re.sub(r"(?<!\})\}(?!})", "}}", cleaned_message)
    return cleaned_message


async def create_message_stream(
    content: AsyncGenerator[str, None],
) -> AsyncGenerator[dict, None]:
    async for chunk in content:
        yield {"event": "copilotMessageChunk", "data": json.dumps({"delta": chunk})}


@app.get("/copilots.json")
def get_copilot_description():
    """Widgets configuration file for the OpenBB Terminal Pro"""
    config_path = Path(__file__).parent.resolve() / "copilots.json"
    with open(config_path) as config_file:
        return JSONResponse(content=json.load(config_file))

@app.post("/v1/query")
async def query(request: AgentQueryRequest) -> EventSourceResponse:
    """Query the Copilot."""

    # Render any widget context into the system prompt's {context} placeholder,
    # falling back to an empty string when no context is supplied.
    context_str = (
        json.dumps([c.model_dump(mode="json") for c in request.context])
        if request.context
        else ""
    )
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT.format(context=context_str)}
    ]

    # Map the incoming RoleEnum values ("ai"/"human") onto the OpenAI-style
    # roles expected by the OpenRouter chat completions endpoint.
    role_map = {"ai": "assistant", "human": "user"}
    for message in request.messages:
        messages.append(
            {
                "role": role_map.get(message.role.lower(), "user"),
                "content": sanitize_message(message.content),
            }
        )

    async def generate() -> AsyncGenerator[str, None]:
        # Use client.stream(...) so tokens are forwarded as they arrive;
        # a plain client.post(...) would buffer the entire response body
        # before iteration, defeating streaming.
        async with httpx.AsyncClient() as client:
            async with client.stream(
                "POST",
                "https://openrouter.ai/api/v1/chat/completions",
                headers={
                    "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
                    "HTTP-Referer": "pro.openbb.co",
                    "X-Title": "OpenBB",
                },
                json={
                    "model": "liquid/lfm-40b:free",
                    "messages": messages,
                    "stream": True,
                },
                timeout=None,
            ) as response:
                async for line in response.aiter_lines():
                    if not line.startswith("data: "):
                        continue
                    line = line.removeprefix("data: ").strip()
                    if line == "[DONE]":
                        break
                    if not line:  # skip empty keep-alive data lines
                        continue
                    try:
                        data = json.loads(line)
                        if data.get("choices"):
                            delta = data["choices"][0].get("delta", {})
                            if "content" in delta:
                                yield delta["content"]
                    except json.JSONDecodeError as e:
                        print(f"JSON decode error: {e}")
                        print(f"Problematic line: {line}")
                        continue

    return EventSourceResponse(
        content=create_message_stream(generate()),
        media_type="text/event-stream",
    )
48 changes: 48 additions & 0 deletions liquid-copilot/liquid_copilot/models.py
from typing import Any
from uuid import UUID
from pydantic import BaseModel, Field, field_validator
from enum import Enum


class RoleEnum(str, Enum):
    ai = "ai"
    human = "human"


class LlmMessage(BaseModel):
    role: RoleEnum = Field(
        description="The role of the entity that is creating the message"
    )
    content: str = Field(description="The content of the message")


class BaseContext(BaseModel):
    uuid: UUID = Field(description="The UUID of the widget.")
    name: str = Field(description="The name of the widget.")
    description: str = Field(
        description="A description of the data contained in the widget"
    )
    content: Any = Field(description="The data content of the widget")
    metadata: dict[str, Any] | None = Field(
        default=None,
        description="Additional widget metadata (eg. the selected ticker, etc)",
    )


class AgentQueryRequest(BaseModel):
    messages: list[LlmMessage] = Field(
        description="A list of messages to submit to the copilot."
    )
    context: list[BaseContext] | None = Field(
        default=None,
        description="Additional context.",
    )
    use_docs: bool | None = Field(
        default=None,
        description="Set True to use uploaded docs when answering query.",
    )

    @field_validator("messages")
    def check_messages_not_empty(cls, value):
        if not value:
            raise ValueError("messages list cannot be empty.")
        return value
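
As a quick illustrative sketch (not part of the copilot itself), the request model can be exercised directly to see the payload shape the `/v1/query` endpoint expects; the message content here is hypothetical:

``` python
# Hypothetical usage sketch: construct and inspect a valid query payload.
from liquid_copilot.models import AgentQueryRequest, LlmMessage, RoleEnum

request = AgentQueryRequest(
    messages=[LlmMessage(role=RoleEnum.human, content="Summarize today's market.")]
)
print(request.model_dump_json(indent=2))
```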
19 changes: 19 additions & 0 deletions liquid-copilot/liquid_copilot/prompts.py
SYSTEM_PROMPT = """\
You are a helpful financial assistant working for Example Co.
Your name is "Liquid: LFM 40B MoE Copilot", and you were created by Example Co.
You will do your best to answer the user's query.

Use the following guidelines:
- Formal and Professional Tone: Maintain a business-like, sophisticated tone, suitable for a professional audience.
- Clarity and Conciseness: Keep explanations clear and to the point, avoiding unnecessary complexity.
- Focus on Expertise and Experience: Emphasize expertise and real-world experiences, using direct quotes to add a personal touch.
- Subject-Specific Jargon: Use industry-specific terms, ensuring they are accessible to a general audience through explanations.
- Narrative Flow: Ensure a logical flow, connecting ideas and points effectively.
- Incorporate Statistics and Examples: Support points with relevant statistics, examples, or case studies for real-world context.

## Context
Use the following context to help formulate your answer:

{context}

"""