🦉 OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation
🦉 OWL is a cutting-edge framework for multi-agent collaboration that pushes the boundaries of task automation, built on top of the CAMEL-AI Framework.
Our vision is to revolutionize how AI agents collaborate to solve real-world tasks. By leveraging dynamic agent interactions, OWL enables more natural, efficient, and robust task automation across diverse domains.
- [2025.03.07]: We open-sourced the codebase of the 🦉 OWL project.
- Real-time Information Retrieval: Leverage Wikipedia, Google Search, and other online sources for up-to-date information.
- Multimodal Processing: Support for handling internet or local videos, images, and audio data.
- Browser Automation: Utilize the Playwright framework for simulating browser interactions, including scrolling, clicking, input handling, downloading, navigation, and more.
- Document Parsing: Extract content from Word, Excel, PDF, and PowerPoint files, converting them into text or Markdown format.
- Code Execution: Write and execute Python code using a built-in interpreter.
- Built-in Toolkits: Access to a comprehensive set of built-in toolkits including ArxivToolkit, AudioAnalysisToolkit, CodeExecutionToolkit, DalleToolkit, DataCommonsToolkit, ExcelToolkit, GitHubToolkit, GoogleMapsToolkit, GoogleScholarToolkit, ImageAnalysisToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, OpenAPIToolkit, RedditToolkit, SearchToolkit, SemanticScholarToolkit, SymPyToolkit, VideoAnalysisToolkit, WeatherToolkit, WebToolkit, and many more for specialized tasks.
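To illustrate how these toolkits plug into an agent, the sketch below shows the general CAMEL pattern of collecting a toolkit's tools and handing them to a ChatAgent. Class names and signatures follow the CAMEL API at the time of writing and may vary between versions; treat this as a sketch, not OWL's internal wiring.
# Minimal sketch: equipping a CAMEL ChatAgent with a toolkit.
# Constructor arguments may differ across CAMEL versions.
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit

tools = SearchToolkit().get_tools()  # each toolkit method becomes a callable tool

agent = ChatAgent(
    system_message="You are a helpful research assistant.",
    tools=tools,
)

response = agent.step("Search for the latest CAMEL-AI release and summarize it.")
print(response.msgs[0].content)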
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Install uv if you don't have it already
pip install uv
# Create a virtual environment and install dependencies
# We support Python 3.10, 3.11, and 3.12
uv venv .venv --python=3.10
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install OWL (and CAMEL) with all dependencies
uv pip install -e .
# Exit the virtual environment when done
deactivate
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a virtual environment
# For Python 3.10 (also works with 3.11, 3.12)
python3.10 -m venv .venv
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install from requirements.txt
pip install -r requirements.txt
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a conda environment
conda create -n owl python=3.10
# Activate the conda environment
conda activate owl
# Option 1: Install as a package (recommended)
pip install -e .
# Option 2: Install from requirements.txt
pip install -r requirements.txt
# Exit the conda environment when done
conda deactivate
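Whichever installation method you used, you can quickly sanity-check the environment before moving on. This assumes the camel package exposes a __version__ attribute, as recent releases do:
# Quick sanity check that CAMEL is importable in the active environment
import camel
print(camel.__version__)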
In the owl/.env_template file, you will find all the necessary API keys along with the websites where you can register for each service. To use these API services, follow these steps:
- Copy and Rename: Duplicate the .env_template file and rename the copy to .env.
cp owl/.env_template .env
- Fill in Your Keys: Open the .env file and insert your API keys in the corresponding fields. (For the minimal example (run_mini.py), you only need to configure the LLM API key, e.g., OPENAI_API_KEY.)
- For using other models: please refer to our CAMEL models docs: https://docs.camel-ai.org/key_modules/models.html#supported-model-platforms-in-camel
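If you write your own scripts on top of OWL, a common way to pick up these keys at runtime is python-dotenv. This is a hedged sketch; the bundled run scripts may load their configuration differently:
# Sketch: loading API keys from a .env file with python-dotenv
# (pip install python-dotenv). The key name is an example.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"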
Note: For optimal performance, we strongly recommend using OpenAI models. Our experiments show that other models may result in significantly lower performance on complex tasks and benchmarks.
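If you do switch backends, model selection in CAMEL typically goes through ModelFactory. A minimal sketch follows; the enum values are examples and the available platforms and types depend on your CAMEL version:
# Sketch: creating a model backend via CAMEL's ModelFactory.
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
)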
# Clone the repository
git clone https://github.com/camel-ai/owl.git
cd owl
# Configure environment variables
cp owl/.env_template owl/.env
# Edit the .env file and fill in your API keys
# Option 1: Using docker-compose directly
cd .container
docker-compose up -d
# Run OWL inside the container
docker-compose exec owl bash -c "xvfb-python run.py"
# Option 2: Build and run using the provided scripts
cd .container
chmod +x build_docker.sh
./build_docker.sh
# Run OWL inside the container
./run_in_docker.sh "your question"
For more detailed Docker usage instructions, including cross-platform support, optimized configurations, and troubleshooting, please refer to DOCKER_README.md.
Run the following demo case:
python owl/run.py
OWL supports various LLM backends. You can use the following scripts to run with different models:
# Run with Qwen model
python owl/run_qwen.py
# Run with Deepseek model
python owl/run_deepseek.py
# Run with other OpenAI-compatible models
python owl/run_openai_compatiable_model.py
For a simpler version that only requires an LLM API key, you can try our minimal example:
python owl/run_mini.py
You can run the OWL agent with your own task by modifying the run.py script:
# Define your own task
question = "Task description here."
society = construct_society(question)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
To work with local files, simply include the file path along with your question:
# Task with a local file (e.g., file path: `tmp/example.docx`)
question = "What is in the given DOCX file? Here is the file path: tmp/example.docx"
society = construct_society(question)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
OWL will then automatically invoke document-related tools to process the file and extract the answer.
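For intuition, extracting text from a DOCX file boils down to something like the snippet below. This is illustrative only, not OWL's actual toolkit implementation:
# Illustrative only: roughly what a document-parsing tool does for DOCX
# (pip install python-docx). OWL's real toolkit may differ.
from docx import Document

doc = Document("tmp/example.docx")
text = "\n".join(paragraph.text for paragraph in doc.paragraphs)
print(text[:500])  # preview the extracted text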
Example tasks you can try:
- "Find the latest stock price for Apple Inc."
- "Analyze the sentiment of recent tweets about climate change"
- "Help me debug this Python code: [your code here]"
- "Summarize the main points from this research paper: [paper URL]"
OWL now includes a web-based user interface that makes it easier to interact with the system. To start the web interface, run:
python run_app.py
The web interface provides the following features:
- Easy Model Selection: Choose between different models (OpenAI, Qwen, DeepSeek, etc.)
- Environment Variable Management: Configure your API keys and other settings directly from the UI
- Interactive Chat Interface: Communicate with OWL agents through a user-friendly interface
- Task History: View the history and results of your interactions
The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.
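Since the UI is a standard Gradio app, the host, port, and sharing behavior are controlled by the launch() call inside run_app.py. The sketch below shows the general shape of such an app; the function name and wiring are hypothetical stand-ins, not the actual run_app.py internals:
# Hypothetical sketch of a Gradio app like run_app.py; the real script
# routes the question into an OWL society instead of echoing it.
import gradio as gr

def answer(question: str) -> str:
    return f"Echo: {question}"  # placeholder for construct_society/run_society

demo = gr.Interface(fn=answer, inputs="text", outputs="text")
demo.launch(server_name="127.0.0.1", server_port=7860, share=False)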
To reproduce OWL's GAIA benchmark score of 58.18:
- Switch to the gaia58.18 branch:
git checkout gaia58.18
- Run the evaluation script:
python run_gaia_roleplaying.py
- Write a technical blog post detailing our exploration and insights into multi-agent collaboration on real-world tasks.
- Enhance the toolkit ecosystem with more specialized tools for domain-specific tasks.
- Develop more sophisticated agent interaction patterns and communication protocols.
The source code is licensed under Apache 2.0.
If you find this repo useful, please cite:
@misc{owl2025,
  title        = {OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation},
  author       = {{CAMEL-AI.org}},
  howpublished = {\url{https://github.com/camel-ai/owl}},
  note         = {Accessed: 2025-03-07},
  year         = {2025}
}
Join us for further discussions!
Q: Why don't I see Chrome running locally after starting the example script?
A: If OWL determines that a task can be completed using non-browser tools (such as search or code execution), the browser will not be launched. The browser window will only appear when OWL determines that browser-based interaction is necessary.