Multi-modal AI Assistant
VT.ai is a multi-modal AI chatbot assistant that offers a chat interface for interacting with Large Language Models (LLMs) from various providers, either via remote APIs or running locally with Ollama.
The application supports multi-modal conversations, seamlessly integrating text, images, and vision processing with LLMs.
- [Beta] Assistant support: enjoy a multi-modal AI assistant through OpenAI's Assistant API function calling. It can write and run code to answer math questions.
- Multi-Provider Support: Choose from a variety of LLM providers including OpenAI, Anthropic, and Google, with more to come.
- Multi-Modal Conversations: Experience rich, multi-modal interactions by uploading text and image files. You can even drag and drop images for the model to analyze.
- Real-time Responses: Stream responses from the LLM as they are generated.
- Dynamic Settings: Customize model parameters such as temperature and top-p during your chat session.
- Clean and Fast Interface: Built using Chainlit, ensuring a smooth and intuitive user experience.
- Advanced Conversation Routing: Utilizes SemanticRouter for accurate and efficient modality selection.
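To make the routing feature above concrete, here is a deliberately simplified sketch: the real app uses the SemanticRouter library with embedding-based matching, while this keyword-based stand-in (all names hypothetical) only illustrates the idea of dispatching each message to a modality handler.

```python
# Simplified stand-in for semantic routing: the actual app uses SemanticRouter
# with embeddings; this keyword matcher only illustrates the idea of choosing
# a processing modality for each incoming chat turn.
def route_modality(message: str, has_image: bool = False) -> str:
    """Return which pipeline should handle this chat turn (hypothetical helper)."""
    if has_image:
        return "vision"  # uploaded or dragged-in images go to a vision-capable model
    drawing_cues = ("draw", "generate an image", "picture of")
    if any(cue in message.lower() for cue in drawing_cues):
        return "image-generation"
    return "text"  # default: plain text chat completion
```

In the real library, routes are matched by embedding similarity against example utterances rather than literal keywords, which is what makes the selection robust to paraphrasing.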
- Python 3.7 or higher
- (Optional, recommended) `rye` as the Python dependency manager (installation guide below)
- For using local models with Ollama:
  - Download the Ollama client from https://ollama.com/download
  - Download the desired Ollama models from https://ollama.com/library (e.g., `ollama pull llama3`)
  - Follow the Ollama installation and setup instructions
- You can use native Python `pip` to install package dependencies without installing `rye`. If so, you can skip these steps and proceed to the Usage section below.
- [Recommended] If you want to use `rye` and installed it in the Prerequisites step, you can skip these steps and proceed to the Usage section below. Otherwise, you can install `rye` by following these steps:

  a. Install `rye` (Python package manager):

  ```shell
  curl -sSf https://rye-up.com/get | bash
  ```

  b. Source the Rye env file to update PATH (add this to your shell configuration file, e.g., `.zprofile` or `.zshrc`):

  ```shell
  source "$HOME/.rye/env"
  ```
- Rename the `.env.example` file to `.env` and configure your desired LLM provider API keys. If using Ollama, you can leave the API keys blank.
- Create a Python virtual environment:

  ```shell
  python3 -m venv .venv
  ```

- Activate the Python virtual environment:

  ```shell
  source .venv/bin/activate
  ```

- Package management:
  - Using pip, sync dependencies by running:

    ```shell
    pip install -r requirements.txt
    ```

  - [Recommended] If you use `rye`, sync dependencies by running:

    ```shell
    rye sync
    ```
- (Optional) Run the semantic trainer once:

  ```shell
  python src/router/trainer.py
  ```

- Run the app with optional hot reload:

  ```shell
  chainlit run src/app.py -w
  ```

- Open the provided URL in your web browser (e.g., `localhost:8000`).
- Select an LLM model and start chatting or uploading files for multi-modal processing. If using Ollama, select the `Ollama` option from the model dropdown.
- To run the Ollama server for serving local LLM models, for example Meta's Llama 3, use the following commands:
  - `ollama pull llama3` to download the `llama3` model (replace with the desired model name)
  - `ollama serve` to start the Ollama server
  - `ollama --help` for more options and details
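If you want to script against the local server directly, Ollama also exposes an HTTP API on port 11434. The snippet below is a minimal sketch against its documented `/api/generate` endpoint; the helper names are hypothetical, and it assumes `ollama serve` is running with the `llama3` model already pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON object instead of a stream of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to a locally running `ollama serve` and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    # Requires a running Ollama server; will fail if nothing listens on 11434.
    print(ask_ollama("llama3", "Say hello in one word."))
```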
- Chainlit: A powerful library for building chat applications with LLMs, providing a clean and fast front-end.
- LiteLLM: A versatile library for interacting with LLMs, abstracting away the complexities of different providers.
- SemanticRouter: A high-performance library for accurate conversation routing, enabling dynamic modality selection.
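As a rough sketch of how a LiteLLM-style abstraction hides provider differences: `completion()` and the `ollama/` model prefix follow LiteLLM's documented interface, but the provider table and helper functions below are hypothetical, and the exact model names may differ from what VT.ai ships with.

```python
# Hypothetical mapping from a user-facing provider choice to the model-string
# format LiteLLM expects; litellm.completion() then talks to the matching
# provider API (or a local Ollama server) behind one unified interface.
PROVIDER_MODELS = {
    "OpenAI": "gpt-4o",
    "Anthropic": "claude-3-sonnet-20240229",
    "Google": "gemini/gemini-pro",
    "Ollama": "ollama/llama3",  # served locally by `ollama serve`
}

def resolve_model(provider: str) -> str:
    """Map a UI provider choice to a LiteLLM model identifier."""
    try:
        return PROVIDER_MODELS[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}") from None

def chat(provider: str, prompt: str) -> str:
    """One-shot completion through LiteLLM's unified interface.

    Requires `pip install litellm`, plus provider credentials (or a running
    Ollama server for the `ollama/` models).
    """
    from litellm import completion  # imported lazily so the helper above works offline
    resp = completion(
        model=resolve_model(provider),
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```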
Contributions are welcome! Here's how you can contribute:
- Fork the repository
- Create a new branch: `git checkout -b my-new-feature`
- Make your changes and commit them: `git commit -m 'Add some feature'`
- Push to the branch: `git push origin my-new-feature`
- Submit a pull request
See the release tags.
This project is licensed under the MIT License.
For questions, suggestions, or feedback, feel free to reach out:
- Twitter: @vinhnx