- There are already some good options for exposing a ChatGPT chatbot behind a nice user interface, such as Gradio and Streamlit. However, customizing the look and feel of the UI in these frameworks is not that straightforward.
- larry.ai was created with two main principles:
- Ease of use: just install it, configure your OpenAI token, and voila - you have a sleek chatbot frontend
- Flexibility: want to use larry.ai as a simple (internal) proxy and communicate with it via your own frontend? You're welcome to do so via the exposed REST API endpoints.
```
pip install larry-ai
```
- Make sure to properly set the environment variable containing your OpenAI token (`OPENAI_API_TOKEN`).
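- For example, in a Unix-like shell the token could be set like this (a minimal sketch; replace the placeholder value with your own OpenAI API key):

```bash
# Make the token available to the larry server; replace the placeholder with your own key
export OPENAI_API_TOKEN="sk-..."
```

- Then start the server: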
```
larry
INFO:root:Starting Larry server...
INFO: Started server process [2856]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
- Just fire up your browser and head to http://localhost:8000.
- The default/root endpoint (`/`) serves the ReactJS frontend, but other endpoints are also available:
  - `/generate`: REST API endpoint that communicates with OpenAI.
  - `/docs`: FastAPI Swagger UI documentation.
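- As an illustration, `/generate` can be called directly, which is what makes the proxy-style setup mentioned above possible. The JSON body below is only an assumption for the sketch (a single `prompt` field); check the Swagger UI at `/docs` for the actual request and response schema:

```bash
# Hypothetical request against a locally running larry server;
# the payload shape is an assumption - consult /docs for the real schema
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, Larry!"}'
```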
We also have some exciting features on the roadmap, namely:
- Ability to easily change color themes
- Prompt Injection protection
- Caching GPT API calls
- Rate limiting
- Authentication & Authorization
- API Key Management
- Have a cool idea? Feel free to create an issue and submit a PR!
- You can have a look at the current issues here.