Initial commit #86

Open · wants to merge 1 commit into main
121 changes: 74 additions & 47 deletions README.md
@@ -9,84 +9,111 @@ AIxplora is your new personal assistant, an open-source project that opens up un
It leverages AI and LLMs to understand all types of documents, unrestricted by their length or format.

**Imagine being able to query PDF files, MP3 audio, videos, and other types of documents with equal ease and proficiency.
Yes, that's the limitless world AIxplora is inviting you into!***
Yes, that's the limitless world AIxplora is inviting you into!**


## How Can You Contribute?
There are several ways you can contribute as I just released a PoC of the project:
## 🚀 Highlighted Features

- Code: Write, refactor, optimize - every line of code matters!
- **Universal File Integration**: Accepts any file type without restrictions on length.
- **Open-Source Transparency**: Complete access to the source code, granting unparalleled flexibility and trust.
- **Flexible Privacy Options**:
- Use official OpenAI and ChatGPT models while ensuring data confidentiality.
- **Option to utilize open-source models for an added layer of privacy (everything runs on your machine, with no third-party API usage).**
- **Innovative Summarization**: Harness a unique approach to transform your files into concise summaries.
- **Interactive File Indexing**: Engage in dynamic conversations with your indexed files, or detach the "AIxplora brain" interface for a pure ChatGPT experience.

- Documentation: Help us make our project more understandable and user-friendly.
## 💡 Roadmap

- Testing: Every bug found is a step towards perfection.
- **AIxplora-Cloud**: Share your knowledge seamlessly, perfect for businesses, friends, or families aiming for collaborative learning.
- **AIxplora Integration**: Embed your AIxplora brain as a Chat-widget on your website, enabling instant AI-backed responses to user queries.
- **AIxplora Executable**: Simplified usage for all; install AIxplora just like any standard application, no technical expertise needed.
- **Stay Tuned!**: More exciting updates are on the horizon.

- Suggest Features: We believe in the power of ideas, no matter where they come from.
## 🎥 Demo video

- Spread the Word: Share our project within your networks. The more people know about AIxplora, the better it can become!
https://github.com/grumpyp/aixplora/assets/75830792/7302684f-2c1f-4849-9f10-c6254be1009d

## Demo and introduction videos
[![Summary function showcase](https://img.youtube.com/vi/8x9HhWjjNtY/hqdefault.jpg)](https://youtu.be/8x9HhWjjNtY)


more videos on YouTube:

https://youtu.be/8x9HhWjjNtY
https://youtu.be/2lNNKLM0o7U
https://youtu.be/eKLmhJobVvc


## How to run locally
## 🛠 How to Run Locally

1. Clone the repo and Install dependencies
```
git clone [email protected]:grumpyp/aixplora.git
```
2. Install dependencies
```
pip install -r backend/requirements.txt
cd frontend && npm install
cd ..
```
3. Run the backend and the frontend
```
python backend/main.py
cd frontend
npm start
```
1. **Clone the Repository**
```
git clone [email protected]:grumpyp/aixplora.git
```

**Having issues installing frontend? See this [debugging guide](https://github.com/electron-react-boilerplate/electron-react-boilerplate/issues/400)**
2. **Install Dependencies**
```
pip install -r backend/requirements.txt
cd frontend && npm install
cd ..
```

## How to run using Docker Compose
3. **Launch the Backend & Frontend**
```
python backend/main.py
cd frontend
npm start
```

🔍 **Troubleshooting**: Encountering frontend installation problems? Consult this [debugging guide](https://github.com/electron-react-boilerplate/electron-react-boilerplate/issues/400).

1. Clone the repo and Install dependencies
## 🐳 How to Run using Docker Compose

1. **Clone the Repository**
```
git clone [email protected]:grumpyp/aixplora.git
```
2. Build Docker image and run containers

2. **Build Docker Image & Spin Up Containers**
```
install=true docker compose up --build
```
3. When running the above command for the first time, make sure `frontend/node_modules` folder does not exist. The initial build might take some time since it will install all the required dependencies.

4. Once the build and the package installation is finished, it should show an error in the console `app exited with code null, waiting for change to restart it` (We have to work on that issues).
3. **Initial Build**
- On the first run, make sure the `frontend/node_modules` folder does not exist.
- The initial build may take a while, since it installs all required dependencies.

5. Navigate to the UI on `http://localhost:1212/`.
4. **Post-Build Notification**
- After the build and package installation concludes, an error might appear in the console: `app exited with code null, waiting for change to restart it`. This is a known issue we're addressing.

6. Next time when starting the app you can simply use the following command
5. **Access the UI**
- Visit `http://localhost:1212/`.

6. **Subsequent Launches**
```
docker compose up
```
**Note that**
- After adding new packages in `requirements.txt` you'll have to run `docker compose up --build`
- After adding new packages in `package.json` you'll have to run `install=true docker compose up` to install the new packages.
- If you want to just run frontend run `docker compose up frontend`
- If you want to just run backend run `docker compose up backend`
📝 **Notes**:
- After adding new packages in `requirements.txt`, run `docker compose up --build`.
- After adding new packages in `package.json`, run `install=true docker compose up` to install them.
- To solely launch the frontend: `docker compose up frontend`.
- To solely launch the backend: `docker compose up backend`.



## 🤝 How Can You Contribute?

With the recent release of a PoC for the project, your involvement is pivotal. Here's how you can be a part of our journey:

## Roadmap
- **Code**: Dive deep into our codebase! Whether it's writing, refactoring, or optimizing, every line contributes to our collective vision.

- **Documentation**: Illuminate our project's essence. Assist in crafting clearer and more user-centric guidelines and explanations.

- **Testing**: Become our frontline in quality assurance. Each bug identified is a stride towards unparalleled product excellence.

- **Suggest Features**: Your imagination is our canvas. We deeply value ideas, irrespective of their origin.

- **Spread the Word**: Amplify our message. Introduce AIxplora to your network and watch it evolve and flourish with increased collective insight.

- Build a community around the project
- Release a standalone desktop app
- Bugfixes and improvements to scale the project
- Add more features (custom LLMs, more file types, etc.)
- Cloud deployment
- Integrations (Google Drive, Dropbox, etc.)
## Star history

[![Star History Chart](https://api.star-history.com/svg?repos=grumpyp/aixplora&type=Date)](https://star-history.com/#grumpyp/aixplora&Date)

36 changes: 24 additions & 12 deletions backend/embeddings/index_files.py
@@ -7,7 +7,7 @@
from langchain.schema import Document
from database.database import Database
from sqlalchemy import text
from embeddings.utils import openai_ask
from embeddings.utils import openai_ask, openai_ask_no_aixplora_brain
import random
from qdrant_client import QdrantClient
from qdrant_client.http import models
@@ -244,24 +244,36 @@ def query(self, query_embedding: List[List[float]] = None, query_texts: str = No
        else:
            relevant_docs = [doc.payload["chunk"] for doc in results]
            meta_data = [doc.payload["metadata"] for doc in results]
        print("halloooo")
        print(self.openai_model)
        if not self.openai_model[0].startswith("gpt"):
            print(f"Using local model: {self.openai_model[0]}")
            # TODO: refactor this path to be global
            models_dir = os.path.join(os.getcwd(), "llmsmodels")
            gptj = GPT4All(model_name=self.openai_model[0], model_path=models_dir)
            messages = [
                {"role": "user",
                 "content": f"Answer the following question: {query_texts} based on that context: {relevant_docs},"
                            " Make sure your answer is in the same language as the question. If you can't, just answer: I don't know"}
            ]
            if use_brain:
                messages = [
                    {"role": "user",
                     "content": f"Answer the following question: {query_texts} based on that context: {relevant_docs},"
                                " Make sure your answer is in the same language as the question. If you can't, just answer: I don't know"}
                ]
            else:
                messages = [
                    {"role": "user",
                     "content": f"{query_texts}"}]
            answer = gptj.chat_completion(messages, streaming=False)["choices"][0]["message"]["content"]
        else:
            if self.openai_model[0].startswith("gpt"):
                print(f"Using openai model: {self.openai_model[0]}")
                answer = openai_ask(context=relevant_docs, question=query_texts, openai_api_key=self.openai_api_key[0],
                                    openai_model=self.openai_model[0])
            if use_brain:
                if self.openai_model[0].startswith("gpt"):
                    print(f"Using openai model: {self.openai_model[0]}")
                    answer = openai_ask(context=relevant_docs, question=query_texts, openai_api_key=self.openai_api_key[0],
                                        openai_model=self.openai_model[0])
                else:
                    answer = openai_ask(context=relevant_docs, question=query_texts,
                                        openai_api_key=self.openai_api_key[0],
                                        openai_model=self.openai_model[0])
            else:
                if self.openai_model[0].startswith("gpt"):
                    answer = openai_ask_no_aixplora_brain(question=query_texts, openai_api_key=self.openai_api_key[0],
                                                          openai_model=self.openai_model[0])
        _answer = {"answer": answer, "meta_data": meta_data}
        print(meta_data)
        return _answer
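
The hunk above threads a `use_brain` flag through `Genie.query`, giving four paths: local GPT4All vs. OpenAI model, each with or without the retrieved context. Note that the inner `startswith("gpt")` checks in the final `else` branch are redundant, since that branch is only reached when the model name already starts with `gpt`, which leaves the nested `else` arms effectively dead. Below is a minimal sketch of the equivalent dispatch with those checks folded away; the helper name `dispatch_answer` is hypothetical, not part of this PR, and it assumes the module's existing imports (`os`, `GPT4All`, `openai_ask`, `openai_ask_no_aixplora_brain`):

```python
# Minimal sketch of the equivalent dispatch; `dispatch_answer` is a
# hypothetical name, not part of this PR.
def dispatch_answer(self, query_texts, relevant_docs, use_brain: bool):
    model = self.openai_model[0]
    if not model.startswith("gpt"):
        # Local GPT4All model; use_brain decides whether the retrieved
        # chunks are injected into the prompt.
        models_dir = os.path.join(os.getcwd(), "llmsmodels")
        gptj = GPT4All(model_name=model, model_path=models_dir)
        content = (f"Answer the following question: {query_texts} "
                   f"based on that context: {relevant_docs}"
                   if use_brain else f"{query_texts}")
        messages = [{"role": "user", "content": content}]
        return gptj.chat_completion(messages, streaming=False)["choices"][0]["message"]["content"]
    if use_brain:
        # OpenAI model, grounded in the retrieved chunks.
        return openai_ask(context=relevant_docs, question=query_texts,
                          openai_api_key=self.openai_api_key[0], openai_model=model)
    # OpenAI model without the AIxplora brain (no retrieval context).
    return openai_ask_no_aixplora_brain(question=query_texts,
                                        openai_api_key=self.openai_api_key[0],
                                        openai_model=model)
```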
20 changes: 20 additions & 0 deletions backend/embeddings/utils.py
@@ -12,6 +12,7 @@ def openai_ask(context: str = None, pages: List[int] = None, question: str = Non
    print(pages)
    # TODO: make the answer use the same language as the question
    if openai_model.startswith("gpt"):
        openai.api_key = openai_api_key
        completion = openai.ChatCompletion.create(
            model=f"{openai_model}",
            messages=[
@@ -34,3 +35,22 @@ def openai_ask(context: str = None, pages: List[int] = None, question: str = Non
        return gptj.chat_completion(messages=messages, streaming=False)


def openai_ask_no_aixplora_brain(question: str, openai_api_key: str = None, openai_model: str = "gpt-3.5-turbo"):
    if openai_model.startswith("gpt"):
        openai.api_key = openai_api_key
        completion = openai.ChatCompletion.create(
            model=f"{openai_model}",
            messages=[
                {"role": "user", "content": f"{question}"}
            ]
        )
        return completion["choices"][0]["message"]["content"]
    else:
        print(f"Using local model: {openai_model}")
        models_dir = os.path.join(os.getcwd(), "llmsmodels")
        gptj = GPT4All(model_name=openai_model, model_path=models_dir)
        messages = [
            {"role": "user",
             "content": f"{question}"}
        ]
        return gptj.chat_completion(messages=messages, streaming=False)
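
One detail worth flagging in both helpers: the `gpt` branch returns the plain message string, while the GPT4All branch returns the full completion dict from `chat_completion`. Callers that expect a string could normalize the shape with a small helper; the sketch below is hypothetical, not part of this PR:

```python
# Hypothetical normalizer, not part of this PR: accept either a plain string
# (OpenAI branch) or a GPT4All completion dict and return the message text.
def extract_message_content(result) -> str:
    if isinstance(result, str):
        return result
    return result["choices"][0]["message"]["content"]
```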
3 changes: 2 additions & 1 deletion backend/main.py
@@ -1,5 +1,6 @@
import uvicorn
from fastapi import FastAPI, File, UploadFile, Request, Header
from typing import Optional
from fastapi.middleware.cors import CORSMiddleware
from sqlalchemy import text
from typing import List
@@ -206,10 +207,10 @@ def chat(request: Request, question: Question, document: Document):
    apikey = request.headers.get("apikey", False)
    email = request.headers.get("email", False)
    genie = Genie()
    print(email, apikey)
    if apikey and email:
        genie = Genie(remote_db=True, apikey=apikey, email=email)
    answer = genie.query(query_texts=question.question, specific_doc=document.document)

    print(answer)
    return {"question": question.question, "answer": answer["answer"], "meta_data": answer["meta_data"]}
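
Note that the frontend change further down sends a `usebrain` flag in the chat request body, but this hunk does not read it yet. One way the endpoint could accept the flag is sketched below; the model name `BrainOption`, the route path, and the assumption that `Genie.query` exposes the `use_brain` parameter seen in the embeddings diff are all unconfirmed:

```python
# Hypothetical sketch, not part of this PR: accept the frontend's `usebrain`
# flag via a Pydantic body model and forward it to Genie.query.
from pydantic import BaseModel

class BrainOption(BaseModel):
    usebrain: bool = True

@app.post("/chat")
def chat(request: Request, question: Question, document: Document, brain: BrainOption):
    genie = Genie()
    answer = genie.query(query_texts=question.question,
                         specific_doc=document.document,
                         use_brain=brain.usebrain)
    return {"question": question.question, "answer": answer["answer"],
            "meta_data": answer["meta_data"]}
```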
11 changes: 10 additions & 1 deletion frontend/src/renderer/pages/Chat/Chat.tsx
@@ -5,7 +5,7 @@ import {Message} from 'renderer/utils';
import Question from './components/Question';
import Answer from './components/Answer';
import {IconSend, IconTrash, IconArrowUp} from '@tabler/icons-react';
import {Modal, Select} from '@mantine/core';
import {Modal, Select, Switch} from '@mantine/core';
import {useDisclosure} from '@mantine/hooks';
import Block from './components/Block';

@@ -22,6 +22,7 @@ function Chat() {
  const [error, setError] = useState(false);
  const [input, setInput] = useState('');
  const discussionRef = useRef<HTMLDivElement>(null);
  const [useBrain, setUseBrain] = useState(true);


    // used for the specific_document chat
@@ -79,6 +80,7 @@ function Chat() {
        document: {
          document: selectedFile,
        },
        usebrain: useBrain,
      });
      const data = response.data;

@@ -146,6 +148,13 @@ function Chat() {
      <div onClick={up} className="arrow_up">
        <IconArrowUp color="white"/>
      </div>
      <div className={'switch'}>
        AIxplora brain
        <Switch
          onLabel="ON" offLabel="OFF" defaultChecked
          onClick={() => { setUseBrain(!useBrain) }}
        />
      </div>
      <Modal.Root opened={opened} onClose={close}>
        <Modal.Overlay/>
        <Modal.Content>
10 changes: 10 additions & 0 deletions frontend/src/renderer/pages/Chat/chat.css
@@ -185,6 +185,16 @@ p {
  cursor: pointer;
}

.switch {
  background: rgba(34, 139, 230, 0.3);
  position: fixed;
  top: 15em;
  left: 0;
  width: 65px;
  border-top-right-radius: 10px;
  border-bottom-right-radius: 10px;
}

.metadata {
  display: flex;
  flex-direction: row;