This project builds an end-to-end medical chatbot using Meta's Llama 2 model, LangChain, Flask, and Pinecone for vector storage. The chatbot answers users' medical questions based on the documents indexed in Pinecone.
You can find the project repository at the following link: Project Repo
First, clone the repository to your local machine.
```bash
git clone https://github.com/your-repository-url
cd your-repository-folder
```
Create and activate a new conda environment with Python 3.8.
```bash
conda create -n mchatbot python=3.8 -y
conda activate mchatbot
```
Install the required Python packages from the `requirements.txt` file.

```bash
pip install -r requirements.txt
```
Create a `.env` file in the root directory of the project and add your Pinecone credentials.

```bash
echo "PINECONE_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" >> .env
echo "PINECONE_API_ENV=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx" >> .env
```
Download the quantized Llama 2 model and place it in the `model` directory.

- Model file: `llama-2-7b-chat.ggmlv3.q4_0.bin`
- Download link: Llama 2 Model on Hugging Face
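For reference, a GGML checkpoint like this one is typically loaded through LangChain's `CTransformers` wrapper (backed by the `ctransformers` library). The snippet below is a hedged sketch, not the repository's code; the file path and generation parameters are assumptions.

```python
# Hypothetical sketch: loading the quantized GGML model via LangChain's
# CTransformers integration (requires the ctransformers package).
from langchain.llms import CTransformers

llm = CTransformers(
    model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",  # path is an assumption
    model_type="llama",
    config={"max_new_tokens": 512, "temperature": 0.8},
)

# Quick smoke test: the model should produce a short completion on CPU.
print(llm("What are the common symptoms of anemia?"))
```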
Run the following command to embed the source documents and store the resulting index in Pinecone.

```bash
python store_index.py
```
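If you are curious what a script like `store_index.py` typically does, the sketch below shows the usual pipeline: load documents, split them into chunks, embed the chunks, and upsert them into Pinecone. It assumes the v2-style `pinecone-client` (which matches the `PINECONE_API_ENV` variable above), a `data/` directory of PDFs, and an index named `medical-chatbot`; all of these are assumptions, not the repository's actual code.

```python
# store_index.py -- hypothetical sketch, not the repository's actual script.
import os
from dotenv import load_dotenv
import pinecone
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Pinecone

load_dotenv()

# Load and chunk the source documents (the data/ directory is an assumption).
loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
chunks = splitter.split_documents(documents)

# Embed each chunk with a small sentence-transformers model.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2")

# Connect to Pinecone (v2-style client, matching the .env variables above).
pinecone.init(api_key=os.environ["PINECONE_API_KEY"],
              environment=os.environ["PINECONE_API_ENV"])

# Upsert the chunk embeddings; "medical-chatbot" is a placeholder index name.
Pinecone.from_texts([c.page_content for c in chunks],
                    embeddings, index_name="medical-chatbot")
```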
Finally, run the following command to start the Flask application.

```bash
python app.py
```
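To make the moving parts concrete, here is a hedged sketch of how an `app.py` for this stack is commonly wired: the Pinecone index becomes a retriever, the GGML model answers through a `RetrievalQA` chain, and Flask exposes a chat endpoint. The route name, index name, and port are assumptions, not the repository's actual code.

```python
# Hypothetical app.py sketch; endpoint, index name, and port are assumptions.
import os
from dotenv import load_dotenv
from flask import Flask, request
import pinecone
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import CTransformers
from langchain.vectorstores import Pinecone

load_dotenv()
pinecone.init(api_key=os.environ["PINECONE_API_KEY"],
              environment=os.environ["PINECONE_API_ENV"])

app = Flask(__name__)

# Reconnect to the index built by store_index.py
# ("medical-chatbot" is a placeholder name).
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2")
docsearch = Pinecone.from_existing_index("medical-chatbot", embeddings)

llm = CTransformers(model="model/llama-2-7b-chat.ggmlv3.q4_0.bin",
                    model_type="llama",
                    config={"max_new_tokens": 512, "temperature": 0.8})

# "stuff" simply concatenates the top-k retrieved chunks into the prompt.
qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}))

@app.route("/get", methods=["POST"])
def chat():
    return str(qa.run(request.form["msg"]))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```

With a server shaped like this running, you could exercise the endpoint with `curl -X POST -d "msg=What is anemia?" http://localhost:5000/get` (endpoint and port as assumed above).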
Open your web browser and navigate to the address Flask prints on startup (http://localhost:5000 by default) to access the medical chatbot.

Tech stack used in this project:
- Python
- LangChain
- Flask
- Meta Llama 2
- Pinecone
Feel free to submit issues, fork the repository, and send pull requests. For major changes, please open an issue first to discuss what you would like to change.
This project is licensed under the MIT License - see the LICENSE file for details.