
Comparing changes

Choose two branches to see what's changed, or start a new pull request comparing changes across them.
base repository: nilsherzig/LLocalSearch
base: v0.1
head repository: nilsherzig/LLocalSearch
compare: main
Showing with 4,208 additions and 1,149 deletions.
  1. +14 −0 .github/FUNDING.yml
  2. +27 −0 .github/ISSUE_TEMPLATE/bug_report.md
  3. +10 −0 .github/ISSUE_TEMPLATE/custom.md
  4. +20 −0 .github/ISSUE_TEMPLATE/feature_request.md
  5. +10 −15 Dockerfile
  6. +31 −48 Makefile
  7. +46 −0 OLLAMA_GUIDE.md
  8. +51 −42 README.md
  9. +73 −0 README.md.old
  10. +104 −76 backend/agentChain.go
  11. +175 −64 backend/apiServer.go
  12. +13 −0 backend/e2e/e2e_suite_test.go
  13. +125 −0 backend/e2e/simple_question_test.go
  14. +19 −6 backend/go.mod
  15. +71 −58 backend/go.sum
  16. +129 −0 backend/llm_tools/simple_websearch.go
  17. +0 −48 backend/llm_tools/tool_download_website.go
  18. +0 −48 backend/llm_tools/tool_planer.go
  19. +38 −45 backend/llm_tools/tool_search_vector_db.go
  20. +139 −0 backend/llm_tools/tool_webscrape.go
  21. +0 −123 backend/llm_tools/tool_websearch.go
  22. +159 −0 backend/lschains/custom_structured_parser.go
  23. +131 −0 backend/lschains/format_sources_chain.go
  24. +194 −0 backend/lschains/ollama_functioncall.go
  25. +47 −0 backend/lschains/source_chain_example.go
  26. +60 −0 backend/lschains/test_chain.go
  27. +61 −3 backend/main.go
  28. +9 −12 backend/utils/customHandler.go
  29. +11 −4 backend/utils/llm_backends.go
  30. +49 −0 backend/utils/load_localfiles.go
  31. +0 −7 backend/utils/prompts.go
  32. +60 −38 backend/utils/types.go
  33. +36 −15 backend/utils/vector_db_handler.go
  34. +23 −0 custom-server.js
  35. +11 −13 docker-compose.dev.yaml
  36. +8 −13 docker-compose.yaml
  37. +7 −0 env-example
  38. +4 −0 go.work
  39. +101 −0 go.work.sum
  40. +36 −0 metrics/Dockerfile
  41. +3 −0 metrics/go.mod
  42. +100 −0 metrics/main.go
  43. +1 −0 metrics/tmp/build-errors.log
  44. BIN metrics/tmp/main
  45. +30 −0 nginx.conf
  46. +102 −11 package-lock.json
  47. +2 −0 package.json
  48. +8 −8 src/app.d.ts
  49. +1 −1 src/app.html
  50. +20 −7 src/lib/bottom_bar.svelte
  51. +15 −0 src/lib/chatHistory.svelte
  52. +16 −0 src/lib/chatList.svelte
  53. +29 −0 src/lib/chatListItemElem.svelte
  54. +55 −13 src/lib/chat_button.svelte
  55. +16 −0 src/lib/clickOutside.js
  56. +39 −0 src/lib/loading_message.svelte
  57. +65 −64 src/lib/log_item.svelte
  58. +0 −1 src/lib/model_switch_window.svelte
  59. +23 −0 src/lib/new_chat_button.svelte
  60. +70 −0 src/lib/settings_field.svelte
  61. +189 −0 src/lib/settings_window.svelte
  62. +48 −0 src/lib/show_logs_button.svelte
  63. +59 −0 src/lib/sidebar.svelte
  64. +25 −0 src/lib/sidebar_history_toggle.svelte
  65. +26 −0 src/lib/sidebar_sources_toggle.svelte
  66. +29 −0 src/lib/sources.svelte
  67. +38 −44 src/lib/toggle_darkmode_button.svelte
  68. +0 −86 src/lib/toggle_logs_button.svelte
  69. +23 −0 src/lib/toggle_settings_button.svelte
  70. +44 −0 src/lib/toggle_sidebar_button.svelte
  71. +36 −0 src/lib/topbar_button.svelte
  72. +40 −3 src/lib/types/types.ts
  73. +1 −0 src/routes/+layout.ts
  74. +5 −0 src/routes/+page.server.ts
  75. +3 −231 src/routes/+page.svelte
  76. +335 −0 src/routes/+page.svelte.old
  77. +383 −0 src/routes/chat/[slug]/+page.svelte
  78. +15 −0 src/routes/chat/[slug]/handle_darkmode.ts
  79. +78 −0 src/routes/chat/[slug]/load_functions.ts
  80. BIN static/favicon.png
  81. +120 −0 static/favicon.svg
  82. +14 −2 svelte.config.js
14 changes: 14 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,14 @@
# These are supported funding model platforms

github: nilsherzig # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: nilsherzig # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
polar: # Replace with a single Polar username
buy_me_a_coffee: # Replace with a single Buy Me a Coffee username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
27 changes: 27 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,27 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here.
10 changes: 10 additions & 0 deletions .github/ISSUE_TEMPLATE/custom.md
@@ -0,0 +1,10 @@
---
name: Custom issue template
about: Describe this issue template's purpose here.
title: ''
labels: question
assignees: ''

---


20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
25 changes: 10 additions & 15 deletions Dockerfile
@@ -1,18 +1,13 @@
FROM node:20-alpine3.19 AS builder
FROM node:alpine as build
ARG PUBLIC_VERSION="0"
ENV PUBLIC_VERSION=${PUBLIC_VERSION}

ADD . /app
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

RUN npm install
RUN npm run build
# RUN npm prune --production
EXPOSE 4173
CMD ["npm", "run", "preview", "--", "--host"]

# FROM node:20-alpine3.19
# WORKDIR /app
# COPY --from=builder /app/build build/
# COPY --from=builder /app/node_modules node_modules/
# COPY package.json .
# EXPOSE 3000
# ENV NODE_ENV=production
# CMD [ "node", "build" ]
FROM nginx:stable-alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
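For reference, here is a minimal sketch of building and running the new nginx-based frontend image by hand. The tag name is illustrative, and this assumes the bundled `nginx.conf` keeps nginx listening on its default port 80:

```bash
# Build from the repo root; PUBLIC_VERSION is the build arg declared in the Dockerfile.
docker build --build-arg PUBLIC_VERSION=dev -t llocalsearch-frontend:dev .

# Serve the static build; map any free host port to the container's port 80.
docker run --rm -p 3000:80 llocalsearch-frontend:dev
```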
79 changes: 31 additions & 48 deletions Makefile
@@ -1,7 +1,30 @@
DOCKER_HUB_USER ?= nilsherzig
BACKEND_NAME ?= llocalsearch-backend
FRONTEND_NAME ?= llocalsearch-frontend
GIT_HASH := $(shell git rev-parse --short HEAD)
#TODO use this as a version
GIT_HASH := $(shell git rev-parse --short HEAD)

LATEST_TAG := $(shell git describe --tags --abbrev=0)
CURRENT_TIMESTAMP := $(shell date +%s)

# used for local testing, so I can save the multi-platform build time
.PHONY: build-containers
build-containers:
(cd ./metrics/ && docker buildx build --build-arg="VERSION=$(CURRENT_TIMESTAMP)" . -t nilsherzig/lsm:latest --load)
docker buildx build --build-arg="PUBLIC_VERSION=$(CURRENT_TIMESTAMP)" . -t nilsherzig/llocalsearch-frontend:latest --load
(cd ./backend/ && docker buildx build . -t nilsherzig/llocalsearch-backend:latest --load)

# containers which will be published
.PHONY: build-containers-multi
build-containers-multi:
(cd ./metrics/ && docker buildx build --build-arg="VERSION=$(CURRENT_TIMESTAMP)" . -t nilsherzig/lsm:latest --push --platform linux/amd64,linux/arm64)
docker buildx build --build-arg="PUBLIC_VERSION=$(CURRENT_TIMESTAMP)" . -t nilsherzig/llocalsearch-frontend:latest --push --platform linux/amd64,linux/arm64
(cd ./backend/ && docker buildx build . -t nilsherzig/llocalsearch-backend:latest --push --platform linux/amd64,linux/arm64)

.PHONY: new-release
new-release: build-containers-multi
@echo "New release pushed to Docker Hub"

.PHONY: e2e-backend
e2e-backend:
(cd ./backend && ginkgo -v -r ./...)

# dev run commands
.PHONY: build-dev
@@ -10,48 +33,8 @@ build-dev:

.PHONY: dev
dev: build-dev
docker-compose -f ./docker-compose.dev.yaml up
docker-compose -f ./docker-compose.dev.yaml up $(ARGS)

# normal hosting commands
PHONY: run
run:
docker-compose up -d

PHONY: stop
stop:
docker-compose -f ./docker-compose.dev.yaml down

PHONY: update
update:
git pull

PHONY: upgrade
upgrade: stop update run

# release / docker build commands
release-stable: build-stable tag-git-hash push
@echo "Release stable version with git hash $(GIT_HASH)"

release-latest: build-latest tag-git-hash push
@echo "Release latest version with git hash $(GIT_HASH)"

build-latest:
docker build -t $(DOCKER_HUB_USER)/$(FRONTEND_NAME):latest .
(cd ./backend && docker build -t $(DOCKER_HUB_USER)/$(BACKEND_NAME):latest .)

build-stable:
docker build -t $(DOCKER_HUB_USER)/$(FRONTEND_NAME):stable .
(cd ./backend && docker build -t $(DOCKER_HUB_USER)/$(BACKEND_NAME):stable .)

tag-git-hash:
docker tag $(DOCKER_HUB_USER)/$(BACKEND_NAME):latest $(DOCKER_HUB_USER)/$(BACKEND_NAME):$(GIT_HASH)
docker tag $(DOCKER_HUB_USER)/$(FRONTEND_NAME):latest $(DOCKER_HUB_USER)/$(FRONTEND_NAME):$(GIT_HASH)

push:
@echo "Pushing images to Docker Hub"
# docker push $(DOCKER_HUB_USER)$/$(BACKEND_NAME):$(GIT_HASH)
# docker push $(DOCKER_HUB_USER)$/$(FRONTEND_NAME):$(GIT_HASH)
# docker push $(DOCKER_HUB_USER)$/$(BACKEND_NAME):latest
# docker push $(DOCKER_HUB_USER)$/$(FRONTEND_NAME):latest
docker push $(DOCKER_HUB_USER)/$(BACKEND_NAME):stable
docker push $(DOCKER_HUB_USER)/$(FRONTEND_NAME):stable
.PHONY: dev-bg
dev-bg: build-dev
docker-compose -f ./docker-compose.dev.yaml up -d
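The multi-platform targets assume a buildx builder that can produce `linux/amd64` and `linux/arm64` images, and `e2e-backend` assumes the Ginkgo CLI is on your `PATH`. A one-time setup sketch (the builder name is arbitrary):

```bash
# Create and select a buildx builder capable of multi-platform builds.
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# Install the Ginkgo v2 CLI used by the e2e-backend target.
go install github.com/onsi/ginkgo/v2/ginkgo@latest
```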
46 changes: 46 additions & 0 deletions OLLAMA_GUIDE.md
@@ -0,0 +1,46 @@
# Instructions on how to get LLocalSearch working with your Ollama instance

<!--toc:start-->
- [Instructions on how to get LLocalSearch working with your Ollama instance](#instructions-on-how-to-get-llocalsearch-working-with-your-ollama-instance)
- [You're running Ollama on your host machine (without docker)](#youre-running-ollama-on-your-host-machine-without-docker)
- [You're using Linux or macOS](#youre-using-linux-or-macos)
- [You're using Windows](#youre-using-windows)
- [You're running Ollama in a docker container on the same machine as LLocalSearch](#youre-running-ollama-in-a-docker-container-on-the-same-machine-as-llocalsearch)
- [You're running Ollama on a Server or different machine](#youre-running-ollama-on-a-server-or-different-machine)
<!--toc:end-->

## You're running Ollama on your host machine (without docker)

### You're using Linux or macOS

1. Make sure Ollama is listening on all interfaces (`0.0.0.0`, or at least on the Docker network).
2. Add the following to the `.env` file (create one if it doesn't exist) in the root of the project:

```yaml
OLLAMA_HOST=host.docker.internal:11434
```

> [!WARNING]
> Some Linux users reported that this solution requires Docker Desktop to be installed. Please report back if that's the case for you. I don't have this issue on NixOS or my Ubuntu 22.04 test box.

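For step 1, a sketch of one way to do this on a systemd-based Linux install (assuming Ollama was set up via the official install script and runs as `ollama.service`):

```bash
# Open an override file for the Ollama service.
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama.service
```
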
### You're using Windows

Try the steps above and let me know whether they work; I will update these docs.

## You're running Ollama in a docker container on the same machine as LLocalSearch

1. Make sure you're exposing Ollama on port 11434.
2. Add the following to the `.env` file (create one if it doesn't exist) in the root of the project:

```yaml
OLLAMA_HOST=host.docker.internal:11434
```
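If you haven't exposed the port yet, a typical way to run Ollama in Docker looks like this (container and volume names are illustrative):

```bash
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```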

## You're running Ollama on a Server or different machine

1. Make sure Ollama is reachable from the container.
2. Add the following to the `.env` file (create one if it doesn't exist) in the root of the project:

```yaml
OLLAMA_HOST=ollama-server-ip:11434
```
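Before starting the containers, you can verify reachability from the machine that will run LLocalSearch (replace the address with your server's):

```bash
# A reachable Ollama instance responds with "Ollama is running".
curl http://ollama-server-ip:11434/
```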
93 changes: 51 additions & 42 deletions README.md
@@ -1,70 +1,79 @@
# LLocalSearch

## What it is
## What it is and what it does

This is a completely locally running search engine using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.
LLocalSearch is a wrapper around locally running `Large Language Models` (like ChatGPT, but a lot smaller and less "smart") that allows them to choose from a set of tools. These tools let them search the internet for current information about your question. The process is recursive: the running LLM can freely choose to use tools (even multiple times) based on the information it gets from you and from other tool calls.

Now with follow-up questions:
[demo.webm](https://github.com/nilsherzig/LLocalSearch/assets/72463901/e13e2531-05a8-40af-8551-965ed9d24eb4)

https://github.com/nilsherzig/LLocalSearch/assets/72463901/8323be25-de2a-4ddf-853b-4a01557b5599
### Why would I want to use this and not something from `xy`?

The long-term plan, which OpenAI is [selling](https://www.adweek.com/media/openai-preferred-publisher-program-deck/) to big media houses:

![image](https://github.com/nilsherzig/LLocalSearch/assets/72463901/9f6497aa-8047-4d11-9a12-66aff65d3faa)
> Additionally, members of the program receive priority placement and “richer brand expression” in chat conversations, and their content benefits from more prominent link treatments.
## Features
If you dislike the idea of getting manipulated by the highest bidder, you might want to try some less discriminatory alternatives, like this project.

- 🕵️ Completely local (no need for API keys)
- 💸 Runs on "low end" LLM Hardware (demo video uses a 7b model)
- 🤓 Progress logs, allowing for a better understanding of the search process
- 🤔 Follow-up questions
- 📱 Mobile friendly interface
- 🚀 Fast and easy to deploy with Docker Compose
- 🌐 Web interface, allowing for easy access from any device
- 💮 Handcrafted UI with light and dark mode
### Features

## Status
- 🕵‍♀ Completely local (no need for API keys) and thus a lot more privacy-respecting
- 💸 Runs on "low end" hardware (the demo video uses a 300€ GPU)
- 🤓 Live logs and links in the answer let you better understand what the agent is doing and what information the answer is based on, giving you a great starting point for deeper research
- 🤔 Supports follow-up questions
- 📱 Mobile friendly design
- 🌓 Dark and light mode

This project is still in its very early days. Expect some bugs.

## How it works
## Road-map

Please read [infra](https://github.com/nilsherzig/LLocalSearch/issues/17) to get the most up-to-date idea.
### I'm currently working on 👷

## Self-hosting & Development
#### Support for Llama 3 🦙

### Requirements
The langchain library I'm using does not respect the Llama 3 stop words, which results in Llama 3 starting to hallucinate at the end of a turn. I have a working patch (check out the experiments branch), but since I'm unsure whether my way is the right way to solve this, I'm still waiting for a response from the [langchaingo](https://github.com/tmc/langchaingo) team.

- A running [Ollama](https://ollama.com/) server, reachable from the container
- GPU is not needed, but recommended
- Docker Compose
#### Interface overhaul 🌟

### Run the latest release
An interface overhaul, allowing for more flexible panels and more efficient use of space, inspired by the current layout of [Obsidian](https://obsidian.md).

Recommended, if you don't intend to develop on this project.
#### Support for chat histories / recent conversations 🕵‍♀

```bash
git clone https://github.com/nilsherzig/LLocalSearch.git
cd ./LLocalSearch
# 🔴 check the env vars inside the compose file and add your ollama servers host:port
docker-compose up
```
This still needs a lot of work, like refactoring many of the internal data structures to allow for better and more flexible ways to expand functionality in the future, without having to rewrite the whole data-transmission and interface layer again.


### Planned (near future)

#### User Accounts 🙆

Groundwork for private information inside the RAG chain, such as uploading your own documents or connecting LLocalSearch to services like Google Drive or Confluence.

#### Long-term memory 🧠

🎉 You should now be able to open the web interface on http://localhost:3000. Nothing else is exposed by default.
Not sure if there is a right way to implement this, but the idea is to provide the main agent chain with information about the user (like preferences) and to give each user an extra vector DB namespace for persistent information.

### Run the current git version
## Install Guide

Newer features, but potentially less stable.
### Docker 🐳

1. Clone the GitHub Repository

```bash
git clone https://github.com/nilsherzig/LLocalsearch.git
# 1. make sure to check the env vars inside the `docker-compose.dev.yaml`.
# 2. Make sure you've really checked the dev compose file not the normal one.
git clone git@github.com:nilsherzig/LLocalSearch.git
cd LLocalSearch
```

# 3. build the containers and start the services
make dev
# Both front and backend will hot reload on code changes.
2. Create and edit a `.env` file if you need to change some of the default settings. This is typically only needed if you have Ollama running on a different device, or if you want to build a more complex setup (for more than personal use, for example). Please read the [Ollama Setup Guide](./OLLAMA_GUIDE.md) if you struggle to get the Ollama connection running.

```bash
touch .env
code .env # open file with vscode
nvim .env # open file with neovim
```
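As a sketch, a minimal `.env` pointing LLocalSearch at an Ollama instance on the Docker host might contain (see the Ollama guide above for the right value in your setup):

```bash
# .env: example override, adjust to your environment
OLLAMA_HOST=host.docker.internal:11434
```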

If you don't have `make` installed, you can run the commands inside the Makefile manually.
3. Run the containers

```bash
docker-compose up -d
```

Now you should be able to access the frontend on [http://localhost:3000](http://localhost:3000).
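If the page doesn't come up, a quick sanity check (run from the repo root):

```bash
docker-compose ps       # all services should be "Up"
docker-compose logs -f  # follow the logs to spot connection errors
```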