This repository contains the files and instructions needed to run a fully local, private, and self-hosted version of InsightsLM. Now, you can run your own AI research assistant completely offline, ensuring total privacy and control over your data.
This project builds upon the incredible work of Cole Medin and his local-ai-packaged repository, which provides the core self-hosted infrastructure.
While the original InsightsLM connects to cloud services, this version has been re-architected to run entirely on your local machine. It's the perfect solution for individuals and companies looking to leverage AI for internal knowledge management without relying on external cloud infrastructure.
The backend, powered by N8N, has been re-engineered to work with local models for inference, embeddings, transcription, and text-to-speech, all without requiring any changes to the frontend application.
- Run Completely Offline: All services run locally in Docker containers. Your data never leaves your machine.
- Chat with Your Documents: Upload PDFs, text files, and audio files to get instant, context-aware answers from a local LLM.
- Verifiable Citations: Jump directly to the source of the information within your documents to verify the AI's responses.
- Local Podcast Generation: Create audio summaries from your source materials using local text-to-speech models.
- Local Audio Transcription: Transcribe audio files using a local Whisper container.
- Total Control & Privacy: You have complete control over the entire stack, from the models to the data.
For a full demonstration of this local version, an overview of its architecture, and a step-by-step guide on how to set it up, check out our YouTube video:
This project runs a suite of services locally using Docker. The core components include:
- Frontend App:
  - React / Vite / TypeScript
- Backend & Automation:
  - N8N
- Database & Storage:
  - Supabase (running locally)
- AI / ML Services (Local):
  - LLM Inference & Embeddings: Ollama (using models like `qwen3:8b-q4_K_M`)
  - Audio Transcription: Whisper ASR
  - Text-to-Speech: Coqui TTS
This guide provides the steps to get the fully local version of InsightsLM up and running.
We recommend following along from 10:48 in our YouTube video for a detailed visual guide.
- Python
- Git or GitHub Desktop
- Docker or Docker Desktop
- A code editor like VS Code is highly recommended.
1. **Clone the Base Local AI Package Repo**
    - Open your terminal or VS Code and clone Cole Medin's local-ai-packaged repository. This forms the foundation of our local setup.

    ```bash
    git clone https://github.com/coleam00/local-ai-packaged.git
    cd local-ai-packaged
    ```
2. **Clone the InsightsLM Local Package**
    - Inside the `local-ai-packaged` directory, clone this repository.

    ```bash
    git clone https://github.com/theaiautomators/insights-lm-local-package.git
    ```
3. **Configure Environment Variables**
    - In the root of the `local-ai-packaged` directory, make a copy of `.env.example` and rename it to `.env`.
    - Populate the necessary secrets (n8n secrets, Postgres password, etc.) as described in Cole's repo. You can use a password generator for secure keys.
    - Generate the Supabase `JWT_SECRET`, `ANON_KEY`, and `SERVICE_ROLE_KEY` using the instructions in the documentation.
    - Open the `.env.copy` file located in `insights-lm-local-package/` and copy all the variables from it.
    - Paste these new variables at the end of your main `.env` file. These include URLs for the local services and a webhook auth key, which you need to set.
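
    As a rough sketch, the copy-and-append part of this step can be done from the `local-ai-packaged` root like so (the secrets themselves still need to be filled in by hand):

    ```bash
    # Create your .env from the template provided by the base package
    cp .env.example .env

    # Append the InsightsLM-specific variables to the end of your .env
    cat insights-lm-local-package/.env.copy >> .env
    ```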
4. **Update Docker Compose**
    - Open the main `docker-compose.yml` file in the root directory.
    - Open the `docker-compose.yml.copy` file located inside `insights-lm-local-package/`.
    - Copy the `whisper-cache` volume and the three services (`insights-lm`, `coqui-tts`, `whisper-asr`) from the `.copy` file.
    - Paste them into the main `docker-compose.yml` file in their respective `volumes` and `services` sections.
    - Note: Both Coqui and Whisper are configured in these services to run on a GPU. If you are using Apple Silicon or a CPU, check their respective documentation for how to adjust this.
    - Update the Ollama model to `qwen3:8b-q4_K_M` on line 55.
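
    Once the sections are pasted in, a quick sanity check (assuming Docker Compose v2) is to ask Compose to parse the merged file and list its services; the three new ones should appear alongside the originals:

    ```bash
    # Validate the merged compose file and list all defined services (Nvidia profile shown as an example)
    docker compose -f docker-compose.yml --profile gpu-nvidia config --services
    ```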
5. **Start the Services**
    - Open up Docker Desktop.
    - Run the start script provided in Cole's repository. Use the command appropriate for your system (e.g., Nvidia GPU). This will download all the necessary Docker images and start the containers. This may take a while.

    ```bash
    # Example for Nvidia GPU
    python start_services.py --profile gpu-nvidia
    ```
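
    To confirm everything came up, you can list the containers in the project (the project name `localai` is the one used by the shutdown command later in this guide):

    ```bash
    # List the containers in the localai project and their status
    docker compose -p localai ps
    ```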
6. **Import Supabase Script**
    - Log in to your Supabase instance (usually at `http://localhost:8000`).
    - Go to the SQL Editor.
    - Paste the contents of the file located at `insights-lm-local-package/supabase-migration.sql` into the SQL editor and click Run.
    - You should get a "Success. No rows returned" message.
7. **Move Supabase Functions**
    - Once the Docker images have downloaded and the services are up, navigate to the `insights-lm-local-package/supabase/functions/` directory.
    - Copy all the function folders within it.
    - Paste them into the `supabase/volumes/functions/` directory.
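
    On Linux or macOS this step can also be done from the `local-ai-packaged` root with a single copy command (a sketch, assuming the directory layout described above):

    ```bash
    # Copy every edge function folder into the directory Supabase serves functions from
    cp -r insights-lm-local-package/supabase/functions/* supabase/volumes/functions/
    ```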
8. **Update Supabase Docker Configuration**
    - Open the `supabase/docker/docker-compose.yml` file.
    - Copy the environment variables listed in `insights-lm-local-package/supabase/docker-compose.yml.copy`.
    - Paste these variables under the `functions` service in the `supabase/docker/docker-compose.yml` file. This gives the edge functions access to your N8N webhook URLs.
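
    A quick way to eyeball the result is to print the `functions` service section of the file and check that the new variables appear under it (a sketch; adjust the number of context lines as needed):

    ```bash
    # Show the functions service definition plus the lines that follow it
    grep -n -A 25 "functions:" supabase/docker/docker-compose.yml
    ```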
9. **Restart Docker Containers**
    - Shut down all running services using the provided command.

    ```bash
    # Example for Nvidia GPU
    docker compose -p localai -f docker-compose.yml --profile gpu-nvidia down
    ```

    - Restart the services using the `start_services.py` script again to apply all changes.
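
    The restart itself reuses the same start script from earlier in this guide, for example:

    ```bash
    # Example for Nvidia GPU: bring the stack back up with the new configuration
    python start_services.py --profile gpu-nvidia
    ```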
10. **Import and Configure N8N Workflows**
    - Access N8N in your browser (usually at `http://localhost:5678`).
    - Create a new workflow and import the `Import_Insights_LM_Workflows.json` file from the `insights-lm-local-package/n8n/` directory.
    - Follow the steps in the video to configure the importer workflow:
      - Create an N8N API Key and set the credential in the N8N node.
        - The URL should be `http://localhost:5678/api/v1`.
      - Create a credential for Webhook Header Auth and copy the credential ID into the "Enter User Values" node.
        - The name must be "Authorization" and the value is what you set at the bottom of the `.env` file.
      - Create a credential for the Supabase API and copy the credential ID into the "Enter User Values" node.
        - The host should be `http://kong:8000`.
        - The Service Role Secret is in your `.env` file.
      - Create a credential for Ollama and copy the credential ID into the "Enter User Values" node.
        - The base URL should be `http://ollama:11434`.
    - Run the importer workflow to automatically set up all the required InsightsLM workflows in your N8N instance.
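
    Before running the importer, it can help to confirm that Ollama is reachable and that the model from step 4 has been pulled. Assuming the Ollama port 11434 is published to the host, the available models can be listed with:

    ```bash
    # List the models currently available to Ollama; qwen3:8b-q4_K_M should appear
    curl http://localhost:11434/api/tags
    ```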
11. **Activate Workflows & Test**
    - Go to your N8N dashboard, find the newly imported workflows, and activate them all (except the "Extract Text" workflow).
    - Access the InsightsLM frontend (usually at `http://localhost:3010`).
    - Create a user in your local Supabase dashboard (under Authentication) to log in.
    - You're all set! Start uploading documents and chatting with your private AI assistant.
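
    If the login page doesn't load, a quick check is to confirm the frontend and N8N are responding on the ports used in this guide:

    ```bash
    # Both should return an HTTP response if the containers are up
    curl -I http://localhost:3010
    curl -I http://localhost:5678
    ```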
Contributions make the open-source community an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This codebase is distributed under the MIT License.
While InsightsLM is fully open-sourced and Supabase is also open source, it's important to be aware that n8n, which powers much of the backend automation, is not open source in the traditional sense.
n8n is distributed under a Sustainable Use License. This license allows free usage for internal business purposes, including hosting workflows within your company or organization.
However, if you plan to use InsightsLM as part of a commercial SaaS offering, such as reselling access or hosting a public version for multiple clients, you may need to obtain an n8n Enterprise License. We're not lawyers, so we recommend that you review the n8n license and contact their team if your use case falls into a commercial category.
Alternatives: If your use case is restricted by the n8n license, one potential option is to convert key workflows into Supabase Edge Functions. This would allow you to fully avoid using n8n in production.