This repository contains an example project for building a private Retrieval-Augmented Generation (RAG) application using Llama3.2, Ollama, and PostgreSQL. It demonstrates how to set up a RAG pipeline that does not rely on external API calls, ensuring that sensitive data remains within your infrastructure.
- Docker
- Python, psycopg
- Ollama
- PostgreSQL, pgai
- Create a network through which the Ollama and PostgreSQL containers will interact:
docker network create rag-net
- Start the Ollama Docker container. (Note: the `--network` flag ensures the container runs on the network defined above.)
docker run -d --network rag-net -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
- Pull the Llama3.2 model:
docker exec -it ollama ollama pull llama3.2
- Pull the Nomic Embed v1.5 embedding model:
docker exec -it ollama ollama pull nomic-embed-text
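With both models pulled, you can sanity-check the setup from the host through Ollama's HTTP API, which the container exposes on port 11434. The snippet below is a minimal verification sketch rather than part of the project code; it assumes the port mapping shown above and the `requests` package installed locally.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # port published by the Ollama container

# List the models available inside the container.
tags = requests.get(f"{OLLAMA_URL}/api/tags").json()
print([m["name"] for m in tags.get("models", [])])

# Embed a short text with nomic-embed-text (768-dimensional by default).
emb = requests.post(
    f"{OLLAMA_URL}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello world"},
).json()
print(len(emb["embedding"]))

# Ask llama3.2 for a short, non-streamed completion.
gen = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in one word.", "stream": False},
).json()
print(gen["response"])
```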
- Start the TimescaleDB container (PostgreSQL with the pgai extension):
docker run -d --network rag-net --name timescaledb -p 5432:5432 -e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg16
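With both containers on `rag-net` and the models pulled, the RAG flow itself is: embed the question with nomic-embed-text, retrieve the nearest chunks from PostgreSQL, and hand them to Llama3.2 as context. The sketch below is illustrative rather than the project's actual code; it assumes the default `postgres` database with the password set above, the pgvector extension bundled with the timescaledb-ha image, and a hypothetical `documents` table with a `vector(768)` embedding column that you populate yourself.

```python
import psycopg
import requests

OLLAMA_URL = "http://localhost:11434"
DB_URL = "postgresql://postgres:password@localhost:5432/postgres"

def embed(text: str) -> str:
    """Embed text with nomic-embed-text and return it as a pgvector literal."""
    r = requests.post(
        f"{OLLAMA_URL}/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    vec = r.json()["embedding"]
    return "[" + ",".join(str(x) for x in vec) + "]"

question = "How does the ingestion pipeline work?"  # example question
q_vec = embed(question)

with psycopg.connect(DB_URL) as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    # Assumes a table such as:
    #   CREATE TABLE documents (id bigserial PRIMARY KEY,
    #                           content text,
    #                           embedding vector(768));
    rows = conn.execute(
        "SELECT content FROM documents "
        "ORDER BY embedding <=> %s::vector LIMIT 3",
        (q_vec,),
    ).fetchall()

# Build a context-grounded prompt and generate the answer with Llama3.2.
context = "\n\n".join(r[0] for r in rows)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
answer = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.2", "prompt": prompt, "stream": False},
).json()["response"]
print(answer)
```

pgai, listed in the prerequisites, exists to move this embedding and retrieval work into SQL itself; the manual flow above just makes each step of the pipeline visible.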