- Clone the repo
- Add a file `LLModel/.env` containing `OPENAI_API_KEY = '<API_KEY>'`
- Grab the Modelica Standard Library source so it can be embedded:

  ```sh
  cd LLModel && mkdir data && cd data && git clone [email protected]:modelica/ModelicaStandardLibrary.git
  ```

- Build the container:

  ```sh
  docker build -t om-python .
  ```

- Run the container:

  ```sh
  ./run
  ```

- Run the LLM chain within the container:

  ```sh
  ./run-llm <PROMPT>
  ```
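For reference, the `.env` file uses the usual `KEY = 'value'` convention, which dotenv-style loaders read into the process environment. A minimal sketch of such a loader, assuming a simple line-based format (this is an illustration, not the chain's actual loading code):

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY = 'value' lines into os.environ.

    Skips blank lines and comments; strips surrounding quotes from
    values; does not override variables that are already set.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip("'\""))
```

In practice a library such as python-dotenv does the same job more robustly; the sketch just shows what the file format implies.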
On first execution, a Hugging Face transformer model will be downloaded and used to embed the Modelica Standard Library. The resulting embedding vectors are persisted in a Chroma vector database at `data/om_embeddings`. This step took roughly an hour on my machine; YMMV.
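The embedding itself happens inside the container, but conceptually the first stage of any such pipeline is splitting the `.mo` source files into overlapping chunks before computing vectors. A minimal sketch of that stage, where the chunk size, overlap, and helper names are illustrative assumptions rather than the chain's actual parameters:

```python
from pathlib import Path

def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` chars.

    Overlap keeps context that straddles a chunk boundary retrievable
    from at least one chunk.
    """
    step = size - overlap
    return [
        text[start:start + size]
        for start in range(0, max(len(text) - overlap, 1), step)
    ]

def collect_chunks(root: str) -> list[str]:
    """Gather chunks from every Modelica source file under `root`."""
    chunks: list[str] = []
    for path in Path(root).rglob("*.mo"):
        chunks.extend(chunk_text(path.read_text(errors="ignore")))
    return chunks
```

The real chain would hand these chunks to the Hugging Face embedding model and write the vectors into the Chroma store; chunk size and overlap are the usual knobs to tune for retrieval quality.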