A serverless AI Slack bot service built on Embedchain and deployed to AWS Lambda using Pulumi.

Load and interrogate your data using an artificial-intelligence RAG microservice built on Embedchain, providing CLI, REST API, and Slack interfaces, with an option to deploy to AWS Lambda using Pulumi.
Optional configuration for various data sources, LLMs, vector databases, embedding models, and evaluation.
- Docker
- Python 3
```sh
cp -R /path/to/my/assets ./assets
cp sample.env .env
```
Ensure you have the required dependencies installed and have followed the prerequisite steps. Populate the `.env` file with, at minimum, the `OPENAI_API_KEY`, which is required for arti to answer questions.
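For example, a minimal `.env` needs only the OpenAI key (the value below is a placeholder, not a real key):

```sh
OPENAI_API_KEY=your-openai-api-key
```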
```sh
docker compose run --rm arti ask "What do these files contain?"
```
```sh
python3 -m venv venv
. venv/bin/activate
make install
arti ask "What do these files contain?"
```
Configuration is supplied via environment variables, documented in the table below.
- Docker
- Python 3
- Copy `sample.env` to `.env`, then edit `.env` to replace the sample values with your desired settings:

  ```sh
  cp sample.env .env
  ```
| Variable | Description |
| --- | --- |
| `AWS_REGION` | AWS region |
| `OPENAI_API_KEY` | OpenAI key (takes precedence over secret) |
| `OPENAI_API_KEY_SECRET_NAME` | OpenAI key secret name |
| `PINECONE_API_KEY` | Pinecone key (takes precedence over secret) |
| `PINECONE_API_KEY_SECRET_NAME` | Pinecone key secret name |
| `SLACK_BOT_TOKEN` | Slack bot token (takes precedence over secret) |
| `SLACK_BOT_TOKEN_SECRET_NAME` | Slack bot token secret name |
| `SLACK_BOT_SIGNING_SECRET` | Slack bot signing secret |
| `LOG_LEVEL` | Log level |
- Create a Python virtual environment and activate it (first run only):

  ```sh
  make venv
  . venv/bin/activate
  ```
- Configure the project for development and install dependencies:

  ```sh
  make develop
  ```
- Populate the dataset in `./assets` with files such as PDF, DOCX, CSV, HTML, text, and more
- Run the application:

  ```sh
  arti
  arti ask "what can you tell me about fruits and vegetables?"

  # alternatively, run everything using docker compose
  docker compose run --rm arti ask "what can you tell me about fruits and vegetables?"
  ```
- To stop the application, press [CTRL]-C
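The "takes precedence over secret" behavior noted in the configuration table above can be sketched as follows. This is an illustrative, stdlib-only sketch, not the project's actual code; `resolve_credential` and the `<lookup:...>` placeholder are hypothetical:

```python
def resolve_credential(env: dict, direct_var: str, secret_name_var: str) -> str:
    """Pick a credential source: a directly supplied key wins over a
    Secrets Manager secret name (hypothetical helper for illustration)."""
    if env.get(direct_var):
        return env[direct_var]  # e.g. OPENAI_API_KEY set directly
    # Otherwise the service would fetch the named secret from
    # AWS Secrets Manager; here we just return a placeholder.
    return f"<lookup:{env.get(secret_name_var, '')}>"

# Direct key present: it takes precedence over the secret name.
print(resolve_credential(
    {"OPENAI_API_KEY": "sk-direct", "OPENAI_API_KEY_SECRET_NAME": "my/secret"},
    "OPENAI_API_KEY",
    "OPENAI_API_KEY_SECRET_NAME",
))  # prints sk-direct
```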
This project uses pre-commit to manage git hooks. This isn't required, but it will help you catch issues before you push your commits.
Install pre-commit on macOS using Homebrew:

```sh
brew install pre-commit
```
Once you have pre-commit installed, install the git hook scripts:

```sh
pre-commit install
```
The Slack integration uses Bolt. Follow their instructions to create a new Slack app. For local usage, populate your `.env` file with the appropriate Slack tokens.
For reference, these are approximately the expected scopes, depending on your use of the bot:

- `chat:write`
- `channels:read`
- `commands`
- `im:read`
- `im:write`
- `users:read`
- `users:write`
- `message.channels`
- `message.im`
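If you define your app with a Slack app manifest, the list above maps onto it roughly as below. This is an illustrative fragment, not this project's actual manifest; note that `message.channels` and `message.im` are event subscriptions rather than OAuth scopes:

```yaml
oauth_config:
  scopes:
    bot:
      - chat:write
      - channels:read
      - commands
      - im:read
      - im:write
      - users:read
      - users:write
settings:
  event_subscriptions:
    bot_events:
      - message.channels
      - message.im
```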
A CLI entrypoint is provided as an example. By default, the `assets` directory will be loaded into the vector database for search. Say, for example, I had a document containing information about fruits and vegetables:

```sh
arti ask "what can you tell me about fruits and vegetables?"
```

For full CLI usage:

```sh
arti -h
```
A `Makefile` is provided to ease some common tasks, such as linting and deploying. To see usage instructions:

```sh
make help
```
Deployment happens through the Makefile for convenience. The stack configuration can be found in `deploy/pulumi`.

Create the following secrets in Secrets Manager. These names are configurable in `deploy/pulumi/Pulumi.<stack>.yaml`. The Slack and Pinecone secrets are only necessary if those integrations are configured for use.
| Secret Name | Schema |
| --- | --- |
| `/catmeme/cloud-platform/sandbox/arti/access-token/openai` | `{ "apiKey": "" }` |
| `/catmeme/cloud-platform/sandbox/arti/access-token/pinecone` | `{ "apiKey": "" }` |
| `/catmeme/cloud-platform/sandbox/arti/access-token/slack` | `{ "apiKey": "", "signingSecret": "" }` |
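Each secret payload is a small JSON object, so a consumer parses individual fields after fetching it. A stdlib-only sketch (the service itself would presumably fetch the payload via the AWS SDK; variable names here are illustrative):

```python
import json

# Example payload matching the documented Slack secret schema
# (empty placeholder values).
payload = '{ "apiKey": "", "signingSecret": "" }'

creds = json.loads(payload)
bot_token = creds["apiKey"]
signing_secret = creds["signingSecret"]
print(sorted(creds))  # prints ['apiKey', 'signingSecret']
```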
Ensure you have an AWS profile name matching the one in the stack. The Makefile assumes `<environment>-deployment`:

```sh
make deploy DEPLOY_ENVIRONMENT=dev
```

The above example would expect a `dev-deployment` AWS profile to be configured.