We’ve put a lot of effort into making Skald easy to self-host, but we’re still very early. New features land multiple times a day and our documentation can’t always keep up. We also don’t have a self-hosted release schedule yet; all changes become available automatically as they ship. If you’re interested in self-hosting, we highly recommend you talk to us on Slack so we can help you out and work together to get you a solid Skald deployment. We’re happy to help as much as possible.
Skald is licensed under the MIT license and we provide a standard Docker Compose deployment if you want to self-host. The self-hosted deployment has all the features of our Cloud version, except that it is single-tenant: you can create only one organization per instance (you can create unlimited projects, however).

Deployment types

The only supported way to deploy Skald today is with Docker Compose, but the instructions differ depending on the type of deployment you want. Refer to the links below depending on your use case:
  1. Local testing: use the Quickstart setup below
  2. Production self-hosted deploy
  3. Deployment with no third-party services (experimental)

Quickstart

The best way to try out the Skald self-hosted version is the following:
git clone https://github.com/skaldlabs/skald
cd skald
echo "OPENAI_API_KEY=<your_key>" > .env
docker-compose up
The UI will be available at http://localhost:3000 and the API at http://localhost:8000. This setup gets you started quickly while requiring an API key for only one external service. We’ll spin up and configure all other services for you, including RabbitMQ and Postgres.

The one caveat of this deploy is that it relies entirely on OpenAI for the whole stack, which makes for a slightly slower API. Because OpenAI doesn’t provide a reranking API, we use a slower mechanism that reranks chunks with LLM calls. If you don’t understand what this means, that’s ok; you don’t have to. Just know that your API will be slower if you use OpenAI exclusively.

In our Cloud version, we use Voyage AI for both embeddings and reranking, and that’s what we recommend you do as well for the best performance: Voyage’s reranking is faster and better, and its embedding models are arguably better too. That means also setting VOYAGE_API_KEY=<your_key> and EMBEDDING_PROVIDER=voyage (which applies to reranking as well).
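For example, a .env for this recommended setup might look like the following sketch (the keys are placeholders for your own):
# LLM responses still come from OpenAI
OPENAI_API_KEY=<your_openai_key>

# embeddings and reranking come from Voyage AI
VOYAGE_API_KEY=<your_voyage_key>
EMBEDDING_PROVIDER=voyage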

Configuration

LLM

You can configure Skald to use multiple LLM providers, but you still need to set an LLM_PROVIDER environment variable. This is likely to change in the future (or turn into DEFAULT_LLM_PROVIDER), but as of today the provider defined by this env var will be used for:
  • Chat responses (the default provider, if not overridden in the rag_config)
  • Extracting the summary and tags for new memos
  • The LLM-as-a-Judge feature in Experiments
If you set additional environment variables for providers that are not your LLM_PROVIDER, those will be available for use in chat.
# default: openai
LLM_PROVIDER=<openai|anthropic|groq|local>

# if you selected openai
OPENAI_API_KEY=<your_openai_key>

# if you selected anthropic
ANTHROPIC_API_KEY=<your_anthropic_key>

# if you selected groq
GROQ_API_KEY=<your_groq_key>

# if you selected local
LOCAL_LLM_BASE_URL=<url_of_self_hosted_llm_server>
LOCAL_LLM_MODEL=<model_name> # e.g. llama-3.1-8b-instruct
The OpenAI, Anthropic, and Groq keys are self-explanatory, and if you’re interested in the local LLM setup, please refer to the docs for a deployment with no third-party services.
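To illustrate the multi-provider behavior described above, here is a sketch of a config that defaults to OpenAI but also makes Anthropic models available in chat (keys are placeholders):
# default provider: used for chat (unless overridden in the rag_config),
# memo summaries and tags, and LLM-as-a-Judge
LLM_PROVIDER=openai
OPENAI_API_KEY=<your_openai_key>

# extra provider: its models become available for use in chat
ANTHROPIC_API_KEY=<your_anthropic_key>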

Embeddings

Configuration for embeddings works similarly to the LLM config, with the following vars:
# default: openai
EMBEDDING_PROVIDER=<openai|voyage|local>

# if you selected openai
OPENAI_API_KEY=<your_openai_key>

# if you selected voyage
VOYAGE_API_KEY=<your_voyageai_key>
Note that while the default provider is set to openai, we actually recommend using voyage. The only reason it isn’t the default is that most people already have an OpenAI account these days, while Voyage AI is not as widely used. However, we use Voyage on our Cloud deployment and strongly recommend it: the embedding models are great. An additional benefit of using Voyage embeddings is that you also get the Voyage re-ranker, which is both really good and really fast. We currently don’t support configuring a re-ranker separately from the embedding provider, but may do so in the future.
Changing the embedding provider on a running deployment is not currently supported: it invalidates all memos ingested before the change. If you do change providers, delete all existing memos from the database and re-process them.

Document extraction

If you want to use document extraction features, you need to set appropriate environment variables for connecting to S3 or an S3-compatible object storage service. This is where documents will be stored.
AWS_REGION=<your_s3_region>
AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
S3_BUCKET_NAME=<your_s3_bucket>
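If you don’t have a bucket yet, here is a sketch of creating one with the AWS CLI (the bucket name and region are hypothetical examples):
# create a bucket for Skald's documents
aws s3 mb s3://skald-documents --region us-east-1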
For the document extraction itself, you have two options:
  1. You can set DATALAB_API_KEY (from https://datalab.to)
  2. You can set DOCUMENT_EXTRACTION_PROVIDER=docling and run the stack with the local profile, as sketched below. This will spin up a local Docling server that works well for document extraction, though not as well as Datalab. Docling is MIT-licensed, however, and runs on your own infrastructure.
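A sketch of the Docling route, assuming the compose profile is named local as described above (the exact invocation may vary with your Compose version):
# enable the local Docling extractor and start the stack with the local profile
echo "DOCUMENT_EXTRACTION_PROVIDER=docling" >> .env
docker-compose --profile local up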

Postgres

By default we will spin up a Postgres instance as part of the Docker Compose stack for you, with pgvector installed on it. If you’re running a production deploy, you should ideally host and manage Postgres yourself. If you do, just set the DATABASE_URL env var to point at your instance and run the stack without starting the Postgres service.
If you do host Postgres elsewhere, the one thing you need to remember is to install the pgvector extension on the instance.
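A sketch of what this looks like, with a hypothetical connection string (the psql one-liner assumes your database user is allowed to create extensions):
# point Skald at your managed Postgres (hypothetical credentials)
DATABASE_URL=postgresql://skald:password@your-postgres-host:5432/skald

# then install pgvector on that instance
psql "$DATABASE_URL" -c "CREATE EXTENSION IF NOT EXISTS vector;"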

RabbitMQ

The same concepts that apply to Postgres apply to RabbitMQ: ideally you’d host it yourself in a production deployment. To do that, spin up the stack without the RabbitMQ service and set the following vars:
RABBITMQ_HOST
RABBITMQ_PORT
RABBITMQ_USER
RABBITMQ_PASSWORD
RABBITMQ_VHOST
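For example, pointing at a managed instance (hypothetical values; 5672 and / are RabbitMQ’s defaults for port and vhost):
RABBITMQ_HOST=your-rabbitmq-host
RABBITMQ_PORT=5672
RABBITMQ_USER=skald
RABBITMQ_PASSWORD=<your_password>
RABBITMQ_VHOST=/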

Help, this isn’t working

We’re still early, so you may encounter quirks when deploying Skald! If that happens, please submit an issue and we’ll be happy to help you. Even better if you want to submit a PR.