This image contains the official Open Data Hub Llama Stack distribution, with all the packages and configuration needed to run a Llama Stack server in a containerized environment.
The image currently ships with the Open Data Hub build of Llama Stack 0.7.1.

The table below gives an overview of the APIs and providers the image ships with.
| API | Provider | External? | Enabled by default? | How to enable |
|---|---|---|---|---|
| batches | inline::reference | No | ✅ | N/A |
| datasetio | inline::localfs | No | ✅ | N/A |
| datasetio | remote::huggingface | No | ✅ | N/A |
| eval | inline::trustyai_ragas | Yes (version 0.6.1) | ❌ | Set the TRUSTYAI_EMBEDDING_MODEL environment variable |
| eval | remote::trustyai_garak | Yes (version 0.3.1) | ❌ | Set the ENABLE_KUBEFLOW_GARAK environment variable |
| eval | remote::trustyai_lmeval | Yes (version 0.5.0) | ✅ | N/A |
| eval | remote::trustyai_ragas | Yes (version 0.6.1) | ❌ | Set the ENABLE_KUBEFLOW_RAGAS environment variable |
| files | inline::localfs | No | ✅ | N/A |
| files | remote::s3 | No | ❌ | Set the ENABLE_S3 environment variable |
| inference | inline::sentence-transformers | No | ❌ | Set the ENABLE_SENTENCE_TRANSFORMERS environment variable |
| inference | remote::azure | No | ❌ | Set the AZURE_API_KEY environment variable |
| inference | remote::bedrock | No | ❌ | Set the AWS_BEARER_TOKEN_BEDROCK environment variable |
| inference | remote::openai | No | ❌ | Set the OPENAI_API_KEY environment variable |
| inference | remote::vertexai | No | ❌ | Set the VERTEX_AI_PROJECT environment variable |
| inference | remote::vllm | No | ❌ | Set the VLLM_URL environment variable |
| inference | remote::vllm | No | ❌ | Set the VLLM_EMBEDDING_URL environment variable |
| inference | remote::watsonx | No | ❌ | Set the WATSONX_API_KEY environment variable |
| responses | inline::builtin | No | ✅ | N/A |
| safety | remote::passthrough | No | ❌ | Set the PASSTHROUGH_SAFETY_URL environment variable |
| safety | remote::trustyai_fms | Yes (version 0.4.0) | ✅ | N/A |
| scoring | inline::basic | No | ✅ | N/A |
| scoring | inline::braintrust | No | ✅ | N/A |
| scoring | inline::llm-as-judge | No | ✅ | N/A |
| tool_runtime | inline::file-search | No | ✅ | N/A |
| tool_runtime | remote::brave-search | No | ✅ | N/A |
| tool_runtime | remote::model-context-protocol | No | ✅ | N/A |
| tool_runtime | remote::tavily-search | No | ✅ | N/A |
| vector_io | inline::faiss | No | ❌ | Set the ENABLE_FAISS environment variable |
| vector_io | inline::milvus | No | ❌ | Set the ENABLE_INLINE_MILVUS environment variable. Incompatible with multi-worker deployments |
| vector_io | remote::milvus | No | ❌ | Set the MILVUS_ENDPOINT environment variable |
| vector_io | remote::pgvector | No | ❌ | Set the ENABLE_PGVECTOR environment variable |
| vector_io | remote::qdrant | No | ❌ | Set the ENABLE_QDRANT environment variable |
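As an illustration of the enablement mechanism described above, providers marked ❌ are switched on by passing the listed environment variable to the container at startup. The invocation below is a sketch: the image reference, endpoint URL, and port mapping are placeholders, not values from this document (8321 is the Llama Stack default server port).

```shell
# Hypothetical invocation -- substitute the actual image reference and
# endpoint values for your environment.
#
# VLLM_URL enables the remote::vllm inference provider;
# ENABLE_FAISS enables the inline::faiss vector_io provider.
podman run -d --name llama-stack -p 8321:8321 \
  -e VLLM_URL="https://my-vllm-server.example.com/v1" \
  -e ENABLE_FAISS=true \
  quay.io/opendatahub/llama-stack:latest
```

Providers enabled by default (✅) need no extra flags, though some (for example `remote::vllm`) still require their environment variables to point at a live backend before they are usable.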