HARRIER ships three container variants. They all install the package
from the same pyproject.toml, just with different base images and
entrypoints.
| Image | Base | Purpose | Backend |
|---|---|---|---|
| `robotflowlabs/harrier-cuda` | `anima-base:jazzy` | Training + serving on the GPU server | cuda |
| `robotflowlabs/harrier-mlx` | `anima-base-mlx:jazzy` | Apple Silicon development + inference | mlx |
| `robotflowlabs/harrier-serve` | `anima-serve:jazzy` | Production serving (no training deps) | cuda / cpu |
```bash
# From the repo root
docker compose -f docker/docker-compose.yml build harrier-cuda
docker compose -f docker/docker-compose.yml --profile mlx build harrier-mlx
docker compose -f docker-compose.serve.yml build harrier-serve
docker compose -f docker-compose.serve.yml up -d
```
```bash
curl http://localhost:8010/health
curl http://localhost:8010/ready
```

Weights are mounted read-only from `/mnt/artifacts-datai/models/project_harrier`. That directory is the single source of truth for trained artifacts (see `.claude/rules/save_checkpoint.md`).
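A minimal sketch of how the serve container might wire up that read-only mount in `docker-compose.serve.yml` (the in-container mount point `/weights` is illustrative, not the actual compose file):

```yaml
services:
  harrier-serve:
    image: robotflowlabs/harrier-serve
    volumes:
      # Host checkpoint directory, mounted read-only (:ro) so the
      # serving container can never mutate trained artifacts.
      - /mnt/artifacts-datai/models/project_harrier:/weights:ro
    environment:
      # Point the server at the mounted checkpoint (path is illustrative).
      HARRIER_WEIGHTS: /weights/best.pt
```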
All three images accept the following environment variables:
| Variable | Purpose |
|---|---|
| `HARRIER_WEIGHTS` | Absolute path to `best.pt` inside the container. |
| `HARRIER_BACKEND` | `auto` / `cuda` / `mlx` / `cpu`. |
| `ANIMA_BACKEND` | Legacy alias honoured by `anima_harrier.device`. |
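The resolution order those two variables imply can be sketched as a small pure function. This is a sketch, not the actual `anima_harrier.device` implementation; the function name and the availability flags are illustrative:

```python
def resolve_backend(env, cuda_available=False, mlx_available=False):
    """Return 'cuda', 'mlx', or 'cpu' from an environment mapping.

    HARRIER_BACKEND wins over the legacy ANIMA_BACKEND alias; 'auto'
    falls through to whichever accelerator is actually present.
    """
    choice = (env.get("HARRIER_BACKEND") or env.get("ANIMA_BACKEND") or "auto").lower()
    if choice != "auto":
        # An explicit setting is honoured as-is, even if the hardware
        # it names is unavailable (that failure surfaces later).
        return choice
    if cuda_available:
        return "cuda"
    if mlx_available:
        return "mlx"
    return "cpu"
```

For example, `resolve_backend({"ANIMA_BACKEND": "mlx"})` returns `"mlx"`, while an empty environment on a CUDA host resolves to `"cuda"`.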
`Dockerfile.serve` ships with a built-in HTTP healthcheck hitting `/health`. Compose uses the same probe, so the orchestrator can tell the difference between "container alive" and "service reachable".
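In Dockerfile terms, such a probe typically looks like the following (the interval, timeout, and retry values here are illustrative, not the actual `Dockerfile.serve` settings):

```dockerfile
# Mark the container unhealthy when /health stops answering.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -fsS http://localhost:8010/health || exit 1
```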