This repository includes Docker support for both production and development workflows.
Prerequisites:

- Docker installed on your system
- Docker Compose (usually comes with Docker Desktop)
- Make (optional, but recommended)
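A quick preflight check for these prerequisites might look like the following (a sketch; `docker compose` may also be available as a Docker CLI plugin rather than the standalone `docker-compose` binary):

```shell
# Report which prerequisite tools are on PATH.
for tool in docker docker-compose make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```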
Production mode creates a self-contained image with all code baked in. Use this for:
- Testing the final package
- Deployment
- Sharing with reviewers
```bash
# Build the production image
make build

# Run production container with Jupyter
make run

# Access Jupyter at http://localhost:8888
```

Development mode mounts your source code as volumes, so changes are reflected immediately. Use this for:
- Active development
- Debugging
- Testing changes without rebuilding
```bash
# Start development environment
make dev

# Access Jupyter at http://localhost:8889
# Your code changes in src/ will be immediately available!
```

Run `make help` to see all available commands:
```bash
make help
```

- `make build` - Build the production Docker image
- `make run` - Run the production container with Jupyter
- `make test` - Run tests in the production container
- `make rebuild` - Clean and rebuild from scratch
- `make dev` - Start the development environment with live code mounting
- `make dev-bash` - Start a development container with a bash shell (for debugging)
- `make dev-test` - Run tests in development mode
- `make stop` - Stop all running containers
- `make clean` - Remove containers and images
- `make logs` - Show container logs
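Under the hood, these targets are thin wrappers around `docker` and `docker-compose`. A sketch of what `make dev` and `make stop` might expand to (illustrative only; the repository's actual Makefile may differ — note that Makefile recipes must be indented with tabs):

```makefile
# Illustrative Makefile fragment, not the repository's actual Makefile.
dev:
	docker-compose -f docker/docker-compose.dev.yml up -d

stop:
	docker-compose -f docker/docker-compose.dev.yml down
	-docker stop spac-prod
```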
- Start the development environment:

  ```bash
  make dev
  ```

- Make changes to your code in `src/spac/` - changes are immediately available in the container!
- Test your changes:

  ```bash
  make dev-test
  ```

- Need to debug? Open a shell in the container:

  ```bash
  make dev-bash
  ```

- When done:

  ```bash
  make stop
  ```
- `Dockerfile` (root) - Production image (code copied at build time)
- `docker/Dockerfile.dev` - Development image (code mounted as a volume)
- `docker/docker-compose.dev.yml` - Development orchestration
- `Makefile` (root) - Convenient commands for both modes
Both modes mount these directories:
- `./data` → `/data` (input data)
- `./results` → `/results` (output results)
Development mode additionally mounts:
- `./src` → `/home/reviewer/SCSAWorkflow/src` (source code)
- `./tests` → `/home/reviewer/SCSAWorkflow/tests` (test files)
- `./notebooks` → `/workspace` (Jupyter notebooks)
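These development-mode mounts correspond to `volumes` entries in `docker/docker-compose.dev.yml` along these lines (a sketch based on the mount list above; the actual compose file may differ in service names and paths):

```yaml
# Illustrative fragment of a compose file living in docker/, so host paths
# are relative to that directory. Names here are assumptions.
services:
  spac-dev:
    build:
      context: ..
      dockerfile: docker/Dockerfile.dev
    ports:
      - "8889:8888"
    volumes:
      - ../data:/data
      - ../results:/results
      - ../src:/home/reviewer/SCSAWorkflow/src
      - ../tests:/home/reviewer/SCSAWorkflow/tests
      - ../notebooks:/workspace
```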
Jupyter is available at:

- Production: http://localhost:8888
- Development: http://localhost:8889
If you prefer not to use Make:
Production:

```bash
# Build
docker build -t spac:latest .

# Run
docker run -d --name spac-prod -p 8888:8888 \
  -v $(pwd)/data:/data \
  -v $(pwd)/results:/results \
  spac:latest
```

Development:

```bash
# Build
docker build -f docker/Dockerfile.dev -t spac:dev .

# Run
docker-compose -f docker/docker-compose.dev.yml up -d
```

If you get a port conflict, stop the other container:

```bash
make stop
```

If code changes are not reflected, make sure you're running in development mode (`make dev`) and editing files in the `src/` directory.
If you've changed dependencies or `environment.yml`:

```bash
make rebuild  # Production
# or
docker-compose -f docker/docker-compose.dev.yml build --no-cache  # Development
```

Galaxy can use the Docker container to run SPAC tools. The container is configured so that:
- The `spac` conda environment is active by default
- Python and all SPAC commands work without special setup
- Both direct commands and bash scripts work correctly
```xml
<tool id="spac_analysis" name="SPAC Analysis" version="0.9.0">
    <requirements>
        <container type="docker">spac:latest</container>
    </requirements>
    <command><![CDATA[
        python '$__tool_directory__/your_script.py'
            --input '$input'
            --output '$output'
    ]]></command>
    <inputs>
        <param name="input" type="data" format="h5ad" label="Input file"/>
    </inputs>
    <outputs>
        <data name="output" format="h5ad" label="Output file"/>
    </outputs>
</tool>
```

Test that commands work as Galaxy would run them:
```bash
# Test direct Python command (how Galaxy runs tools)
docker run --rm spac:latest python -c "import spac; print(spac.__version__)"

# Test with a script
docker run --rm -v $(pwd):/work spac:latest python /work/your_script.py

# Test bash command
docker run --rm spac:latest bash -c "python -c 'import spac; print(spac.__version__)'"
```

All three methods should work without needing to activate conda manually.
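The `your_script.py` referenced above is a placeholder. A minimal skeleton compatible with the Galaxy `<command>` block might parse just the two arguments the tool passes (a sketch; function names and the analysis step itself are illustrative):

```python
import argparse

def build_parser():
    """Argument parser matching the --input/--output flags in the Galaxy <command> block."""
    parser = argparse.ArgumentParser(description="SPAC analysis wrapper (illustrative)")
    parser.add_argument("--input", required=True, help="Path to the input .h5ad file")
    parser.add_argument("--output", required=True, help="Path to write the output .h5ad file")
    return parser

def main(argv=None):
    # Parse the arguments Galaxy supplies, then run the analysis.
    args = build_parser().parse_args(argv)
    # ... load args.input with spac, run the analysis, write args.output ...
    # (wire main() to a standard __main__ guard in the real script)
    return args
```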
For CI/CD pipelines, use the production mode:
```bash
docker build -t spac:latest .
docker run --rm spac:latest pytest tests/ -v
```
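In a GitHub Actions pipeline, for example, those two commands might be wired up as follows (one possible CI setup; the workflow file and job names are illustrative, not part of this repository):

```yaml
# .github/workflows/docker-test.yml (illustrative)
name: docker-tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build production image
        run: docker build -t spac:latest .
      - name: Run test suite inside the container
        run: docker run --rm spac:latest pytest tests/ -v
```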