chore: Add caching to CI/CD to reduce time for deployment and github actions used minutes #310
hassaanalansary wants to merge 1 commit into staging from
Conversation
📝 Walkthrough
Refactors CI: replaces manual docker build/push in
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Buildx as docker/build-push-action@v6
    participant Registry as Container Registry
    GH->>Buildx: trigger build (uses Buildx + cache refs, tags from step outputs)
    Buildx->>Registry: push image (cache refs: buildcache-<branch>, tags output)
    Registry-->>GH: confirm push (outputs image_repo, image_tag)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/test_lint.yml (1)
56-63: Cache will go stale: same key + exact hit means no refresh at job end.

The cache key (Line 60) only changes when Dockerfile.backend, pyproject.toml, or uv.lock change. With actions/cache@v4, an exact-key cache hit causes the post-step to skip saving, so the freshly rotated /tmp/.buildx-cache (Lines 110–112) is discarded on every run with unchanged dependencies. The cache effectively freezes until those three files change, missing updated base-image layers and transitive dependencies. Two solutions:
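The staleness mechanism can be demonstrated with a hypothetical simulation of the action's save semantics (the directories, key, and `run_job` helper below are illustrative stand-ins, not the real action):

```shell
#!/bin/sh
# Hypothetical simulation of actions/cache@v4 save semantics: an exact
# primary-key hit at restore time skips the post-job save.
CACHE_DIR=$(mktemp -d)        # stands in for GitHub's hosted cache service
WORK=$(mktemp -d)             # stands in for /tmp/.buildx-cache
KEY="buildx-staging-abc123"   # hashFiles(...) key, unchanged between runs

run_job() {
  # Restore step: record whether the primary key hit exactly.
  if [ -e "$CACHE_DIR/$KEY" ]; then hit=true; else hit=false; fi
  # The job refreshes/rotates its local cache contents...
  echo "layers-from-run-$1" > "$WORK/state"
  # Post step: save is skipped on an exact primary-key hit.
  if [ "$hit" = false ]; then cp "$WORK/state" "$CACHE_DIR/$KEY"; fi
}

run_job 1   # miss: cache saved with layers-from-run-1
run_job 2   # exact hit: refreshed layers-from-run-2 are discarded
cat "$CACHE_DIR/$KEY"   # prints layers-from-run-1: the cache is frozen
```

As long as the key does not change, every subsequent run restores and then discards its refreshed layers, which is exactly the freeze described above.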
♻️ Option A (recommended): use BuildKit's GHA cache backend
This delegates cache lifecycle to BuildKit, which handles per-layer keys and updates correctly.
```diff
-      - name: Cache Docker layers
-        uses: actions/cache@v4
-        with:
-          path: /tmp/.buildx-cache
-          key: buildx-${{ github.ref_name }}-${{ hashFiles('deployment/docker/Dockerfile.backend', 'pyproject.toml', 'uv.lock') }}
-          restore-keys: |
-            buildx-${{ github.ref_name }}-
-            buildx-
-
       - name: Build Django service
         run: |
           docker buildx build \
-            --cache-from type=local,src=/tmp/.buildx-cache \
-            --cache-to type=local,dest=/tmp/.buildx-cache-new,mode=max \
+            --cache-from type=gha,scope=test-lint-${{ github.ref_name }} \
+            --cache-to type=gha,mode=max,scope=test-lint-${{ github.ref_name }} \
             --build-arg DEPENDENCY_GROUP=dev \
             --load \
             -t itqan-cms-backend-django \
             -f deployment/docker/Dockerfile.backend .
-          # Rotate cache to prevent unbounded growth
-          rm -rf /tmp/.buildx-cache
-          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```

Note: This requires either docker/build-push-action@v7+ or manual env setup via crazy-max/ghaction-github-runtime to expose GHA cache credentials.

♻️ Option B: add `github.run_id` to force per-run key + use restore fallbacks
This makes the cache key unique per run (so cache-hit is always false and saves happen), while the restore-keys fallback retrieves the most recent matching cache.

```diff
-          key: buildx-${{ github.ref_name }}-${{ hashFiles('deployment/docker/Dockerfile.backend', 'pyproject.toml', 'uv.lock') }}
+          key: buildx-${{ github.ref_name }}-${{ hashFiles('deployment/docker/Dockerfile.backend', 'pyproject.toml', 'uv.lock') }}-${{ github.run_id }}
           restore-keys: |
+            buildx-${{ github.ref_name }}-${{ hashFiles('deployment/docker/Dockerfile.backend', 'pyproject.toml', 'uv.lock') }}-
             buildx-${{ github.ref_name }}-
             buildx-
```

Option A is the modern idiomatic approach and avoids local rotation entirely. Option B is simpler but still uses the legacy local cache method.
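For Option A with a raw `docker buildx build` (rather than `docker/build-push-action`), the GHA runtime credentials have to be exposed explicitly. A minimal sketch of what the steps could look like, assuming `crazy-max/ghaction-github-runtime@v3` and the scope name shown above (step names are illustrative):

```yaml
      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3

      # Exposes ACTIONS_RUNTIME_TOKEN / cache URL so BuildKit's gha backend works
      - name: Expose GHA runtime for BuildKit
        uses: crazy-max/ghaction-github-runtime@v3

      - name: Build Django service
        run: |
          docker buildx build \
            --cache-from type=gha,scope=test-lint-${{ github.ref_name }} \
            --cache-to type=gha,mode=max,scope=test-lint-${{ github.ref_name }} \
            --build-arg DEPENDENCY_GROUP=dev \
            --load \
            -t itqan-cms-backend-django \
            -f deployment/docker/Dockerfile.backend .
```

With this shape, the `actions/cache` step and the rotation commands can be dropped entirely, since BuildKit manages per-layer cache entries itself.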
Also applies to: 101-112
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test_lint.yml around lines 56 - 63, The current cache step ("name: Cache Docker layers" using actions/cache@v4 with key buildx-${{ github.ref_name }}-${{ hashFiles('deployment/docker/Dockerfile.backend', 'pyproject.toml', 'uv.lock') }} and restore-keys) causes exact-key hits to skip saves so the local /tmp/.buildx-cache never gets refreshed; fix by either switching to BuildKit/GHA cache backend (replace the actions/cache step with docker/build-push-action@v7 cache configuration using cache-to/type=gha and cache-from=type=gha so BuildKit manages layer updates) or, if you must keep actions/cache, make the key unique per run (append ${{ github.run_id }} to buildx-${{ github.ref_name }}-${{ hashFiles(...) }} and keep the restore-keys fallback like buildx-${{ github.ref_name }}- and buildx- so restores still find previous caches while saves always run).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/test_lint.yml:
- Around line 114-127: The workflow fails to reuse the pre-built image because
the django service in docker-compose.local.yml only has a build: block and no
explicit image name; edit the docker-compose.local.yml django service (the
service named "django" with the existing build: configuration) and add image:
itqan-cms-backend-django so docker-compose will reference the pre-built tag used
by the workflow (itqan-cms-backend-django) and respect the --no-build flags in
the workflow steps.
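A minimal sketch of the suggested docker-compose.local.yml change (the surrounding service definition is assumed from the review comment; only the `image:` line is the actual fix):

```yaml
  django:
    image: itqan-cms-backend-django   # matches the tag built in the workflow
    build:
      context: .
      dockerfile: deployment/docker/Dockerfile.backend
      args:
        DEPENDENCY_GROUP: dev
```

With an explicit `image:` name, `docker compose run --no-build` resolves the service to the pre-built tag instead of an auto-generated one.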
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 36f664c8-e51a-420d-9c8f-f8ab334614d2
📒 Files selected for processing (2)
- .github/workflows/ci-cd.yml
- .github/workflows/test_lint.yml
Force-pushed from 3a2e304 to 21e1f4a
…actions used minutes

ci-cd.yml
- Added docker/setup-buildx-action@v3 to enable BuildKit
- Replaced plain docker build + manual retry push loop with docker/build-push-action@v6, which handles build+push atomically with retries built in
- Added GHCR registry-based layer caching (cache-from/cache-to): each branch (staging, main) maintains its own persistent cache image (buildcache-staging, buildcache-main) in GHCR, so unchanged layers (base image, system deps, pip install) are reused across runs
- Emitted image_repo, image_tag, branch as step outputs ($GITHUB_OUTPUT) in addition to $GITHUB_ENV, so the build-push action can reference them via steps.vars.outputs.* (eliminating the linter warnings)

test_lint.yml
- Fixed actions/checkout@v5 → @v4 (v5 doesn't exist) and actions/setup-python@v6 → @v5
- Added cache: 'pip' to the Python setup step (caches pip download cache)
- Added actions/cache@v4 for pre-commit environments (keyed on .pre-commit-config.yaml)
- Added actions/cache@v4 for Docker BuildKit layers (keyed on Dockerfile + pyproject.toml + uv.lock)
- Replaced docker compose build with docker buildx build --cache-from/--cache-to using a local cache directory, with cache rotation to prevent unbounded growth
- Added --no-build to all docker compose run invocations so compose uses the pre-built image instead of rebuilding
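The ci-cd.yml changes described above could look roughly like the following sketch (the `vars` step id and its outputs come from the description; the GHCR owner/repo path is a placeholder, not taken from the PR):

```yaml
      - name: Set up Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: deployment/docker/Dockerfile.backend
          push: true
          tags: ${{ steps.vars.outputs.image_repo }}:${{ steps.vars.outputs.image_tag }}
          cache-from: type=registry,ref=ghcr.io/OWNER/REPO/buildcache-${{ steps.vars.outputs.branch }}
          cache-to: type=registry,ref=ghcr.io/OWNER/REPO/buildcache-${{ steps.vars.outputs.branch }},mode=max
```

Because the cache ref is derived from the branch output, staging and main each keep their own persistent buildcache-* image in GHCR, as the description states.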
Force-pushed from 21e1f4a to ae1645d
🧹 Nitpick comments (1)
docker-compose.local.yml (1)
18-18: Optional: reuse the named image across the other backend services to dedupe builds.

celery-worker, celery-beat, and flower use the same context/Dockerfile/DEPENDENCY_GROUP: dev as django. Naming the image only on django means Compose will still tag the others with auto-generated names and potentially treat them as separate build targets. Setting the same image: on those services lets Compose reuse the built image, further trimming local build time (which aligns with the PR's caching objective).

♻️ Suggested change
```diff
   celery-worker:
+    image: itqan-cms-backend-django
     build:
       context: .
       dockerfile: deployment/docker/Dockerfile.backend
       args:
         DEPENDENCY_GROUP: dev
@@
   celery-beat:
+    image: itqan-cms-backend-django
     build:
       context: .
       dockerfile: deployment/docker/Dockerfile.backend
       args:
         DEPENDENCY_GROUP: dev
@@
   flower:
+    image: itqan-cms-backend-django
     build:
       context: .
       dockerfile: deployment/docker/Dockerfile.backend
       args:
         DEPENDENCY_GROUP: dev
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.local.yml` at line 18, The docker-compose services celery-worker, celery-beat, and flower currently build from the same context/Dockerfile with DEPENDENCY_GROUP: dev but lack an explicit image name; add the same image: itqan-cms-backend-django key to each of those service definitions (matching the django service) so Compose will reuse the built image for celery-worker, celery-beat, and flower instead of creating separate auto-generated images—ensure build args (e.g., DEPENDENCY_GROUP) remain identical so the image is truly reusable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fdac6962-29d0-463e-9771-7a541744b67a
📒 Files selected for processing (3)
- .github/workflows/ci-cd.yml
- .github/workflows/test_lint.yml
- docker-compose.local.yml
💤 Files with no reviewable changes (1)
- .github/workflows/test_lint.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci-cd.yml
Summary by CodeRabbit