Commit d5ab3d1

fix(ci): Improve MCP package caching and Ollama startup reliability

- Pre-install MCP server packages (npm and uvx) before tests run
- Increase Ollama health check retries (10 -> 20) and intervals (10s -> 15s)
- Add Ollama model warm-up step to ensure model is loaded before tests
- This addresses timeout issues from npx/uvx package downloads during tests

1 parent e363494 · commit d5ab3d1

File tree

1 file changed: +26 −5 lines changed

.github/workflows/integration-tests.yml

Lines changed: 26 additions & 5 deletions
@@ -248,6 +248,20 @@ jobs:
           sudo apt-get update
           sudo apt-get install -y sqlite3

+          # Pre-install MCP server packages to avoid download delays during tests
+          echo "📦 Pre-installing MCP server packages..."
+
+          # Install npm-based MCP servers globally
+          npm install -g @modelcontextprotocol/server-brave-search @modelcontextprotocol/server-filesystem
+
+          # Install uv for Python package management (provides uvx)
+          curl -LsSf https://astral.sh/uv/install.sh | sh
+          source $HOME/.local/bin/env
+
+          # Pre-install Python-based MCP server (used by sqlite tests)
+          uvx --version || echo "uvx not in path, checking..."
+          $HOME/.local/bin/uvx mcp-server-sqlite --help || echo "Pre-warming mcp-server-sqlite..."
+
       - name: Cache JBang dependencies
         if: steps.check.outputs.should_run == 'true'
         uses: actions/cache@v4
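The pre-install step front-loads package downloads so that later `npx`/`uvx` invocations during the tests hit a warm cache instead of fetching from the network. A minimal sketch of the same idea, with a hypothetical `check_cmd` helper (not part of the workflow) that a CI step could use to verify the installed binaries actually resolve on PATH before the tests start:

```shell
#!/usr/bin/env sh
# Hypothetical helper: report whether a command resolves on PATH.
# In CI this would run after the npm/uv installs from the diff,
# e.g. `check_cmd uvx` before the sqlite MCP tests.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

check_cmd sh    # stand-in here for uvx / the npm-installed servers
```

Failing fast on a missing binary gives a clearer error than a mid-test `npx` download timing out.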
@@ -358,9 +372,9 @@ jobs:
           - 11434:11434
         options: >-
           --health-cmd="curl -f http://localhost:11434/api/tags || exit 1"
-          --health-interval=10s
-          --health-timeout=5s
-          --health-retries=10
+          --health-interval=15s
+          --health-timeout=10s
+          --health-retries=20

     steps:
       - name: Checkout repository
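The health-check change substantially widens the window Ollama gets to come up before Docker marks the container unhealthy. As a rough upper bound (a check runs every `interval`, and a failing check can itself take up to `timeout`), the new settings allow:

```shell
# Rough upper bound on how long Docker keeps retrying before marking the
# container unhealthy: retries * (interval + timeout). Values from the diff.
retries=20
interval=15   # seconds, --health-interval=15s
timeout=10    # seconds, --health-timeout=10s
echo "max wait ~$(( retries * (interval + timeout) ))s"   # ~500s
```

versus roughly 10 × (10 + 5) = 150s under the old settings, which is why slow model pulls were previously hitting the retry limit.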
@@ -414,10 +428,17 @@ jobs:
           pg_isready -h localhost -p 5432 -U postgres
           echo "pgvector is ready"

-          echo "Checking Ollama..."
+          echo "Checking Ollama API..."
           curl -s http://localhost:11434/api/tags | head -c 200
           echo ""
-          echo "Ollama is ready"
+
+          echo "Warming up Ollama model (this may take a moment)..."
+          # Make a small inference request to ensure the model is fully loaded
+          curl -s http://localhost:11434/api/generate \
+            -d '{"model": "flow-judge", "prompt": "Hello", "stream": false}' \
+            --max-time 120 | head -c 200
+          echo ""
+          echo "Ollama is ready with model loaded"

       - name: Run rag-with-kotlin test
         env:
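The warm-up request fires once with a 120s ceiling and moves on regardless of the result. A sketch of a retrying variant, assuming a hypothetical `warm_up` helper that is not in the workflow; the commented-out `curl` shows how it would wrap the generate call from the diff:

```shell
#!/usr/bin/env sh
# Hypothetical helper: retry a warm-up probe until it succeeds or the
# attempt budget runs out. The probe command is passed in, so the loop
# itself is service-agnostic.
warm_up() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "warm-up ok after $i attempt(s)"
      return 0
    fi
    i=$(( i + 1 ))
    sleep 1
  done
  echo "warm-up failed after $attempts attempts" >&2
  return 1
}

# In CI this would wrap the generate request from the diff, e.g.:
# warm_up 5 curl -sf --max-time 120 http://localhost:11434/api/generate \
#   -d '{"model": "flow-judge", "prompt": "Hello", "stream": false}'
warm_up 3 true   # demo: trivially succeeding probe
```

Failing the step when warm-up never succeeds would surface a dead model server here, rather than as a confusing timeout inside the test suite.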
