Releases: rohitg00/agentmemory
v0.9.3 — DX patch (feature-flag visibility + doctor command)
0.9.3 — 2026-04-24
Developer-experience patch. Every disabled feature flag is now visible in the viewer, the CLI, and REST error responses, so devs no longer hit empty tabs wondering whether the install is broken or just opt-in. Adds a `doctor` command that diagnoses the whole stack in one shot and a first-run hero in the viewer that points at the magical-moment demo command.
Added
- `agentmemory doctor` command. Runs 10 diagnostic checks in one shot: server reachability, health status, viewer port, LLM provider, embedding provider, four feature-flag states, and whether the knowledge graph has data. Every failing check includes a concrete hint with the exact env var or command to fix it. Mirrors the shape of the new viewer feature-flag banners.
- `/agentmemory/config/flags` REST endpoint. Returns `{ version, provider, embeddingProvider, flags[] }` with per-flag `{ key, label, enabled, default, affects, needsLlm, description, enableHow, docsHref }`. Used by the viewer banner, CLI status/doctor, and anyone who wants to introspect config without parsing logs.
- Viewer feature-flag banner system. Compact collapsible summary row at the top of every tab (`⚠ 3 off · ⚙ 1 note · Feature flags` — click to expand). The expanded view shows a per-flag card with description, exact enable command, docs link, and dismiss button. Dismissed state persists per flag in localStorage so banners stay out of the way once acknowledged. Banners filter by the current tab's `affects` list.
- Viewer first-run hero card. When `sessions.length === 0`, the dashboard renders an orange-accent card titled "First run → magical moment in 10 seconds" with `npx @agentmemory/agentmemory demo` as the next step. Removes the dead-empty dashboard that used to greet fresh installs.
- Viewer footer with preset issue report: `agentmemory viewer · v{version} · github · docs · report issue →`. The feedback link opens a GitHub issue pre-filled with version, provider name, embedding provider, flag state, and user agent — so the first message on an issue already contains the diagnostic context that used to take three back-and-forths.
- Richer empty states on the Actions, Memories, Lessons, and Crystals tabs. Each now has a titled lead explaining what the tab is for, why it's empty, three concrete ways to populate it (MCP tool, curl, hook), and a docs link. The old one-liners ("No actions yet. Create actions via `memory_action_create` MCP tool") assumed too much context.
- `status` command shows flag state. A new section in the output block lists the provider (✓ llm / ✗ noop), the embedding provider (✓ embeddings / bm25-only), and each flag with a tick/cross. Parity with the viewer banner.
- `AGENTMEMORY_URL` environment variable honored by the CLI. `status`, `doctor`, and related health checks now respect `AGENTMEMORY_URL=http://host:port` and extract the port from it. Previously documented but silently ignored; `--port N` was the only way to override.
- Website install section promotes `demo` to step 2. `npx @agentmemory/agentmemory demo` now appears between "start server" and "open viewer" on agent-memory.dev. The magical-moment command is on the critical path of the three-step install, not tucked into the README.
- Website version auto-derived from the repo package.json. `gen-meta.mjs` picks up `src/version.ts` on `prebuild` and writes `website/lib/generated-meta.json`. Removes the stale-version drift that showed v0.9.1 on the landing page after v0.9.2 shipped.
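The flags endpoint is meant to be consumed programmatically; a minimal TypeScript sketch of a client, assuming only the response shape quoted above (the `summarizeFlags` helper and the sample values are hypothetical, not part of the API):

```typescript
// Sketch of consuming /agentmemory/config/flags. The response shape comes from
// the release notes; summarizeFlags() and the sample values are hypothetical.
interface FlagInfo {
  key: string;
  label: string;
  enabled: boolean;
  default: boolean;
  affects: string[];
  needsLlm: boolean;
  description: string;
  enableHow: string;
  docsHref: string;
}

interface FlagsResponse {
  version: string;
  provider: string;
  embeddingProvider: string;
  flags: FlagInfo[];
}

// One line per disabled flag, with the exact enable command: the same
// information the viewer banner and `agentmemory doctor` surface.
function summarizeFlags(res: FlagsResponse): string[] {
  return res.flags
    .filter((f) => !f.enabled)
    .map((f) => `${f.label}: off; enable with ${f.enableHow}`);
}

const sample: FlagsResponse = {
  version: "0.9.3",
  provider: "noop",
  embeddingProvider: "bm25-only",
  flags: [
    {
      key: "knowledgeGraph", label: "Knowledge graph", enabled: false,
      default: false, affects: ["graph"], needsLlm: true,
      description: "Entity/relation extraction",
      enableHow: "AGENTMEMORY_GRAPH=true", // illustrative env var, not confirmed
      docsHref: "/docs/graph",
    },
  ],
};

console.log(summarizeFlags(sample));
```

In a live setup the `sample` object would instead come from `fetch(AGENTMEMORY_URL + "/agentmemory/config/flags")`.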
Changed
- REST "feature not enabled" errors now return structured bodies. Graph extraction (3 endpoints) and the consolidation pipeline (1 endpoint) used to return `{ error: "Knowledge graph not enabled" }`. They now return `{ error, flag, enableHow, docsHref }`, matching the viewer banner contract. Curl users get the same fix guidance as UI users.
- Website install title: `THREE STEPS` → `THREE COMMANDS`. Matches the new three-command install (`npx agentmemory`, `agentmemory demo`, open viewer).
Fixed
- Viewer banner scroll blocker. Initial banner implementation rendered four full-height banner cards stacked above the dashboard, pushing all stats off-screen. Replaced with compact collapsible summary that takes ~40px of vertical space by default and only expands on click.
v0.9.2 — Stop-hook recursion safety + OpenAI-compatible embeddings + viewer import pipeline
Safety + import-pipeline patch. Kills the infinite Stop-hook recursion loop that burned Claude Pro tokens on unkeyed installs, repairs every empty viewer tab after `import-jsonl`, derives lessons and crystals automatically from imported sessions, and adds support for OpenAI-compatible embedding endpoints.
Contributors
- @Edison-A-N — #186: `OPENAI_BASE_URL` + `OPENAI_EMBEDDING_MODEL` env vars (unlocks Azure / vLLM / LM Studio for embeddings).
- @Tanmay-008 and @tanmaishi — #111 / #179 follow-ups: multimodal image memory (Phase 1 CLIP visual embeddings + vision-search), 500MB disk quota LRU eviction, memory deletion parity, multiple CodeRabbit review passes across the multimodal path.
- @rohitg00 — #187 Stop-hook recursion 5-layer defense, #188 viewer empty-tabs + import pipeline, #189 OpenAI dimensions lookup, #190 README/website refresh, #191 release.
Thanks to everyone. External PRs merged via admin rebase after local verification.
Security
- Stop-hook recursion loop (#187, follow-up to #149). A user with no provider key and `AGENTMEMORY_AUTO_COMPRESS=false` could still trigger unbounded recursion: Stop hook → `/summarize` → `provider.summarize()` → the agent-sdk provider spawned a Claude Agent SDK child session that inherited the same plugin hooks, whose own Stop fired, spawning another child, and so on. Fixed at five layers in defense-in-depth:
  - `detectProvider()` treats empty-string keys as unset and returns the `noop` provider by default. The agent-sdk fallback now requires explicit `AGENTMEMORY_ALLOW_AGENT_SDK=true` opt-in.
  - New `NoopProvider` returns empty strings; callers detect `.name === 'noop'` and short-circuit.
  - The `agent-sdk` provider sets `AGENTMEMORY_SDK_CHILD=1` before spawning `query()` and restores the previous value in `finally`.
  - All 12 hook scripts inline an `isSdkChildContext(payload)` guard checking both the env marker and `payload.entrypoint === 'sdk-ts'`.
  - `/summarize` short-circuits with `no_provider` when the noop provider is active.
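The hook-side guard (layer 4) can be approximated in a few lines. A sketch under the assumptions stated above — env marker `AGENTMEMORY_SDK_CHILD=1` and `payload.entrypoint === 'sdk-ts'` — with a simplified payload type; the shipped hook scripts may check more:

```typescript
// Sketch of the layer-4 guard: a hook bails out early when it is running
// inside an SDK child session. The payload shape here is simplified.
interface HookPayload {
  entrypoint?: string;
  [key: string]: unknown;
}

function isSdkChildContext(payload: HookPayload): boolean {
  // Layer 3 sets this env marker before the agent-sdk provider spawns query().
  if (process.env.AGENTMEMORY_SDK_CHILD === "1") return true;
  // Belt and braces: the Claude Agent SDK tags its own hook payloads.
  return payload.entrypoint === "sdk-ts";
}

// A hook script would short-circuit before doing any work (illustrative):
function runStopHook(payload: HookPayload): string {
  if (isSdkChildContext(payload)) return "skipped (sdk child)";
  return "ran";
}
```

Either signal alone is enough to suppress the hook, which is what breaks the parent → child → parent recursion cycle.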
Added
- `OPENAI_BASE_URL` / `OPENAI_EMBEDDING_MODEL` (#186, thanks @Edison-A-N). Azure OpenAI, vLLM, LM Studio, and other OpenAI-compatible proxies now work for embeddings. Defaults preserved.
- `OPENAI_EMBEDDING_DIMENSIONS` (#189). `dimensions` derives from the model via a lookup table (3-small = 1536, 3-large = 3072, ada-002 = 1536), and the env var overrides it for custom / self-hosted endpoints.
- Auto-derived lessons + crystals on `import-jsonl` (#188). Each imported session produces one crystal (narrative, tool outcomes, files, lessons) and up to 20 heuristic lessons. Content-addressed IDs (`fingerprintId`) so re-imports bump reinforcements instead of duplicating.
- Multimodal Phase 1 (#179 series, thanks @tanmaishi + @Tanmay-008). Optional CLIP visual embeddings and vision-search on top of a managed image store, with LRU eviction and refcount parity.
- Session preview on the sessions list (#188). `Session.firstPrompt` populated by both `import-jsonl` and live `mem::observe`; the viewer renders a 140-char preview.
- Richer session detail panel (#188). 4-stat grid, top-10 tool bar chart, activity breakdown, file list, metadata.
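The #189 dimensions derivation is a lookup plus an env override; a sketch using the model sizes quoted above (the `resolveDimensions` name and the exact precedence rules are assumptions):

```typescript
// Sketch of the #189 resolution: known OpenAI embedding models map to fixed
// sizes, and OPENAI_EMBEDDING_DIMENSIONS overrides for custom or self-hosted
// endpoints. resolveDimensions() is a hypothetical name for illustration.
const MODEL_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "text-embedding-ada-002": 1536,
};

function resolveDimensions(model: string, envOverride?: string): number | undefined {
  const parsed = Number(envOverride);
  // A valid positive-integer override wins over the lookup table.
  if (envOverride && Number.isInteger(parsed) && parsed > 0) return parsed;
  // Unknown models return undefined: let the endpoint pick its own size.
  return MODEL_DIMENSIONS[model];
}
```

The `undefined` case matters for OpenAI-compatible proxies serving models the table has never heard of.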
Changed
- Default provider is now `noop` when no API key is set (see Security).
- `/agentmemory/audit` returns `{ entries, success }` to match the viewer shape.
- `/agentmemory/replay/sessions` uses `kv.list` directly: sub-50ms on 600+ sessions.
- Viewer WS has a 5s connect timeout, so the `CONNECTING…` banner no longer sticks forever.
- `import-jsonl` runs synthetic compression + BM25 indexing, fixing consolidation and search on imported data.
Fixed
- CI + publish workflows use a two-step `npm install --package-lock-only` + `npm ci` (lockfiles stay gitignored).
- `image-quota-cleanup` fails closed on refCount read errors.
- `mem::observe` guards `raw.userPrompt` as a string before sanitizing.
- Viewer Actions tab reads `frontier` (not `actions`) from `/frontier`.
- The agent-sdk branch of `detectProvider` respects the computed `maxTokens` instead of hardcoding 4096.
Infrastructure
- `StateScope` / `StateScopeKey` interface for the `KV.state` scope.
- `onnxruntime-node` + `onnxruntime-web` declared as `optionalDependencies`.
- `FALLBACK_PROVIDERS` honors the `AGENTMEMORY_ALLOW_AGENT_SDK` gate.
- README provider table + env block refreshed; hero test badge 654 → 827.
- 827 tests (up from 812 in v0.9.1).
- `@agentmemory/mcp` shim bumped to 0.9.2 (was stuck at 0.9.0).
Full CHANGELOG: https://github.com/rohitg00/agentmemory/blob/main/CHANGELOG.md#092--2026-04-22
Full diff: v0.9.1...v0.9.2
v0.9.1 — viewer endpoints + CLI hardening
Trust-the-CLI patch. Three bugs that surfaced in real testing of v0.9.0: the dashboard viewer showed zeros for half its cards, `import-jsonl` crashed on anything but a perfect response, and `upgrade` hard-aborted on a cargo registry that never had the crate.
Fixed
- Viewer dashboard list endpoints (#172). `GET /agentmemory/semantic` and `GET /agentmemory/procedural` were never registered, and `GET /agentmemory/relations` returned 405 because only the POST trigger existed. The dashboard's `Promise.all` fan-out silently received null for those cards even when semantic, procedural, or relation data was present. Added `api::semantic-list`, `api::procedural-list`, and `api::relations-list` handlers next to `api::memories` in `src/triggers/api.ts`, each returning the shape the viewer already parses.
- CLI version drift (#173). The viewer brand badge hardcoded `v0.7.0` and the README "New in" banner still said `v0.8.2`. Replaced the viewer string with a `__AGENTMEMORY_VERSION__` placeholder substituted at render time by `document.ts` (same mechanism as the CSP nonce). Collapsed `src/version.ts` from a literal union of every historical release back to a single `VERSION` constant — the import-compat contract is the `supportedVersions` Set in `export-import.ts`, not the type.
- `import-jsonl` crashed with `Unexpected end of JSON input` (#174). The livez probe used fetch throws as the only failure signal: any stray service on port 3111 passed silently, then `res.json()` blew up when the real POST returned an empty body or an HTML error. The probe now captures `probe.status` + a body snippet on non-OK responses and the exception message on network failure, so the error distinguishes `unreachable (...)` from `reachable but unhealthy (HTTP 503: ...)`. The POST reads the body as text, parses only if non-empty, requires `json.success === true`, and maps 401 → "set AGENTMEMORY_SECRET" and 404 → "upgrade server to v0.8.13+".
- `upgrade` aborted on `cargo install iii-engine` (#174). The crate was never published, and the old flow called `requireSuccess`, which exited before the Docker pull ran. Swapped to the official installer used throughout the README and demo command: `curl -fsSL https://install.iii.dev/iii/main/install.sh | sh`. Installer failure is non-fatal; a warning points at `iiidev/iii:latest` and the releases page at `iii-hq/iii`.
Infrastructure
- Three integration tests cover the new list endpoints.
- `VERSION` / the `ExportData.version` union / `supportedVersions` / `test/export-import.test.ts` all bumped in lockstep.
Full Changelog: v0.9.0...v0.9.1
v0.9.0 — Website, fs-watcher, MCP proxy, audit policy
Visibility + correctness release. Landing site, filesystem connector, MCP standalone now actually talks to the running server, health logic stops crying wolf, audit trail closes its last gap, and every memory path has a clear policy.
Highlights
- Website — Next.js 16 App Router landing page at `website/`. Lamborghini-inspired dark canvas, live GitHub stars pill, agents marquee with real brand logos, command-center tab showcase (viewer · iii console · state · traces), 12-tile feature grid, agent install selector, universal MCP JSON + one-click Cursor/VS Code deeplinks. Deploys to Vercel with Root Directory = `website/`.
- Filesystem connector — new `@agentmemory/fs-watcher` package. Emits valid `HookPayload` observations for every file change and delete, debounced, with a default ignore list and bearer auth.
- Standalone MCP now talks to the running server — `@agentmemory/mcp` probes `GET /agentmemory/livez` at `AGENTMEMORY_URL` (defaults to `http://localhost:3111`). If the server is up, every tool routes through REST and sees what hooks and the viewer see. If the probe fails, it falls back to the local `InMemoryKV`. The handle cache invalidates on proxy failure with a 30s TTL.
- Health stops flagging critical on tiny Node processes — memory severity no longer escalates from heap ratio alone. Both warn and critical bands require RSS above `memoryRssFloorBytes` (default 512 MB).
- Audit policy codified — `src/functions/audit.ts` gets a top-of-file policy block. `mem::forget` no longer deletes silently; it writes a single audit row with target ids, session id, and per-type counts.
- Retention eviction targets the right store — `mem::retention-evict` routes deletes to `mem:memories` or `mem:semantic` based on the candidate's `source` field, probing both namespaces for legacy rows. Batched audit per sweep.
- Security advisory drafts for the v0.8.2 CVE set, ready to file through GitHub's advisory UI.
- iii console docs + vendored screenshots in the README.
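The RSS-gated health check from the highlights can be sketched like this. Only `memoryRssFloorBytes` and its 512 MB default come from the notes above; the heap-ratio thresholds are illustrative assumptions:

```typescript
// Sketch of RSS-gated memory severity: heap ratio alone can no longer
// escalate a tiny Node process to warn/critical. The 0.85/0.95 ratio
// thresholds below are illustrative, not the shipped values.
type Severity = "ok" | "warn" | "critical";

const memoryRssFloorBytes = 512 * 1024 * 1024; // default per the release notes

function memorySeverity(heapUsed: number, heapTotal: number, rss: number): Severity {
  const ratio = heapUsed / heapTotal;
  // Below the RSS floor, even a near-full heap stays "ok": small processes
  // routinely run at high heap ratios without being in trouble.
  if (rss < memoryRssFloorBytes) return "ok";
  if (ratio > 0.95) return "critical";
  if (ratio > 0.85) return "warn";
  return "ok";
}
```

This is what "stops crying wolf" means in practice: a 120 MB process at 96% heap no longer pages anyone.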
Install
```bash
npx @agentmemory/agentmemory   # runs the memory server on :3111, viewer on :3113
```
Then wire any MCP client — Claude Desktop, Cursor, VS Code, Claude Code, Gemini CLI, Codex CLI, Hermes, OpenClaw — from the new install section on the website or the quick start in the README.
PRs in this release
- #118 — v0.8.2 security advisory drafts
- #132 — route semantic memory eviction to the correct KV store
- #157 — document iii console in README with screenshots
- #160 (#158) — gate memory severity on RSS floor
- #161 (#159) — proxy standalone MCP tools to the running server
- #162 (#125) — mem::forget audit coverage + policy doc
- #163 (#62) — filesystem connector
- #164 — Next.js website
Stats
- 777 tests passing (+ 14 skipped)
- Build clean
- 0 critical npm vulnerabilities
Full diff: v0.8.12...v0.9.0
v0.8.12
v0.8.11 — iii-sdk v0.11 getContext crash fix
What's Fixed
`node dist/index.mjs` crashed on startup after the iii-sdk v0.11 migration (#116) merged:
```
SyntaxError: The requested module 'iii-sdk' does not provide an export named 'getContext'
```
iii-sdk v0.11 dropped `getContext()` entirely. 32 `src/functions/*.ts` files still imported and called it. The bug was invisible to CI (tests mock iii-sdk) and to the build (tsdown doesn't type-check).
Changes
- `src/logger.ts` — new thin stderr shim with `.info`/`.warn`/`.error` replacing `getContext().logger`. Output goes to stderr as `[agentmemory] <level> <msg>`, forwarded into `docker logs` by iii-exec.
- 32 `src/functions/*.ts` files — removed `getContext` imports, deleted `const ctx = getContext()` lines, replaced `ctx.logger.*` with `logger.*`.
- `src/functions/search.ts` — also fixed `registerFunction({ id: '...' })` → `registerFunction('...')` for the v0.11 string-ID API.
- 45 test files — updated `vi.mock("iii-sdk")` logger mocks to `vi.mock("../src/logger.js")`.
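A shim like this fits in a dozen lines; a sketch assuming only the `[agentmemory] <level> <msg>` format quoted above (the standalone `format` helper is a hypothetical addition, and the real `src/logger.ts` exports the object as a module):

```typescript
// Sketch of the stderr logger shim that replaced getContext().logger.
// format() is a hypothetical helper; the output format matches the notes above.
type Level = "info" | "warn" | "error";

function format(level: Level, msg: string): string {
  return `[agentmemory] ${level} ${msg}`;
}

// Writing to stderr keeps stdout clean for hook/MCP protocol traffic;
// iii-exec forwards stderr into `docker logs`.
const logger = {
  info: (msg: string) => process.stderr.write(format("info", msg) + "\n"),
  warn: (msg: string) => process.stderr.write(format("warn", msg) + "\n"),
  error: (msg: string) => process.stderr.write(format("error", msg) + "\n"),
};

logger.info("engine started");
```

The stderr-only choice is the important design point: a logger that wrote to stdout would corrupt stdio-based MCP transport.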
Verification
- `node dist/index.mjs` — starts cleanly, logs all startup lines
- `npm test` — 731/731 pass
v0.8.10 — SessionStart gate (#143) + retention-evict semantic leak (#124)
Behavior change: the PreToolUse and SessionStart hooks no longer run enrichment by default. SessionStart saves 1-2K input tokens per session you start (the only path that was actually reaching the model, per the Claude Code hook docs). PreToolUse stops spawning a Node process and POSTing to `/agentmemory/enrich` on every file-touching tool call — a pure resource cleanup, not a token fix. If you were relying on either path, set `AGENTMEMORY_INJECT_CONTEXT=true` in `/.agentmemory/.env` and restart.
Fixed
#143 — Gate SessionStart context injection
`src/hooks/session-start.ts` previously wrote ~1-2K chars of project context to stdout at every session start. Per the Claude Code hook docs, `SessionStart` stdout is injected into the model's context (one of the two documented exceptions alongside `UserPromptSubmit`), so this was adding real tokens to the first turn of every new session. Now gated behind `AGENTMEMORY_INJECT_CONTEXT`, default off. The session still gets registered for observation tracking — only the stdout echo is skipped.
`src/hooks/pre-tool-use.ts` was POSTing `/agentmemory/enrich` on every `Edit`/`Write`/`Read`/`Glob`/`Grep` tool call and piping up to 4000 chars to stdout. The Claude Code docs make clear that PreToolUse stdout goes to the debug log, not the model context, so this was not burning user tokens — but it was spawning a Node process + full HTTP round-trip ~20x per user message with no effect on the conversation. Gating it makes the disabled hot path a ~15ms no-op Node startup instead of a ~100-300ms REST round-trip. Resource cleanup, not a token fix.
Honest note on #143: my initial diagnosis on the issue thread pattern-matched too quickly to #138 and overclaimed that PreToolUse stdout was the smoking gun behind "Claude Pro burned in 4 messages". It wasn't — per the docs, PreToolUse stdout is debug-log only. The actual background cause is that Claude Pro's Claude Code quotas are documented as tight and Anthropic has publicly confirmed "people are hitting usage limits in Claude Code way faster than expected." agentmemory contributes ~1-2K tokens per session via SessionStart, and that contribution is worth eliminating, but this release does not and cannot make Claude Pro's base quotas roomier. Users on heavy tool-call workloads should consider Max 5x or Team tiers regardless of whether agentmemory is installed. 0.8.8's #138 fix remains the correct fix for users with `ANTHROPIC_API_KEY` set.
#124 — mem::retention-evict no longer leaks semantic memories
The eviction loop was unconditionally calling `kv.delete(KV.memories, id)` for every below-threshold candidate, but retention scores are computed for both episodic (`KV.memories`) and semantic (`KV.semantic`) memories. When a candidate came from `KV.semantic`, the delete silently became a no-op and the semantic row stayed alive forever with a sub-threshold score. Semantic memories could not be evicted by this path at all.
Fix: new `source: "episodic" | "semantic"` discriminator on `RetentionScore`, tagged at score creation. The eviction loop branches on `candidate.source`. For pre-0.8.10 rows with no `source` field (including semantic retention rows written by the old scorer), the loop probes both namespaces to find where the `memoryId` actually lives, so upgraded stores get their stranded semantic memories evicted without needing to re-score first. Response now includes `evictedEpisodic` and `evictedSemantic` counts.
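The branching-plus-legacy-probe logic can be sketched as follows. The namespaces, the `source` discriminator, and the `evictedEpisodic`/`evictedSemantic` counts come from the fix description above; the KV type here is a simplified in-memory stand-in:

```typescript
// Sketch of the #124 fix: eviction branches on candidate.source and probes
// both namespaces for legacy rows written before the discriminator existed.
type Namespace = "mem:memories" | "mem:semantic";

interface RetentionScore {
  memoryId: string;
  score: number;
  source?: "episodic" | "semantic"; // absent on pre-0.8.10 rows
}

type KV = Map<Namespace, Map<string, unknown>>;

function retentionEvict(kv: KV, candidates: RetentionScore[], threshold: number) {
  let evictedEpisodic = 0;
  let evictedSemantic = 0;
  for (const c of candidates) {
    if (c.score >= threshold) continue; // only below-threshold rows are evicted
    const namespaces: Namespace[] =
      c.source === "episodic" ? ["mem:memories"]
      : c.source === "semantic" ? ["mem:semantic"]
      : ["mem:memories", "mem:semantic"]; // legacy row: probe both stores
    for (const ns of namespaces) {
      if (kv.get(ns)?.delete(c.memoryId)) {
        if (ns === "mem:memories") evictedEpisodic++;
        else evictedSemantic++;
        break; // found where the memoryId actually lives
      }
    }
  }
  return { evictedEpisodic, evictedSemantic };
}
```

The old loop was the `["mem:memories"]` branch unconditionally, which is exactly why semantic rows could never die.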
Added
- `AGENTMEMORY_INJECT_CONTEXT` env var — default `false`. When `true`, restores the old SessionStart stdout write and the old PreToolUse `/enrich` round-trip. Startup banner prints a loud WARNING when it's on.
- `isContextInjectionEnabled()` helper in `src/config.ts` — single source of truth for the flag.
- Audit coverage for retention operations — both `mem::retention-score` (new `retention_score` operation) and `mem::retention-evict` (`delete` operation) now emit batched audit rows per sweep, making retention visible to audit consumers. Zero-eviction sweeps intentionally skip the evict audit row to avoid flooding.
- Parallelized retention-score writes — the score loop now collects pending writes and flushes with `Promise.all`, turning O(n) sequential KV round-trips into O(1) wall time on backends that can pipeline.
- 12 new regression tests across `test/context-injection.test.ts` (5 subprocess tests that spawn the compiled `pre-tool-use.mjs` and `session-start.mjs` hooks and assert stdout is empty in all off/default paths) and `test/retention.test.ts` (7 tests covering source tagging, mixed episodic+semantic eviction, legacy-row probing for both scopes, audit coverage for score and evict, zero-eviction skip audit).
Full suite: 731 passing (was 719 + 12 new).
Infrastructure
- Startup banner now prints `Context injection: OFF (default, #143)` or a prominent WARNING when opt-in is enabled, so the mode is never silent.
- README `.env` section has a new `AGENTMEMORY_INJECT_CONTEXT` entry with the note that only SessionStart actually reaches the model (PreToolUse stdout is debug-log only).
Migration
If you were relying on the old SessionStart project-context injection or the old PreToolUse enrichment round-trip, add to `~/.agentmemory/.env`:
```env
AGENTMEMORY_INJECT_CONTEXT=true
```
and restart Claude Code.
Upgrade
```bash
npm install @agentmemory/agentmemory@0.8.10
# or standalone:
npx -y @agentmemory/mcp@0.8.10
```
Full changelog: v0.8.9...v0.8.10
v0.8.9 — Claude Code plugin auto-wires MCP + sandbox-safe skills (#139)
Two UX fixes for the Claude Code plugin install path, reported in #139 by @stefanfaur.
Fixed
- Claude Code plugin now auto-wires the MCP server (#139) — new `plugin/.mcp.json` declares the `@agentmemory/mcp` stdio server so `/plugin install agentmemory@agentmemory` auto-starts it when the plugin is enabled. No extra config step, no separate install command.
- Skills no longer fail under Claude Code's sandbox with "Contains expansion" (#139) — the `recall` and `session-history` skills used pre-execution bash with `$(...)` / `${VAR:-default}` shell expansion, which Claude Code's sandbox rejects by pattern match. All four plugin skills (`recall`, `remember`, `forget`, `session-history`) are now pure prompts that tell Claude to use MCP tools directly. No bash, no sandbox issues, no shell escaping — and they run faster because they no longer fork a curl subprocess on every invocation.
Added
- Standalone MCP shim (`@agentmemory/mcp`) implements the tools the rewritten skills need — previously exposed 5 tools, now exposes 7:
  - `memory_smart_search` — aliases `memory_recall` with substring fallback (BM25/vector/graph are only in the engine-backed path, not the standalone shim). Now searches title, content, files, concepts, and session IDs so the forget skill can find memories by file path or session ID as the docs promised. Rejects empty/whitespace-only queries to prevent the forget flow from accidentally matching every memory.
  - `memory_governance_delete` — deletes memories by `memoryIds` array or CSV string. Returns `{deleted, requested, reason}`. Silently skips unknown IDs.
- Argument normalization — `memory_save` now accepts `concepts`/`files` as either an array (plugin skill format) or a comma-separated string (legacy). New `normalizeList()` helper handles both.
- `parseLimit()` helper — clamps `limit` args to a sane range (1–100) across `memory_smart_search`, `memory_sessions`, and `memory_audit`. Rejects bogus values (negative, NaN, Infinity, booleans, objects) instead of silently passing them to `.slice()`.
- Input hardening on `memory_save` — `content` is now type-checked as a string before `.trim()`, so `memory_save({content: 42})` gets a clean "content is required" error instead of a runtime `TypeError`.
- 10 regression tests covering array + CSV `concepts`/`files`, empty-query rejection, broadened search corpus, `memory_governance_delete` happy/CSV/unknown-id paths, `parseLimit` clamping, non-string content rejection, and the `memory_sessions` limit. Full suite: 719 passing.
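The two helpers can be sketched together. Behavior is inferred from the notes above; whether bogus limit values fall back to a default or raise in the real shim is an assumption, as is the default of 20:

```typescript
// Sketches of normalizeList() and parseLimit() as described above.
// Exact signatures and the fallback-vs-error choice in the shim may differ.

// Accepts an array (plugin skill format) or a comma-separated string (legacy).
function normalizeList(value: unknown): string[] {
  if (Array.isArray(value)) {
    return value.map(String).map((s) => s.trim()).filter(Boolean);
  }
  if (typeof value === "string") {
    return value.split(",").map((s) => s.trim()).filter(Boolean);
  }
  return [];
}

// Clamps limit args to 1-100; booleans, objects, NaN, Infinity, and
// negatives get the fallback instead of being passed to .slice().
function parseLimit(value: unknown, fallback = 20): number {
  if (typeof value !== "number" || !Number.isFinite(value) || value < 1) return fallback;
  return Math.min(100, Math.floor(value));
}
```

The `typeof value !== "number"` check is what screens out `true`, `{}`, and string-typed garbage before any arithmetic happens.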
Changed
- README Claude Code install snippet — now explicitly notes that `/plugin install agentmemory` registers hooks + skills and auto-wires the MCP server via `.mcp.json`, with no extra step.
Upgrade
```bash
npm install @agentmemory/agentmemory@0.8.9
# or standalone:
npx -y @agentmemory/mcp@0.8.9
```
If you're on Claude Code, re-run `/plugin install agentmemory@agentmemory` to pick up the new `.mcp.json` and rewritten skills, then restart Claude Code so the MCP server spawns.
Full changelog: v0.8.8...v0.8.9
v0.8.8 — stop burning Claude tokens on every tool call (#138)
Behavior change: per-observation LLM compression is now opt-in. If you were relying on LLM-generated summaries, set `AGENTMEMORY_AUTO_COMPRESS=true` in `~/.agentmemory/.env` and restart.
Fixed
- Stop silently burning Claude API tokens on every tool invocation (#138, thanks @olcor1) — the old `mem::observe` path fired `mem::compress` unconditionally on every PostToolUse hook, which called Claude via the user's `ANTHROPIC_API_KEY` to turn each raw observation into a structured summary. An active coding session could burn hundreds of thousands of tokens in minutes, which is the opposite of what a memory tool should do. The new default path skips the LLM call and uses a zero-token synthetic compression that derives `type`, `title`, `narrative`, and `files` from the raw tool name, input, and output directly. Recall and BM25 search still work.
Added
- `AGENTMEMORY_AUTO_COMPRESS` env var — default `false`. When `true`, restores the old per-observation LLM compression path. Startup banner prints a loud warning when it's on.
- `src/functions/compress-synthetic.ts` — `buildSyntheticCompression()` with camelCase-aware substring matching for `Read`/`Write`/`Edit`/`Bash`/`Grep`/`WebFetch`/`Task`/etc., file-path extraction, and 400-char narrative truncation.
- 8 regression tests in `test/auto-compress.test.ts` — full suite now 707 passing.
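A zero-token synthetic compression along the lines described above could look like this. This is a simplified sketch, not the shipped `buildSyntheticCompression()` (which also does camelCase-aware tool matching and broader file-path extraction):

```typescript
// Simplified sketch of zero-token synthetic compression: derive a structured
// summary from the raw tool call with no LLM round-trip. The real
// buildSyntheticCompression() in src/functions/compress-synthetic.ts is richer.
interface RawObservation {
  toolName: string;
  input: Record<string, unknown>;
  output: string;
}

interface Compression {
  type: string;
  title: string;
  narrative: string;
  files: string[];
}

function synthesize(obs: RawObservation): Compression {
  const file = typeof obs.input.file_path === "string" ? obs.input.file_path : undefined;
  // Crude read-vs-write classification from the tool name (illustrative).
  const type = /read|grep|glob/i.test(obs.toolName) ? "inspection" : "mutation";
  return {
    type,
    title: file ? `${obs.toolName} ${file}` : obs.toolName,
    narrative: obs.output.slice(0, 400), // 400-char truncation, as in the notes
    files: file ? [file] : [],
  };
}
```

Every field is a pure string transform of data the hook already has, which is why the default path now costs zero tokens while recall and BM25 search keep working.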
Migration
If your token usage suddenly drops after upgrading, that's working as intended. If you want richer LLM-generated summaries back:
```env
# ~/.agentmemory/.env
AGENTMEMORY_AUTO_COMPRESS=true
```
Restart the engine. Existing compressed observations on disk are untouched.
Upgrade
```bash
npm install @agentmemory/agentmemory@0.8.8
# or standalone:
npx -y @agentmemory/mcp
```
Full changelog: v0.8.7...v0.8.8
v0.8.7 — fix Docker config file missing from npm tarball
Brown-paper-bag fix for #136, reported by @stefano-medapps. If you hit `Failed to read config file '/app/config.yaml': Is a directory` on a fresh `npx @agentmemory/agentmemory`, this release fixes it.
What broke
The 0.8.6 tarball shipped `docker-compose.yml` but not `iii-config.docker.yaml`, even though the compose file mounts `./iii-config.docker.yaml:/app/config.yaml:ro`. Docker resolves missing host-path bind sources by silently creating them as empty directories, so the iii-engine container mounted an empty dir at `/app/config.yaml` and crashed with:
Error: Failed to read config file '/app/config.yaml': Is a directory (os error 21)
Fixed
- `iii-config.docker.yaml` is now in the published tarball (#136) — added to the `files` array in `package.json` alongside the regular `iii-config.yaml`. `npm pack --dry-run` confirms it's shipped at the package root (1.3kB).
Infrastructure
- Regression guard in `test/consistency.test.ts` — parses every `./<path>:<container>` bind mount in `docker-compose.yml` and asserts the source file is shipped via the `files` array. Catches the class of bug where a new bind mount is added to compose without a corresponding entry in `files`, before it reaches a release.
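The core of that guard is string parsing over compose volume entries; a sketch with the YAML loading elided and hypothetical helper names (the real test reads `docker-compose.yml` and `package.json` from disk):

```typescript
// Sketch of the consistency guard: extract host-path bind-mount sources from
// compose volume strings and report any not listed in package.json "files".
// Helper names are hypothetical; the real test parses the YAML itself.
function hostBindSources(volumes: string[]): string[] {
  return volumes
    .filter((v) => v.startsWith("./"))            // host-path binds only, skip named volumes
    .map((v) => v.split(":")[0].replace(/^\.\//, ""));
}

function missingFromFiles(volumes: string[], files: string[]): string[] {
  return hostBindSources(volumes).filter((src) => !files.includes(src));
}
```

Anything `missingFromFiles` returns is a bind mount Docker would silently materialize as an empty directory on a fresh install, which is the exact failure mode #136 describes.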
Upgrade
```bash
npm install @agentmemory/agentmemory@0.8.7
# or standalone:
npx -y @agentmemory/mcp
```
Full changelog: v0.8.6...v0.8.7