feat: add experimental dacli ask command (#186) #258
Closed
raifdmueller wants to merge 12 commits into docToolchain:main
Conversation
…olchain#186) Adds an experimental `ask` command that uses an LLM to answer questions about documentation. Searches for relevant sections, builds context, and calls an LLM provider. Supports Claude Code CLI and Anthropic API with auto-detection. Available as CLI command (`dacli ask`/`dacli a`) and MCP tool (`ask_documentation_tool`). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
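The provider auto-detection described above can be sketched roughly as follows. This is an illustrative outline, not the actual `llm_provider.py` API: class and method names (`LLMProvider`, `is_available`, `detect_provider`) are assumptions, and the `ask` bodies are stubbed out.

```python
import os
import shutil
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Abstract base for LLM backends (names are illustrative)."""

    @abstractmethod
    def ask(self, prompt: str) -> str: ...


class ClaudeCodeProvider(LLMProvider):
    @classmethod
    def is_available(cls) -> bool:
        # Available if the `claude` CLI is on PATH.
        return shutil.which("claude") is not None

    def ask(self, prompt: str) -> str:
        raise NotImplementedError("would shell out to the claude CLI")


class AnthropicAPIProvider(LLMProvider):
    @classmethod
    def is_available(cls) -> bool:
        # Available if an API key is configured in the environment.
        return bool(os.environ.get("ANTHROPIC_API_KEY"))

    def ask(self, prompt: str) -> str:
        raise NotImplementedError("would call the Anthropic API")


def detect_provider() -> LLMProvider:
    """Prefer the local CLI, fall back to the API, else fail loudly."""
    for cls in (ClaudeCodeProvider, AnthropicAPIProvider):
        if cls.is_available():
            return cls()
    raise RuntimeError("no LLM provider available")
```

With this shape, `--provider` can simply bypass `detect_provider()` and instantiate the requested backend directly.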
…hain#186)
- CLI spec: add ask command, alias, command group, LLM integration example
- API spec: add ask_documentation_tool with parameters and responses
- User manual: update tool count (9→10), add tool reference section
- Tutorial: add ask to command reference table and LLM workflow
- Arc42: add ask_documentation_tool to component responsibilities
- Use cases: add UC-11 for ask documentation
- Acceptance criteria: add 5 Gherkin scenarios, update traceability matrix

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instructs Claude Code to use dacli itself for reading and modifying project documentation instead of reading files directly. Also adds ask_documentation_tool to the MCP tools table. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds --strict-mcp-config, --mcp-config '{}', and --disable-slash-commands
flags to the Claude Code subprocess call for faster response times.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ain#186) Rewrites ask_service.py to iterate through sections one by one as described in issue docToolchain#186:
1. Extract keywords from natural language questions (stop word removal)
2. Search with individual keywords (fixes multi-word query problem)
3. Iterate sections: pass each + question + previous findings to LLM
4. Consolidate all findings into final answer with source references

Result now includes 'sources' (section paths) and 'iterations' count.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
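The iterate-and-consolidate loop in that commit can be sketched as below. All names here are hypothetical (`ask_iteratively`, `call_llm`, the prompt wording, and the `SKIP` convention are assumptions, not the real `ask_service.py` code); the point is the shape: each section is shown to the LLM together with the question and the findings accumulated so far, and a final call consolidates everything with source references.

```python
def ask_iteratively(question, sections, call_llm):
    """Answer `question` by iterating over `sections`.

    `sections` is a list of (path, text) pairs; `call_llm` is any
    callable(prompt) -> str standing in for the LLM provider.
    Returns a result dict with 'answer', 'sources', and 'iterations',
    mirroring the fields mentioned in the commit message.
    """
    findings = []
    for path, text in sections:
        prompt = (
            f"Question: {question}\n"
            f"Previous findings: {findings}\n"
            f"Section ({path}):\n{text}\n"
            "If this section is relevant, summarize what it contributes; "
            "otherwise reply SKIP."
        )
        answer = call_llm(prompt)
        if answer.strip() != "SKIP":
            findings.append((path, answer))
    final = call_llm(f"Consolidate these findings into one answer: {findings}")
    return {
        "answer": final,
        "sources": [path for path, _ in findings],  # section paths
        "iterations": len(sections),
    }
```

Because previous findings are fed forward on every step, later sections can be judged in the context of what was already found rather than in isolation.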
…lchain#186) Instead of keyword-based pre-filtering (which fails for synonyms and natural language), iterate through ALL sections and let the LLM decide relevance. This means a query for "Schraubenzieher" can find "Schraubendreher" (both German for "screwdriver"), etc.
- Removed _extract_keywords, _build_context, stop word lists
- Added _get_all_sections to walk the structure tree
- LLM evaluates each section for relevance during iteration
- Removed obsolete TestContextBuilding tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
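A tree walk like `_get_all_sections` could look like the following minimal sketch. The node shape (a dict with `title`, `content`, and `children`) is an assumption for illustration; the real structure tree may differ.

```python
def get_all_sections(node, path=""):
    """Depth-first walk over a structure tree, yielding (path, content)
    for every section. `node` is a hypothetical dict with 'title',
    'content', and 'children' keys."""
    here = f"{path}/{node['title']}" if path else node["title"]
    yield here, node.get("content", "")
    for child in node.get("children", []):
        yield from get_all_sections(child, here)
```

Being a generator, it lets the caller start the LLM iteration before the whole tree has been materialized.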
…chain#186) Default is now None (all sections) instead of 5. The whole point of the iterative approach is to let the LLM see all documentation — a default limit would silently skip sections. Users can still pass --max-sections to limit for performance if needed. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
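The `None`-means-all semantics reduces to a one-liner; the helper name `limit_sections` is hypothetical:

```python
def limit_sections(sections, max_sections=None):
    """None (the default) means all sections; an int truncates the list
    so users can trade completeness for speed via --max-sections."""
    return sections if max_sections is None else sections[:max_sections]
```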
Explain how ask iterates through all sections (no keyword search), that it's more accurate than RAG (synonyms work), and that it takes a few seconds due to per-section LLM calls. Updated max_sections default to "all" in docs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Iterate through documentation files instead of individual sections. A typical project has ~35 files vs ~460 sections, reducing LLM calls by ~13x while providing better context (full file content) per call. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Shows "Checking file 1/35: filename.adoc..." on stderr during file-by-file iteration so users see progress instead of silence. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ions
The --strict-mcp-config --mcp-config '{}' flags caused claude CLI to hang
indefinitely. Use --model haiku and --max-turns 1 instead for faster,
tool-free responses.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
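The resulting subprocess invocation might look like this sketch. The `--model haiku` and `--max-turns 1` flags come from the commit message above; `-p` is the Claude Code CLI's non-interactive print mode. Function names and the timeout value are assumptions.

```python
import subprocess


def build_claude_cmd(prompt):
    """Build the claude CLI command line: print mode, the fast haiku
    model, and a single turn so no tool-use round-trips occur."""
    return ["claude", "-p", prompt, "--model", "haiku", "--max-turns", "1"]


def call_claude(prompt, timeout=120):
    """Run the CLI and return its stdout; raises on timeout."""
    result = subprocess.run(
        build_claude_cmd(prompt),
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()
```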
Ensures progress lines appear immediately in all environments. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
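Immediate visibility boils down to writing to stderr with an explicit flush, so the line is not held in a block buffer when stderr is piped. A minimal sketch (the helper name is hypothetical; the message format follows the commit above):

```python
import sys


def report_progress(index, total, name):
    """Print 'Checking file i/N: name...' to stderr, flushing so the
    line appears immediately even when stderr is not a terminal."""
    print(f"Checking file {index}/{total}: {name}...",
          file=sys.stderr, flush=True)
```

Writing to stderr also keeps the progress chatter out of stdout, so the final answer remains cleanly pipeable.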
Collaborator
Author
Closing this PR. After extensive testing of the iterative LLM approach, we decided not to pursue it. See ADR-009 in PR #261 for the full decision documentation.
Summary
- `dacli ask "question"` command that uses an LLM to answer questions about documentation
- Available as CLI command (`dacli ask` / alias `a`) and MCP tool (`ask_documentation_tool`)
- Optional LLM dependencies (`llm` extra group)

New Files

- `src/dacli/services/llm_provider.py` - LLM provider abstraction (ABC, ClaudeCode, AnthropicAPI, auto-detect)
- `src/dacli/services/ask_service.py` - Ask service (context building, LLM orchestration)
- `tests/test_ask_experimental_186.py` - 30 tests covering providers, context building, service, CLI, MCP

Modified Files

- `src/dacli/cli.py` - Added `ask` command with `--provider` and `--max-sections` options
- `src/dacli/mcp_app.py` - Added `ask_documentation_tool` MCP tool
- `src/dacli/services/__init__.py` - Export `ask_documentation`
- `pyproject.toml` - Version bump, `llm` optional dependency group
- `src/dacli/__init__.py` - Version bump

Test plan

- `ruff check` passes with no lint errors
- `dacli ask "What is this?" --docs-root src/docs/` with Claude Code CLI
- `dacli --help` shows `ask` under "Experimental" group

Fixes #186
🤖 Generated with Claude Code