
feat: add experimental dacli ask command (#186)#258

Closed
raifdmueller wants to merge 12 commits into docToolchain:main from raifdmueller:feature/ask-command-186

Conversation

@raifdmueller
Collaborator

Summary

  • Adds experimental dacli ask "question" command that uses an LLM to answer questions about documentation
  • Searches for relevant sections using existing search, builds a context prompt, and calls an LLM provider
  • Two providers: Claude Code CLI (subprocess) and Anthropic API (SDK) with auto-detection
  • Available as CLI command (dacli ask / alias a) and MCP tool (ask_documentation_tool)
  • Anthropic SDK added as optional dependency (llm extra group)
  • Version bump: 0.4.27 → 0.4.28

New Files

  • src/dacli/services/llm_provider.py - LLM provider abstraction (ABC, ClaudeCode, AnthropicAPI, auto-detect)
  • src/dacli/services/ask_service.py - Ask service (context building, LLM orchestration)
  • tests/test_ask_experimental_186.py - 30 tests covering providers, context building, service, CLI, MCP
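
The provider abstraction in `llm_provider.py` is described as an ABC with concrete Claude Code and Anthropic implementations. A minimal sketch of that shape (method and class names are illustrative, not the module's actual interface):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Minimal provider interface; the method name is illustrative."""

    @abstractmethod
    def ask(self, prompt: str) -> str:
        ...


class EchoProvider(LLMProvider):
    """Stand-in provider so this sketch runs without any LLM."""

    def ask(self, prompt: str) -> str:
        return f"echo: {prompt}"
```

Concrete subclasses like the ClaudeCode and AnthropicAPI providers would implement `ask` via a subprocess call or an SDK request respectively.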

Modified Files

  • src/dacli/cli.py - Added ask command with --provider and --max-sections options
  • src/dacli/mcp_app.py - Added ask_documentation_tool MCP tool
  • src/dacli/services/__init__.py - Export ask_documentation
  • pyproject.toml - Version bump, llm optional dependency group
  • src/dacli/__init__.py - Version bump
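
For illustration, the described `ask` command surface (a positional question plus `--provider` and `--max-sections` options) can be mocked up with stdlib `argparse`; the real dacli CLI may use a different framework and option handling.

```python
import argparse


def build_ask_parser() -> argparse.ArgumentParser:
    # argparse stand-in for the interface described above; purely
    # illustrative, not dacli's actual CLI wiring.
    parser = argparse.ArgumentParser(prog="dacli ask")
    parser.add_argument("question")
    parser.add_argument("--provider", choices=["claude-code", "anthropic"])
    parser.add_argument("--max-sections", type=int, default=None)
    return parser
```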

Test plan

  • 30 new tests pass covering all components
  • Full test suite: 667 tests pass (no regressions)
  • ruff check passes with no lint errors
  • Manual: dacli ask "What is this?" --docs-root src/docs/ with Claude Code CLI
  • Manual: dacli --help shows ask under "Experimental" group

Fixes #186

🤖 Generated with Claude Code

raifdmueller and others added 12 commits February 7, 2026 12:01
…olchain#186)

Adds an experimental `ask` command that uses an LLM to answer questions
about documentation. Searches for relevant sections, builds context, and
calls an LLM provider. Supports Claude Code CLI and Anthropic API with
auto-detection. Available as CLI command (`dacli ask`/`dacli a`) and MCP
tool (`ask_documentation_tool`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…hain#186)

- CLI spec: add ask command, alias, command group, LLM integration example
- API spec: add ask_documentation_tool with parameters and responses
- User manual: update tool count (9→10), add tool reference section
- Tutorial: add ask to command reference table and LLM workflow
- Arc42: add ask_documentation_tool to component responsibilities
- Use cases: add UC-11 for ask documentation
- Acceptance criteria: add 5 Gherkin scenarios, update traceability matrix

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Instructs Claude Code to use dacli itself for reading and modifying
project documentation instead of reading files directly. Also adds
ask_documentation_tool to the MCP tools table.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds --strict-mcp-config, --mcp-config '{}', and --disable-slash-commands
flags to the Claude Code subprocess call for faster response times.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ain#186)

Rewrites ask_service.py to iterate through sections one by one as
described in issue docToolchain#186:
1. Extract keywords from natural language questions (stop word removal)
2. Search with individual keywords (fixes multi-word query problem)
3. Iterate sections: pass each + question + previous findings to LLM
4. Consolidate all findings into final answer with source references

Result now includes 'sources' (section paths) and 'iterations' count.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
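
The iteration loop described in steps 3–4 can be sketched as follows. The function name, prompt shapes, and result keys mirror the commit message but are otherwise assumptions; `llm` stands in for any callable that sends a prompt and returns text.

```python
def ask_iteratively(question, sections, llm):
    """Feed each section plus the question and previous findings to the
    LLM, then consolidate into a final answer with source references.

    Illustrative sketch of the described loop, not dacli's actual code.
    """
    findings, sources = [], []
    for path, text in sections:
        reply = llm(f"Question: {question}\nSection {path}:\n{text}\nFindings: {findings}")
        if reply:  # the LLM returned a (possibly empty) finding
            findings.append(reply)
            sources.append(path)
    answer = llm(f"Consolidate these findings: {findings}")
    return {"answer": answer, "sources": sources, "iterations": len(sections)}
```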
…lchain#186)

Instead of keyword-based pre-filtering (which fails for synonyms and
natural language), iterate through ALL sections and let the LLM decide
relevance. This means "Schraubenzieher" can find "Schraubendreher" etc.

- Removed _extract_keywords, _build_context, stop word lists
- Added _get_all_sections to walk the structure tree
- LLM evaluates each section for relevance during iteration
- Removed obsolete TestContextBuilding tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
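
A `_get_all_sections`-style walk over the structure tree might look like this; the nested-dict shape and slash-separated paths are assumptions for the sketch, not dacli's actual data model.

```python
def get_all_sections(tree, prefix=""):
    """Depth-first walk over a nested {title: children} structure tree,
    yielding slash-separated section paths."""
    for title, children in tree.items():
        path = f"{prefix}/{title}" if prefix else title
        yield path
        yield from get_all_sections(children, path)
```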
…chain#186)

Default is now None (all sections) instead of 5. The whole point of the
iterative approach is to let the LLM see all documentation — a default
limit would silently skip sections. Users can still pass --max-sections
to limit for performance if needed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
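
One reason `None` is a convenient "no limit" default in Python: slicing with a `None` bound returns the whole list, so no special case is needed. A trivial sketch (the helper name is made up):

```python
def limit_sections(sections, max_sections=None):
    # sections[:None] returns the full list, so None naturally
    # means "all sections" without an if-branch.
    return sections[:max_sections]
```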
Explain how ask iterates through all sections (no keyword search),
that it's more accurate than RAG (synonyms work), and that it takes
a few seconds due to per-section LLM calls. Updated max_sections
default to "all" in docs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Iterate through documentation files instead of individual sections.
A typical project has ~35 files vs ~460 sections, reducing LLM calls
by ~13x while providing better context (full file content) per call.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
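
A quick check of the claimed reduction, using the figures from the commit message (~460 sections vs ~35 files):

```python
# Back-of-the-envelope: per-file iteration makes one LLM call per file
# instead of one per section.
sections, files = 460, 35
reduction = sections / files  # roughly 13x fewer calls
```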
Shows "Checking file 1/35: filename.adoc..." on stderr during
file-by-file iteration so users see progress instead of silence.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
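
Progress reporting of this kind is typically a one-liner; a sketch (function name assumed, and the `out` parameter added only to make the sketch testable):

```python
import sys


def report_progress(index, total, name, out=sys.stderr):
    # stderr keeps progress out of captured stdout; flush=True makes each
    # line appear immediately even when the stream is block-buffered.
    print(f"Checking file {index}/{total}: {name}...", file=out, flush=True)
```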
…ions

The --strict-mcp-config --mcp-config '{}' flags caused claude CLI to hang
indefinitely. Use --model haiku and --max-turns 1 instead for faster,
tool-free responses.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
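
The replacement subprocess call might be shaped like this; the function names are illustrative, and the flags are as stated in the commit message (verify against the Claude Code CLI docs for your version).

```python
import subprocess


def build_claude_cmd(prompt: str) -> list[str]:
    # --model haiku and --max-turns 1 keep the call fast and tool-free,
    # avoiding the hang seen with the MCP-config flags.
    return ["claude", "-p", prompt, "--model", "haiku", "--max-turns", "1"]


def ask_claude_cli(prompt: str) -> str:
    result = subprocess.run(
        build_claude_cmd(prompt), capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```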
Ensures progress lines appear immediately in all environments.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@raifdmueller
Collaborator Author

Closing this PR. After extensive testing of the iterative LLM approach, we concluded that the ask command is redundant — the calling LLM can use dacli's existing tools (get_structure, get_section, search) to answer questions more effectively.

See ADR-009 in PR #261 for the full decision documentation.

@raifdmueller raifdmueller deleted the feature/ask-command-186 branch February 7, 2026 19:35
