
Fix/enhance search and logs#1121

Closed
phuongvm wants to merge 9 commits into oraios:main from phuongvm:fix/enhance-search-and-logs

Conversation


@phuongvm phuongvm commented Mar 3, 2026

🚀 What does this PR do?

This Pull Request introduces several stability fixes, cleans up noisy stack traces, and significantly improves the UX of the SearchForPatternTool when dealing with large
results.

📝 Detailed Changes & Benefits:

  1. Enhance SearchForPatternTool UX & context efficiency
  • Lowered default_max_tool_answer_chars from 150,000 to 50,000.
    • Benefit: 150k is too large: it dilutes the LLM's context (the "lost in the middle" effect), raises costs, and slows down responses. 50k is a much safer sweet spot for modern LLM context windows.
  • When a search result exceeds the max limit, instead of throwing a generic flat error ("The answer is too long..."), the tool now parses the results and returns a smart summary containing the total match count and the top 10 files with the most matches.
    • Benefit: This actively guides the AI agent to iteratively refine its search query (e.g., using relative_path or paths_include_glob) rather than hitting a dead end and giving up.
  2. Clean up noisy logs and suppress misleading errors
  • asyncio / Windows: Added a custom exception handler to the main event loop in cli.py to silently suppress ConnectionResetError [WinError 10054] and pipe-transport errors.
    • Benefit: These occur naturally when clients or language servers close connections, but they were polluting the terminal logs with scary tracebacks on Windows environments.
  • tools_base.py: Routine argument errors such as ValueError and FileNotFoundError (which happen frequently when the LLM hallucinates a path) are now logged as simple WARNINGs instead of full ERROR stack traces.
  • TypeScript LS: Added a dummy handler for the $/typescriptVersion notification to eliminate the "Unhandled method" warning on startup.
  3. Fix Terraform language server crash
  • Initialized self.completions_available = threading.Event() inside TerraformLS.__init__.
    • Benefit: Previously, the server would crash with an AttributeError when trying to call .set() on this missing attribute.
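The Terraform fix in item 3 comes down to creating the event before any thread can call .set() on it. A minimal sketch, assuming the class shape of a typical language-server wrapper (everything except the completions_available attribute and threading.Event() is illustrative):

```python
import threading


class TerraformLS:
    """Sketch of the relevant part of the Terraform language-server wrapper."""

    def __init__(self) -> None:
        # Previously this attribute was never created, so the first call to
        # self.completions_available.set() raised AttributeError.
        self.completions_available = threading.Event()

    def _on_completions_ready(self) -> None:
        # Hypothetical callback fired once the server reports readiness;
        # in the buggy version this .set() call was the crash site.
        self.completions_available.set()
```

Initializing the Event in __init__ guarantees the attribute exists for the lifetime of the instance, regardless of which thread touches it first.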

phuongvm and others added 9 commits December 22, 2025 19:37

@MischaPanch MischaPanch left a comment


Thanks for the contribution! In the future, please split independent changes across separate PRs; since these are fairly small changes overall, this time we can let it slide.

# For a specific file, only use the language server that supports it
try:
    lang_server = self._ls_manager.get_language_server(within_relative_path)
    symbol_roots = lang_server.request_full_symbol_tree(within_relative_path=within_relative_path)

should be factored into a local function
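One way to read this suggestion is to pull the per-file lookup out into a single helper. A sketch under that assumption (the helper name is invented; the fake classes exist only so the sketch runs outside Serena, standing in for the LS manager and language-server objects from the snippet above):

```python
def get_symbol_roots(ls_manager, within_relative_path: str):
    """Per-file symbol-tree lookup, factored out of the calling method."""
    lang_server = ls_manager.get_language_server(within_relative_path)
    return lang_server.request_full_symbol_tree(within_relative_path=within_relative_path)


# Minimal stand-ins so the sketch is self-contained; the real objects are
# Serena's LS manager and language-server wrappers.
class FakeLanguageServer:
    def request_full_symbol_tree(self, within_relative_path):
        return [f"symbols:{within_relative_path}"]


class FakeLSManager:
    def get_language_server(self, path):
        return FakeLanguageServer()
```

The try/except handling then stays in the caller, wrapping a single named operation instead of inline lookup logic.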

def window_log_message(msg: dict) -> None:
    log.info(f"LSP: window/logMessage: {msg}")

def workspace_configuration(params: dict) -> list[dict]:
Why is this needed? Please add a block comment.
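For illustration, a possible body with the requested block comment. The rationale stated in the comment is an assumption about why the PR added the handler; in the LSP protocol, workspace/configuration is a server-to-client request that expects one settings object per requested item:

```python
def workspace_configuration(params: dict) -> list[dict]:
    # The LSP spec requires the client to answer workspace/configuration
    # requests with one configuration object per requested item. Returning
    # an empty object for each item tells the server we have no client-side
    # settings, instead of leaving the request unanswered (which some
    # servers log as an error or wait on indefinitely).
    items = params.get("items", [])
    return [{} for _ in items]
```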

@@ -0,0 +1,20 @@
# Serena Coding Conventions

Please roll back all changes to memories.

and rate limits may apply.
"""
-default_max_tool_answer_chars: int = 150_000
+default_max_tool_answer_chars: int = 50_000

@opcode81 wdyt?

loop.default_exception_handler(context)

try:
    loop = asyncio.get_running_loop()

I have never seen excessive logs at startup and this is complex logic that dives into the MCP server's event loop. Is this really needed?

file_to_matches[match.source_file_path].append(match.to_display_string())
result = self._to_json(file_to_matches)

# Smart limit checking: if the result is too long, we provide a summary of the top 10 files instead of a generic failure message
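The summarization step being praised here can be sketched like this. The mapping shape follows file_to_matches from the snippet above; the helper name and exact summary wording are assumptions:

```python
from collections import Counter


def summarize_matches(file_to_matches: dict[str, list[str]], top_n: int = 10) -> str:
    """Build a compact summary when the full result would exceed the answer limit."""
    total = sum(len(matches) for matches in file_to_matches.values())
    counts = Counter({path: len(matches) for path, matches in file_to_matches.items()})
    lines = [f"Total matches: {total} across {len(file_to_matches)} files."]
    lines.append(f"Top {top_n} files with the most matches:")
    for path, n in counts.most_common(top_n):
        lines.append(f"  {path}: {n}")
    # Nudge the agent toward narrowing the query instead of giving up.
    lines.append("Refine the search (e.g. via relative_path or paths_include_glob) to see full results.")
    return "\n".join(lines)
```

Returning this summary instead of a flat error gives the agent concrete file paths to narrow its next query against.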

Neat, thanks! I like this, wdyt @opcode81 ?


opcode81 commented Mar 4, 2026

@phuongvm please submit one PR per logical change. In general, never submit PRs that bundle several unrelated changes:
it becomes too difficult to separate the changes we want to keep from the ones that need to change or are not wanted.
Even your commits mix unrelated changes, so we cannot even cherry-pick.

The change you made to the search tool is something we would integrate, so do issue a PR on that.

@opcode81 opcode81 closed this Mar 4, 2026