Add MongoDB Text-to-MQL Agent Tutorial to Gen-AI Showcase
This PR adds a comprehensive tutorial showcasing how to build natural language agents using MongoDB Atlas and LangChain's Text-to-MQL tools. The guide walks developers through two agent architectures:
ReAct-style agent – ideal for fast prototyping, demos, and dynamic tool usage.
LangGraph-style agent – designed for production with deterministic, auditable flows.
Key Features
Use the langchain_mongodb.agent_toolkit to generate, validate, and execute secure MongoDB aggregation pipelines from plain English.
Simplified prompt design using MONGODB_AGENT_SYSTEM_PROMPT.
Detailed comparison of ReAct vs. graph-style agents (speed, accuracy, tradeoffs).
Patterns for extending agents with memory, hybrid search (Vector + Full-Text + MQL), and observability.
Audience
This tutorial is intended for developers and teams building:
GenAI agents over structured data
Internal tools and analytics bots
Production-grade agentic systems with audit/control requirements
Nice work! I like the comparison between an easy (ReAct) and a more customized (LangGraph) agent. A few comments:
I found the wording "graph-based agent" a bit confusing. I took it to mean it was using graph-based retrieval or GraphRAG. Might want to specifically say "custom LangGraph agent" or similar.
The long outputs at every step make the notebook a bit hard to read. Consider clearing the outputs of some cells.
In the LangGraph agent, curious why there is a conditional edge after generate_query? Shouldn't check_query, run_query, etc., always be run if the agent decides to generate a query?
Instead of saving the output to a local file, could you persist it in MongoDB? You could also use the MongoDB LangGraph checkpointer to show how to use these persisted outputs as memory for the agent.
V2 Changes & Updates Made
1) Functionality added
Persistent Memory: Replaced file-based saving with MongoDBSaver (LangGraph’s built-in checkpointer) and added an LLMSummarizingMongoDBSaver class that generates human-readable step summaries via the LLM.
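The summarizing-checkpointer idea can be sketched as a thin wrapper. Everything below is an illustrative stand-in (SummarizingSaver, InMemorySaver, the toy summarize lambda): the notebook's LLMSummarizingMongoDBSaver builds on LangGraph's MongoDBSaver and uses the LLM to write the summary, not a lambda.

```python
# Sketch of the summarizing-checkpointer pattern. The class and helper
# names here are illustrative stand-ins, not the notebook's actual code.
from typing import Any, Callable

class SummarizingSaver:
    """Wraps any checkpointer-like saver and attaches a human-readable
    summary to each checkpoint before delegating the write."""

    def __init__(self, inner: Any, summarize: Callable[[dict], str]):
        self.inner = inner
        self.summarize = summarize

    def put(self, config: dict, checkpoint: dict) -> None:
        # Generate a one-line summary of the step and store it alongside
        # the raw checkpoint payload.
        checkpoint = dict(checkpoint)
        checkpoint["summary"] = self.summarize(checkpoint)
        self.inner.put(config, checkpoint)

class InMemorySaver:
    """Minimal stand-in for a real checkpointer, for demonstration only."""
    def __init__(self):
        self.store = []
    def put(self, config, checkpoint):
        self.store.append((config, checkpoint))

saver = SummarizingSaver(
    InMemorySaver(),
    summarize=lambda cp: f"step ran query: {cp.get('query', '<none>')}",
)
saver.put({"thread_id": "t1"}, {"query": "db.movies.find()"})
```

The wrapper keeps the persistence concern (MongoDB writes) separate from the summarization concern, which is what makes the LLM-backed version a drop-in replacement for the plain checkpointer.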
Quick Reference & Smoke Test: Appended a System Initialization & Quick Reference section and an if __name__ == "__main__" guard that calls test_enhanced_summarization(), so the notebook can be validated in one click.
Enhanced Memory Management: Improved inspection and cleanup utilities—inspect_thread_with_summaries_enhanced, list_conversation_threads, clear_thread_history, and memory_system_stats—for better observability and maintenance.
Interactive CLI: Refined interactive_query() with clear commands (exit, threads, switch, debug) and seamless thread switching to support live exploratory sessions.
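The CLI's command handling can be sketched as a small dispatcher. The command names (exit, threads, switch, debug) are the ones listed above; the function and state shape below are simplified stand-ins for what interactive_query() does internally.

```python
# Illustrative sketch of the command routing inside interactive_query().
def handle_command(line: str, state: dict) -> str:
    """Route one input line to a CLI action; returns a status string."""
    parts = line.strip().split()
    if not parts:
        return "empty"
    cmd, args = parts[0].lower(), parts[1:]
    if cmd == "exit":
        state["running"] = False
        return "bye"
    if cmd == "threads":
        # List known conversation threads.
        return ", ".join(sorted(state["threads"])) or "(no threads)"
    if cmd == "switch" and args:
        # Seamless thread switching: create-or-select the named thread.
        state["threads"].add(args[0])
        state["current"] = args[0]
        return f"switched to {args[0]}"
    if cmd == "debug":
        state["debug"] = not state.get("debug", False)
        return f"debug={'on' if state['debug'] else 'off'}"
    return "query"  # anything else is treated as a natural-language query
```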
2) Workflow restructuring
Introduction overhaul: Reorganized front-matter into logical sections—Overview, Use Cases, Business Applications, Technical Components, Prerequisites, and Network Setup—to guide readers step by step.
Error-handling cleanup: Converted every bare except: into except Exception as e: and surfaced exception details in logs.
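The cleanup pattern looks like this in miniature (run_query_safely and the logger name are illustrative, not the notebook's identifiers):

```python
# Bare `except:` clauses swallow the failure cause; each was narrowed to
# `except Exception as e:` with the detail surfaced in the logs.
import logging

logger = logging.getLogger("text_to_mql")

def run_query_safely(run, query):
    # `run` stands in for the notebook's query-execution callable.
    try:
        return run(query)
    except Exception as e:  # was: bare `except:`
        logger.error("Query failed: %r (%s)", query, e)
        return None
```

Narrowing to Exception (rather than a bare except) also avoids accidentally trapping KeyboardInterrupt and SystemExit.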
Control-flow clarification — rationale for keeping the conditional edge after generate_query:
Future-proofing: If the LLM ever returns a non-tool response (e.g. a clarification request or “I don’t know”), the branch cleanly skips validation and execution without graph rewrites.
Semantic clarity: Explicitly encodes “only validate/execute when a query was generated,” making the intent and node roles crystal clear.
Scalable branching: Allows easy addition of fallback or help branches later by adding new conditional paths from generate_query rather than rewiring hard-coded edges.
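The rationale above can be sketched as a routing function. The state shape and names below are illustrative; in the real graph this is the callable passed to add_conditional_edges, branching on whether the LLM emitted a tool call.

```python
# Sketch of the conditional edge after generate_query.
END = "__end__"  # stand-in for langgraph.graph.END

def route_after_generate(state: dict) -> str:
    """If generate_query produced a tool call (an MQL query), continue to
    validation; otherwise the LLM answered directly, so end the run."""
    last = state["messages"][-1]
    if last.get("tool_calls"):
        return "check_query"   # check_query -> run_query follow as normal edges
    return END                 # clarification / "I don't know" path

# In the graph definition this would be wired roughly as:
# graph.add_conditional_edges("generate_query", route_after_generate)
```

A new fallback branch is then just another return value from this one function, rather than a rewiring of hard-coded edges.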
3) New usage examples
Business applications: Added real-world query examples (analytics, recommendations, trend analysis, geographic analysis) to showcase practical value.
Demos & agent comparison: Structured demo functions (demo_basic_queries, demo_conversation_memory, compare_agents_with_memory) under clear headings, with usage instructions and side-by-side performance analysis.
Comprehensive test suite: Introduced run_comparison_tests() to automatically run simple, moderate, and complex query scenarios across both agents.
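The shape of the comparison harness is roughly the following. The scenario buckets and the agent callables are stand-ins; in the notebook the two agents are the ReAct and LangGraph implementations, and the questions come from the demo sections.

```python
# Minimal sketch of run_comparison_tests(); agents are passed in as
# plain question -> answer callables for illustration.
import time

SCENARIOS = {
    "simple":   ["How many movies are there?"],
    "moderate": ["Top 5 genres by average rating"],
    "complex":  ["Monthly revenue trend per region for 2023"],
}

def run_comparison_tests(agents: dict) -> list:
    """Run every scenario against every agent, recording latency and errors."""
    results = []
    for level, questions in SCENARIOS.items():
        for q in questions:
            for name, agent in agents.items():
                start = time.perf_counter()
                try:
                    answer, error = agent(q), None
                except Exception as e:
                    answer, error = None, str(e)
                results.append({
                    "agent": name, "level": level, "question": q,
                    "answer": answer, "error": error,
                    "seconds": round(time.perf_counter() - start, 3),
                })
    return results
```

Collecting per-question latency and error fields is what enables the side-by-side performance analysis in the demo section.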
Interactive exploration: Enhanced live notebook usage via the improved interactive_query() interface, encouraging hands-on data exploration and debugging.