2 changes: 2 additions & 0 deletions langGraph.js/.gitignore
@@ -0,0 +1,2 @@
node_modules/
.env
1 change: 1 addition & 0 deletions langGraph.js/Agent-1/.env.example
@@ -0,0 +1 @@
GROQ_API_KEY= # "your_groq_api_key"
49 changes: 49 additions & 0 deletions langGraph.js/Agent-1/README.md
@@ -0,0 +1,49 @@
# Minimal AI Chat Agent

This project showcases a lightweight “Hello World”–style AI assistant built with [LangGraph](https://github.com/langchain-ai/langgraphjs), [LangChain.js](https://js.langchain.com/), and the [Groq](https://groq.com/) language model (`llama-3.3-70b-versatile`). It accepts user input through the terminal and returns concise, accurate responses generated by the LLM.

This project provides an excellent starting point for experimenting with graph-based AI logic, custom chat flows, or Groq-powered inference.

## Setup Instructions

### 1. Install dependencies
```bash
npm install
```

### 2. Create a .env file
Create a `.env` file in the root directory (or copy `.env.example`) and add your Groq API key:

```ini
GROQ_API_KEY="your_groq_api_key_here"
```
You can get a free API key from [Groq](https://console.groq.com).
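If the key is missing at runtime, the model call fails with an unhelpful authentication error. A small guard (a sketch, not part of this PR; `requireEnv` is a hypothetical helper) can fail fast right after dotenv's `config()` runs:

```javascript
// Sketch: fail fast when a required environment variable is absent.
// Assumes dotenv's config() has already been called.
const requireEnv = (name) => {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Example: const apiKey = requireEnv("GROQ_API_KEY");
```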

### 3. Run the Assistant
```bash
node index.js
```
(Requires Node.js 18 or later.)
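Because `index.js` uses ES module `import` syntax, the project's `package.json` needs `"type": "module"` (or the file must use the `.mjs` extension). A minimal example, with illustrative version numbers:

```json
{
  "name": "agent-1",
  "type": "module",
  "dependencies": {
    "@langchain/groq": "^0.1.0",
    "@langchain/langgraph": "^0.2.0",
    "dotenv": "^16.4.0",
    "prompt-sync": "^4.2.0"
  }
}
```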

## Example Usage
Once started, you can chat with the assistant in your terminal:

```
User: What are the principles of OOP?
AI: (the LLM's generated answer appears here)
```


## Graph Structure

Here is the visual representation of the LangGraph workflow.

![Graph Structure](./assets/graph.png)
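Conceptually, the workflow is a linear pipeline. The following self-contained sketch (using a mocked model and no LangGraph dependency; all names here are illustrative) mirrors what the start → chat → end wiring does:

```javascript
// Hypothetical stand-in for the LLM call: echoes a canned reply.
const mockModel = async (messages) => ({ content: `Echo: ${messages[0].content}` });

// Hand-rolled version of the start → chat → end pipeline.
const nodes = {
  chat: async (input) => {
    const res = await mockModel([{ role: "user", content: input }]);
    return res.content;
  },
};
const edges = { __start__: "chat", chat: "__end__" };

// Walk the edge map from __start__ to __end__, threading state through each node.
const invoke = async (input) => {
  let current = edges.__start__;
  let state = input;
  while (current !== "__end__") {
    state = await nodes[current](state);
    current = edges[current];
  }
  return state;
};

invoke("hello").then((out) => console.log(out)); // prints "Echo: hello"
```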

## Results

Here is a sample output from the terminal after interacting with the assistant:

![AI Result Screenshot](./assets/result1.png)

![AI Result Screenshot](./assets/result2.png)
Binary file added langGraph.js/Agent-1/assets/graph.png
Binary file added langGraph.js/Agent-1/assets/result1.png
Binary file added langGraph.js/Agent-1/assets/result2.png
54 changes: 54 additions & 0 deletions langGraph.js/Agent-1/index.js
@@ -0,0 +1,54 @@
import { config } from "dotenv"; // Load environment variables from .env file
config();

// Import prompt-sync for user input in terminal
import promptSync from "prompt-sync";
const prompt = promptSync();

// Import necessary LangChain modules
import { ChatGroq } from "@langchain/groq";
import { Graph } from "@langchain/langgraph";

// Initialize the ChatGroq model with API key and parameters
const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY, // API key stored in .env
  model: "llama-3.3-70b-versatile", // Selected Groq model
  temperature: 0.1 // Low temperature for focused responses
});

// Create a new LangGraph workflow
const graph = new Graph();

// Add a node named "chat" that handles LLM response generation
graph.addNode("chat", async (input) => {
  // Invoke the model with the incoming user message
  const res = await model.invoke([{ role: "user", content: input }]);
  return res.content; // Return only the generated text
});

// Link start → chat → end in the graph
graph.addEdge("__start__", "chat");
graph.addEdge("chat", "__end__");

// Compile the graph for execution
const compiledGraph = graph.compile();

// Function to interact with the agent
const agentCall = async () => {
  try {
    // Prompt user for input
    const userInput = prompt("Hi! I am your AI assistant. Enter your query: ");

    // Invoke the graph with a system-style instruction + user query
    const result = await compiledGraph.invoke(
      "You are a helpful AI assistant. Provide a detailed yet concise response to the user query in about 50-100 words with accuracy. The user query is: " + userInput
    );
    console.log("Result: ", result); // Output the final model response
  }
  catch (err) {
    // Catch and display errors from graph execution or API calls
    console.error("Error running graph: ", err);
  }
};

// Start the agent interaction
agentCall();
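Note that the code above folds the system-style instruction into the user string. LangChain chat models also accept role-tagged message objects, which keeps the instruction and the query separate. A small sketch of that pattern (`buildMessages` is a hypothetical helper, not part of this PR):

```javascript
// Build a role-tagged message list instead of concatenating the
// system instruction into the user string.
const buildMessages = (systemPrompt, userInput) => [
  { role: "system", content: systemPrompt },
  { role: "user", content: userInput },
];

const msgs = buildMessages(
  "You are a helpful AI assistant. Answer in 50-100 words.",
  "What are the principles of OOP?"
);
console.log(msgs.length); // 2
```

The resulting array could then be passed to `model.invoke(msgs)` in place of the single concatenated string.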