144 commits
e6c1370
Merge branch 'release/v0.2.2'
Jan 16, 2025
9c253c6
update install requirements
Jan 16, 2025
01d7864
Merge branch 'release/v0.2.2'
Jan 16, 2025
31e984a
Merge branch 'develop/v0.2.3' of github.com:JulienZe/OmAgent into sel…
JulienZe Jan 17, 2025
e1010ef
Implement new self-consistent workflow with batch processing; add Lit…
JulienZe Jan 21, 2025
844ed76
Remove outdated JSON test files and scripts; update model configurati…
JulienZe Jan 21, 2025
c3205cc
Update .gitignore to include JSON files and remove outdated test file…
JulienZe Jan 21, 2025
2b2f107
Refactor COTConclusion class to streamline answer handling by removin…
JulienZe Jan 22, 2025
de51985
Merge branch 'om-ai-lab:main' into self_consist_cot_workflow
JulienZe Jan 22, 2025
736244d
support local-ai
yileld Jan 23, 2025
d64e71e
update code parsing for PoT
P3ngLiu Jan 23, 2025
c8b8afe
Merge pull request #197 from yileld/develop/v0.2.3
panregedit Jan 24, 2025
a948de6
Remove run_lite.py from git tracking
JulienZe Jan 24, 2025
5a1f05a
XMerge branch 'self_consist_cot_workflow' of github.com:JulienZe/OmAg…
JulienZe Jan 24, 2025
83f2ae2
Merge pull request #193 from JulienZe/self_consist_cot_workflow
panregedit Jan 25, 2025
e63209d
Merge pull request #199 from P3ngLiu/feature/v0.2.3/PoT
panregedit Jan 25, 2025
badb8a5
add video_understanding webpage example
Jan 26, 2025
a49f571
add video_understanding webpage example
Jan 26, 2025
31125e7
add video_understanding webpage example
Jan 26, 2025
55a3c2a
Fixed issue with LLM payload containing only text and updated image h…
XeonHis Jan 26, 2025
b03812b
opt video_understanding webpage example readme
Jan 26, 2025
994b451
opt video_understanding webpage example readme
Jan 26, 2025
40f8881
opt video_understanding webpage example config
Jan 26, 2025
2097469
Merge pull request #201 from Silentharry94/develop/v0.2.3
panregedit Jan 26, 2025
f8b1a20
Merge pull request #202 from XeonHis/develop/v0.2.3
panregedit Jan 26, 2025
fd627c3
update README
Jan 26, 2025
afff60f
feat: add reflexion workflow and examples
lijingcheng2021 Feb 9, 2025
3458846
refactor: remove deprecated react_pro_reflexion directories
lijingcheng2021 Feb 9, 2025
84f85c7
refactor: reorganize reflexion implementation
lijingcheng2021 Feb 10, 2025
97fc5e0
feat: update reflexion programmatic example
lijingcheng2021 Feb 10, 2025
07012bc
docs: update reflexion workflow diagram
lijingcheng2021 Feb 10, 2025
7d73576
refactor: Translate Chinese comments to English in reflexion workflow…
lijingcheng2021 Feb 10, 2025
ec4aa4a
refactor: update reflexion examples
lijingcheng2021 Feb 10, 2025
bb50f75
Merge remote-tracking branch 'origin/feature/lite_engine' into develo…
Feb 12, 2025
57be4d8
Merge branch 'feature/lite_engine' into develop/v0.2.4
lijingcheng2021 Feb 12, 2025
5195dc2
Replace fakeredis with redislite
djwu563 Feb 13, 2025
36bf6d2
Update workflow examples and core components:
lijingcheng2021 Feb 13, 2025
0f6b741
Add Tree of Thoughts (ToT) workflow example and core implementation
fourfireM Feb 13, 2025
ba12445
Update run_batch_test.py for react and react_pro:
lijingcheng2021 Feb 13, 2025
4e133b8
Merge remote-tracking branch 'origin/feature/lite_engine' into develo…
Feb 14, 2025
c96a009
video understanding support lite version, fix minor bugs in lite version
XeonHis Feb 14, 2025
5c8f0b2
Merge pull request #212 from XeonHis/develop/v0.2.4
XeonHis Feb 14, 2025
2b68c17
Merge branch 'develop/v0.2.4' into develop/v0.2.4
djwu563 Feb 14, 2025
f297904
Merge pull request #209 from djwu563/develop/v0.2.4
panregedit Feb 14, 2025
82f22aa
program exit bug fix
XeonHis Feb 14, 2025
5e09082
Merge pull request #213 from XeonHis/develop/v0.2.4
panregedit Feb 14, 2025
fd6d198
general got
zhangqianqianhzlh Feb 17, 2025
928df9a
got support lite version
zhangqianqianhzlh Feb 17, 2025
0ab6681
Fix bug where worker lacked workflow_instance_id
djwu563 Feb 17, 2025
99d5656
Merge pull request #215 from djwu563/develop/v0.2.4
panregedit Feb 17, 2025
e2f5e1b
Add caculator and code interpreter
qiandl2000 Feb 18, 2025
c00c4d6
delete d
qiandl2000 Feb 18, 2025
5b6de3e
Fix bug where SharedMemSTM does not release memory after program exit
djwu563 Feb 18, 2025
9069e12
Merge pull request #217 from djwu563/develop/v0.2.4
panregedit Feb 18, 2025
ee49ca6
Merge pull request #216 from qiandl2000/develop/v0.2.4
XeonHis Feb 19, 2025
3bf87a4
Add self-consistency chain of thought (SC-COT) implementation:
lijingcheng2021 Feb 20, 2025
5645964
Fix bug in SharedMemSTM initialization
djwu563 Feb 20, 2025
d0ee397
Merge branch 'develop/v0.2.4' of https://github.com/djwu563/OmAgent i…
djwu563 Feb 20, 2025
80c9dd3
Merge pull request #218 from djwu563/develop/v0.2.4
panregedit Feb 20, 2025
475f664
update PoT for lite engine
P3ngLiu Feb 20, 2025
dc00d59
update examples for lite engine
P3ngLiu Feb 20, 2025
fd0e5c3
Fix memory leak bug in SharedMemSTM
djwu563 Feb 20, 2025
076f786
Merge pull request #219 from djwu563/develop/v0.2.4
panregedit Feb 21, 2025
b2696fc
add RAP workflow and example
xrc10 Feb 21, 2025
1773188
Merge branch 'develop/v0.2.4' into ToT
panregedit Feb 21, 2025
2d06e75
Merge pull request #211 from lijingcheng2021/develop/v0.2.4
panregedit Feb 21, 2025
855e553
Merge pull request #210 from fourfireM/ToT
panregedit Feb 21, 2025
b702808
Merge pull request #214 from zhangqianqianhzlh/develop/v0.2.4
panregedit Feb 21, 2025
a4876c2
Merge pull request #220 from P3ngLiu/feature/v0.2.4/PoT
panregedit Feb 21, 2025
8455f78
Merge pull request #221 from xrc10/develop/v0.2.4
panregedit Feb 21, 2025
057cd52
add lite_client for webpage
djwu563 Feb 21, 2025
77965e7
Merge pull request #222 from djwu563/develop/v0.2.4
panregedit Feb 21, 2025
51789c8
add omagent maker
kyusonglee Feb 23, 2025
b0c489e
update README
Feb 24, 2025
45fce02
add run_agent_full doc
Feb 24, 2025
979c2d9
Merge pull request #223 from panregedit/develop/v0.2.4
panregedit Feb 24, 2025
b4e3e49
update python version in cicd
Feb 24, 2025
e7eaf49
Merge pull request #225 from panregedit:develop/v0.2.4
panregedit Feb 24, 2025
e5915d2
update package version
Feb 24, 2025
d8f7189
Merge pull request #227 from panregedit:fix/update_package_version
panregedit Feb 24, 2025
d116ac6
refactor: simplify Message class and remove _msg2req method in Openai…
XeonHis Feb 26, 2025
64174be
add omagent4agent stable version
kyusonglee Feb 26, 2025
f14f309
restructure omagent4agent
kyusonglee Mar 4, 2025
64a649a
Merge pull request #231 from XeonHis/develop/v0.2.5
panregedit Mar 4, 2025
3645d90
fix: fix the issue where passing None to stop variable caused error
XeonHis Mar 4, 2025
7ca468c
Merge pull request #232 from XeonHis/develop/v0.2.5
panregedit Mar 4, 2025
1bcff70
update
kyusonglee Mar 4, 2025
53b4d78
Update base.py for deepseek
qiandl2000 Mar 5, 2025
6d21947
Update calculator.py
qiandl2000 Mar 5, 2025
a0469a3
Add workerVerifier
kyusonglee Mar 5, 2025
e6c9efa
update worker verifier
kyusonglee Mar 5, 2025
14a86e0
omagent-core load fix bug
kyusonglee Mar 7, 2025
6d00df4
fix error
kyusonglee Mar 7, 2025
387fc16
modify agent and configs
kyusonglee Mar 7, 2025
9754e6b
add tester workflow
kyusonglee Mar 7, 2025
e108ed8
add run_test run_web
kyusonglee Mar 7, 2025
bd0e339
add test work
kyusonglee Mar 7, 2025
c03a328
update workflow
kyusonglee Mar 8, 2025
8741763
fix rule base verifier
kyusonglee Mar 8, 2025
17d5e7e
modify verifier
kyusonglee Mar 8, 2025
a7ce545
fix bug for workflow
kyusonglee Mar 8, 2025
1321dd7
add rule
kyusonglee Mar 8, 2025
5324552
add tools
kyusonglee Mar 10, 2025
b7723d3
fix bug
kyusonglee Mar 10, 2025
ef99932
Return exception information to the interactive client.
djwu563 Mar 11, 2025
e23a23c
Merge pull request #237 from djwu563/develop/v0.2.5
panregedit Mar 11, 2025
8c7246b
change agents folder to make it clear
kyusonglee Mar 11, 2025
b98bbc7
add debug agent
kyusonglee Mar 11, 2025
62f258b
fix bug including debug agent
kyusonglee Mar 11, 2025
481c286
update
kyusonglee Mar 12, 2025
9873bb6
fix bugs
kyusonglee Mar 12, 2025
b7f090b
fix bug for Rule-based verifier
kyusonglee Mar 12, 2025
a9d8596
improve debug agent
kyusonglee Mar 12, 2025
5a4501e
Merge pull request #233 from qiandl2000/v0.2.5
XeonHis Mar 13, 2025
317148a
fix error for vlm input
kyusonglee Mar 18, 2025
e50a15c
update for better workflow
kyusonglee Mar 19, 2025
af9a92b
small update
kyusonglee Mar 19, 2025
b411c13
recent update
kyusonglee Mar 20, 2025
4ec9d10
update
kyusonglee Mar 20, 2025
5f3d1b1
fix bug for conduct workflow
kyusonglee Mar 20, 2025
4d58a47
support mcp
kyusonglee Mar 26, 2025
7419af3
update example
kyusonglee Mar 26, 2025
46ddbd3
update
kyusonglee Mar 26, 2025
545b7d3
update
kyusonglee Mar 26, 2025
8322c21
add mcp.json
kyusonglee Mar 26, 2025
31ed29a
change to deepseek
kyusonglee Apr 2, 2025
c2cbf3d
add tool mcp call for omagent4agent
kyusonglee Apr 2, 2025
10d72b7
update for latest omagent4agent
kyusonglee Apr 7, 2025
ee315c8
fix bug
kyusonglee Apr 7, 2025
e659c08
fix bug
kyusonglee Apr 7, 2025
b8ced32
fix minor bug
kyusonglee Apr 7, 2025
86f2181
fix debug agent
kyusonglee Apr 7, 2025
0445e42
delete brower use
kyusonglee Apr 7, 2025
f344575
add image output
kyusonglee Apr 8, 2025
ede4c25
use redis lite
qiandl2000 Apr 16, 2025
9b7e423
change for examples
qiandl2000 Apr 21, 2025
e563990
add vlm-r1-mcp
qiandl2000 May 6, 2025
923d193
Update test_mcp.py
qiandl2000 May 6, 2025
d0f8bb2
add both tool
qiandl2000 May 6, 2025
4803026
support sse in mcp
kyusonglee May 12, 2025
52769c9
Merge pull request #12 from qiandl2000/a4a_redislite
kyusonglee May 14, 2025
74a9eab
Merge pull request #13 from qiandl2000/mcp_vlm_r1
kyusonglee May 14, 2025
fc71480
Merge pull request #14 from kyusonglee/develop/v0.2.5
kyusonglee May 14, 2025
8277d71
Merge branch 'main' into feature/lite_engine
kyusonglee May 14, 2025
2 changes: 1 addition & 1 deletion .github/workflows/workflow.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: '3.10'
python-version: '3.11'
- name: Install Poetry
uses: snok/install-poetry@v1
- name: Install dependencies
8 changes: 7 additions & 1 deletion .gitignore
@@ -154,4 +154,10 @@ video_cache/
*.db

# vscode
.vscode
.vscode
*copy*

# JSON files
*.json
!mcp.json
File renamed without changes.
25 changes: 8 additions & 17 deletions README.md
@@ -27,6 +27,8 @@ OmAgent is python library for building multimodal language agents with ease. We
- A flexible agent architecture that provides graph-based workflow orchestration engine and various memory type enabling contextual reasoning.
- Native multimodal interaction support include VLM models, real-time API, computer vision models, mobile connection and etc.
- A suite of state-of-the-art unimodal and multimodal agent algorithms that goes beyond simple LLM reasoning, e.g. ReAct, CoT, SC-Cot etc.
- Supports local deployment of models. You can deploy your own models locally using [Ollama](./docs/concepts/models/Ollama.md) or [LocalAI](./examples/video_understanding/docs/local-ai.md).
- Fully distributed architecture with support for custom scaling. Also supports a Lite mode, eliminating the need for middleware deployment.


## 🛠️ How To Install
@@ -40,11 +42,6 @@ OmAgent is python library for building multimodal language agents with ease. We
```bash
pip install -e omagent-core
```
- Set Up Conductor Server (Docker-Compose) Docker-compose includes conductor-server, Elasticsearch, and Redis.
```bash
cd docker
docker-compose up -d
```

## 🚀 Quick Start
### Configuration
@@ -56,9 +53,7 @@ The container.yaml file is a configuration file that manages dependencies and se
cd examples/step1_simpleVQA
python compile_container.py
```
This will create a container.yaml file with default settings under `examples/step1_simpleVQA`.


This will create a container.yaml file with default settings under `examples/step1_simpleVQA`. For more information about the container.yaml configuration, please refer to the [container module](./docs/concepts/container.md).

2. Configure your LLM settings in `configs/llms/gpt.yml`:

@@ -69,14 +64,6 @@ The container.yaml file is a configuration file that manages dependencies and se
```
You can use a locally deployed Ollama to call your own language model. The tutorial is [here](docs/concepts/models/Ollama.md).

3. Update settings in the generated `container.yaml`:
- Configure Redis connection settings, including host, port, credentials, and both `redis_stream_client` and `redis_stm_client` sections.
- Update the Conductor server URL under conductor_config section
- Adjust any other component settings as needed


For more information about the container.yaml configuration, please refer to the [container module](./docs/concepts/container.md)

### Run the demo

1. Run the simple VQA demo with webpage GUI:
@@ -91,7 +78,11 @@ For more information about the container.yaml configuration, please refer to the

## 🤖 Example Projects
### 1. Video QA Agents
Build a system that can answer any questions about uploaded videos with video understanding agents. See Details [here](examples/video_understanding/README.md).
Build a system that can answer any questions about uploaded videos with video understanding agents. We provide a Gradio-based application; see details [here](examples/video_understanding/README.md).
<p >
<img src="docs/images/video_understanding_gradio.png" width="500"/>
</p>

More about the video understanding agent can be found in [paper](https://arxiv.org/abs/2406.16620).
<p >
<img src="docs/images/OmAgent.png" width="500"/>
22 changes: 21 additions & 1 deletion docs/concepts/clients/input_and_callback.md
@@ -7,7 +7,7 @@ The input has only one method:
- `workflow_instance_id` is the ID of the workflow instance.
- `input_prompt` is the information prompting the user on what to input, which can be empty.

The callback has five methods:
The callback has the following methods:
- `send_incomplete(agent_id, msg, took=0, msg_type=MessageType.TEXT.value, prompt_tokens=0, output_tokens=0, filter_special_symbols=True)`
- `send_block(agent_id, msg, took=0, msg_type=MessageType.TEXT.value, interaction_type=InteractionType.DEFAULT.value, prompt_tokens=0, output_tokens=0, filter_special_symbols=True)`
- `send_answer(agent_id, msg, took=0, msg_type=MessageType.TEXT.value, prompt_tokens=0, output_tokens=0, filter_special_symbols=True)`
@@ -26,5 +26,25 @@ The callback has five methods:
- `info(agent_id, progress, message)`
- The required parameters for the `info` method are `agent_id`, `progress`, and `message`. `agent_id` is the ID of the workflow instance, `progress` is the program name, and `message` is the progress information.

- `show_image(agent_id, progress, image)`
- Displays an image in the main chat interface.
- `agent_id` is the ID of the workflow instance.
- `progress` is a short description of what the image represents (not displayed in the interface).
- `image` can be any of the following formats:
- URL string (starting with 'http://' or 'https://')
- base64 encoded image string
- PIL Image object (will be converted to PNG)
- In CLI mode, this will just log a message about the image, but in WebpageClient it will display the actual image in the main chat area.

- `info_image(agent_id, progress, image)`
- Displays an image in the info panel (right side) only.
- `agent_id` is the ID of the workflow instance.
- `progress` is a short description of what the image represents (shown in the info panel).
- `image` can be any of the following formats:
- URL string (starting with 'http://' or 'https://')
- base64 encoded image string
- PIL Image object (will be converted to PNG)
- In CLI mode, this will just log a message about the image, but in WebpageClient it will display the actual image in the info panel.

- `error(agent_id, error_code, error_info, **kwargs)`
- The required parameters for the `error` method are `agent_id`, `error_code`, and `error_info`. `agent_id` is the ID of the workflow instance, `error_code` is the error code, and `error_info` is the error information.
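The three accepted image formats for `show_image` and `info_image` can be sketched with a small, hypothetical helper (not part of OmAgent's API) that classifies an argument the same way the docs above describe — URL string, base64 string, or a PIL-style object exposing `.save()`:

```python
# Hypothetical helper mirroring the image formats that show_image/info_image
# accept per the docs above: a URL string, a base64-encoded string, or a
# PIL Image (which OmAgent converts to PNG).
import base64

def classify_image_arg(image):
    """Return which of the documented formats `image` appears to be."""
    if isinstance(image, str):
        if image.startswith(("http://", "https://")):
            return "url"
        try:
            base64.b64decode(image, validate=True)
            return "base64"
        except Exception:
            raise ValueError("string is neither a URL nor valid base64")
    if hasattr(image, "save"):  # PIL.Image.Image exposes .save()
        return "pil"
    raise TypeError("unsupported image format")
```

This is only an illustration of the contract; the real callback implementations handle conversion and display themselves.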
43 changes: 43 additions & 0 deletions docs/concepts/tool_system/mcp.md
@@ -0,0 +1,43 @@
# Model Context Protocol (MCP)

OmAgent's Model Context Protocol (MCP) system enables seamless integration with external AI models and services through a standardized interface. This protocol allows OmAgent to dynamically discover, register, and execute tools from multiple external servers, extending its capabilities without modifying the core codebase.

## MCP Configuration File

MCP servers are configured in a JSON file, typically named `mcp.json`. This file defines the servers that OmAgent can connect to. Each server has a unique name, command to execute, arguments, and environment variables.

Here's an example of a basic `mcp.json` file that configures multiple MCP servers:

```json
{
"mcpServers": {
"desktop-commander": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@wonderwhy-er/desktop-commander",
"--key",
"your-api-key-here"
]
},
.....
}
```

By default, OmAgent looks for this file inside the tool_system directory (`omagent-core/src/omagent_core/tool_system/mcp.json`), from which it is loaded automatically.

## Executing MCP Tools

MCP tools can be executed just like any other tool using the ToolManager:

```python
# Let the ToolManager choose the appropriate tool
x = tool_manager.execute_task("command ls -l for the current directory")
print(x)
```

For more details on creating MCP servers, refer to the [MCP specification](https://github.com/modelcontextprotocol/python-sdk).
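A quick sanity check of an `mcp.json` before handing it to OmAgent can be sketched as below. The schema (an `mcpServers` mapping whose entries carry a `command`) is assumed from the example above; the validator function and sample config are hypothetical, not part of OmAgent:

```python
# Minimal sketch (assumed schema, based on the mcp.json example above):
# verify that the file defines an "mcpServers" object and that every
# server entry names a "command" to launch.
import json

def load_mcp_config(text):
    cfg = json.loads(text)
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict):
        raise ValueError("mcp.json must contain an 'mcpServers' object")
    for name, spec in servers.items():
        if "command" not in spec:
            raise ValueError(f"server '{name}' is missing 'command'")
    return servers

sample = '{"mcpServers": {"desktop-commander": {"command": "npx", "args": ["-y"]}}}'
print(sorted(load_mcp_config(sample)))  # → ['desktop-commander']
```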
3 changes: 3 additions & 0 deletions docs/images/reflexion.png
3 changes: 3 additions & 0 deletions docs/images/video_understanding_gradio.png
73 changes: 73 additions & 0 deletions docs/tutorials/run_agent_full.md
@@ -0,0 +1,73 @@
# Run the full version of OmAgent
OmAgent now supports free switching between the Full and Lite versions. The differences between the two are as follows:
- The Full version has better concurrency performance, lets you view workflows and run logs through the orchestration system GUI, and supports more device types (e.g. smartphone apps). Note that running the Full version requires deploying middleware dependencies with Docker.
- The Lite version is suitable for developers who want to get started faster. It eliminates the steps of installing and deploying Docker and is suitable for rapid prototyping and debugging.

## Instructions for using the Full version
### 🛠️ How To Install
- python >= 3.10
- Install omagent_core
Use pip to install the latest omagent_core release.
```bash
pip install omagent-core
```
Or install the latest version from the source code:
```bash
pip install -e omagent-core
```
- Set up the Conductor server (Docker-Compose). The Docker-Compose setup includes conductor-server, Elasticsearch, and Redis.
```bash
cd docker
docker-compose up -d
```

### 🚀 Quick Start
#### Configuration

The container.yaml file is a configuration file that manages dependencies and settings for different components of the system. To set up your configuration:

1. Generate the container.yaml file:
```bash
cd examples/step1_simpleVQA
python compile_container.py
```
This will create a container.yaml file with default settings under `examples/step1_simpleVQA`.



2. Configure your LLM settings in `configs/llms/gpt.yml`:

- Set your OpenAI API key or compatible endpoint through environment variable or by directly modifying the yml file
```bash
export custom_openai_key="your_openai_api_key"
export custom_openai_endpoint="your_openai_endpoint"
```
You can use a locally deployed Ollama to call your own language model. The tutorial is [here](docs/concepts/models/Ollama.md).

3. Update settings in the generated `container.yaml`:
- Configure Redis connection settings, including host, port, credentials, and both `redis_stream_client` and `redis_stm_client` sections.
- Update the Conductor server URL under conductor_config section
- Adjust any other component settings as needed


For more information about the container.yaml configuration, please refer to the [container module](./docs/concepts/container.md)

#### Run the demo

1. Set OmAgent to the Full version by setting the `OMAGENT_MODE` environment variable:
```bash
export OMAGENT_MODE=full
```
or
```python
os.environ["OMAGENT_MODE"] = "full"
```
2. Run the simple VQA demo with webpage GUI:

For WebpageClient usage, input and output are in the webpage:
```bash
cd examples/step1_simpleVQA
python run_webpage.py
```
Open the webpage at `http://127.0.0.1:7860`; you will see the following interface:
<img src="docs/images/simpleVQA_webpage.png" width="400"/>
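The mode switch above can be sketched as a tiny guard. The environment variable name `OMAGENT_MODE` and the values `full`/`lite` come from this tutorial; the helper function itself is hypothetical, and the variable must be set before OmAgent modules are imported (as the `run_programmatic.py` diff below also does):

```python
# Sketch of the OMAGENT_MODE switch described above. Set the variable
# before importing omagent_core so the right engine is selected.
import os

def set_omagent_mode(mode):
    """Validate and export OMAGENT_MODE ('full' or 'lite')."""
    if mode not in ("full", "lite"):
        raise ValueError("OMAGENT_MODE must be 'full' or 'lite'")
    os.environ["OMAGENT_MODE"] = mode
    return os.environ["OMAGENT_MODE"]
```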
5 changes: 3 additions & 2 deletions examples/PoT/eval_aqua_zeroshot.py
@@ -53,6 +53,7 @@ def main():
# Setup logging and paths
logging.init_logger("omagent", "omagent", level="INFO")
CURRENT_PATH = Path(__file__).parents[0]
container.register_stm("SharedMemSTM")

# Initialize agent modules and configuration
registry.import_module(project_path=CURRENT_PATH.joinpath('agent'))
@@ -87,7 +88,7 @@ def main():
for r, w in zip(res, workflow_input_list):
output_json.append({
"id": w['id'],
"question": w['query'],
"question": w['query']+'\nOptions: '+str(question['options']),
"last_output": r['last_output'],
"prompt_tokens": r['prompt_tokens'],
"completion_tokens": r['completion_tokens']
Expand All @@ -104,7 +105,7 @@ def main():
# Save results to output file
if not os.path.exists(args.output_path):
os.makedirs(args.output_path)
with open(f'{args.output_path}/{dataset_name}_{model_id}_POT_output.json', 'w') as f:
with open(f'{args.output_path}/{dataset_name}_{model_id.replace("/","-")}_POT_output.json', 'w') as f:
json.dump(final_output, f, indent=4)

# Cleanup
3 changes: 2 additions & 1 deletion examples/PoT/eval_gsm8k_fewshot.py
@@ -117,6 +117,7 @@ def main():
# Setup logging and paths
logging.init_logger("omagent", "omagent", level="INFO")
CURRENT_PATH = Path(__file__).parents[0]
container.register_stm("SharedMemSTM")

# Initialize agent modules and configuration
registry.import_module(project_path=CURRENT_PATH.joinpath('agent'))
@@ -167,7 +168,7 @@ def main():
# Save results to output file
if not os.path.exists(args.output_path):
os.makedirs(args.output_path)
with open(f'{args.output_path}/{dataset_name}_{model_id}_POT_output.json', 'w') as f:
with open(f'{args.output_path}/{dataset_name}_{model_id.replace("/","-")}_POT_output.json', 'w') as f:
json.dump(final_output, f, indent=4)

# Cleanup
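Both eval-script diffs above replace `/` in `model_id` before building the output filename — IDs like `deepseek/deepseek-chat` would otherwise create unintended subdirectories. A small standalone sketch of that fix (the function name and example ID are illustrative, not from the repo):

```python
# Sketch of the filename fix in the PoT eval scripts: sanitize the
# model_id so slashes don't become directory separators in the path.
def output_filename(dataset_name, model_id, output_path="outputs"):
    safe_id = model_id.replace("/", "-")
    return f"{output_path}/{dataset_name}_{safe_id}_POT_output.json"

print(output_filename("aqua", "deepseek/deepseek-chat"))
# → outputs/aqua_deepseek-deepseek-chat_POT_output.json
```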
9 changes: 7 additions & 2 deletions examples/PoT/run_programmatic.py
@@ -1,10 +1,13 @@
# Import required modules and components
# Import core modules and components for the Program of Thought (PoT) workflow
import os
os.environ["OMAGENT_MODE"] = "lite"

from omagent_core.utils.container import container
from omagent_core.engine.workflow.conductor_workflow import ConductorWorkflow
from omagent_core.advanced_components.workflow.pot.workflow import PoTWorkflow
from pathlib import Path
from omagent_core.utils.registry import registry
from omagent_core.clients.devices.programmatic.client import ProgrammaticClient
from omagent_core.clients.devices.programmatic import ProgrammaticClient
from omagent_core.utils.logger import logging


@@ -17,6 +20,8 @@
# Load custom agent modules from the project directory
registry.import_module(project_path=CURRENT_PATH.joinpath('agent'))

container.register_stm("SharedMemSTM")

# Load container configuration from YAML file
container.from_config(CURRENT_PATH.joinpath('container.yaml'))
