This is the official Python implementation of "ConSensus: Multi-Agent Collaboration for Multimodal Sensing" (ACL '26 Findings, long paper) by Hyungjun Yoon, Mohammad Malekzadeh, Sung-Ju Lee, Fahim Kawsar, and Lorena Qendro.
## Installation

- Create a conda environment:

  ```bash
  conda create -n consensus python=3.12
  conda activate consensus
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up API keys (as needed):
  - For OpenAI:

    ```bash
    export OPENAI_API_KEY=your_key
    ```

  - For Together AI:

    ```bash
    export TOGETHER_API_KEY=your_key
    ```

  - For Ollama: see [Ollama Setup](#ollama-setup) below.
## Datasets

The following datasets are supported:

- WESAD: `preprocess_WESAD.py`
- ActionSense: `preprocess_ActionSense.py`
- PAMAP2: `preprocess_PAMAP2.py`
- SleepEDF: `preprocess_SleepEDF.py`
- MMFit: `preprocess_MMFit.py`
## Preprocessing

- Process raw data:

  ```bash
  python preprocess/preprocess_<dataset>.py --path <data_path> --out_dir <processed_output_path>
  ```

- Split processed data:

  ```bash
  python preprocess/split.py \
      --data_path <processed_output_path> \
      --test_per_class 30 \
      --examples_per_class 1 \
      --output_path <split_output_path>
  ```
To simulate missing modalities (e.g., 30% missing):

- Process with a missing ratio:

  ```bash
  python preprocess/preprocess_<dataset>.py \
      --path <data_path> \
      --out_dir <processed_output_path> \
      --missing_ratio 0.3
  ```

- Split with the `--missing` flag:

  ```bash
  python preprocess/split.py \
      --data_path <processed_output_path> \
      --test_per_class 30 \
      --examples_per_class 1 \
      --output_path <split_output_path> \
      --missing
  ```
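For intuition, here is a minimal sketch of the kind of masking a `--missing_ratio` of 0.3 simulates. The independent per-modality dropout and the modality names are our assumptions for illustration, not the preprocessing scripts' actual logic:

```python
# Illustrative only: drop each modality independently with probability
# missing_ratio. This is NOT the repository's implementation.
import random

def mask_modalities(sample: dict, missing_ratio: float, rng: random.Random) -> dict:
    """Replace each modality's signal with None with probability missing_ratio."""
    return {mod: (None if rng.random() < missing_ratio else signal)
            for mod, signal in sample.items()}

rng = random.Random(0)  # fixed seed for a reproducible mask
sample = {"acc": [0.1, 0.2], "ecg": [0.7, 0.9], "eda": [0.3, 0.4]}  # toy signals
print(mask_modalities(sample, missing_ratio=0.3, rng=rng))
```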
## Configuration

The configuration file (YAML) controls all experiment parameters. An example is provided in `config/example.yaml`.
| Parameter | Description | Default |
|---|---|---|
| `seed` | Random seed for reproducibility. | `0` |
| `temperature` | Sampling temperature: `0.0` for deterministic (greedy) decoding, `0.7` for self-consistency. | `0.0` |
| `top_p` | Nucleus sampling probability; `1.0` disables nucleus filtering (uses the full distribution). | `1.0` |
| `use_large_context` | Context window size: `false` = 15,000 tokens, `true` = 30,000 tokens. Enable for large datasets. | `false` |
| `num_instances` | Number of sampled instances (only for the self-consistency baseline). | `3` |
| `models` | List of model strings specifying which LLMs to use (see [Supported Providers](#supported-providers)). | — |
| `data_path` | Path to the preprocessed and split data directory. | — |
| `log_path` | Directory where logs and results are saved. | — |
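For illustration, a minimal loader matching this table might look as follows. This is a sketch, not the repository's actual config code; the field names and defaults are taken from the table above:

```python
# Hypothetical config loader: applies the table's defaults and checks that
# the keys without defaults are present. Not the repository's implementation.
import yaml  # pip install pyyaml

DEFAULTS = {"seed": 0, "temperature": 0.0, "top_p": 1.0,
            "use_large_context": False, "num_instances": 3}

def load_config(path: str) -> dict:
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    for key in ("models", "data_path", "log_path"):  # required, no defaults
        if key not in cfg:
            raise KeyError(f"missing required config key: {key}")
    return {**DEFAULTS, **cfg}
```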
To reproduce the results reported in the paper, use the following settings:

| Parameter | Value | Notes |
|---|---|---|
| Temperature | `0.0` | Greedy decoding for deterministic outputs |
| Top-p | `1.0` | No nucleus filtering |
| Context length | 15,000 tokens | `use_large_context: false` (30,000 for large datasets) |
| Seed | `0` | Controls data shuffling order |
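Because the seed only governs data shuffling, fixing it makes the evaluation order identical across runs. A one-line illustration (not repository code):

```python
# A fixed seed yields the same shuffle on every execution.
import random

items = list(range(10))
random.Random(0).shuffle(items)  # seed = 0, as in the paper's settings
print(items)                     # identical order across runs
```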
## Supported Providers

Models are specified as `<provider>:<model_name>` strings in the `models` list:

- Ollama (local): `ollama:<model_name>`, e.g., `ollama:llama3`, `ollama:llama3.1:70b`
- OpenAI: `openai:<model_name>`, e.g., `openai:gpt-4`; requires `OPENAI_API_KEY`
- Together AI: `together:<model_name>`, e.g., `together:meta-llama/Llama-3-8b-chat-hf`; requires `TOGETHER_API_KEY`

For Ollama with a remote server, use the URL syntax:

```
ollama:url:<host>:<port>/<model_name>
```

For example: `ollama:url:http://192.168.1.100:11434/llama3`
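To make the format concrete, here is a hypothetical parser for these model strings. It mirrors the syntax above but is not the repository's implementation:

```python
# Illustrative only: parse "<provider>:<model_name>" strings, including the
# "ollama:url:..." remote form. Not the repository's parser.
def parse_model_string(spec: str) -> tuple[str, str, str | None]:
    """Return (provider, model_name, base_url); base_url is None for local/API models."""
    provider, rest = spec.split(":", 1)
    if provider == "ollama" and rest.startswith("url:"):
        # e.g. "ollama:url:http://192.168.1.100:11434/llama3"
        base_url, model_name = rest[len("url:"):].rsplit("/", 1)
        return provider, model_name, base_url
    return provider, rest, None

assert parse_model_string("openai:gpt-4") == ("openai", "gpt-4", None)
assert parse_model_string("ollama:llama3.1:70b") == ("ollama", "llama3.1:70b", None)
assert parse_model_string("ollama:url:http://192.168.1.100:11434/llama3") == (
    "ollama", "llama3", "http://192.168.1.100:11434")
```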
Example configuration running three local Llama 3 agents via Ollama:

```yaml
seed: 0
temperature: 0.0
top_p: 1.0
use_large_context: false
models:
  - ollama:llama3
  - ollama:llama3
  - ollama:llama3
data_path: ./data/WESAD/processed
log_path: ./logs/WESAD_llama3
```

The same experiment with three GPT-4 agents via OpenAI:

```yaml
seed: 0
temperature: 0.0
top_p: 1.0
use_large_context: false
models:
  - openai:gpt-4
  - openai:gpt-4
  - openai:gpt-4
data_path: ./data/WESAD/processed
log_path: ./logs/WESAD_gpt4
```

## Ollama Setup

Ollama allows running open-source LLMs locally. Follow these steps to set it up for ConSensus:
- Install Ollama:

  ```bash
  curl -fsSL https://ollama.com/install.sh | sh
  ```

- Start the Ollama server:

  ```bash
  ollama serve
  ```

  This starts the server at `http://localhost:11434` by default. Keep it running in a separate terminal.

- Pull a model:

  ```bash
  ollama pull llama3
  ```

  Replace `llama3` with any supported model. Common choices:

  - `llama3`: Llama 3 8B
  - `llama3.1:70b`: Llama 3.1 70B (requires ~40 GB of VRAM)
  - `gemma2`: Google Gemma 2
  - `qwen2.5`: Qwen 2.5

- Verify the model is available:

  ```bash
  ollama list
  ```

- Run the experiment with the Ollama model in your config:

  ```yaml
  models:
    - ollama:llama3
    - ollama:llama3
    - ollama:llama3
  ```
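Optionally, you can confirm the server and model are reachable before launching an experiment. This check uses Ollama's standard `/api/tags` endpoint; the script itself is ours, not part of ConSensus:

```python
# Sanity check: is the local Ollama server up, and is llama3 pulled?
import requests  # pip install requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
names = [m["name"] for m in resp.json().get("models", [])]
print("available models:", names)
assert any(n.startswith("llama3") for n in names), "run `ollama pull llama3` first"
```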
Additional options:

- Custom host/port: Set the `OLLAMA_HOST` environment variable before starting the server:

  ```bash
  OLLAMA_HOST=0.0.0.0:11434 ollama serve
  ```

  This is useful for serving Ollama on a remote machine accessible over the network.

- GPU selection: Use `CUDA_VISIBLE_DEVICES` to control which GPUs Ollama uses:

  ```bash
  CUDA_VISIBLE_DEVICES=0,1 ollama serve
  ```

- Remote Ollama server: If Ollama runs on a different machine, use the URL syntax in the config:

  ```yaml
  models:
    - ollama:url:http://192.168.1.100:11434/llama3
  ```

- Context length: The `use_large_context` config parameter controls the context window passed to Ollama (`num_ctx`). This overrides Ollama's default of 2,048 tokens, setting it to 15,000 (default) or 30,000 (large) to accommodate the multi-agent prompts; a sketch of how `num_ctx` is passed over the API follows below.
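As a reference for the context-length note above, this standalone sketch shows how `num_ctx` can be passed to Ollama's `/api/generate` REST endpoint. The prompt and values here are placeholders, not ConSensus internals:

```python
# Request a 15,000-token context window (num_ctx) from Ollama's REST API,
# overriding its 2,048-token default. Standalone illustration.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Summarize the sensor readings: ...",  # placeholder prompt
        "options": {"num_ctx": 15000},  # 30000 when use_large_context is true
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```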
## Running ConSensus

```bash
python run_consensus.py --config_path <config_path>
```

Final results are stored in the `log_path` specified in the config file.
## Baselines

The following baseline methods are available:

- Single Agent: `baselines/single_agent/run.py`
- Self-Consistency: `baselines/self_consistency/run.py`
- Self-Refine: `baselines/self_refine/run.py`
- Debate: `baselines/debate/run.py`
- MAD: `baselines/mad/run.py`
- CMD: `baselines/cmd/run.py`
- ReConcile: `baselines/reconcile/run.py`
Run any baseline with:

```bash
python baselines/<baseline_name>/run.py --config_path <config_path>
```
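To sweep every baseline against the same config, a small driver can loop over this command. This is a convenience sketch: the loop is ours, while the CLI is as documented above:

```python
# Hypothetical driver: run all baselines sequentially with one config file.
import subprocess

BASELINES = ["single_agent", "self_consistency", "self_refine",
             "debate", "mad", "cmd", "reconcile"]

for name in BASELINES:
    subprocess.run(
        ["python", f"baselines/{name}/run.py", "--config_path", "config/example.yaml"],
        check=True,  # stop the sweep if any baseline run fails
    )
```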
## Citation

If you find this work useful, please cite our paper:

```bibtex
@article{yoon2026consensus,
  title={ConSensus: Multi-Agent Collaboration for Multimodal Sensing},
  author={Yoon, Hyungjun and Malekzadeh, Mohammad and Lee, Sung-Ju and Kawsar, Fahim and Qendro, Lorena},
  journal={arXiv preprint arXiv:2601.06453},
  year={2026}
}
```