
optimizedMind TTC

Experimental test-time compute tooling for improving LLM responses through multi-pass generation, evaluation, and selection.

[Image: optimizedMind header]

What it is

optimizedMind TTC explores a simple idea: instead of relying on a single model response, spend additional inference budget on multiple candidates, score them, and return the strongest result.

The repository is a Python prototype focused on:

  • query classification,
  • difficulty estimation,
  • parallel and sequential response generation,
  • response scoring and verification,
  • token and runtime analytics.

It is intended as an experimental implementation rather than a production library.
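As a rough illustration of the first two focus areas, a keyword-based classifier and a length-based difficulty heuristic might look like the sketch below. This is illustrative only, not the repository's actual code; the real implementation presumably uses model calls rather than keyword matching, and all names here are hypothetical.

```python
# Hypothetical sketch of query classification and difficulty estimation.
# Not the repository's real logic -- a stand-in to show the idea.

CATEGORY_KEYWORDS = {
    "coding": ("code", "function", "bug", "python"),
    "math": ("solve", "equation", "integral", "probability"),
    "writing": ("essay", "story", "summarize", "rewrite"),
}

def classify_query(prompt: str) -> str:
    """Assign a broad task category based on keyword hits."""
    lowered = prompt.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return category
    return "general"

def estimate_difficulty(prompt: str) -> str:
    """Crude difficulty proxy: longer, multi-part prompts rank harder."""
    words = len(prompt.split())
    parts = prompt.count("?") + prompt.count(";")
    if words > 80 or parts > 2:
        return "hard"
    if words > 25 or parts > 0:
        return "medium"
    return "easy"

print(classify_query("Fix this Python function"))  # coding
```

A real pipeline would use these labels to pick how many candidate passes to spend on the prompt.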

Current capabilities

  • Classify prompts into broad task categories
  • Estimate prompt difficulty and adjust processing strategy
  • Generate multiple candidate responses with different pass structures
  • Score responses against task-specific criteria
  • Compare candidates and select a final answer
  • Track token usage and processing metrics in the console
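The last capability, token and runtime tracking, can be pictured as a small accumulator that each generation pass reports into. This is a hypothetical sketch mirroring the console metrics the README describes, not the repository's actual implementation.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TokenTracker:
    """Hypothetical analytics helper: accumulates per-pass token counts
    and wall-clock runtime (illustrative, not the repo's real class)."""
    prompt_tokens: int = 0
    completion_tokens: int = 0
    passes: int = 0
    _start: float = field(default_factory=time.monotonic)

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Called once per generation pass with the API's reported usage.
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens
        self.passes += 1

    def summary(self) -> dict:
        return {
            "passes": self.passes,
            "total_tokens": self.prompt_tokens + self.completion_tokens,
            "runtime_s": round(time.monotonic() - self._start, 2),
        }

tracker = TokenTracker()
tracker.record(prompt_tokens=120, completion_tokens=340)
tracker.record(prompt_tokens=120, completion_tokens=410)
print(tracker.summary()["total_tokens"])  # 990
```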

How it works

  1. Analyze the incoming prompt
  2. Choose a response strategy based on task type and difficulty
  3. Generate multiple responses using parallel and sequential passes
  4. Evaluate and cross-check those responses
  5. Return the strongest candidate with supporting analytics
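The five steps above amount to a best-of-N loop: generate several candidates, score each, return the top one. A minimal self-contained sketch, with a stubbed generator and scorer standing in for the project's real model calls and evaluation criteria:

```python
import random

def generate(prompt: str, seed: int) -> str:
    """Stub for a model call; the real project queries the OpenAI API."""
    rng = random.Random(seed)
    return f"candidate-{seed} (quality {rng.random():.2f})"

def score(prompt: str, response: str) -> float:
    """Stub scorer; the real project evaluates task-specific criteria."""
    return float(response.split("quality ")[1].rstrip(")"))

def best_of_n(prompt: str, n: int = 4) -> str:
    """Generate n candidates, score them, and return the strongest."""
    candidates = [generate(prompt, seed) for seed in range(n)]
    scored = [(score(prompt, c), c) for c in candidates]
    scored.sort(reverse=True)  # highest score first
    return scored[0][1]

print(best_of_n("Explain test-time compute"))
```

Sequential passes (refining a candidate based on earlier output) would slot in between generation and scoring; this sketch shows only the parallel case.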

Example workflow

[Screenshot: processing steps]

Results are displayed in a console interface with prompt analysis, token usage, and final selection details.

[Screenshot: final results]

Installation

git clone https://github.com/kaneda2004/optimizedmind-ttc.git
cd optimizedmind-ttc
pip install -r requirements.txt
export OPENAI_API_KEY='your-api-key-here'

Usage

python ttc_oai.py

The CLI lets you run predefined prompt types or enter a custom prompt and inspect the generation and evaluation flow.

Requirements

  • Python 3.7+
  • OpenAI API key
  • Dependencies listed in requirements.txt

Status

This repository is an experimental prototype for studying TTC-style response improvement. The code is useful as a reference implementation and sandbox for further iteration, but it should not be treated as a polished framework.

Research context

The project is inspired by work on test-time compute.

License

MIT License
