
lagrangian-skills

An opinionated Agent Skill for constrained optimization using Augmented Lagrangian Methods (ALM), ADMM, and KKT-based verification. Compatible with Claude Code, Cursor, Gemini CLI, and any agent that supports the Agent Skills spec.


What this skill does

Most agents, when handed a constrained optimization problem, will attempt a solution without checking KKT conditions, verifying feasibility, or routing to the right solver. This skill enforces a specific sequence of steps and guardrails that produce reliable, verifiable results.
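As a reference point, a KKT check for a problem with a single inequality constraint, min f(x) s.t. g(x) <= 0, can be sketched as below. The function name, residual decomposition, and example problem are illustrative, not the skill's actual interface:

```python
import numpy as np

def kkt_residuals(grad_f, grad_g, g, x, lam):
    """Residuals for min f(x) s.t. g(x) <= 0 with multiplier lam.

    All four residuals near zero indicate (x, lam) is a KKT point."""
    stationarity = np.abs(grad_f(x) + lam * grad_g(x))
    primal = max(g(x), 0.0)           # violation of g(x) <= 0
    complementarity = abs(lam * g(x))  # lam * g(x) should vanish
    dual = max(-lam, 0.0)              # violation of lam >= 0
    return stationarity, primal, complementarity, dual

# min (x - 1)^2  s.t.  x <= 0  has solution x* = 0 with lam* = 2
res = kkt_residuals(grad_f=lambda x: 2 * (x - 1),
                    grad_g=lambda x: 1.0,
                    g=lambda x: x,
                    x=0.0, lam=2.0)
print(all(r < 1e-9 for r in res))  # True
```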

Key behaviors the agent will not perform unprompted, but which this skill enforces:

- Pre-flight feasibility check (LP relaxation) before any computation
- KKT condition verification with fingerprint caching (~85% hit rate)
- Halton quasi-random multi-start for non-convex problems (FIX-21)
- Dual-layer adversarial protection for saddle point traps (FIX-22)
- Cross-skill handoff for Bayesian-optimization hybrid problems (COOP)
- Structured failure output with minimum slack recovery suggestions
- Token-efficient output modes: MINIMAL / STANDARD / VERBOSE
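The pre-flight feasibility check can be sketched as a zero-objective LP relaxation: if the LP admits any solution, the constraint set is non-empty and the real solve can proceed. The function below is illustrative (the skill's internal API is not shown) and assumes constraints in the standard form `A_ub @ x <= b_ub`:

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A_ub, b_ub, A_eq=None, b_eq=None, bounds=None):
    """Solve an LP with a zero objective; status 0 means a feasible
    point exists, so the constrained solve is worth attempting."""
    n = A_ub.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.status == 0  # 2 would signal an infeasible problem

# x1 + x2 <= 1 together with x1 >= 2 is infeasible on default bounds
A = np.array([[1.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, -2.0])
print(is_feasible(A, b))  # False
```

Note that `linprog` defaults to bounds of `(0, None)` per variable, so free variables must be declared explicitly via `bounds`.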

Quick install

Claude Code

```bash
git clone https://github.com/Sliky1/lagrangian-skills.git /tmp/lagrangian-skills
mkdir -p ~/.claude/skills
cp -r /tmp/lagrangian-skills/lagrangian ~/.claude/skills/
```

Other compatible agents

```bash
git clone https://github.com/Sliky1/lagrangian-skills.git /tmp/lagrangian-skills
cp -r /tmp/lagrangian-skills/lagrangian/ ~/.config/agents/skills/lagrangian/
```

Skill

| Skill | Description |
| --- | --- |
| lagrangian | Constrained optimization via ALM/ADMM/KKT. Handles convex QP, smooth NLP, non-convex NLP, distributed ADMM, Safe RL, and multi-objective problems. |

Supported problem types

| Problem type | Solver | Notes |
| --- | --- | --- |
| Convex QP / smooth NLP | standard_solver | Baseline; KKT verified |
| Non-convex NLP | ALM (n_starts=10, Halton) | FIX-21v2 + FIX-22 dual-layer guard |
| Distributed | ADMM | Multi-agent consensus |
| Safe RL | ALM + gradient cosine guard | FIX-16 |
| Multi-objective | ALM + Pareto repair | FIX-17 |
| Bayesian-optimization hybrid | Cross-skill COOP | COOP-1/2/3 |
| Natural language / degenerate | ALM + Tikhonov regularization | FIX-19 |
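The Halton multi-start strategy for non-convex problems can be sketched as follows. Only n_starts=10 is taken from the table above; the local solver, test function, and bounds are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats.qmc import Halton, scale

def multistart(f, lb, ub, n_starts=10, seed=0):
    """Run a local solver from Halton quasi-random start points and
    keep the best local minimum found. Halton points cover the box
    more evenly than i.i.d. uniform draws, so fewer starts are
    wasted on near-duplicate basins."""
    sampler = Halton(d=len(lb), seed=seed)
    starts = scale(sampler.random(n_starts), lb, ub)
    return min((minimize(f, x0) for x0 in starts), key=lambda r: r.fun)

# Himmelblau's function: non-convex, four global minima with f = 0
f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
res = multistart(f, lb=[-5.0, -5.0], ub=[5.0, 5.0])
print(res.fun < 1e-6)  # True
```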

Latest release

v0.9.3 — 2026-05-01

- [FIX-22] Dual-layer adversarial protection: ensemble_vote pre-detection + adaptive_trust_region projection
- non_convex+adversarial success rate: 94.29% → 96.82% (+2.53pp)
- Language optimization: technical identifiers are 100% English; behavioral rules remain in Chinese

Full release history → Releases · CHANGELOG

Evals

| Scenario | With skill | Without skill |
| --- | --- | --- |
| convex_qp + normal | 99.8% | ~91% |
| non_convex + adversarial | 96.8% | ~71% |
| safe_rl + near_infeasible | 99.1% | ~68% |
| mixed_bayes + adversarial | 96.0% | ~60% |
| natural_lang + degenerate | 95.2% | ~55% |

Full eval details → evals/

Why the skill is written in mixed Chinese and English

The skill uses Chinese for behavioral rules and English for technical identifiers. This is a deliberate design choice, not an accident:

| Content type | Language | Reason |
| --- | --- | --- |
| Algorithm names, parameter names, JSON keys | English | The model's training corpus for optimization algorithms is almost entirely English; English identifiers directly activate the relevant knowledge with higher attention weight |
| JSON / code blocks | English | The format specification is written in English |
| Behavioral rules, forbidden behaviors, guardrails | Chinese | Chinese's topic-prominent structure expresses constraints without grammatical subjects, reducing token count by ~20–30% while eliminating subordinate-clause ambiguity |
| Numeric parameters (thresh=0.010) | English | Universal format |

Philosophy

This skill is workflow-first and guardrail-heavy. It doesn't just remind the agent that ALM exists — it enforces step ordering, input validation, solver routing, and output structure that agents skip when left to their own judgment.

Each FIX addresses a documented regression and was validated by ablation experiment, not added as a heuristic tweak. All parameter choices (Halton thresh=0.010, proj_radius=0.10, cos_thresh=0.10) are backed by simulation data in evals/ablation/.
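For reference, the augmented Lagrangian loop that ALM-based solvers iterate, shown here for a single equality constraint min f(x) s.t. h(x) = 0, looks roughly like this. This is a textbook sketch with an illustrative penalty rho and tolerance, not the skill's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def alm(f, h, x0, rho=10.0, iters=20, tol=1e-8):
    """Alternate between minimizing the augmented Lagrangian in x
    and a dual-ascent update of the multiplier lam."""
    x, lam = np.asarray(x0, dtype=float), 0.0
    for _ in range(iters):
        aug = lambda z: f(z) + lam * h(z) + 0.5 * rho * h(z)**2
        x = minimize(aug, x).x
        lam += rho * h(x)          # dual ascent on the multiplier
        if abs(h(x)) < tol:        # constraint satisfied: stop
            break
    return x, lam

# min x1^2 + x2^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5)
f = lambda x: x[0]**2 + x[1]**2
h = lambda x: x[0] + x[1] - 1.0
x, lam = alm(f, h, x0=[0.0, 0.0])
print(np.round(x, 3))  # [0.5 0.5]
```

For this problem the multiplier converges to the exact value lam* = -1, with the constraint violation shrinking by a factor of 1/(1+rho) per outer iteration.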

License

MIT — see LICENSE.

About

Agent Skill for constrained optimization via ALM/ADMM/KKT — with adversarial guards, COOP cross-skill protocol, and 20k-sim validated parameters
