feat: token efficiency system — 40-60% output reduction #40

Open
rohitg00 wants to merge 3 commits into main from feat/token-efficiency

Conversation


@rohitg00 rohitg00 commented Apr 1, 2026

Summary

  • Adds token efficiency system inspired by drona23/claude-token-efficient (MIT)
  • New skill, rule file, and 3 PreToolUse hook scripts to reduce token waste by 40-60%
  • Updates 5 existing files to integrate token efficiency patterns throughout

What's New

New Files (5)

| File | Purpose |
|------|---------|
| `rules/token-efficiency.mdc` | Anti-sycophancy, ASCII-only, read-before-write, one-pass coding, no re-reads |
| `skills/token-efficiency/SKILL.md` | Comprehensive guide with task profiles, tool-call budgets, impact measurement |
| `scripts/read-before-write.js` | PreToolUse hook: warns on Write/Edit without prior Read |
| `scripts/reread-tracker.js` | PreToolUse hook: detects unnecessary re-reads of unchanged files |
| `scripts/tool-call-budget.js` | PreToolUse hook: tracks calls against budget thresholds (20/30/50/80) |
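
The three hook scripts share one basic shape. As a rough sketch only, assuming (per the snippets visible in the review) that each script receives a JSON payload with `tool_name` and `tool_input` fields on stdin and signals outcomes via exit code and stderr:

```javascript
// Minimal PreToolUse hook handler (illustrative sketch, not the PR's code).
// Assumes a JSON payload with `tool_name` and `tool_input` fields, as the
// review snippets suggest. Returns an exit code; 0 never blocks the tool.
function handleEvent(raw) {
  let event;
  try {
    event = JSON.parse(raw);
  } catch {
    return 0; // unparseable input: fail open, never block the tool call
  }
  const tool = event.tool_name || '';
  const filePath = (event.tool_input || {}).file_path || '';
  if ((tool === 'Write' || tool === 'Edit') && filePath) {
    // The real scripts consult session state here; this just warns.
    console.error(`[TokenEfficiency] ${tool} on ${filePath}`);
  }
  return 0;
}

module.exports = { handleEvent };
```

In the actual scripts, stdin would be read once (e.g. with `fs.readFileSync(0, 'utf8')`) and passed to a handler like this.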

Updated Files (7)

| File | Changes |
|------|---------|
| `hooks/hooks.json` | 3 new PreToolUse hooks wired up |
| `rules/core-rules.md` | Token efficiency rules added to Communication + Context sections |
| `rules/communication-style.mdc` | Anti-sycophancy rules, output reduction patterns |
| `rules/context-discipline.mdc` | One-pass discipline, tool-call budgets, no re-reads |
| `skills/cost-tracker/SKILL.md` | Tool-call budget table, 4 new optimization strategies |
| `skills/context-optimizer/SKILL.md` | Token Efficiency section with output reduction + task profiles |
| `package.json` | Version bump to 3.2.0, new keywords |

Token Efficiency Techniques

Output Reduction (40-60% savings):

  • No sycophantic openers ("Sure!", "Great question!")
  • No closing fluff ("Let me know if you need anything!")
  • No prompt restatement before answering
  • Code first, explanation only if non-obvious
  • ASCII-only output (no em dashes, smart quotes)

Behavioral Efficiency:

  • One-pass coding: write solution, test, stop if green
  • Read-before-write enforcement via hook
  • Re-read detection and warnings
  • Tool-call budgets with 80% threshold warnings

Task Profiles:

  • Coding: code first, minimal explanation
  • Agent/Pipeline: structured output only, parseable
  • Analysis: finding first, tables over paragraphs

Test plan

  • Verify hooks load without errors on session start
  • Write a file without reading it first — should see warning
  • Re-read an unchanged file — should see warning at 2nd read
  • Make 16+ tool calls — should see budget warning at 16 (80% of 20)
  • Verify existing hooks still work (quality-gate, secret detection)

Summary by CodeRabbit

Release Notes - v3.2.0

  • New Features

    • Expanded workflow capabilities: Skills 17→24, Agents 6→8, Commands 16→21
    • New commands: /permission-tuner, /cost-tracker, /mcp-audit
    • Introduced token-efficiency optimization workflow
    • Extended hook events from 18 to 24 with new event types
  • Documentation

    • Updated README with v3.2 highlights and feature comparison table
    • Enhanced communication and execution guidelines
    • Added comprehensive token-efficiency skill guide

rohitg00 added 3 commits April 1, 2026 13:56
- Add competitor comparison table (vs Superpowers, ECC, gstack, GSD)
- Add What's New in v3.2 section with all new features
- Add Tips from the Community section with attributed quotes
- Add Related Projects section
- Update all counts: 24 skills, 8 agents, 21 commands, 24 hook events
- Expand all tables with new skills, agents, commands, and hooks
- Update structure section with correct counts
- Fix ECC link: anthropics/courses (404) → affaan-m/everything-claude-code
- Fix best-practice link: anthropics/courses (404) → shanraisshan/claude-code-best-practice
- Fix competitor counts: ECC 143→140+, 71→60+; gstack 33→18+, 0→5+
- Add source links to competitor names in comparison table header
…s, one-pass discipline

New skill, rule, and 3 hook scripts to reduce token waste by 40-60%:

- rules/token-efficiency.mdc: anti-sycophancy, ASCII-only, read-before-write,
  one-pass coding, no re-reads, structured output preference
- skills/token-efficiency/SKILL.md: comprehensive guide with task profiles
  (coding/agent/analysis), tool-call budget tiers, measuring impact
- scripts/read-before-write.js: PreToolUse hook warns on Write/Edit without
  prior Read of the target file
- scripts/reread-tracker.js: PreToolUse hook detects unnecessary re-reads of
  unchanged files
- scripts/tool-call-budget.js: PreToolUse hook tracks call count against
  budget thresholds (20/30/50/80) with 80% warnings

Updated existing files:
- hooks/hooks.json: 3 new PreToolUse hooks (read-before-write, reread-tracker,
  tool-call-budget)
- rules/core-rules.md: added token efficiency rules to Communication and Context
- rules/communication-style.mdc: added anti-sycophancy and output reduction rules
- rules/context-discipline.mdc: added one-pass discipline and tool-call budgets
- skills/cost-tracker/SKILL.md: added tool-call budget table and 4 new strategies
- skills/context-optimizer/SKILL.md: added Token Efficiency section with output
  reduction, behavioral efficiency, and task profiles

Inspired by drona23/claude-token-efficient (MIT).
@coderabbitai

coderabbitai bot commented Apr 1, 2026

📝 Walkthrough


This PR introduces v3.2 with token-efficiency and anti-sycophancy guardrails, adds three new hook scripts for read-before-write validation and tool-call budgeting, and updates comprehensive documentation across rules and skills. Version bumped from 3.1.0 to 3.2.0 with expanded feature counts.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **Version & Metadata**<br>`package.json` | Version incremented to 3.2.0; keywords expanded to include token-efficiency, anti-sycophancy, and tool-call-budget. |
| **Documentation**<br>`README.md` | Updated v3.2 header with expanded feature counts (Skills 17→24, Agents 6→8, Commands 16→21, Hook events/scripts 18/23→24/29); added "What's New in v3.2", feature comparison table, and community tips; replaced hook-event reference table with 24-event/29-script variant; new commands (/permission-tuner, /cost-tracker, /mcp-audit). |
| **Core Rules & Discipline**<br>`rules/communication-style.mdc`, `rules/context-discipline.mdc`, `rules/core-rules.md`, `rules/token-efficiency.mdc` | Expanded communication rules with anti-sycophancy constraints (no openers, closing fluff, prompt restatement, "As an AI" disclaimers), code-first output, ASCII-only punctuation; added read-before-write discipline, tool-call budgets by task tier (20/30/50/80), one-pass coding workflow, and re-read avoidance rules. New token-efficiency.mdc consolidates output-quality and session-discipline constraints. |
| **Hook Configuration**<br>`hooks/hooks.json` | Added PreToolUse hooks for matcher "Read" (triggers scripts/reread-tracker.js and scripts/read-before-write.js); extended Edit/Write hooks to include scripts/read-before-write.js; updated catch-all hook description to reflect tool-call budget tracking. |
| **Session Tracking Scripts**<br>`scripts/read-before-write.js`, `scripts/reread-tracker.js`, `scripts/tool-call-budget.js` | Three new Node.js CLI scripts: read-before-write.js validates files are read before write/edit operations; reread-tracker.js monitors re-read frequency and warns when unchanged files are re-read ≥2 times; tool-call-budget.js increments and enforces per-session tool-call limits with 80% threshold warnings. All use os.tmpdir()/pro-workflow/ for session-scoped state. |
| **Skill Documentation**<br>`skills/context-optimizer/SKILL.md`, `skills/cost-tracker/SKILL.md`, `skills/token-efficiency/SKILL.md` | Extended context-optimizer with token-efficiency rules and output profiles; added tool-call budgets section to cost-tracker; new token-efficiency/SKILL.md defines anti-sycophancy patterns, one-pass coding, tool budgets, read-before-write enforcement, ASCII constraints, and impact metrics. |

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant PreToolUse as PreToolUse Hook
    participant RBW as read-before-write.js
    participant RRT as reread-tracker.js
    participant TCB as tool-call-budget.js
    participant SessionStore as Session State<br/>(tmpdir)

    Client->>PreToolUse: Tool: Read(file_path)
    PreToolUse->>RRT: stdin: file_path
    RRT->>SessionStore: Check mtime vs last read
    RRT-->>RRT: Increment counter if unchanged
    RRT->>SessionStore: Update tracking JSON
    PreToolUse->>RBW: stdin: file_path
    RBW->>SessionStore: Record file as read
    PreToolUse->>TCB: stdin: tool_call event
    TCB->>SessionStore: Increment session counter
    TCB-->>TCB: Warn at 80% threshold if needed

    Client->>PreToolUse: Tool: Write(file_path)
    PreToolUse->>RBW: stdin: file_path
    RBW->>SessionStore: Verify file was read
    RBW-->>RBW: Warn if not previously read
    PreToolUse->>TCB: stdin: tool_call event
    TCB->>SessionStore: Increment & check limits
    TCB-->>TCB: Emit warning at boundaries

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

🐰 Hop along with hooks so keen,
Reading files before unseen,
Budgets tracked in JSON fair,
No more sycophancy in the air—
Token-efficient, clean, and bright,
v3.2 gets it right!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|------------|--------|-------------|------------|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
|------------|--------|-------------|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately captures the primary objective of the pull request: introducing a token efficiency system designed to reduce output by 40-60%, the main focus of the changes across rules, skills, hooks, and scripts. |



@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 5

🧹 Nitpick comments (4)
skills/cost-tracker/SKILL.md (1)

51-59: Consider a single source of truth for budget tiers.

This table duplicates the budget matrix in skills/token-efficiency/SKILL.md; centralizing or cross-linking would prevent future drift between docs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/cost-tracker/SKILL.md` around lines 51 - 59, The budget matrix in the
skills/cost-tracker/SKILL.md (the table with headers "Task Type | Tool-Call
Budget | Wrap-Up At") duplicates the matrix in skills/token-efficiency/SKILL.md;
create a single source of truth by removing the duplicated table from
skills/cost-tracker/SKILL.md and replacing it with a clear cross-link to the
canonical matrix (or move the canonical table into a new shared doc like
skills/budget-tiers/SKILL.md), update any references to point to the chosen
canonical location, and add a short comment in both SKILL.md files indicating
where the authoritative budget tiers live so future edits only change one file.
scripts/read-before-write.js (1)

40-54: Read tracking file updates are vulnerable to lost writes under concurrency.

This read-modify-write cycle is not atomic; concurrent tool events can overwrite each other and drop tracked reads.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/read-before-write.js` around lines 40 - 54, The read-modify-write on
readFiles/readTrackFile can lose concurrent updates; instead, when handling the
'Read' tool in the block that reads and writes readTrackFile, acquire an atomic
update by re-reading the file immediately before write, merging the new entry by
taking the max timestamp for the same filePath with any existing value, and then
performing an atomic replace (write to a temp file and fs.rename to replace
readTrackFile) so writes are atomic; optionally retry the read-merge-rename loop
on conflict/errors to ensure concurrent updates are merged rather than
overwritten.
scripts/tool-call-budget.js (1)

49-51: Remove redundant hardcoded check for 65 calls.

This check is inconsistent with the general threshold logic above:

  • The loop already warns at 64 (80 * 0.8) for approaching the 80-call limit
  • Then this fires separately at exactly 65 with a different message

This creates confusing back-to-back warnings at 64 and 65. If 65 is the intended wrap-up point for large features, integrate it into the thresholds array instead.

Proposed fix
-  if (count === 65) {
-    console.error('[TokenEfficiency] 65 tool calls — approaching large-feature limit. Wrap up current task.');
-  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/tool-call-budget.js` around lines 49 - 51, Remove the hardcoded if
(count === 65) console.error branch and instead add the 65-call wrap-up
threshold into the existing thresholds array/logic used for warnings (the same
mechanism that produced the 64 warning). Update the thresholds data (or
threshold entries) to include an entry for 65 with the desired message
("[TokenEfficiency] 65 tool calls — approaching large-feature limit. Wrap up
current task."), and ensure the loop that checks count against thresholds uses
that entry so you no longer emit back-to-back 64/65 messages; reference the
count variable and the thresholds array/threshold checking code where warnings
are produced.
scripts/reread-tracker.js (1)

24-28: Reading from /dev/stdin is not portable to Windows.

This approach works on Linux and macOS but will fail on Windows. If Windows support is needed, consider using an event-based stdin read pattern like scripts/worktree-create.js (context snippet 2).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/reread-tracker.js` around lines 24 - 28, The code uses
fs.readFileSync('/dev/stdin', 'utf8') which is not portable to Windows; replace
the synchronous /dev/stdin read with an event-based read from process.stdin:
remove the try/catch block around fs.readFileSync('/dev/stdin', 'utf8'), instead
collect chunks on process.stdin 'data' events into the existing input variable
and handle completion on the 'end' event (and call process.exit(0) if input is
empty); ensure you keep the same surrounding logic that uses input so behavior
remains unchanged (look for the input variable and the try/catch/process.exit(0)
lines to update).
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 70788312-2e7a-4280-aeb5-beb119dda5f3

📥 Commits

Reviewing files that changed from the base of the PR and between dc11253 and f58c986.

📒 Files selected for processing (13)
  • README.md
  • hooks/hooks.json
  • package.json
  • rules/communication-style.mdc
  • rules/context-discipline.mdc
  • rules/core-rules.md
  • rules/token-efficiency.mdc
  • scripts/read-before-write.js
  • scripts/reread-tracker.js
  • scripts/tool-call-budget.js
  • skills/context-optimizer/SKILL.md
  • skills/cost-tracker/SKILL.md
  • skills/token-efficiency/SKILL.md

Comment on lines +17 to +22
const sessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
const tempDir = getTempDir();
ensureDir(tempDir);

const readTrackFile = path.join(tempDir, `reads-${sessionId}.json`);


⚠️ Potential issue | 🟠 Major

Sanitize sessionId before composing the tracker filename.

Using raw CLAUDE_SESSION_ID in the filename allows path traversal patterns and can write outside the intended temp folder.

Proposed fix
-  const sessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
+  const rawSessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
+  const sessionId = rawSessionId.replace(/[^a-zA-Z0-9._-]/g, '_');
📝 Committable suggestion


Suggested change

Before:
  const sessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
  const tempDir = getTempDir();
  ensureDir(tempDir);
  const readTrackFile = path.join(tempDir, `reads-${sessionId}.json`);

After:
  const rawSessionId = process.env.CLAUDE_SESSION_ID || String(process.ppid) || 'default';
  const sessionId = rawSessionId.replace(/[^a-zA-Z0-9._-]/g, '_');
  const tempDir = getTempDir();
  ensureDir(tempDir);
  const readTrackFile = path.join(tempDir, `reads-${sessionId}.json`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/read-before-write.js` around lines 17 - 22, Sanitize the sessionId
before using it in readTrackFile to prevent path traversal: in the logic that
builds sessionId (symbol: sessionId) strip or replace any path separators and
suspicious characters (e.g., slashes, backslashes, "..") or restrict to a safe
character class (alphanumeric, dashes/underscores), and if the sanitized value
is empty, fall back to a safe default or a short hash; then construct
readTrackFile with the sanitized sessionId (symbols: readTrackFile, getTempDir,
ensureDir) so filenames cannot escape the tempDir.
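
The whitelist-replace fix proposed above reduces to a small pure function. A standalone runnable sketch (the fallback value is illustrative):

```javascript
// Restrict the session id to a filename-safe character class; fall back to
// 'default' if nothing safe remains. Path separators become underscores, so
// a hostile CLAUDE_SESSION_ID can no longer escape the temp directory.
function sanitizeSessionId(raw) {
  const safe = String(raw || '').replace(/[^a-zA-Z0-9._-]/g, '_');
  return safe || 'default';
}
```

Note that `..` survives the whitelist, but without separators it cannot traverse directories when joined with `path.join`.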


let input = '';
try {
input = fs.readFileSync('/dev/stdin', 'utf8');

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Node.js, is fs.readFileSync('/dev/stdin', 'utf8') portable on Windows, and is fs.readFileSync(0, 'utf8') the recommended cross-platform alternative for CLI stdin?

💡 Result:

No, fs.readFileSync('/dev/stdin', 'utf8') is not portable on Windows because Windows lacks /dev/stdin; it results in ENOENT error. Yes, fs.readFileSync(0, 'utf8') works cross-platform as 0 is the POSIX stdin file descriptor (STDIN_FILENO) and functions on Windows too, reading entire stdin synchronously into a string. However, it is not the officially recommended approach for CLI stdin reading—Node.js docs emphasize streams via process.stdin or readline module for proper handling, especially since readFileSync assumes a seekable regular file (may fail to read complete stdin from pipes). For sync CLI tools wanting full stdin, this hack is commonly used despite caveats.


🏁 Script executed:

cat -n scripts/read-before-write.js

Repository: rohitg00/pro-workflow

Length of output: 2336


🏁 Script executed:

# Check for Windows references in project docs and CI
find . -type f \( -name "README*" -o -name "*.md" \) | head -5 | xargs grep -l -i "windows\|platform" 2>/dev/null || echo "No Windows references found in docs"

# Check for CI configuration
ls -la .github/workflows/ 2>/dev/null || echo "No GitHub workflows found"

# Check package.json for any platform hints
cat package.json 2>/dev/null | grep -A 2 -B 2 "windows\|platform" || echo "No platform hints in package.json"

Repository: rohitg00/pro-workflow

Length of output: 428


🏁 Script executed:

cat -n .github/workflows/ci.yml

Repository: rohitg00/pro-workflow

Length of output: 2754


Use fd 0 instead of /dev/stdin for cross-platform stdin reads.

/dev/stdin is Unix-specific and fails with ENOENT on Windows. Use fs.readFileSync(0, 'utf8') to read stdin portably across Node.js-supported platforms.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/read-before-write.js` at line 25, The code reads stdin using
fs.readFileSync('/dev/stdin', 'utf8'), which is Unix-only and breaks on Windows;
update the call to use the file descriptor number instead: replace the
'/dev/stdin' argument with 0 in the fs.readFileSync call (the statement
referencing input and fs.readFileSync) so it becomes portable across
Node.js-supported platforms.

Comment on lines +50 to +53
const filePath = toolInput.file_path || '';
if (filePath) {
readFiles[filePath] = Date.now();
fs.writeFileSync(readTrackFile, JSON.stringify(readFiles));

⚠️ Potential issue | 🟠 Major

Normalize file paths before tracking/comparison.

Current keying uses raw tool_input.file_path, so relative/absolute variants of the same file can produce false "not read" warnings.

Proposed fix
+function normalizeFilePath(filePath) {
+  return path.resolve(filePath);
+}
+
   if (tool === 'Read') {
-    const filePath = toolInput.file_path || '';
+    const filePath = toolInput.file_path ? normalizeFilePath(toolInput.file_path) : '';
     if (filePath) {
       readFiles[filePath] = Date.now();
       fs.writeFileSync(readTrackFile, JSON.stringify(readFiles));
@@
   if (tool === 'Write' || tool === 'Edit') {
-    const filePath = toolInput.file_path || '';
+    const filePath = toolInput.file_path ? normalizeFilePath(toolInput.file_path) : '';

Also applies to: 59-69

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/read-before-write.js` around lines 50 - 53, The code stores raw
toolInput.file_path as the key in readFiles causing duplicates for relative vs
absolute paths; normalize paths before using them by converting
toolInput.file_path to a canonical form (e.g., use Node's
path.resolve/path.normalize) and use that normalizedPath when assigning
readFiles[normalizedPath] and when later comparing or checking reads
(references: toolInput.file_path, readFiles, readTrackFile); apply the same
normalization logic to the other block mentioned (lines 59-69) so all lookups
and writes use the canonical path consistently.

Comment on lines +64 to +70
if (!modified) {
const readCount = (tracked[`${filePath}:count`] || 1) + 1;
tracked[`${filePath}:count`] = readCount;
if (readCount >= 2) {
console.error(`[TokenEfficiency] Re-reading ${path.basename(filePath)} (${readCount}x) — file unchanged since last read. Consider using cached knowledge.`);
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if other hook scripts in this repo enforce rules by exiting non-zero
# or if warn-only is the established pattern

rg -n 'process\.exit\([^0]' scripts/ || echo "No non-zero exits found - warn-only appears to be the pattern"

Repository: rohitg00/pro-workflow

Length of output: 126


🏁 Script executed:

find . -name "SKILL.md" -o -name "skill.md" | head -20

Repository: rohitg00/pro-workflow

Length of output: 695


🏁 Script executed:

# Also search for any documentation mentioning "Hard rules" or "Read-Before-Write"
rg -i "hard rule|read-before-write" --max-count=50

Repository: rohitg00/pro-workflow

Length of output: 1005


🏁 Script executed:

cat skills/token-efficiency/SKILL.md

Repository: rohitg00/pro-workflow

Length of output: 4140


🏁 Script executed:

cat -n scripts/reread-tracker.js

Repository: rohitg00/pro-workflow

Length of output: 2410


Reconcile enforcement mechanism with "Hard rules" documentation.

The SKILL.md documentation explicitly labels Read-Before-Write rules as "Hard rules" under enforcement, yet reread-tracker.js only emits warnings via console.error and exits with code 0. The tool proceeds regardless of violations.

Decide: either update SKILL.md to reflect that these are guidelines/warnings rather than enforced rules, or implement actual enforcement by exiting with a non-zero code when violations are detected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/reread-tracker.js` around lines 64 - 70, The current reread detection
in reread-tracker.js (the block using tracked[`${filePath}:count`], readCount,
and filePath) only logs errors and exits with code 0; update it to enforce the
"Hard rules" by terminating the process with a non-zero exit when readCount >= 2
(e.g., after emitting the console.error call, call process.exit(1) or throw an
Error) so violations stop the tool, and ensure the error message mentions the
rule and the file name for clarity; alternatively, if you prefer warnings,
update SKILL.md to mark Read-Before-Write as guidance instead of "Hard
rules"—but pick one approach and implement it consistently.
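
If the "Hard rules" reading wins, enforcement reduces to choosing an exit code. A sketch of such a policy switch (the blocking semantics of non-zero exit codes depend on the hook runner; treating exit code 2 as the blocking value is an assumption here):

```javascript
// Policy switch for the re-read rule: advisory (warn, exit 0) versus
// enforced (warn, exit non-zero so the runner blocks the tool call).
// The specific blocking code (2) is an assumption about the runner.
function exitCodeFor(readCount, enforce) {
  if (readCount >= 2) {
    console.error(`[TokenEfficiency] re-read detected (${readCount}x)`);
    return enforce ? 2 : 0;
  }
  return 0;
}
```

Whichever mode is chosen, the SKILL.md wording ("Hard rules" versus guidance) should match it.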

Comment on lines +30 to +47
const thresholds = [
{ limit: 20, label: 'quick-fix budget (20 calls)' },
{ limit: 30, label: 'bug-fix budget (30 calls)' },
{ limit: 50, label: 'feature budget (50 calls)' },
{ limit: 80, label: 'large-feature budget (80 calls)' }
];

for (const t of thresholds) {
const warnAt = Math.floor(t.limit * 0.8);
if (count === warnAt) {
console.error(`[TokenEfficiency] ${count} tool calls — approaching ${t.label}. Consider wrapping up or compacting.`);
break;
}
if (count === t.limit) {
console.error(`[TokenEfficiency] ${count} tool calls — hit ${t.label}. Commit progress and assess remaining work.`);
break;
}
}

⚠️ Potential issue | 🟡 Minor

Documentation mismatch: "Wrap-Up At" thresholds differ from implementation.

The script calculates warnAt = Math.floor(limit * 0.8), yielding:

  • 20 → 16, 30 → 24, 50 → 40, 80 → 64

However, skills/token-efficiency/SKILL.md documents "Wrap-Up At" as:

  • 20 → 15, 30 → 25, 50 → 40, 80 → 65

The 15/25/65 values don't match the 80% formula. Either update the documentation to match the implementation (16/24/40/64) or adjust the thresholds array to use explicit warn values.

Proposed fix: Use explicit wrap-up values matching documentation
 const thresholds = [
-    { limit: 20, label: 'quick-fix budget (20 calls)' },
-    { limit: 30, label: 'bug-fix budget (30 calls)' },
-    { limit: 50, label: 'feature budget (50 calls)' },
-    { limit: 80, label: 'large-feature budget (80 calls)' }
+    { limit: 20, warnAt: 15, label: 'quick-fix budget (20 calls)' },
+    { limit: 30, warnAt: 25, label: 'bug-fix budget (30 calls)' },
+    { limit: 50, warnAt: 40, label: 'feature budget (50 calls)' },
+    { limit: 80, warnAt: 65, label: 'large-feature budget (80 calls)' }
   ];

   for (const t of thresholds) {
-    const warnAt = Math.floor(t.limit * 0.8);
-    if (count === warnAt) {
+    if (count === t.warnAt) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/tool-call-budget.js` around lines 30 - 47, The thresholds currently
compute warnAt = Math.floor(t.limit * 0.8) causing warn values (16,24,40,64) to
differ from the documented Wrap-Up At values (15,25,40,65); update the
implementation to use explicit warn values to match the docs by changing the
thresholds array entries (the thresholds constant) to include a warn property
(e.g., { limit: 20, warn: 15, label: 'quick-fix budget (20 calls)' }) and
replace the computed warnAt with t.warn when comparing against count inside the
loop (the logic around warnAt, thresholds, and the count comparisons should
reference t.warn instead of Math.floor(...)).
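
The explicit-warn-values fix can be exercised in isolation. A sketch (warn values follow the documented Wrap-Up At column; message wording is abridged from the script's format):

```javascript
// Budget tiers with explicit warn values matching the documented
// "Wrap-Up At" column (15/25/40/65) instead of Math.floor(limit * 0.8).
const thresholds = [
  { limit: 20, warnAt: 15, label: 'quick-fix budget (20 calls)' },
  { limit: 30, warnAt: 25, label: 'bug-fix budget (30 calls)' },
  { limit: 50, warnAt: 40, label: 'feature budget (50 calls)' },
  { limit: 80, warnAt: 65, label: 'large-feature budget (80 calls)' },
];

// Returns the message to emit for this call count, or null if silent.
function budgetMessage(count) {
  for (const t of thresholds) {
    if (count === t.warnAt) {
      return `[TokenEfficiency] ${count} tool calls: approaching ${t.label}.`;
    }
    if (count === t.limit) {
      return `[TokenEfficiency] ${count} tool calls: hit ${t.label}.`;
    }
  }
  return null;
}
```

Folding the 65-call wrap-up into the same table also removes the redundant hardcoded `count === 65` branch flagged earlier.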
