---
name: test-audit
description: Verify test coverage consistency across the neuron workspace
---

# Test Audit

Run this skill after adding features, changing the public API, or adding new crates.
It systematically checks that test coverage is consistent with the codebase.

## When to run

- After implementing a new feature or public API change
- After adding or removing a crate from the workspace
- Before any release
- When asked to "audit tests" or "check test coverage"

## Discovery

All checks start by parsing the root `Cargo.toml` `[workspace].members` to get
the canonical crate set. No crate names are hardcoded — if a new crate is added,
these checks automatically cover it.

## Checklist

Work through each check in order. Report findings as you go.

### 1. Every crate has tests

For each workspace member, verify at least one of:

- A `tests/` directory containing `.rs` files
- A `#[cfg(test)]` module in any `src/*.rs` file

Flag any crate with zero test files.
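
This check can be sketched as a small filesystem probe (a heuristic sketch; the helper name is illustrative):

```python
from pathlib import Path

def has_tests(crate_dir: Path) -> bool:
    """True if the crate has a tests/ dir containing .rs files, or any
    src/**/*.rs file with an inline #[cfg(test)] module."""
    if any((crate_dir / "tests").glob("*.rs")):
        return True
    return any(
        "#[cfg(test)]" in f.read_text()
        for f in (crate_dir / "src").rglob("*.rs")
    )
```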

### 2. Public API test coverage

For each crate:

1. Read `src/lib.rs` and extract all public names: `pub use`, `pub struct`,
   `pub enum`, `pub trait`, `pub fn`, `pub type`
2. For glob re-exports (`pub use module::*`), read the source module to expand
   the names
3. Grep the crate's `tests/` directory and inline `#[cfg(test)]` modules for
   each name
4. Flag any public type, trait, or function that appears in zero test files

This is a **heuristic** (name-based grep, not semantic analysis). False positives
are acceptable — it's better to over-flag and let the auditor assess than to
miss gaps.
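
A rough sketch of the extract-and-grep loop. It covers `pub` declarations only (not `pub use` re-exports, which need the glob-expansion step), and all source names below are illustrative:

```python
import re

# Heuristic: matches `pub struct Foo`, `pub fn bar`, etc.
PUB_ITEM = re.compile(r"\bpub\s+(?:struct|enum|trait|fn|type)\s+([A-Za-z_]\w*)")

def public_names(lib_rs: str) -> set[str]:
    return set(PUB_ITEM.findall(lib_rs))

def untested(names: set[str], test_sources: list[str]) -> set[str]:
    """Names that appear in zero test files (name-based grep, not semantic)."""
    blob = "\n".join(test_sources)
    return {n for n in names if n not in blob}

# Illustrative sources:
LIB = """
pub struct Neuron;
pub trait Fire { fn fire(&self); }
pub fn spike() {}
"""
TESTS = ["fn test_spike() { spike(); }", "fn test_neuron() { let _n = Neuron; }"]
print(sorted(untested(public_names(LIB), TESTS)))
# ['Fire']
```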

### 3. Error variant coverage

For each crate that defines error enums (look for `#[derive(thiserror::Error)]`
or files named `error.rs`):

1. Extract each enum variant name
2. Grep the crate's test files for that variant name
3. Flag any variant that appears in zero tests

Error paths are where bugs hide — every error variant should have at least one
test that exercises it.
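
The variant extraction can use the same grep approach. A sketch, assuming variants start their own lines inside the enum body (the enum body and test source below are illustrative):

```python
import re

def error_variants(enum_body: str) -> list[str]:
    """Variant names from a Rust enum body. Heuristic: a capitalized
    identifier at the start of a line."""
    return re.findall(r"^\s*([A-Z]\w*)", enum_body, flags=re.MULTILINE)

def untested_variants(variants: list[str], test_blob: str) -> list[str]:
    """Variants that never appear in the crate's test sources."""
    return [v for v in variants if v not in test_blob]

# Illustrative enum body and test source:
BODY = """
    Io(std::io::Error),
    NotFound { id: String },
    Timeout,
"""
TESTS = 'assert!(matches!(err, Error::NotFound { .. }));'
print(untested_variants(error_variants(BODY), TESTS))
# ['Io', 'Timeout']
```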

### 4. Streaming parity

This check is **scoped dynamically**: it only runs if the workspace contains a
crate whose `src/` files define both `pub async fn run(` and
`pub async fn run_stream(` methods (currently `neuron-loop`).

For each such crate:

1. Grep `tests/` for test function names containing `run(` or `.run(`
   (non-streaming tests)
2. Extract the feature keyword from the test name (e.g., `usage_limits`,
   `cancellation`, `model_retry`, `compaction`, `hooks`)
3. Check if a corresponding test exists with `run_stream` or `stream` in the
   name testing the same feature
4. Flag features that are tested only via the non-streaming path

The streaming path often has different control flow and error handling — features
need coverage in both paths.
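
The parity check reduces to a comparison over test-function names. A sketch, assuming the feature keywords have already been extracted in step 2 (the names below are illustrative):

```python
def streaming_parity_gaps(test_names: list[str], features: list[str]) -> list[str]:
    """Features that have a non-streaming test but no streaming counterpart."""
    gaps = []
    for feat in features:
        feat_tests = [t for t in test_names if feat in t]
        has_plain = any("stream" not in t for t in feat_tests)
        has_stream = any("stream" in t for t in feat_tests)
        if has_plain and not has_stream:
            gaps.append(feat)
    return gaps

# Illustrative test names and feature keywords:
NAMES = [
    "test_run_usage_limits",
    "test_run_stream_usage_limits",
    "test_run_cancellation",
]
print(streaming_parity_gaps(NAMES, ["usage_limits", "cancellation"]))
# ['cancellation']
```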

### 5. Example compilation

Run:

```
cargo build --workspace --examples
```

Report any compilation failures. This is a binary pass/fail.

### 6. Test infrastructure consistency

For each crate's test files (`tests/*.rs` and inline `#[cfg(test)]` modules):

- **Missing async attribute**: Flag any `async fn test_*` or `async fn *_test`
  that lacks `#[tokio::test]` (likely a missing attribute — the test won't run)
- **Brittle panic tests**: Flag any `#[should_panic]` without an
  `expected = "..."` message (these pass on ANY panic, hiding real bugs)

Report as informational findings, not hard failures.
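
Both lints can be sketched as a line scan. This sketch assumes the `#[tokio::test]` attribute sits on the line directly above the test function, which is the common layout (the test source below is illustrative):

```python
import re

def lint_test_file(source: str) -> list[str]:
    """Flag async test fns missing #[tokio::test] and bare #[should_panic]."""
    findings = []
    lines = source.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"\s*async fn (test_\w+|\w+_test)\b", line)
        prev = lines[i - 1].strip() if i else ""
        if m and "#[tokio::test]" not in prev:
            findings.append(f"missing #[tokio::test]: {m.group(1)}")
        # Bare form only; #[should_panic(expected = "...")] does not match.
        if line.strip() == "#[should_panic]":
            findings.append('bare #[should_panic] (add expected = "...")')
    return findings

SRC = """
#[tokio::test]
async fn test_ok() {}

async fn test_orphan() {}

#[should_panic]
fn panics() { panic!("boom") }
"""
print(lint_test_file(SRC))
# ['missing #[tokio::test]: test_orphan', 'bare #[should_panic] (add expected = "...")']
```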

### 7. Test count summary

Run `cargo test --workspace` and parse the output to report per-crate test
counts. This is **informational only** — no PASS/FAIL. It establishes a
baseline so regressions are visible.

Report as a table:

```
| Crate | Tests |
|-------|-------|
| neuron-types | 160 |
| neuron-tool | 60 |
| ... | ... |
| **Total** | **N** |
```

Include both integration tests (from `tests/`) and inline tests (from
`#[cfg(test)]` modules) in the count.
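
The parsing step can be sketched against typical `cargo test` output. The exact format comes from libtest and may vary between toolchain versions, so the sample output below is illustrative:

```python
import re

def per_target_counts(output: str) -> list[tuple[str, int]]:
    """Pair each 'Running ...' target with the passed count from the
    'test result:' line that follows it."""
    counts: list[tuple[str, int]] = []
    target = None
    for line in output.splitlines():
        run = re.match(r"\s*Running (.+?) \(", line)
        if run:
            target = run.group(1)
            continue
        res = re.match(r"test result: \w+\. (\d+) passed", line)
        if res and target is not None:
            counts.append((target, int(res.group(1))))
            target = None
    return counts

OUT = """\
     Running unittests src/lib.rs (target/debug/deps/neuron_types-abc)
test result: ok. 160 passed; 0 failed; 0 ignored
     Running tests/loop.rs (target/debug/deps/loop-def)
test result: ok. 12 passed; 0 failed; 0 ignored
"""
print(per_target_counts(OUT))
# [('unittests src/lib.rs', 160), ('tests/loop.rs', 12)]
```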

## Fixing issues

For each issue found:

- **Missing test for public type**: Add at least one test that constructs or
  uses the type. For traits, add a test with a mock implementation.
- **Missing error variant test**: Add a test that triggers the error condition
  and asserts the variant.
- **Missing streaming test**: Clone the `run()` test, adapt it to use
  `run_stream()`, and verify the same behavior via stream events.
- **Infrastructure issues**: Add missing `#[tokio::test]` attributes and add
  `expected` messages to `#[should_panic]`.

## Output format

Report results as:

```
## Test Audit Results

### Check 1: Every crate has tests - PASS/FAIL
[details if FAIL]

### Check 2: Public API test coverage - PASS/FAIL
[details if FAIL, listing untested types per crate]

### Check 3: Error variant coverage - PASS/FAIL
[details if FAIL, listing untested variants]

### Check 4: Streaming parity - PASS/FAIL
[details if FAIL, listing features missing streaming tests]

### Check 5: Example compilation - PASS/FAIL
[details if FAIL]

### Check 6: Test infrastructure - INFO
[any findings]

### Check 7: Test count summary - INFO
[table of per-crate counts]
```