This document defines verifiable system properties, evidence artifacts, and a tooling roadmap for progressively increasing assurance in this codebase.
The goal is to make verification routine and repeatable:
- SIL (Software-in-the-Loop) provides deterministic, replayable correctness testing.
- HIL (Hardware-in-the-Loop) validates timing, device behavior, and integration stability.
- Policy expresses safety and mission constraints as explicit contracts.
- Profiling provides quantitative evidence for performance, jitter, and latency.
Scope note: This document specifies what must be proven and how evidence is collected. It does not prescribe any single vendor toolchain.
- Safety envelope integrity: control and output behaviors remain within declared policy limits.
- Deterministic execution under SIL: given identical inputs/config, the system produces identical outputs and event traces.
- Bounded latency and jitter: execution timing stays within declared budgets under defined load.
- Backend isolation: hardware specifics remain behind `IHardwareBackend` without leaking into core logic.
- Traceable evidence: every meaningful claim maps to a test, metric, or static-analysis artifact.
- Proving correctness of third-party OS drivers or hardware firmware.
- Absolute real-time guarantees on non-real-time operating systems.
- Full formal proofs for every floating-point DSP component (unless explicitly required).
- Trusted
  - Core algorithm code (target of verification)
  - Policy declarations (`src/policy/mission_policy.hpp`)
  - Profiling instrumentation (`src/profiling/perf_profiler.hpp`)
- Partially trusted
  - Reference backend (`src/hardware/reference_backend.cpp`) for deterministic SIL/HIL scaffolding
- Untrusted / external
  - OS scheduling, system load, physical hardware devices, vendor SDKs
All device I/O must occur only via `apms::hw::IHardwareBackend`:
- `read_input_block(...)`
- `write_output_block(...)`
This enables:
- backends to be swapped without affecting core logic
- test harnesses to validate safety behaviors independently of hardware
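To make the boundary concrete, here is a minimal sketch of what the interface could look like. The `AudioBlock` field names and the boolean return convention are assumptions for illustration, not the project's actual declarations in `src/hardware/hardware_backend.hpp`.

```cpp
#include <cstddef>
#include <vector>

// Sketch only: field names and return conventions are assumptions.
namespace apms::hw {

struct AudioBlock {
    std::vector<float> samples;        // interleaved: frames_per_block * channels
    std::size_t frames_per_block = 0;
    std::size_t channels = 0;
};

class IHardwareBackend {
public:
    virtual ~IHardwareBackend() = default;
    // Returns false once the source reaches EndOfStream (assumed convention).
    virtual bool read_input_block(AudioBlock& block) = 0;
    virtual bool write_output_block(const AudioBlock& block) = 0;
};

}  // namespace apms::hw
```

Because core logic sees only this abstract class, a test harness can substitute a deterministic fake (e.g. a loopback) without touching any device code.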
This section lists properties the system should maintain, expressed in a form suitable for tests, static analysis, and/or formal methods.
Property P1 — Scalar envelope:
For every constrained scalar value x, if `ScalarLimit(min, max)` is active, then:
`min ≤ x ≤ max` always holds after enforcement.
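A clamp-based enforcement sketch for P1 might look like the following; `ScalarLimit` here is hypothetical and the real policy type in `src/policy/mission_policy.hpp` may differ.

```cpp
#include <algorithm>

// Hypothetical ScalarLimit sketch; illustrative names only.
struct ScalarLimit {
    double min;
    double max;
    // Enforcement: clamp the value into [min, max].
    double enforce(double x) const { return std::clamp(x, min, max); }
};
```

A property test then checks `min <= enforce(x) <= max` for a wide range of inputs, including extreme values.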
Property P2 — Slew rate:
For any value x(t) subject to `SlewRateLimit(max_delta_per_sec)`:
`|x(t) - x(t - Δt)| / Δt ≤ max_delta_per_sec` for all evaluation steps.
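One way to enforce P2 is to clamp each step to the maximum allowed delta for the elapsed interval. This `SlewRateLimit` is a sketch whose names mirror the property, not the project's actual API.

```cpp
#include <algorithm>

// Hypothetical SlewRateLimit sketch; names mirror the property statement.
struct SlewRateLimit {
    double max_delta_per_sec;
    double prev;  // last enforced value, x(t - dt)
    // Clamp the step so |x(t) - x(t - dt)| / dt <= max_delta_per_sec holds.
    double enforce(double target, double dt_sec) {
        const double max_step = max_delta_per_sec * dt_sec;
        prev += std::clamp(target - prev, -max_step, max_step);
        return prev;
    }
};
```

Even if the commanded value jumps far beyond the bound, the enforced output ramps toward it at no more than `max_delta_per_sec`.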
Property P3 — Duration constraint:
If a monitored condition persists continuously for duration d, then:
`d ≤ max_duration`, or the system escalates according to severity.
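A P3 monitor can be sketched as a timer that accumulates while the condition holds and resets when it clears. The escalation signal and severity handling here are assumptions for illustration.

```cpp
// Hypothetical duration-constraint sketch for P3; escalation policy and
// severity handling are assumptions.
struct DurationConstraint {
    double max_duration_sec;
    double held_sec = 0.0;
    // Feed one evaluation step; returns true when escalation is required.
    bool update(bool condition_active, double dt_sec) {
        held_sec = condition_active ? held_sec + dt_sec : 0.0;
        return held_sec > max_duration_sec;
    }
};
```

Note the reset on any step where the condition clears: only *continuous* persistence counts toward `max_duration_sec`, matching the property statement.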
Evidence sources:
- SIL replay tests asserting clamp/escalation outcomes
- HIL tests verifying policy enforcement under realistic timing
- Static analysis confirming policy evaluation has no hidden side effects
Property D1 — Deterministic replay:
Given identical configuration, input sequences, and random seeds, the system produces:
- identical output sequences
- identical policy violation traces
- matching performance-metric summaries (within tolerance)
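Replay comparison is cheapest when each run is reduced to a hash of its output blocks. The sketch below uses FNV-1a over raw sample bytes; the project's actual regression-hash scheme is not specified in this document.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch of an output-block hash for replay comparison (FNV-1a over raw
// sample bytes). The real regression-hash scheme may differ.
std::uint64_t block_hash(const std::vector<float>& samples) {
    std::uint64_t h = 1469598103934665603ull;  // FNV-1a 64-bit offset basis
    for (float s : samples) {
        unsigned char bytes[sizeof(float)];
        std::memcpy(bytes, &s, sizeof(float));
        for (unsigned char b : bytes) {
            h ^= b;
            h *= 1099511628211ull;             // FNV-1a 64-bit prime
        }
    }
    return h;
}
```

Hashing bytes rather than comparing floats with a tolerance is deliberate: D1 demands bit-identical outputs, so any deviation should fail loudly.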
Property D2 — End-of-stream behavior:
Finite sources (replay runs) produce:
- a clean `EndOfStream` termination
- no deadlocks and no partial blocks
Evidence sources:
- Reference backend deterministic modes (`silence`, `sine`, `loopback`)
- Replay harness tests (if present) that compare output hashes and event logs
Property T1 — Loop period bound:
If the target period is T_target, then:
- observed period deviation (jitter) stays below the declared threshold `J_max` in specified environments.
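A T1-style check can be run directly over recorded loop timestamps; the function and parameter names below mirror the property and are not an existing API.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of a T1-style check: given observed loop timestamps (seconds),
// verify each period deviates from t_target by less than j_max.
bool jitter_within_bound(const std::vector<double>& timestamps,
                         double t_target, double j_max) {
    for (std::size_t i = 1; i < timestamps.size(); ++i) {
        const double period = timestamps[i] - timestamps[i - 1];
        if (std::abs(period - t_target) >= j_max) return false;
    }
    return true;
}
```

In practice the same data would come from profiler snapshots rather than raw timestamps, but the assertion is the same: max observed deviation stays under `J_max`.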
Property T2 — Stage execution bound:
For each profiled stage S, measured duration stays within `S_max` under defined load profiles.
Evidence sources:
- `apms::profiling::Profiler` snapshots (min/max/mean)
- HIL runs on representative hardware with load injection
Property B1 — No backend leakage:
Core code must not directly include OS/device SDK headers. Backend-specific code must live in backend implementation units.
Property B2 — Format integrity:
For any `AudioBlock` transmitted across the backend boundary:
`samples.size() == frames_per_block * channels`
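The B2 invariant is a one-line predicate that boundary code can assert on every block. This is a standalone sketch: the local `AudioBlock` mirrors the assumed boundary type and may differ from the real one.

```cpp
#include <cstddef>
#include <vector>

// Standalone sketch of the B2 shape check; this local AudioBlock mirrors
// the assumed boundary type.
struct AudioBlock {
    std::vector<float> samples;
    std::size_t frames_per_block = 0;
    std::size_t channels = 0;
};

// A well-formed block carries exactly frames_per_block * channels samples.
bool well_formed(const AudioBlock& b) {
    return b.samples.size() == b.frames_per_block * b.channels;
}
```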
Evidence sources:
- Compile-time include checks / lint rules
- Unit tests validating format checks reject malformed buffers
The system should produce verifiable artifacts that can be archived per release:
- SIL test logs (inputs, outputs, policy events)
- HIL test logs (device configuration, timing stats, failures)
- Regression hashes (output block hashes or summary hashes)
- Profiler snapshots:
- sample count
- min/max/mean duration
- period jitter stats (if configured)
- Environment metadata:
- CPU, OS, compiler version, build flags
- Compiler warnings treated as errors
- Sanitizer runs (as feasible):
- AddressSanitizer
- UndefinedBehaviorSanitizer
- ThreadSanitizer (for concurrency paths)
- Linting reports (optional)
This section lays out incremental adoption steps. Each step must be reproducible and CI-friendly.
- Build with strict warnings: `-Wall -Wextra -Wpedantic` (or equivalent)
- Run SIL tests:
  - deterministic modes: `silence`, `sine`, `loopback`
  - validate EOS behavior if used
- Collect profiling snapshots:
- control loop period metric
- core stage metrics
- Run sanitizers in dedicated build profiles
- Add concurrency stress tests:
- multiple producer/consumer threads calling backend read/write
- Add load injection tests:
- CPU load
- artificial backend delays
Focus formal work on interfaces and invariants, not all DSP math.
Recommended targets:
- Policy invariants (P1–P3) expressed as:
- property tests / model checking on policy evaluation
- State machine correctness for mission phases:
- phase transition rules
- safe shutdown properties
- Buffer shape and bounds:
- memory safety properties across backend boundary
This project is structured to align with common certification expectations by making safety constraints explicit and testable:
- Separation of concerns
  - hardware boundary (`IHardwareBackend`)
  - policy contract (`MissionPolicy`)
  - observability (`Profiler`)
- Traceability
- each claim maps to a measurable artifact
- Incremental assurance
- SIL → HIL → static analysis → selective formal verification
If this system is used in regulated contexts, this document should be extended with:
- hazard analysis and risk controls
- requirements traceability matrix (RTM)
- verification protocol templates
- release evidence checklist
For each release intended to increase assurance, capture:
- Compiler + build flags recorded
- SIL runs executed (modes: silence/sine/loopback as applicable)
- HIL runs executed (hardware IDs + environment recorded)
- Profiling snapshot recorded (min/max/mean + jitter where configured)
- Sanitizers executed (if applicable)
- Known limitations updated (docs)
- Policy changes reviewed for regression impact
- Real-time guarantees depend on OS scheduling and hardware.
- Reference backend loopback delay/jitter are block-quantized by design.
- Formal verification is targeted and incremental; not all numerical code is proven.
- Hardware boundary: `src/hardware/hardware_backend.hpp`, `src/hardware/reference_backend.cpp`
- Policy contracts: `src/policy/mission_policy.hpp`
- Profiling / evidence collection: `src/profiling/perf_profiler.hpp`
- Shared vocabulary: `src/common/types.hpp`