[Enhancement] Replace CSV-Based Benchmark Configuration with Pytest-Native Approach #86

@lcy-seso

Description

The project currently uses CSV files to define benchmark/test configurations. While functional, this approach introduces avoidable complexity and does not integrate well with modern Python testing practices.

This issue proposes migrating benchmark configuration to a pytest-native solution.

Problems with Current Approach

Using CSV for test or benchmark configuration has several drawbacks:

  • CSV lacks type safety and structure
  • Requires custom parsing and validation logic
  • Poor integration with pytest (no native parametrization, markers, or filtering)
  • Harder to review and document benchmark intent
  • Limited support from tooling (linters, type checkers, IDEs)

Proposed Improvements

Preferred Option: Pytest Parametrization

  • Define benchmark/test cases using pytest.mark.parametrize
  • Keep configurations in Python for clarity, type safety, and tooling support
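As a rough illustration of the preferred option, the rows of a benchmark CSV file can become a plain Python list consumed by `pytest.mark.parametrize`. This is a minimal sketch, not the project's actual code: the case names and the `batch_size`/`seq_len` fields are illustrative stand-ins for whatever columns the current CSV files define.

```python
import pytest

# Each tuple corresponds to one former CSV row; fields are illustrative.
BENCH_CASES = [
    ("small", 8, 128),
    ("medium", 32, 512),
    ("large", 64, 1024),
]

@pytest.mark.parametrize(
    "name,batch_size,seq_len",
    BENCH_CASES,
    ids=[case[0] for case in BENCH_CASES],
)
def test_benchmark_case(name, batch_size, seq_len):
    # Placeholder body; the real benchmark logic would run here.
    assert batch_size > 0 and seq_len > 0
```

Because the cases are ordinary Python data, they are visible to type checkers and reviewers, and individual cases can be selected with `pytest -k small` instead of editing a config file.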

Optional Enhancement

  • Evaluate pytest-benchmark for performance benchmarks to gain:
    • Statistical analysis
    • Regression detection
    • CI-friendly reporting

Acceptance Criteria

  • Benchmark/test configurations no longer rely on CSV files
  • Benchmarks are discoverable, filterable, and runnable via standard pytest commands
  • No custom CSV parsing logic is required for benchmarks
  • Documentation explains how to add or modify benchmark cases

Scope Notes

  • Initial migration can focus on correctness and clarity
  • Performance tooling and stricter validation can be added incrementally
