
Add overloads of _apply_operator to support type conversion of Numbers to ValidVector eltype. #575

Open

GongJr0 wants to merge 3 commits into astroautomata:master from GongJr0:pysr_issue_1141
Conversation

@GongJr0 GongJr0 commented Feb 19, 2026

Introduced a small refactor to add two overloads of the _apply_operator function, specifically targeting {ValidVector, Number} and {Number, ValidVector} argument pairs. The overloads ensure that a given Number is converted to the eltype of the vector, preventing type upcasting via higher-precision Float literals. The changes address issue #1411 posted in MilesCranmer/PySR.

Refactor operator application by introducing _apply_operator_values that accepts raw vector values and broadcasts a safe operator over them. Restore previous behavior with a wrapper _apply_operator that maps _get_value and delegates to the new function. Add optimized overloads to handle ValidVector/Number and Number/ValidVector combinations by converting scalars to the vector element type and calling _apply_operator_values. This improves clarity and handles mixed scalar/vector operations more efficiently.
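The refactor described above can be sketched as follows. This is a hedged illustration, not the actual SymbolicRegression.jl source: `ValidVector` here is a minimal stand-in struct with the field names used in this discussion, and the function bodies only mirror the pattern the PR describes (unwrap values, broadcast, and convert scalars to the vector eltype in the mixed overloads).

```julia
# Minimal stand-in for the package's ValidVector (hypothetical, for illustration).
struct ValidVector{A<:AbstractVector}
    x::A
    valid::Bool
end

_get_value(v::ValidVector) = v.x
_get_value(n::Number) = n

# Core function: broadcasts the operator over raw values.
_apply_operator_values(op::F, x, y) where {F} = op.(x, y)

# Wrapper restoring the previous behavior: unwrap each argument, then delegate.
function _apply_operator(op::F, x, y) where {F}
    return _apply_operator_values(op, _get_value(x), _get_value(y))
end

# New overloads for mixed scalar/vector pairs: convert the scalar to the
# vector's eltype before broadcasting, so the output eltype is preserved.
function _apply_operator(op::F, x::ValidVector, y::Number) where {F}
    return _apply_operator_values(op, x.x, convert(eltype(x.x), y))
end
function _apply_operator(op::F, x::Number, y::ValidVector) where {F}
    return _apply_operator_values(op, convert(eltype(y.x), x), y.x)
end
```

With this sketch, `_apply_operator(*, ValidVector(Float32[1.0, 2.0], true), 1.0)` yields a Float32 result rather than promoting to Float64, regardless of which side the scalar appears on.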

github-actions bot commented Feb 19, 2026

Benchmark Results (Julia v1)

Time benchmarks

| Benchmark | master | b3aae77... | master / b3aae77... |
|---|---|---|---|
| search/multithreading | 15.5 ± 0.2 s | 15.3 ± 0.39 s | 1.02 ± 0.029 |
| search/serial | 33.7 ± 0.48 s | 31.9 ± 0.068 s | 1.06 ± 0.015 |
| utils/best_of_sample | 1.72 ± 0.35 μs | 1.63 ± 0.29 μs | 1.06 ± 0.28 |
| utils/check_constraints_x10 | 16.9 ± 4.3 μs | 16.8 ± 4.3 μs | 1 ± 0.36 |
| utils/compute_complexity_x10/Float64 | 2.17 ± 0.1 μs | 2.19 ± 0.091 μs | 0.991 ± 0.061 |
| utils/compute_complexity_x10/Int64 | 2.12 ± 0.09 μs | 2.05 ± 0.09 μs | 1.03 ± 0.063 |
| utils/compute_complexity_x10/nothing | 1.56 ± 0.09 μs | 1.55 ± 0.09 μs | 1.01 ± 0.082 |
| utils/insert_random_op_x10 | 5.26 ± 1.9 μs | 5.15 ± 1.9 μs | 1.02 ± 0.54 |
| utils/next_generation_x100 | 0.441 ± 0.022 ms | 0.444 ± 0.022 ms | 0.995 ± 0.069 |
| utils/optimize_constants_x10 | 0.0358 ± 0.0083 s | 0.034 ± 0.0077 s | 1.05 ± 0.34 |
| utils/randomly_rotate_tree_x10 | 8.39 ± 0.99 μs | 8.37 ± 0.94 μs | 1 ± 0.16 |
| time_to_load | 2.62 ± 0.0074 s | 2.67 ± 0.033 s | 0.982 ± 0.013 |
Memory benchmarks

| Benchmark | master | b3aae77... | master / b3aae77... |
|---|---|---|---|
| search/multithreading | 0.205 G allocs: 53.8 GB | 0.203 G allocs: 52.7 GB | 1.02 |
| search/serial | 0.207 G allocs: 53.8 GB | 0.207 G allocs: 53.8 GB | 1 |
| utils/best_of_sample | 0.038 k allocs: 3.25 kB | 0.038 k allocs: 3.25 kB | 1 |
| utils/check_constraints_x10 | 0.034 k allocs: 0.875 kB | 0.034 k allocs: 0.875 kB | 1 |
| utils/compute_complexity_x10/Float64 | 0 allocs: 0 B | 0 allocs: 0 B | |
| utils/compute_complexity_x10/Int64 | 0 allocs: 0 B | 0 allocs: 0 B | |
| utils/compute_complexity_x10/nothing | 0 allocs: 0 B | 0 allocs: 0 B | |
| utils/insert_random_op_x10 | 0.041 k allocs: 1.62 kB | 0.04 k allocs: 1.56 kB | 1.04 |
| utils/next_generation_x100 | 4.63 k allocs: 0.276 MB | 4.63 k allocs: 0.276 MB | 1 |
| utils/optimize_constants_x10 | 24.9 k allocs: 25.2 MB | 25 k allocs: 25.7 MB | 0.981 |
| utils/randomly_rotate_tree_x10 | 0.042 k allocs: 1.34 kB | 0.042 k allocs: 1.34 kB | 1 |
| time_to_load | 0.15 k allocs: 11.2 kB | 0.145 k allocs: 11 kB | 1.02 |


GongJr0 commented Feb 19, 2026

This fails the unit test "ValidVector operations with Union{} return type" due to the snippet:

```julia
a = ValidVector(Float32[1.0, 2.0], false)
b = 1.0
result2 = apply_operator(*, a, b)
@test result2 isa ValidVector{<:AbstractArray{Float64}}
```

Under the previous behavior, the operation would resolve to Float32 .* Float64 = Float64. The additions in this PR cause the apply_operator call to branch into the overload _apply_operator(op::F, x::ValidVector, y::Number), where y (a Float64) is converted to eltype(x.x) (Float32) before the * operator is called.

No other functionality seems to be affected but I refrained from updating test cases prior to a review.
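The promotion difference behind the failing test can be reproduced in plain Julia, independent of the package (a standalone illustration, not the package's code):

```julia
a = Float32[1.0, 2.0]
b = 1.0  # Float64 literal

# Previous behavior: Julia's promotion rules apply during broadcasting,
# so Float32 .* Float64 yields a Float64 vector.
old_result = a .* b

# Behavior with this PR's overload: the scalar is converted to the
# vector's eltype first, so the result stays Float32.
new_result = a .* convert(eltype(a), b)
```

Here `eltype(old_result)` is Float64 while `eltype(new_result)` is Float32, which is exactly the discrepancy the test asserts against.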

@MilesCranmer

Just so I understand the design, is it to automatically convert everything to lower precision when applying operations on Float64,Float32 types?

I wonder if the safer thing might be to lower the precision only after the entire template is evaluated


GongJr0 commented Feb 19, 2026

> Just so I understand the design, is it to automatically convert everything to lower precision when applying operations on Float64,Float32 types?
>
> I wonder if the safer thing might be to lower the precision only after the entire template is evaluated

Not exactly. The intent isn't to automatically lower precision across the whole expression/template. But whenever a ValidVector and Number argument pair is encountered, this ensures that the output of apply_operator preserves the eltype of the ValidVector, regardless of argument position.

Example where precision gets lowered:

```julia
x1::ValidVector{Float32} = ...
y1::Float64 = ...
apply_operator(some_op, x1, y1) # Returns Float32 (eltype of the vector)
```

Example where precision remains 64-bit:

```julia
x1::ValidVector{Float64} = ...
y1::Float32 = ...
apply_operator(some_op, y1, x1) # Promotes y1 -> Float64 (preserves eltype)
                                # before passing to some_op; return type remains Float64
```

This is implemented on the assumption that the ValidVector eltype (which should already reflect the user-defined precision) should always be preferred over a Number input. If the desired policy is instead to evaluate at the highest precision implied by the arguments and only cast back afterward, that would work as well.

@MilesCranmer

In your example:

```julia
x1::ValidVector{Float32} = ...
y1::Float64 = ...
apply_operator(some_op, x1, y1) # Returns Float32 (eltype of the vector)
```

I don't think we would want this. Julia always promotes types to higher precision in operations, so this would be unusual.

We probably just want something to convert types at the very end
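One possible shape of this "convert at the very end" approach, sketched in plain Julia under stated assumptions: `finalize_eltype` is a name invented for this illustration, not an existing function in the package. The idea is to let Julia's normal promotion rules apply during evaluation and cast the final output back to the requested element type in a single step.

```julia
# Hypothetical helper (not the actual implementation): cast the final
# result back to the target eltype only once, after evaluation.
function finalize_eltype(result::AbstractVector, ::Type{T}) where {T}
    return eltype(result) === T ? result : T.(result)
end

# Intermediate computation promotes to Float64 as usual...
intermediate = Float32[1.0, 2.0] .* 1.0

# ...and precision is lowered only at the very end.
final = finalize_eltype(intermediate, Float32)
```

This keeps intermediate arithmetic at the higher precision implied by the arguments, which is the safer behavior the comment above is suggesting.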
