Conversation

@overbalance (Contributor) commented Dec 10, 2025

Summary

  • Optimizes the browser RandomIdGenerator by using a Uint8Array with a byte-to-hex lookup table instead of generating character codes with String.fromCharCode (a sketch of the approach follows this list)
  • Refactors to use proper class methods instead of arrow property assignments
  • Adds benchmark tests for Node and browser
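
A minimal sketch of the lookup-table approach (hypothetical names, not the exact code in this PR):

// Pre-compute the two-character hex string for every possible byte value.
const BYTE_TO_HEX: string[] = [];
for (let b = 0; b < 256; b++) {
  BYTE_TO_HEX[b] = b.toString(16).padStart(2, '0');
}

function generateHexId(byteLength: number): string {
  const buf = new Uint8Array(byteLength);
  for (let i = 0; i < buf.length; i++) {
    buf[i] = (Math.random() * 256) >>> 0;
  }
  // Ids must not be all zeroes; fall back to a trailing 1.
  if (buf.every(b => b === 0)) {
    buf[buf.length - 1] = 1;
  }
  let hex = '';
  for (let i = 0; i < buf.length; i++) {
    hex += BYTE_TO_HEX[buf[i]];
  }
  return hex;
}

// 16 random bytes -> 32-char traceId, 8 random bytes -> 16-char spanId
const traceId = generateHexId(16);
const spanId = generateHexId(8);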

Benchmark Results

| Benchmark | Node | Browser (old) | Browser (new) | Browser improvement |
| --- | --- | --- | --- | --- |
| generateTraceId | 9.07M ops/sec | 2.48M ops/sec | 7.17M ops/sec | +189% |
| generateSpanId | 13.3M ops/sec | 4.70M ops/sec | 10.4M ops/sec | +121% |
| create spans (10 attrs) | 1.24M ops/sec | 645K ops/sec | 903K ops/sec | +40% |
| BatchSpanProcessor | 1.19M ops/sec | 694K ops/sec | 993K ops/sec | +43% |

Test plan

  • All existing tests pass
  • Verified output format: 32-char hex for traceId, 16-char for spanId
  • Benchmarks run on both Node (npm run test:bench) and browser (npm run test:bench:browser)

Benchmark Infrastructure

Separate Node and browser benchmarks

Benchmarks are split into separate files to allow distinct test names for reporting:

  • test/node/*.bench.ts - Node benchmarks (e.g., generateTraceId)
  • test/browser/*.bench.ts - Browser benchmarks with "(browser)" suffix (e.g., generateTraceId (browser)); a sketch of such a file follows below
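
A hedged sketch of a browser bench file (the file path, import path, and Benchmark.js wiring are assumptions, not the exact code added in this PR):

// test/browser/RandomIdGenerator.bench.ts (hypothetical path)
// CommonJS-style import; the exact import style depends on the package's TS config.
import Benchmark = require('benchmark');
import { RandomIdGenerator } from '../../src/platform/browser/RandomIdGenerator';

const idGenerator = new RandomIdGenerator();
const suite = new Benchmark.Suite();

suite
  // The "(browser)" suffix keeps this result distinct from the Node benchmark
  // of the same name when results are reported together.
  .add('generateTraceId (browser)', () => {
    idGenerator.generateTraceId();
  })
  .add('generateSpanId (browser)', () => {
    idGenerator.generateSpanId();
  })
  .on('cycle', (event: Benchmark.Event) => {
    console.log(String(event.target));
  })
  .run();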

Output files

  • test/node/.benchmark-results.txt - Node benchmark results
  • Browser benchmark results are not written to disk, only displayed in the console

Test folder structure

  • test/node/ - Node-only tests and benchmarks
  • test/browser/ - Browser-only tests and benchmarks

@overbalance overbalance requested a review from a team as a code owner December 10, 2025 23:48
codecov bot commented Dec 10, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.58%. Comparing base (38924cb) to head (3499c8f).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6209      +/-   ##
==========================================
+ Coverage   95.46%   95.58%   +0.11%     
==========================================
  Files         317      314       -3     
  Lines        9602     9590      -12     
  Branches     2221     2221              
==========================================
  Hits         9167     9167              
+ Misses        435      423      -12     

see 3 files with indirect coverage changes

Comment on lines +35 to +42
for (let i = 0; i < buf.length; i++) {
  buf[i] = (Math.random() * 256) >>> 0;
}
// Ensure non-zero
for (let i = 0; i < buf.length; i++) {
  if (buf[i] > 0) return;
}
buf[buf.length - 1] = 1;

question: did you consider using crypto.getRandomValues?

Member

Without running a test I can't say for sure, but I'd say it's very unlikely the crypto module is fast enough. The "cryptographically strong" requirement of that module trades off speed, and the cost is pretty significant in a lot of cases. We only need statistical randomness (a uniform distribution) here. The distinction is that it might be possible to guess which trace id will be generated in some cases (through things like timing attacks), but that's fine in this case.
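
(Illustrative sketch, not from this PR: both approaches fill the same Uint8Array, but getRandomValues pays for CSPRNG guarantees that trace/span ids don't need.)

const buf = new Uint8Array(16);

// Statistical randomness: cheap, and a uniform distribution is all ids need.
for (let i = 0; i < buf.length; i++) {
  buf[i] = (Math.random() * 256) >>> 0;
}

// Cryptographically strong randomness: same output shape, measurably slower.
crypto.getRandomValues(buf);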

Contributor Author

Yes I did try crypto: 10x slower than the original 🫠

Well, apparently it's slower than your solution... this was unexpected 😄 https://jsben.ch/ZR73M

Member

We were actually using the crypto module in the past and removed it for performance reasons: https://github.com/open-telemetry/opentelemetry-js/pull/1349/changes

Contributor Author

I just added a benchmark test.

@overbalance overbalance added the browser (Browser-specific additions or benefits) label Dec 17, 2025
buf[i] = (Math.random() * 256) >>> 0;
}
// Ensure non-zero
for (let i = 0; i < buf.length; i++) {

Contributor

nit: as a fallback, a '1' is written in the last position to ensure the buf does not contain all zeroes. Maybe accessing a random position of the array and setting it to '1' if not already set would allow us to skip a second iteration.

Contributor

Maybe accessing a random position of the array and setting it to '1' if not already set would allow us to skip a second iteration.

I'm not a mathematician but I'd worry a bit that that biases the distribution away from values with 0s.

Contributor

Me neither, but what you said makes a lot of sense. Then I guess a flag in the loop that fills the buf would be enough.

Something like

let allZero = true;
for (let i = 0; i < buf.length; i++) {
  buf[i] = (Math.random() * 256) >>> 0;
  allZero = allZero && buf[i] === 0;
}

if (allZero) {
  buf[buf.length - 1] = 1;
}

@overbalance (Contributor Author) commented Jan 20, 2026

@david-luna the allZero check in the loop is about 20% slower than the proposed function.

@trentm (Contributor) left a comment

LGTM, with a Q about dropping the benchmark file.

Contributor

Do we keep this file? This is a benchmark running browser code under Node. I think quoting some benchmark code (or perhaps a https://jsben.ch or similar link) showing browser numbers in the PR discussion would be good, but I'm not sure of the value of having the benchmark file persist in the repo.

Contributor Author

Great point. Let me try to add a browser bench test.

@overbalance (Contributor Author) commented Dec 19, 2025

@trentm I moved some files around so the platform chosen in package.json is used for common tests, including the new benchmark test for the RandomIdGenerator. This expanded the scope quite a bit, but the benchmarks now report numbers for both Node and the browser. I updated the description above with Node, browser (main), and browser (this branch) results.

If this is too many changes for this PR I can back them out and make a new branch for the test folder.

@overbalance (Contributor Author) commented Jan 20, 2026

@trentm I added browser benchmark tests that do not write to disk. I just saw @pichlermarc's note about being judicious with the number of benchmarks published.

I defer to you on whether or not they should be published (in a follow-up).

Contributor

Great, thanks. Sounds good to me. My main concern had been about the (lack of) value of having those metrics included in the published benchmarks.

Marc had mentioned that he has a PR coming at some point to reduce the set of metrics that will be included in the published set (by just changing the benchmark to not output results to the benchmark.txt file that is slurped up by the benchmark.yml CI, I believe).

@overbalance overbalance disabled auto-merge December 18, 2025 21:04
- move test/common/platform/RandomIdGenerator.test.ts to test/common/
- move test/common/Sampler.test.ts to test/common/sampler/
- add benchmarks for RandomIdGenerator, Span, and BatchSpanProcessor
- benchmarks run on both Node and browser platforms
- add karma.bench.js config for browser benchmarks
- add AMD stub to karma.webpack.js for Benchmark.js compatibility
@overbalance overbalance requested a review from trentm December 19, 2025 03:54
@overbalance overbalance force-pushed the overbalance/perf-browser-random-id branch 3 times, most recently from be43a55 to bb60e72, on January 20, 2026 21:25
@overbalance overbalance force-pushed the overbalance/perf-browser-random-id branch from bb60e72 to 4c703e2 on January 20, 2026 21:30
@overbalance overbalance enabled auto-merge January 21, 2026 19:00
@overbalance overbalance added this pull request to the merge queue Jan 21, 2026
Merged via the queue into open-telemetry:main with commit e5f2d42 Jan 21, 2026
27 checks passed
@overbalance overbalance deleted the overbalance/perf-browser-random-id branch January 21, 2026 19:11

@trentm (Contributor) left a comment

I think this LGTM. Impl is good. I didn't dig into the browser bench scripts.
