
Cache objectpath.Encoder.For() results to reduce CPU overhead #384

Open

ewhauser wants to merge 2 commits into uber-go:main from ewhauser:perf-objectpath-cache

Conversation

@ewhauser ewhauser (Contributor) commented Jan 3, 2026

Add objPathCache map[types.Object]objectpath.Path to cache the results of objectpath.Encoder.For() calls. The same types.Object can be queried multiple times during analysis (shallow/deep checks, multiple triggers referencing the same site), and each For() call performs expensive recursive type traversal via objectpath.find().

Profiling shows objectpath.find() consuming 40-70% of CPU time in our internal nogo builds. This cache eliminates redundant traversals for repeated queries of the same object and reduced latency by roughly 7-8x for some packages.

Add objPathCache map[types.Object]objectpath.Path to cache the results
of objectpath.Encoder.For() calls. The same types.Object can be queried
multiple times during analysis (shallow/deep checks, multiple triggers
referencing the same site), and each For() call performs expensive
recursive type traversal via objectpath.find().

Profiling shows objectpath.find() consuming 38-70% of CPU time in nogo
builds. This cache eliminates redundant traversals for repeated queries
of the same object.
@CLAassistant CLAassistant commented Jan 3, 2026

CLA assistant check
All committers have signed the CLA.

  Building on the caching from the previous commit, this adds fast paths
  to skip expensive objectpath.Encoder.For() calls:

  1. Unexported non-types never have valid object paths - skip immediately
  2. Package-level objects have simple paths (just the name) - compute directly

  This improves performance by an additional ~30%.
@codecov codecov bot commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.04%. Comparing base (89df5f7) to head (d6c92c3).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #384      +/-   ##
==========================================
+ Coverage   87.01%   87.04%   +0.02%     
==========================================
  Files          73       73              
  Lines        8305     8324      +19     
==========================================
+ Hits         7227     7246      +19     
  Misses        885      885              
  Partials      193      193              


Comment on lines +244 to +258
// Fast path: unexported non-types never have a valid object path
_, isTypeName := obj.(*types.TypeName)
if !obj.Exported() && !isTypeName {
	p.objPathCache[obj] = ""
	return ""
}

// Fast path: package-level objects have a simple path (just the name)
if pkg := obj.Pkg(); pkg != nil {
	if pkg.Scope().Lookup(obj.Name()) == obj {
		path = objectpath.Path(obj.Name())
		p.objPathCache[obj] = path
		return path
	}
}

I'll admit that I don't remember the actual implementation of the object path encoder, but it feels like these fast paths should just exist inside the encoder?

if err != nil {
	path = ""
}
p.objPathCache[obj] = path

I'm not sure whether the cache should just live inside the encoder too, so that all callers get the performance boost (unless upstream doesn't want to add the extra memory footprint?).

If so, I'm not opposed to maintaining a cache here :) (I'll check whether this significantly increases memory consumption in our internal Go repo too.)

@yuxincs yuxincs (Contributor) commented Jan 26, 2026

This is nice! I left two comments more as discussion points than as requests; let me know what you think!
