HIVE-29451: Optimize MapWork to configure JobConf once per table #6317
hemanthumashankar0511 wants to merge 2 commits into apache:master
Conversation
I believe this patch is fine now, LGTM. One note: [the referenced method] is a bad public getter for a mutable collection, as is its setter counterpart.
ayushtkn left a comment:
I have a question here:
Set<TableDesc> processedTables = new HashSet<>();
Why are we storing the TableDesc object? Can't we just do it via tableDesc.getFullTableName()? It is like storing just the table name vs. the whole TableDesc. The equals & hashCode seem comparatively heavier than a String's.
I stuck with the TableDesc object to be safe with things like self-joins (e.g., table A join table A). If we just checked the table name, we'd incorrectly skip the second configuration. Also, since the planner reuses the exact same object instance across partitions, the Set check is mostly just comparing memory addresses, which is actually faster than hashing and comparing Strings.
I agree with using as light a comparison as possible, so I believe it's worth exploring String comparison (given that the maximum-collision case is comparing the same TableDesc object as many times as there are partitions).
btw @hemanthumashankar0511, you mentioned self-joins and safety: how is that a risk? In the case of a self-join, is "we'd incorrectly skip the second configuration" actually true? Does it mean there are different TableDesc objects that should each be handled by this configuration method?
Regarding "the Set check is mostly just comparing memory addresses": this can be true if you mean "==" comparison, see hive/ql/src/java/org/apache/hadoop/hive/ql/plan/TableDesc.java, lines 236 to 238 in 7060d94. So from that point of view, this is fine.
However, regarding "This is actually faster than hashing and comparing Strings": I think hashing will happen anyway in a hash-based collection, and it is within the same bucket that equals() plays a role; correct me if I'm wrong.
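Not part of the patch or the conversation, just a small runnable illustration of the point above: HashSet.add() calls hashCode() on every insertion, equals() is only consulted for entries that land in the same bucket, and HashMap checks reference equality before equals(), so re-adding the same instance never invokes equals() at all. The Key class is a hypothetical stand-in for TableDesc (identity-based equals).

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetCostDemo {
  // Hypothetical stand-in for TableDesc: identity-based equals, call counting.
  static class Key {
    static int hashCalls = 0, equalsCalls = 0;
    @Override public int hashCode() { hashCalls++; return 42; }  // constant hash: forces one bucket
    @Override public boolean equals(Object o) { equalsCalls++; return this == o; }
  }

  public static void main(String[] args) {
    Set<Key> seen = new HashSet<>();
    Key sameInstance = new Key();
    for (int i = 0; i < 5; i++) {
      // Re-adding the exact same instance: hashCode() still runs every time,
      // but the internal reference check (==) short-circuits equals().
      seen.add(sameInstance);
    }
    // A different instance in the same bucket: now equals() is actually invoked.
    seen.add(new Key());
    System.out.println("hashCode calls: " + Key.hashCalls);   // 6
    System.out.println("equals calls:   " + Key.equalsCalls); // 1
  }
}
```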



What changes were proposed in this pull request?
This PR optimizes the configureJobConf method in MapWork.java to eliminate redundant job configuration calls during the map phase initialization.
Modified File: ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
Logic Change: Introduced a Set that tracks which TableDesc objects have already been processed during the partition iteration.
Mechanism: Before invoking PlanUtils.configureJobConf(tableDesc, job), the code now checks whether that TableDesc has already been processed (see the sketch after this list).
Result: The configuration logic, which includes expensive operations such as loading StorageHandlers via reflection, now executes only once per unique table rather than once per partition.
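A minimal sketch of the deduplication described above, as it would sit inside MapWork.configureJobConf(JobConf job). The names (aliasToPartnInfo, PartitionDesc, TableDesc, PlanUtils) come from the existing Hive code; the exact patch may differ in detail.

```java
// Sketch: deduplicate per-table configuration inside MapWork.configureJobConf(JobConf job).
Set<TableDesc> processedTables = new HashSet<>();
for (PartitionDesc partition : aliasToPartnInfo.values()) {
  TableDesc tableDesc = partition.getTableDesc();
  // add() returns false when this TableDesc has already been seen, so the expensive
  // per-table work (StorageHandler loading via reflection, credential handling)
  // runs once per unique table instead of once per partition.
  if (processedTables.add(tableDesc)) {
    PlanUtils.configureJobConf(tableDesc, job);
  }
}
```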
Why are the changes needed?
Performance Bottleneck in Job Initialization: Currently, the MapWork.configureJobConf method iterates over aliasToPartnInfo.values(), which contains an entry for every single partition participating in the scan. Inside this loop, it calls PlanUtils.configureJobConf for every partition.
The Issue:
Redundancy: If a query reads 10,000 partitions from the same table, PlanUtils.configureJobConf is called 10,000 times with the exact same TableDesc.
Expensive Operations: PlanUtils.configureJobConf invokes HiveUtils.getStorageHandler, which uses Java reflection (Class.forName) to load the storage handler class. Repeatedly performing reflection and credential handling for thousands of partitions of the same table adds significant, avoidable overhead to the job setup phase (the pre-patch loop shape is sketched below).
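For context, a sketch of the per-partition pattern that causes the redundancy; the actual pre-patch body of MapWork.configureJobConf may differ in detail.

```java
// Pre-optimization shape (sketch): per-table configuration runs once per partition,
// even when thousands of partitions share the same TableDesc.
for (PartitionDesc partition : aliasToPartnInfo.values()) {
  // Each call ends up in HiveUtils.getStorageHandler -> Class.forName(...) plus
  // credential handling, repeated for every partition of the same table.
  PlanUtils.configureJobConf(partition.getTableDesc(), job);
}
```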
Impact of Fix:
Complexity Reduction: Reduces the configuration complexity from O(N) (where N is the number of partitions) to O(T) (where T is the number of unique tables).
Scalability: Significantly improves job startup time for queries scanning large numbers of partitions.
Safety: The worst-case scenario (single-partition reads) incurs only the negligible cost of a HashSet instantiation and a single add operation, preserving existing performance for small jobs.
Does this PR introduce any user-facing change?
No. This is an internal optimization to the MapWork plan generation phase. While users may experience faster job startup times for queries involving large numbers of partitions, there are no changes to the user interface, SQL syntax, or configuration properties.
How was this patch tested?
The patch was verified using local unit tests in the ql (Query Language) module to ensure no regressions were introduced by the optimization.
Build Verification: Ran a clean install on the ql module to ensure compilation and dependency integrity.
mvn clean install -pl ql -am -DskipTests
Unit Testing: Executed relevant tests in the ql module, specifically targeting the planning logic components to verify that MapWork configuration remains correct.
mvn test -pl ql -Dtest=TestMapWork
mvn test -pl ql -Dtest="org.apache.hadoop.hive.ql.plan.*"