[PDI-20383] - Adding runconfig options to be passed through pan command #10075
rmansoor wants to merge 1 commit into pentaho:master from
Conversation
Analysis Details: 49 Issues | Coverage and Duplications | Project ID: org.pentaho.di:pdi
Note: Frogbot also supports Contextual Analysis, Secret Detection, IaC and SAST Vulnerabilities Scanning. These features are included as part of the JFrog Advanced Security package, which isn't enabled on your system.
jonjarvis left a comment
Here's a first-pass review of this refactoring. I don't believe this will work for the Spark use case. Also, it might make sense to refactor the different execution models (local, remote, clustered) into their own classes entirely and make them available through a @ServiceProvider instead of via an ExtensionPoint.
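To illustrate the suggestion above: each execution model could implement a common interface and be discovered through `java.util.ServiceLoader` (the standard mechanism behind service-provider registration) instead of an ExtensionPoint. The interface and class names below are hypothetical, a minimal sketch only, not the actual PDI API:

```java
import java.util.ServiceLoader;

// Hypothetical SPI: each execution model (local, remote, clustered) would
// implement this interface and be registered under
// META-INF/services/TransExecutionModel.
interface TransExecutionModel {
  String name();                                   // e.g. "local", "remote", "clustered"
  boolean canHandle( String runConfigurationType );
}

// Hypothetical local implementation; remote/clustered would look similar.
class LocalExecutionModel implements TransExecutionModel {
  public String name() { return "local"; }
  public boolean canHandle( String type ) { return "local".equals( type ); }
}

class ExecutionModelRegistry {
  // Return the first registered model that claims the run configuration type,
  // or null when none matches (no provider registered).
  static TransExecutionModel find( String type ) {
    for ( TransExecutionModel model : ServiceLoader.load( TransExecutionModel.class ) ) {
      if ( model.canHandle( type ) ) {
        return model;
      }
    }
    return null;
  }
}
```

The delegate would then ask the registry for a model instead of hard-coding the local/remote/clustered branches.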
```java
/**
 * Helper method to check if a string is empty.
 */
private boolean isEmpty(String str) {
```
Use org.pentaho.di.core.util.Utils.isEmpty( String ) instead of re-writing the function.
Good catch. I will update the code with your recommendation.
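For reference, the fix is just to delegate to the core utility instead of the private helper. Since the Pentaho classes aren't on the classpath here, this sketch reproduces the null-or-zero-length semantics of `Utils.isEmpty( String )` locally (an assumption about its behavior, not a copy of it):

```java
class StringUtilSketch {
  // Mirrors the assumed semantics of org.pentaho.di.core.util.Utils.isEmpty( String ):
  // a string is "empty" when it is null or has zero length.
  static boolean isEmpty( String str ) {
    return str == null || str.length() == 0;
  }
}
```

With the real class available, the private helper is deleted and every call site uses `Utils.isEmpty( str )` directly.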
```diff
 * Created by bmorrise on 3/16/17.
 */
-@ExtensionPoint( id = "RunConfigurationRunExtensionPoint", extensionPointId = "SpoonTransBeforeStart",
+@ExtensionPoint( id = "RunConfigurationRunExtensionPoint", extensionPointId = "TransBeforeStart",
```
If you rename this, you're also going to have to edit this:
I will update the code to reflect the new name for the extension point
```java
}
// ...
// Running Remotely
```
This appears to be a dead comment because of the refactoring
Is this dead code now? Is it fully replaced by the "EnhancedPanCommandExecutor"?
```java
try {
  // Trigger the extension point that handles run configurations
  ExtensionPointHandler.callExtensionPoint( getLog(), KettleExtensionPoint.SpoonTransBeforeStart.id,
    new Object[] { executionConfiguration, trans.getTransMeta(), trans.getTransMeta(), repository } );
```
I believe this is the extension point that will execute under a run configuration if one exists. The behavior below this will have to change so that it doesn't run if the extension point calls "execute" to run on Spark. Otherwise, the lines of code following this try block will always execute the transformation locally.
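One way to address the concern above, sketched with hypothetical names: let the extension point mark the execution as handled (for example, via a flag on the execution configuration) and only fall through to the local path when it did not. This is a minimal stand-in for the real delegate and extension-point classes:

```java
class ExecutionConfigSketch {
  // Hypothetical flag a run-configuration extension point could set once it
  // has already executed the transformation (e.g. on Spark).
  private boolean handledByRunConfiguration = false;

  void markHandled() { handledByRunConfiguration = true; }
  boolean isHandled() { return handledByRunConfiguration; }
}

class DelegateSketch {
  // extensionPointExecutes stands in for "the registered extension point ran
  // the transformation itself" (the Spark case in this review).
  static String run( ExecutionConfigSketch config, boolean extensionPointExecutes ) {
    // Stand-in for ExtensionPointHandler.callExtensionPoint( ... )
    if ( extensionPointExecutes ) {
      config.markHandled();
    }
    // Guard: skip the local path when the extension point already executed.
    if ( config.isHandled() ) {
      return "executed-by-run-configuration";
    }
    return "executed-locally";
  }
}
```

With a guard like this, the code after the try block becomes the fallback rather than the unconditional local execution path.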
```java
try {
  ExtensionPointHandler.callExtensionPoint( log, KettleExtensionPoint.SpoonTransBeforeStart.id, new Object[] {
    executionConfiguration, transMeta, transMeta, repository
```
I believe this is the extension point that actually executes under a run configuration (if it exists). So for the spark use case, this is what would actually "execute" the Spark transformation.
```java
return executeClustered( transMeta, executionConfiguration );
// ...
} else {
  throw new KettleException( BaseMessages.getString( PKG, "PanTransformationDelegate.Error.NoExecutionTypeSpecified" ) );
```
Wouldn't this exception always get thrown for a Spark run configuration?
```java
for (String key : paramMap.keySet()) {
  transMeta.setParameterValue(key, Const.NVL(paramMap.get(key), ""));
}
transMeta.activateParameters();
```
Do the parameters have to be activated for the spark execution model?
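For context on the question above: the hunk copies caller-supplied parameter values onto the transformation and then activates them, where `Const.NVL` substitutes a default when the value is null, and `activateParameters()` is (as I understand it) what makes the staged values visible to the run. A self-contained sketch of that two-phase pattern, with the Pentaho classes replaced by plain Java stand-ins:

```java
import java.util.HashMap;
import java.util.Map;

class ParamSketch {
  private final Map<String, String> pending = new HashMap<>();  // set but not yet active
  private final Map<String, String> active = new HashMap<>();   // visible to the run

  // Mirrors the assumed semantics of Const.NVL( value, def ):
  // return def when value is null, otherwise value.
  static String nvl( String value, String def ) {
    return value != null ? value : def;
  }

  void setParameterValue( String key, String value ) { pending.put( key, value ); }

  // Stand-in for transMeta.activateParameters(): staged values take effect here.
  void activateParameters() { active.putAll( pending ); }

  String getParameter( String key ) { return active.get( key ); }
}
```

The point of the question is that if Spark execution reads parameters before `activateParameters()` runs, the values would still be in the pending stage and effectively invisible.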
❌ Build failed in 3m 10s
Build command: mvn clean verify -B -e -Daudit -Djs.no.sandbox
❗ No tests found!
Errors: Filtered log
ℹ️ This is an automatic message







No description provided.