[Feature Request]: Add the "Output steps metrics" transform to HOP #3003
Comments
A workaround is to get this information after the execution with the "Execution Information" transform in another pipeline.
It would be fantastic to have this functionality; just yesterday I needed it.
Matt, is reading the "Execution Information" data already possible in 2.7?
Yes, this is already possible. There's a whole perspective to explore prior executions, but this information is also available through the transform.
Matt, do you have any example pipelines using the Execution Information transform that you can send me? I confess I haven't been able to use it, as I don't understand how it should be used, so an example would be of great help.
I would still like to have the same functionality as the transformation step statistics component. So far I haven't seen a component with similar functionality in hop, one that can report the inserts, updates, reads, etc. of the Insert/Update component.
There is no transformation step statistics component!!
Google translate says: "Or do you want to implement the same function as the conversion step statistics component? At present, hop has not seen any components with similar functions, which can detect the insertion, update, reading, etc. of insert and update components." |
This is one of the great features of PDI; several simplifications were implemented using it. Getting metrics is not merely informational, but critical for embedded transforms that process many rows in a run. A standalone Output Steps Metrics monitor step can then perform error handling and detect whether there was any traffic on the error hops of various steps that would otherwise simply log the error. Otherwise there will be too many cross-connects, blocks, and merge-based aggregations needed to achieve the same thing. Without this, Hop will lose much of its power. We are no longer processing data in files and records, but rather JSON messages with nested data collections, and with PDI it works!
Did you look at the Execution Information perspective and the Execution Information transform?
I reviewed the Execution Information transform. It seems to read from the audit location, but it is not clear whether that can be directly tied to a currently running transform and its data. A Pipeline Log / Pipeline Probe combination seems like an option, but can both the log and the probe be set up in the same pipeline, with both metadata types configured against the same pipeline? Would that be a single execution of the pipeline feeding into both inputs in the same pipeline execution? This is a valid scenario if the input is just one row with a large JSON message that gets split into rows processed during pipeline execution. Ultimately, the error handler that examines the error-hop step metrics may need to get this first input message and take some action based on the error metrics.
What would you like to happen?
In PDI/Kettle, there was a Transformation step called "Output steps metrics", which is missing from the HOP transforms.
This step is configured with a list of steps from the same Transformation and with field names for a fixed set of metrics.
During execution, this step outputs these metrics once the listed steps have finished.
Please add this step as a transform to HOP.
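For illustration, here is a minimal sketch of the kind of per-transform metrics such a step could surface, written against Hop's pipeline engine API as I understand it. The names IPipelineEngine, IEngineComponent, findComponent() and the getLines*() accessors are assumptions taken from the Hop code base, not a finished design; exact signatures may differ.

```java
// Rough sketch only: read per-transform metrics from a pipeline once the listed
// transforms have finished. The engine types and accessors below are assumed from
// the Hop engine API (org.apache.hop.pipeline.engine); treat them as illustrative.
import java.util.List;

import org.apache.hop.pipeline.PipelineMeta;
import org.apache.hop.pipeline.engine.IEngineComponent;
import org.apache.hop.pipeline.engine.IPipelineEngine;

public class OutputStepsMetricsSketch {

  /** Print one metrics line per monitored transform (copy 0). */
  public static void printMetrics(
      IPipelineEngine<PipelineMeta> pipeline, List<String> monitoredTransforms) {
    for (String name : monitoredTransforms) {
      IEngineComponent component = pipeline.findComponent(name, 0);
      if (component == null) {
        continue; // transform name not present in this pipeline
      }
      System.out.printf(
          "%s: read=%d written=%d input=%d output=%d updated=%d errors=%d%n",
          name,
          component.getLinesRead(),
          component.getLinesWritten(),
          component.getLinesInput(),
          component.getLinesOutput(),
          component.getLinesUpdated(),
          component.getErrors());
    }
  }
}
```

The requested transform would emit the same fixed set of counters as output rows (one row per listed transform) instead of printing them, so downstream transforms can act on them, e.g. for the error-handling scenario described in the comments above.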
Issue Priority
Priority: 3
Issue Component
Component: Transforms