Issue with addToLoopCount method #19
Hi darko-dj,
@fbouzeraa You posted an issue (#27) on a slightly different point: yours is about differentiating loops triggered by different operations, while the OP is concerned with differentiating loops when the same trigger operation runs on more than 200 records.
We are facing the same issue: for bulk job processing, after the first chunk of 200 records the trigger is bypassed (I modified the framework to use TriggerHandler.bypass instead of throwing an exception). I am thinking of two options:
1. Compare the Trigger.new record set in each invocation with the previous chunk (if not null), and bypass the trigger if it is the same; otherwise reset the max loop count.
2. Check Trigger.new.size() in the individual handler; only if it is less than 200 call setMaxLoopCount, else clearMaxLoopCount.
I like option 1 better. What do you guys think? Any alternate approach?
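For what it's worth, option 2 could be sketched roughly like this. This is only an illustration, not framework code: `AccountTriggerHandler` is a hypothetical handler name, and the assumption is that a chunk of fewer than 200 records means the transaction was not split into multiple trigger invocations.

```apex
// Hypothetical handler sketch for option 2: only cap the loop count when the
// chunk is smaller than 200 records (i.e. the DML was not split into chunks).
public class AccountTriggerHandler extends TriggerHandler {

  public AccountTriggerHandler() {
    if (Trigger.new != null && Trigger.new.size() < 200) {
      // Small transaction: safe to enforce a max loop count per handler.
      this.setMaxLoopCount(1);
    } else {
      // Bulk transaction split into 200-record chunks: the handler-level
      // counter would misfire across chunks, so leave it uncapped.
      this.clearMaxLoopCount();
    }
  }
}
```

The obvious drawback, as noted below, is that an uncapped bulk run loses recursion protection entirely, which is why a per-record approach may be preferable.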
Yeah, I would recommend keeping a set of record ids you've already processed. Or a map of record id -> number of times processed, and not process the record if it's at max loop count. Depending on what you're trying to accomplish, it's possible you don't even need to worry about max loop count and just using a set of record ids could do the trick.
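The set-of-record-ids idea could look something like this. A minimal sketch only; `AccountTriggerHandler`, `processedIds`, and the `afterUpdate` logic are hypothetical names for illustration, not part of the framework:

```apex
// Hypothetical handler illustrating the suggestion above: remember which
// record Ids have already been handled in this transaction and skip them,
// instead of relying on the handler-level max loop count.
public class AccountTriggerHandler extends TriggerHandler {

  // Static so the set survives re-entrant trigger invocations (and 200-record
  // chunks) within the same transaction.
  private static Set<Id> processedIds = new Set<Id>();

  public override void afterUpdate() {
    List<Account> toProcess = new List<Account>();
    for (Account acc : (List<Account>) Trigger.new) {
      if (!processedIds.contains(acc.Id)) {
        processedIds.add(acc.Id);
        toProcess.add(acc);
      }
    }
    // ... run the actual business logic on toProcess only ...
  }
}
```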
Hi @darko-dj, yes, you're right, my solution doesn't cover the case you describe (many batches of 200 records). We should key on record Ids, as we always did before this framework. It would be best to change the framework's bypass API to use this principle.
Hi Kevin,
I believe there's an issue with the addToLoopCount method, but only when the number of records changed is greater than 200.
It's easier to explain with an example, so let's assume the following:
Now if we run a DML operation on 600 Account records, it will result in 3 trigger batches (in a trigger context, records are processed in chunks of 200):
Therefore, I don't think we can keep track of execution counts per handler itself. It would need to be per record per handler I believe.
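One possible shape for a per-record, per-handler count is sketched below. This is only an illustration of the idea, not framework code; `PerRecordLoopCounter`, the key scheme, and `increment` are all assumed names:

```apex
// Sketch: track loop counts per record per handler, rather than one counter
// per handler as addToLoopCount does today.
public class PerRecordLoopCounter {

  // Outer key: handler name; inner map: run count per record Id.
  private static Map<String, Map<Id, Integer>> loopCounts =
      new Map<String, Map<Id, Integer>>();

  // Records one more run for this record under this handler and returns true
  // if the record has now exceeded maxCount runs.
  public static Boolean increment(String handlerName, Id recordId, Integer maxCount) {
    if (!loopCounts.containsKey(handlerName)) {
      loopCounts.put(handlerName, new Map<Id, Integer>());
    }
    Map<Id, Integer> counts = loopCounts.get(handlerName);
    Integer runs = (counts.containsKey(recordId) ? counts.get(recordId) : 0) + 1;
    counts.put(recordId, runs);
    return runs > maxCount;
  }
}
```

With this shape, each 200-record chunk only increments counts for the records it actually contains, so a 600-record DML no longer trips the limit on the second chunk.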
Cheers