So, where does this title come from? 🤔
Schedule-triggered flows are about to become immensely more useful. Why is that?
Spring '23 is removing the 2,000 elements executed limit on flow interviews. If you have followed me long enough, you will know I sparked a debate in 2021 about how schedule-triggered flows should ideally be structured. The challenge I faced back then was related more to record-locking issues than to the 2,000-element limit.
What is the 2,000 elements executed limit? 🔢
When a flow executes, it goes down a particular path. It reaches a decision element, turns left, and executes 2 elements there. It then enters a loop and processes 5 records. With 3 elements inside the loop, that is another 15 elements executed. All of these add up to a running total for each flow interview. When a particular flow interview goes over 2,000, it throws an error. Please note that we do not count the elements on the canvas here; we count the elements on the execution path.
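To make the counting concrete, here is the walkthrough above as plain arithmetic. This is purely illustrative: Flow keeps this tally internally, and the element counts are the assumed numbers from the example, not a real API.

```python
# Hypothetical tally mirroring the walkthrough above. Flow tracks this
# internally; we just reproduce the arithmetic.
executed = 0
executed += 2                 # two elements on the decision's left branch

records_in_loop = 5           # the loop processes 5 records
elements_per_loop_pass = 3    # 3 elements inside the loop body
executed += records_in_loop * elements_per_loop_pass  # 15 more

print(executed)               # 17 executed so far in this one interview
assert executed <= 2_000      # the old per-interview cap
```

Every element traversed on the path, including repeated passes through a loop body, adds to the same per-interview total.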
The Achilles heel of Schedule Triggered Flows: 🦵
Schedule-triggered flows are prone to record-locking issues because you cannot control the sequence in which they process records. They select what to process first in no guaranteed order.
What is a locking error? 💻
When Salesforce processes a record, it momentarily locks the related records and makes them unavailable for processing. For example, let's say you want to update an Opportunity. The related Account and Contact records will be locked during this transaction. When multiple updates happen simultaneously, one of your updates may hit a record another transaction has locked, and it will fail with a locking error. Why do schedule-triggered flows execute multiple updates at the same time? Because they are batched.
What is batching? ❔
Salesforce executes certain operations in default batches of 200, and in some contexts the batch size is a parameter you can change. Where do you see this setting? When you import 1,000 records using the Data Loader, you can change the batch size in its settings. You can also set the batch size when you build a scheduled path in a record-triggered flow. You don't have this setting in schedule-triggered flows. If 200 or more records meet your flow's start-element entry conditions, 200 flow interviews will be batched into one transaction.
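A quick sketch of what that batching looks like. The record IDs and the chunking function are assumptions for illustration; Salesforce does this internally and does not expose it as an API for schedule-triggered flows.

```python
# Illustrative only: how 1,000 eligible records become 5 transactions of
# up to 200 flow interviews each.
def chunk(records, batch_size=200):
    """Yield successive batches; each batch runs as one transaction."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

# 1,000 records that meet the start element's entry conditions
eligible = [f"record-{i:04d}" for i in range(1_000)]

batches = list(chunk(eligible))
print(len(batches))      # 5 transactions
print(len(batches[0]))   # 200 interviews in the first one
```

Because all 200 interviews in a batch commit in the same transaction, updates that touch the same parent record can collide with a parallel batch and trigger the locking error described above.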
This is where the trouble starts.
There are two ways you can mitigate the locking-error risk:
1️⃣ Decrease the batch size
2️⃣ Sort the records so those related to the same record are processed together.
When executing a schedule-triggered flow, you don’t have these methods available.
What can you do? ✅
You can set up your schedule-triggered flow to run on the parent record rather than the child. This will ensure that you process all the opportunities related to one account together.
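The parent-first idea can be sketched as grouping children under their parent so one interview handles them all in a single pass. The field names and amounts below are assumptions; the point is only that no two interviews touch the same Account.

```python
# Hypothetical illustration: run the flow on Accounts, and let each
# interview process all Opportunities of that one Account together.
from collections import defaultdict

opportunities = [
    {"Id": "0061", "AccountId": "001A", "Amount": 100},
    {"Id": "0062", "AccountId": "001A", "Amount": 250},
    {"Id": "0063", "AccountId": "001B", "Amount": 400},
]

by_account = defaultdict(list)
for opp in opportunities:
    by_account[opp["AccountId"]].append(opp)

# One interview per Account: e.g. roll up the total amount in one go,
# so no other interview ever needs to lock the same Account.
totals = {acct: sum(o["Amount"] for o in opps)
          for acct, opps in by_account.items()}
print(totals)  # {'001A': 350, '001B': 400}
```

Grouping by parent is what the looping-inside-the-flow approach achieves declaratively, and it is exactly the pattern the old element limit used to make impractical.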
Why was that not feasible before? ❌
When you looped through the related records, and sometimes went one level further and looped through the related records of those records, you were almost guaranteed to hit the dreaded 2,000+ elements executed error. This used to be a typical use case where I recommended switching to an Apex solution. Now you don't have this problem.
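Some back-of-the-envelope math shows how quickly nested loops used to exhaust the cap. All counts here are assumed for illustration: one Account interview looping over its Opportunities and, one level down, over their line items.

```python
# Assumed counts for a single parent-level flow interview.
opportunities_per_account = 300
elements_per_opportunity = 4       # loop pass plus a few elements inside
line_items_per_opportunity = 2
elements_per_line_item = 3

executed = opportunities_per_account * (
    elements_per_opportunity
    + line_items_per_opportunity * elements_per_line_item
)
print(executed)  # 3000 executed elements, well past the old 2,000 cap
```

Even these modest record counts land 50% over the old limit, which is why removing it makes the parent-first pattern viable declaratively.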
Where is this useful? 💡
You can set up complex roll-up summaries that you don’t need to update immediately. Instead, run these at night when the kids are sleeping.
You can create and update custom object records to flatten your data for easier reporting. For example, if you could not get specific data to show next to each other on dashboards, or could not create complex comparisons in reports, you can now facilitate this via a nightly schedule-triggered flow job.
The schedule-triggered flow you see above looped through more than 1,000 opportunity records in one flow interview: that's 2,000+ elements executed.
This is complex territory. I may have left out important points and even made errors in my evaluation. So please comment below and help me get a fruitful debate started.
This post was originally published on LinkedIn on December 13th, 2022.
Read the previous post: What Is The Vision For Flow Testing?