Scheduled Flows: Sometimes Later Is Better

You go through life, and it sometimes feels like drinking from a firehose. The same is true for systems: the work at hand can overwhelm the available resources. Salesforce Scheduled (a.k.a. Schedule-Triggered) flows are an excellent way to defer some of that work and give your system resources a breather. I discussed this solution this past weekend at Midwest Dreamin’ in Minneapolis but didn’t have time for a step-by-step tutorial. Let me describe the use case and the solution in detail here.

Use case:

You need to track key performance indicators for your operation daily and build a history. Out-of-the-box reporting functionality and report snapshots are not powerful enough for you. In addition, you want to perform sophisticated mathematical calculations and, in the future, expand your solution to include details from related object records. You want to avoid rollup fields, because a rollup’s value changes dynamically and you need a fixed daily snapshot. So you decide to track these KPIs for cases daily:

  • Count of Created: Cases created in the previous 24 hours.
  • Count of Modified: Cases modified in the previous 24 hours.
  • Count of Open: Count of open cases at midnight.
  • Count of Closed on Date: Cases closed in the previous 24 hours.
  • Max Wait Open in Hrs: Wait time of the oldest open case at midnight.
  • Average Wait Open in Hrs: Average wait time of all open cases at midnight.

The solution has two essential components:

1) A custom object record that tracks the results

2) A schedule-triggered flow that creates a single record every night

Let me show you:
1) See the screenshot below to review all the Daily Summary Record custom object fields.

You will create this custom object to house the resulting values.

2) Flow solution: The flow solution is very efficient. Many don’t know that selecting an object is optional in a schedule-triggered flow. For this use case, you will use a flow without an object and get all the case records in the org. If you don’t have more than 50K cases, you won’t have a problem with governor limits. If you know you will have more than 50K cases, you will have to split this flow into several flows and/or perform multiple targeted Gets. You will leverage collection filters, so you need only a single Get.
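To make the “one Get, several collection filters” pattern concrete, here is a minimal Python sketch. This is conceptual only: the real solution is declarative Flow, not code, and fetch_all_cases plus the dict keys are hypothetical stand-ins for the Get element and case fields.

    from datetime import datetime, timedelta

    def fetch_all_cases():
        """Hypothetical stand-in for the flow's single Get element."""
        return []   # in the flow, this is the one query against Case

    run_time = datetime.now()                # the flow runs at midnight
    cutoff = run_time - timedelta(hours=24)  # in the flow, this is the previous
                                             # summary record's created date

    # One single Get: all cases, sorted ascending by created date.
    all_cases = sorted(fetch_all_cases(), key=lambda c: c["created"])

    # Three collection filters over the same in-memory list -- no extra queries.
    created_since_last_run = [c for c in all_cases if c["created"] >= cutoff]
    modified_since_last_run = [c for c in all_cases if c["modified"] >= cutoff]
    closed_since_last_run = [c for c in all_cases
                             if c["closed"] and c["closed"] >= cutoff]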


See the screenshot below to review all the elements in this flow.


You will create the elements as follows:

  • Start element: Scheduled daily at midnight; no object and no criteria.
  • Get the previous daily summary record: You only need its created date, which you use to filter for cases processed since the last run.
  • Get all cases in the org, in ascending Created Date order.
  • Collection filter 1: Filter for cases created since the last run.
  • Collection filter 2: Filter for cases modified since the last run.
  • Collection filter 3: Filter for cases closed since the last run.
  • Assignment 1: Assign the counts of the collections to variables.
  • Collection filter 4: Filter for cases open at midnight.
  • Loop over the open cases (see the sketch after this list).
  • Assignment 2: Add 1 to the counter, calculate the case wait, and add it to the total wait variable.
  • If this is the first iteration, capture the current wait as the max wait. This works because the Get is sorted ascending by Created Date (see above), so the first open case is the oldest.
  • Create one single Daily Summary Record.
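Continuing the Python sketch above, here is the loop section. The key trick: because the Get is sorted ascending by created date, the first open case the loop touches is the oldest, so its wait is the maximum. The summary record’s field names are hypothetical.

    # Collection filter 4 plus the loop section.
    open_cases = [c for c in all_cases if c["is_open"]]

    count_open = 0
    total_wait_hrs = 0.0
    max_wait_hrs = 0.0

    for case in open_cases:
        count_open += 1                                          # Assignment 2
        wait_hrs = (run_time - case["created"]).total_seconds() / 3600
        total_wait_hrs += wait_hrs
        if count_open == 1:
            # The Get is sorted ascending by created date, so the first open
            # case in the loop is the oldest one -- its wait is the maximum.
            max_wait_hrs = wait_hrs

    avg_wait_hrs = total_wait_hrs / count_open if count_open else 0.0

    # Create one single Daily Summary Record (field names are hypothetical).
    daily_summary = {
        "Count_of_Created__c": len(created_since_last_run),
        "Count_of_Modified__c": len(modified_since_last_run),
        "Count_of_Open__c": count_open,
        "Count_of_Closed_on_Date__c": len(closed_since_last_run),
        "Max_Wait_Open_in_Hrs__c": max_wait_hrs,
        "Average_Wait_Open_in_Hrs__c": avg_wait_hrs,
    }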

Now you can accumulate this information in your org for as long as you want for reporting and audit purposes. This is a very lean record that won’t consume significant storage.


Enjoy


Hidden Gem in Spring 23: Schedule-Triggered Flow Improvements

I can almost hear you! You read the Unofficial SF Sneak Preview by Adam White, and there are no schedule-triggered flow improvements in the next release.

So, where does this title come from? 🤔

Schedule-triggered flows are about to become immensely more helpful. Why is that?

Spring 23 is removing the 2,000-elements-executed limit on flow interviews. If you have followed me long enough, you know I sparked a debate in 2021 about how schedule-triggered flows should ideally be structured. The challenge I faced then was related more to record-locking issues than to the 2,000-element limit.

What is the 2,000 elements executed limit? 🔢

When a flow executes, it goes down a particular path. Say it reaches a decision element, turns left, and executes 2 elements there. It then enters a loop and processes 5 records; if there are 3 elements in the loop body, that’s another 15 elements executed. All of these add up to a total for each flow interview. When a particular flow interview goes over 2,000 elements, it throws an error. Please note that we don’t count the elements on the canvas; we count the elements on the execution path.
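As a back-of-the-envelope formula: elements executed ≈ elements on the straight path + (records processed × elements inside the loop). A tiny Python sketch of the arithmetic from the example above (the per-loop element counts are illustrative assumptions):

    # Elements executed on the path, not elements on the canvas.
    straight_path = 2          # the two elements after the decision turns left
    loop_records = 5           # records processed by the loop
    loop_body = 3              # elements inside the loop body
    executed = straight_path + loop_records * loop_body
    print(executed)            # 17 for this tiny example

    # The same arithmetic shows how a nightly loop used to blow past the cap:
    print(1000 * 3 > 2000)     # True -> before Spring 23, this interview errored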

Paul McCollum, UXMC, likes to live on the edge. He overcame this limit before by using platform events. You can see his session on Automation Hour here. This method won’t be necessary anymore.

The Achilles’ heel of schedule-triggered flows: 🦵

Schedule-Triggered flows are prone to record-locking issues because you cannot determine the sequence of records they will process. They randomly select what to process first.

What is a locking error? 💻

When Salesforce processes a record, it momentarily locks the related records and makes them unavailable to other transactions. For example, say you want to update an Opportunity: the related Account and Contact records are locked during that transaction. When multiple updates happen simultaneously, one of them may try to touch a locked record, which will not be possible. Why do schedule-triggered flows execute multiple updates at the same time? Because they are batched.

What is batching? ❔

Salesforce executes certain operations in default batches of 200; in some contexts, the batch size is a parameter you can change. Where do you see this setting? When you import 1,000 records using the Data Loader, you can change the batch size in the settings. You can also set the batch size when you build a scheduled path in a record-triggered flow. You don’t have this setting in schedule-triggered flows: if 200 or more records meet your flow’s start element entry conditions, 200 flow interviews will be batched into one transaction.
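Conceptually (a sketch only, not how Salesforce implements it internally), the batching works like this:

    def batches(records, size=200):
        """Group matching records into transactions of up to `size` interviews."""
        for i in range(0, len(records), size):
            yield records[i:i + size]

    # 1,000 records meeting the entry conditions -> 5 transactions of 200.
    matching = list(range(1000))
    print(len(list(batches(matching))))   # 5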

This is where the trouble starts.

There are two ways you can mitigate the locking-error risk:

1️⃣ Decrease the batch size

2️⃣ Sort and process the records related to the same record together.

Neither method is available when executing a schedule-triggered flow.

What can you do? ✅

You can set up your schedule-triggered flow to run on the parent record rather than the child. This will ensure that you process all the opportunities related to one account together.
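Here is a small Python sketch of the idea, with hypothetical account and opportunity data: group the children by their parent, and each “interview” works on exactly one parent’s children, so two interviews never contend for the same account lock.

    from collections import defaultdict

    # Hypothetical data: child opportunities keyed by their parent account.
    opportunities = [
        {"id": "006A", "account_id": "001X", "amount": 100},
        {"id": "006B", "account_id": "001X", "amount": 250},
        {"id": "006C", "account_id": "001Y", "amount": 75},
    ]

    by_account = defaultdict(list)
    for opp in opportunities:
        by_account[opp["account_id"]].append(opp)

    # One "interview" per parent account: all of account 001X's opportunities
    # are processed together, so no other interview holds a lock on 001X.
    for account_id, opps in by_account.items():
        total = sum(o["amount"] for o in opps)    # e.g., a nightly roll-up
        print(account_id, total)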

Why was that not feasible before? ❌

When you looped through the related records, and sometimes went one level deeper to loop through the related records of those records, you were almost guaranteed to hit the dreaded 2,000-elements-executed error. This used to be a typical use case where I recommended going to an Apex solution. Now you don’t have this problem.

Where is this useful? 💡

You can set up complex roll-up summaries that you don’t need to update immediately. Instead, run these at night when the kids are sleeping.

You can create and update custom object records to flatten your data for reporting ease. For example, if you could not get specific data to show next to each other on dashboards or create complex comparisons in reports, you can now facilitate this via a nightly scheduled-triggered flow job.
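For example (continuing the sketch above, with hypothetical object and field names), the nightly flow can write one flat row per account that denormalizes values reports can’t easily show side by side:

    # One flat "reporting" row per account, copying values from related
    # objects into a single custom object record. All names are hypothetical.
    flat_rows = []
    for account_id, opps in by_account.items():
        flat_rows.append({
            "Account__c": account_id,
            "Open_Opportunity_Count__c": len(opps),
            "Total_Pipeline__c": sum(o["amount"] for o in opps),
        })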

The schedule-triggered flow you see above looped through more than 1,000 opportunity records in one flow interview: that’s 2,000+ elements executed.

This is complex territory. I may have left out important points and even made errors in my evaluation. So please comment below and help me get a fruitful debate started.

This post was originally published on LinkedIn on December 13th, 2022.

Read the previous post: What Is The Vision For Flow Testing?
