This topic was automatically generated from Slack. You can find the original thread here.
Hello. I’m on the professional tier and have a task that occasionally needs more than the max execution time. Is it possible to request or pay for an increase in the max execution time?
If you visit your workflow’s Settings tab, you’ll see the option to increase your execution timeout, up to 750 seconds (12.5 min). Does that work for you?
That’s what I’m using now.
I see, thanks. Today the limit for a specific execution is capped at 750 seconds. Are you able to split the work into smaller chunks and process the data in batches?
Usually the task takes a few minutes. Occasionally it needs just a bit more than 12.5 minutes, and in those instances I can’t process the entire task through Pipedream.
There are small tricks you may be able to use to reduce refactoring, depending on how the workflow is executed. For example, since the 750s timeout applies to individual executions, you can use $.flow.delay to pause between steps and persist the state of the workflow across executions: Delaying a workflow. Since delaying the workflow triggers a new execution, the 750s timeout will apply to the steps after the delay. Just an idea.
Note that you’d be charged an invocation for each execution (before and after each delay), but again, this can be simpler than splitting the work across multiple workflows.
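As a rough sketch of that pattern: the `$` object below is a stand-in so the logic can run outside Pipedream (in a real code step `$` is injected, and `$.flow.delay(ms)` actually ends the current execution and resumes the workflow in a new one). The chunk size and item list are illustrative assumptions, not part of the original workflow.

```javascript
// Sketch of the $.flow.delay pattern, assuming a mocked $ so it runs standalone.
const $ = {
  flow: {
    // The real $.flow.delay(ms) pauses the workflow and resumes it in a
    // fresh execution after the delay, resetting the 750s timeout.
    delay(ms) {
      console.log(`pausing workflow for ${ms} ms`);
    },
  },
};

// Hypothetical work items; in practice, rows from the parsed file.
const items = Array.from({ length: 250 }, (_, i) => i);
const CHUNK = 100;

// Process only the first chunk in this execution, keeping it well under 750s.
const processed = items.slice(0, CHUNK);

// Pause; the steps after this point run in a new execution with a fresh timeout.
$.flow.delay(15 * 1000);

console.log(processed.length); // 100
```

The state you need after the delay (e.g. which chunk is next) would be persisted via step exports or a data store, since each execution starts fresh.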
Yeah, I don’t mind any charges.
It would be great to pay extra for a 15 minute invocation.
That’s good feedback. We may move to a model where we apply timeouts to each step instead of the entire workflow, which would drastically increase limits on workflow runtime.
Right on. In my case, it’s a single step that’s consuming 99% of the time.
Step 3 can take under 60 seconds, or, as in the current case, about 13 minutes.
Is there any external HTTP request throttling that occurs from Pipedream? The HTTP requests to the API in step 3 are taking significantly longer from Pipedream than I see elsewhere.
There isn’t any throttling that we or AWS apply to egress traffic from our account (that I’m aware of).
Have you seen $.send.emit? Docs here. After you parse the XLSX file, you can emit each row as a new event to a second workflow. Steps 3-5 would be handled by that second workflow, running once per event. That increases invocation volume but should address the core issue.
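A minimal sketch of that fan-out, with `$` mocked so it runs standalone (in a real code step `$` is injected; the event shape and row fields here are assumptions for illustration):

```javascript
// Sketch of emitting each parsed row as its own event. $ is a stand-in; in a
// real Pipedream code step, $.send.emit() forwards the event to workflows
// subscribed to this one.
const emitted = [];
const $ = {
  send: {
    emit(event) {
      emitted.push(event);
    },
  },
};

// Hypothetical rows from the parsed XLSX file.
const rows = [
  { sku: "A-1", qty: 3 },
  { sku: "B-2", qty: 7 },
  { sku: "C-3", qty: 1 },
];

// One event per row: the second workflow (steps 3-5) is invoked once per
// event, so each row gets its own 750s execution budget.
for (const row of rows) {
  $.send.emit({ raw_event: row });
}

console.log(emitted.length); // 3
```

Since the second workflow processes one row per invocation, a single slow row no longer pushes the whole run past the timeout.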
What region does it run in?
us-east-1 is the region your workflows and sources execute from.