Why Do Tasks Continue Beyond Set Workflow Timeout?

This topic was automatically generated from Slack. You can find the original thread here.

Why aren’t tasks killed if they exceed the timeout? I have a workflow timeout set to 2min but see events in the event history that last > 10hrs

Pipedream workflows have a default execution limit that varies with the trigger type. If your code exceeds this limit, Pipedream throws a Timeout error and stops your workflow.

Timeouts are usually specific to your workflow's code rather than a platform issue: often you're calling a third-party API that doesn't respond in the time you expect, or processing a large amount of data that doesn't finish before you hit the execution limit. Events in the event history that last longer than the configured timeout can likewise be due to a slow third-party API response or heavy data processing.

If you expect your code to take a long time to run, you can raise the execution limit in your workflow's settings. If you continue to experience this issue, I would recommend reaching out to the Pipedream support team at https://pipedream.com/support/ for further assistance.

are you using delay, retry, or suspend functionality in your workflow?

No

some are failing in a python step

Would you mind visiting https://pipedream.com/support and filling out that ticket with the details, linking to one of the events where you’re seeing it exceed 10 hours?

fwiw I also had some issues over the past day or two where workflows were strangely exceeding their timeout limits

Mine seem to be localized to a single workflow… which can run long

are you using auto-retry on this workflow?

Yes

I believe that explains it and it sounds like a UI bug on our part. That duration shows the delta between the original event and the last retry, which could happen up to 10 hours later

Ahhh, gotcha. The “credit” count didn’t match up either

you’ll incur a credit for every retry — would that explain it?

Well I meant if it was 10hrs long, it’d be 36K credits

Credit count seems plausible

but the workflow is not running for that duration, that’s the bug. It’s likely only running for a few seconds on each retry

but between the first and last execution, 10 hours elapsed. The workflow wasn’t running all 10 of those hours. We pause and retry on a backoff
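To see how the delta between the first and last attempt can span roughly 10 hours while the workflow itself only computes for seconds, here is some illustrative arithmetic. The retry count, base delay, and per-attempt runtime below are hypothetical numbers chosen to make the point, not Pipedream's actual retry schedule:

```python
# Hypothetical exponential-backoff schedule: each retry waits twice as
# long as the previous one, while each attempt runs only a few seconds.
RETRIES = 8            # hypothetical number of retries
BASE_DELAY_S = 140     # hypothetical first backoff delay, in seconds
STEP_RUNTIME_S = 3     # hypothetical runtime of each attempt, in seconds

# Total paused time: base * (1 + 2 + 4 + ... + 2^(RETRIES-1))
backoff_total_s = sum(BASE_DELAY_S * 2 ** i for i in range(RETRIES))
# Total compute time: original attempt plus each retry
compute_total_s = STEP_RUNTIME_S * (RETRIES + 1)

print(f"wall-clock span: ~{backoff_total_s / 3600:.1f} h")  # ~9.9 h
print(f"actual compute:  {compute_total_s} s")              # 27 s
```

The doubling delays dominate: nearly all of the ~10-hour window is paused waiting between attempts, which is why the event-history duration (and not the credit count) looks so large.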

Got it, makes sense