Why do I see highly variable runtime on a workflow, even when I $end it in the first step?

This topic was automatically generated from Slack. You can find the original thread here.

timk : I’ve got a pipeline that’s nearing my free-tier daily runtime limit. When I look at the workflow’s event history, I see that even when I validate an event body in my first step and kick a bad event out with $end, performing no additional steps or processing, the event still runs for a wildly variable amount of time, anywhere from 2s to 17s. Is there any way to pare those runtimes down, or is that a limitation of the underlying architecture?
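
For reference, the early-exit validation timk describes would look roughly like the sketch below. This is a minimal illustration only, assuming a Node.js code step where $end() is available to halt the workflow; the validated fields are hypothetical.

```javascript
// Hypothetical first step: validate the incoming event body and end the
// workflow immediately for bad events, so no later steps run.
async (event, steps) => {
  const body = event.body

  // The fields checked here are made up for illustration; validate whatever
  // shape your real events should have.
  if (!body || typeof body.id === "undefined") {
    // $end() stops the workflow for this event; the string appears as the
    // end reason in the event's execution details.
    $end("Invalid event body: missing id")
  }

  // Valid events fall through and the remaining steps execute as usual.
  return body
}
```

Even with an early exit like this, each run still pays any cold-start cost of spinning up the execution environment, which is what the rest of the thread digs into.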

Dylan Sather (Pipedream) : Would you mind clicking the Share button in the top-right of the workflow and sharing it with dylan@pipedream.com?

We do a bit of work when a workflow goes “cold” and we have to spin up a new execution environment, but I’d like to investigate the 17-second run.

timk : Okay, I’ve added you. You can see several 15s+ runs in the event log right now; they don’t seem uncommon. Thanks for taking a look.

Dylan Sather (Pipedream) : Thanks for sharing. I’m still looking into this and may have to get back to you tomorrow, if that’s OK. For now, I’ve doubled your compute time quota so you have a little breathing room.

Dylan Sather (Pipedream) : Could you try increasing the memory of that workflow in its Settings tab? I’d like to see if that helps.

timk : I hadn’t noticed the memory option before. :thumbsup: Okay, I’ve bumped it up to 512 MB.

timk : Increasing the memory setting has moved execution times as low as just under 1s, now topping out just over 8s, with most runs in the ~2s range. My gut feeling is that it cut execution time in half. Huh, I wouldn’t have expected that correlation.

Anyhow, I think my invocations and compute time are back under quota. Thanks, Dylan!

Dylan Sather (Pipedream) : Yes, adding memory gives you a proportional increase in CPU. We need to make that clearer, but it’s a nice side benefit!