This topic was automatically generated from Slack. You can find the original thread here.
timk : I’ve got a pipeline that’s nearing my free tier daily runtime limit. When I look at the workflow’s event history, I see that even when I validate an event body in my first step and kick a bad event out with $end, performing no additional steps or processing, the event still runs for a long and wildly-variable amount of time, taking between 2s and 17s to execute. Is there any way to pare those runtimes down, or is that a limitation of the underlying architecture?
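For reference, an early-exit validation step like the one described above might look like the sketch below. The `$end` call is part of Pipedream's code-step runtime; the event shape, field names, and the stand-in `$end` implementation here are assumptions so the logic can run outside Pipedream.

```javascript
// Stand-in for Pipedream's $end(), which halts the workflow early.
// In a real Pipedream code step, $end is provided by the runtime.
function $end(reason) {
  const err = new Error(reason);
  err.pipedreamEnd = true;
  throw err;
}

// Hypothetical validator: kick out events missing the fields we require.
function validateEvent(event) {
  if (!event || typeof event.body !== "object" || event.body === null) {
    $end("Invalid event: missing body");
  }
  if (typeof event.body.id !== "string") {
    $end("Invalid event: body.id must be a string");
  }
  return event.body;
}

// A bad event short-circuits the workflow immediately; no later steps run.
try {
  validateEvent({ body: {} });
} catch (e) {
  if (!e.pipedreamEnd) throw e;
  console.log(`workflow ended early: ${e.message}`);
}
```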
Dylan Sather (Pipedream) : Thanks for sharing. I’m still looking into this and may have to get back to you tomorrow, if that’s OK. For now, I doubled your compute time quota so you have a little breathing room.
timk : Increasing the memory setting seems to have brought execution times down: as low as just under 1s, topping out just over 8s, with most in the ~2s range. My gut feeling is that it cut execution time in half. Huh, I wouldn’t have expected that correlation.
Anyhow, I think my invocations and compute time are back under quota. Thanks, Dylan!