This topic was automatically generated from Slack. You can find the original thread here.
I’m having quite a few issues with the reliability of the “new interaction event” trigger. The trigger itself frequently times out or reports out of memory. How accurate is the event history? If it shows a timeout or OOM for a step (and doesn’t seem to have stepped into the next step), can I trust it? Here are a couple of examples; both seem to error out on the trigger step, not the following node step.
It sounds like you’re experiencing timeouts and out-of-memory (OOM) issues with the “new interaction event” trigger in your Pipedream workflow. The event history in Pipedream should accurately reflect the execution status of each step in your workflow, including any timeouts or memory issues. If the event history indicates a timeout or OOM error at the trigger step, you can generally trust that this occurred. The timeout error suggests that the trigger step took longer than the maximum allowed execution time, while the OOM error indicates that the workflow exceeded the available memory limit.

To address these issues, consider the following steps:

1. Optimize your code: If the trigger step includes custom code, review it to ensure it’s optimized for performance and memory usage.
2. Increase limits: You can adjust the timeout and memory limits for your workflow within the constraints of your Pipedream plan.
3. Split the workflow: If the workflow is complex, consider splitting it into multiple workflows to distribute the load.

If you continue to face these issues and need further assistance, you can reach out for more support at https://pipedream.com/support.
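As a rough illustration of step 1 (optimizing code for memory), one common pattern is to process records in fixed-size chunks rather than materializing everything at once, so only one chunk is held in memory at a time. This is a generic Node.js sketch, not Pipedream-specific API; `chunkSize` and the `handler` callback are placeholders for your own step logic.

```javascript
// Process an array in chunks so only one slice is in flight at a time.
// records, chunkSize, and handler are illustrative names, not Pipedream APIs.
async function processInChunks(records, chunkSize, handler) {
  const results = [];
  for (let i = 0; i < records.length; i += chunkSize) {
    const chunk = records.slice(i, i + chunkSize);
    // Await each chunk before starting the next to bound memory use.
    results.push(await handler(chunk));
  }
  return results;
}
```

For example, `processInChunks(rows, 500, saveBatch)` would write 500 rows at a time instead of buffering the whole dataset.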
The timeout and OOM errors you see there are not step-specific in that context; they just tell you that the workflow as a whole either ran out of memory or timed out, not necessarily the trigger.
I’d recommend doubling the memory, re-running, and tuning from there. Similarly with the timeout: you’ll have to experiment a bit to find a value that works.
If I add up the execution time for all of the steps and subtract that from the total execution time, will that give me the execution time for the trigger? If so, I have many instances of cron and Slack triggers taking 20-25 seconds.
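The arithmetic being proposed can be sketched like this. Note the field names below (`totalDurationMs`, `steps`, `durationMs`) are made up for illustration, not Pipedream's actual event-history schema, and the remainder would also include framework overhead and cold-start time, not just the trigger:

```javascript
// Hypothetical shape of one event's timing data from the event history.
const event = {
  totalDurationMs: 27000,
  steps: [
    { name: "code", durationMs: 1200 },
    { name: "send_message", durationMs: 800 },
  ],
};

// Sum the per-step durations, then subtract from the total.
const stepTotal = event.steps.reduce((sum, s) => sum + s.durationMs, 0);

// The remainder is everything NOT attributed to a step: trigger time,
// but also platform overhead and cold starts.
const unattributedMs = event.totalDurationMs - stepTotal;
console.log(unattributedMs); // 25000 in this made-up example
```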
This is a little subtle, but we return step-level execution details when we can. In other cases (like yours here), the container's CPU is likely pinned at 100%, so we can't run the code that checks the timing and exits early, which leaves you with only partial observability.
Are you processing a lot of data in one or more steps? Increasing memory will also increase CPU, so that will often help resolve the timeout issues naturally.