This topic was automatically generated from Slack. You can find the original thread here.
I need some assistance with a paid workspace. We had event errors related to memory limits being reached. The workspace in question has 5 workflows; the larger ones have 9 and 7 steps respectively. We have increased the memory for those previously, but today one of the smaller workflows (3 steps), a daily cron-triggered workflow that writes workflow errors to Google Drive, gave the memory limit error. It has been running for months without issue, so I am not sure why it hit the error today. I have manually run the workflow without issue since then, so I am trying to understand how the memory limit could have been reached. The cron trigger was at 14:00:00 UTC.
Hi, even if a small workflow only processes a few MB of data, its memory usage can grow, especially if you're processing that data across more than one step. We set a default of 256MB since most workflows aren't memory-intensive, but some workflows need an increase because they use more than 256MB of memory. Memory usage for the same workflow can also vary between events, since some events carry more data than others.
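As an illustration (this is a hedged sketch, not Pipedream's internal implementation), step exports remain reachable through the `steps` object for the whole execution, so the same payload can end up held in memory several times as it is transformed:

```javascript
// Hypothetical Pipedream Node.js code step. `defineComponent` is provided by
// the Pipedream runtime; the step name "fetch_logs" is an assumption.
export default defineComponent({
  async run({ steps }) {
    // Assume an earlier step returned ~100 log entries.
    const entries = steps.fetch_logs.$return_value ?? [];

    // Transforming the data creates a second in-memory copy...
    const formatted = entries.map((e) => JSON.stringify(e, null, 2));

    // ...and joining it for a summary creates a third.
    const summary = formatted.join("\n---\n");

    // All three (entries, formatted, summary) stay reachable until the
    // execution finishes, which is how a "small" workflow can still
    // exceed a tight memory limit on an unusually large event.
    return { count: entries.length, bytes: summary.length };
  },
});
```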
As to why it works when you test manually — we give users 3GB of memory in the test environment to facilitate testing (this also increases the compute available to process steps quickly).
Hi Dylan, thanks for the reply. I understand where you are coming from, but I don't believe that is the case with this workflow. It triggers on a daily basis, collects the latest 100 errors from a different workflow, and then writes any new entries to Google Drive as individual txt files. This has been running for almost 2 months, and the amount of data it handles wouldn't change much since it is capped at 100 entries. I was looking through the history, and today wasn't even the largest number of new logs written to Google Drive in the last week.
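For reference, a rough sketch of what such a step could look like; the step name, auth wiring, and file naming below are assumptions for illustration, not the actual workflow, and the deduplication of previously written entries is omitted:

```javascript
// Hypothetical sketch: take up to 100 error entries exported by an earlier
// step and write each one to Google Drive as its own .txt file.
import { google } from "googleapis";

export default defineComponent({
  async run({ steps }) {
    // Assumed auth setup; a real workflow would use a connected account.
    const auth = new google.auth.OAuth2();
    auth.setCredentials({ access_token: process.env.GDRIVE_ACCESS_TOKEN });
    const drive = google.drive({ version: "v3", auth });

    // Capped at 100 entries, so the payload per run stays small.
    const errors = (steps.fetch_errors.$return_value ?? []).slice(0, 100);

    for (const err of errors) {
      await drive.files.create({
        requestBody: { name: `${err.id}.txt`, mimeType: "text/plain" },
        media: { mimeType: "text/plain", body: JSON.stringify(err, null, 2) },
      });
    }
    return { written: errors.length };
  },
});
```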
This one might be tough to troubleshoot. If the usage isn't clearly related to large files / many steps, it's possible one of our actions isn't written efficiently and is using too much memory. Since we don't log memory usage per execution in the UI today, it's hard to know how much memory the workflow uses on a normal run. Our core execution environment should only take up ~50MB of memory on a given run, leaving the rest of the 256MB to you. I wish I had a better answer for you here. If it happens consistently and you can pin down the exact conditions under which it occurs (e.g. after a certain number of runs), we can try to look into it more. But these issues are hard to troubleshoot, since we're not seeing this platform-wide (i.e. I don't think it's a fundamental issue with the execution environment).
Thanks for the reply, we will monitor and see if it happens again. The only other thing we could think of: are there limits at the workspace level rather than per workflow? This workspace has a few workflows that can be memory intensive, and we have increased their limits accordingly.
No, luckily every workflow is independent, and the execution environment for a workflow will even shut down after ~15 minutes of inactivity; the next run then starts on a new virtual machine with fresh memory. You can read more about that here: Privacy and Security at Pipedream