This topic was automatically generated from Slack. You can find the original thread here.
How can I debug Out of Memory errors?
I am constantly hitting that error on a workflow. All files are moved via /tmp, and the documentation does not give much help beyond that. Note that the error is raised before the trigger even runs, which is even more baffling.
I have a couple of built-in actions and 3 code blocks.
The most memory-intensive thing going on is PyPDF extracting PDF text. But the whole PDF is only 300 KB, so there is no way it is consuming 256 MB!
In our testing, a 20 MB attachment still errored in a 2 GB memory workflow with the OneDrive action, while the same file in Google Drive (which uses streaming) worked fine in a 256 MB memory workflow.
How can I dig deeper, or is there any possible workaround? I currently have no clue where to patch things, as the OOM error is raised right at the top.
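One way to dig deeper from inside a code step is to measure that step's own memory use with Python's standard-library `tracemalloc`. This is a generic sketch, not platform-specific; the path and file size below are illustrative stand-ins for the real attachment. If the measured peak is far below the workflow's memory limit, the leak is likely in a built-in action rather than your code:

```python
import tracemalloc

tracemalloc.start()

# Simulate the suspect work: write a ~300 KB attachment to /tmp,
# then read it back into memory (substitute the real step's logic).
path = "/tmp/attachment.pdf"  # illustrative path
with open(path, "wb") as f:
    f.write(b"x" * 300 * 1024)

with open(path, "rb") as f:
    data = f.read()

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"peak Python allocation: {peak / (1024 * 1024):.2f} MB")
```

If this prints a fraction of a megabyte while the workflow still OOMs at 256 MB, that is evidence the code step is not the culprit.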
Still, even with a memory leak: this is an incoming mail with a 200-300 KB attachment that is written to /tmp, read by Python, and then read again by JavaScript to submit it via an API call.
I would think I'd need a few hundred runs to exhaust 256 MB!
There isn't a view for that, but we noticed this bug happening with the OneDrive Upload File action even with files of a few KB, due to a memory leak. After changing it to stream the file instead of loading it all into memory, the issue was fixed. See this commit:
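The difference that fix describes can be sketched in plain Python (the actual action is Node.js; this is just a generic illustration of the technique): reading a file in fixed-size chunks keeps peak memory near the chunk size, whereas `f.read()` holds the entire file in memory at once. The `send_chunk` callback stands in for whatever consumes the data, e.g. writing to an HTTP request body:

```python
import os
import tempfile

CHUNK_SIZE = 64 * 1024  # 64 KB per chunk

def upload_streaming(path, send_chunk):
    """Read the file in fixed-size chunks so peak memory stays
    ~CHUNK_SIZE regardless of the file's total size."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            send_chunk(chunk)  # e.g. stream into an upload request
            total += len(chunk)
    return total

# Demo: "upload" a ~200 KB stand-in attachment chunk by chunk,
# collecting the chunks in a list so we can verify nothing was lost.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(200 * 1024))
src.close()

chunks = []
sent = upload_streaming(src.name, chunks.append)
```

With streaming, a 20 MB file costs the same peak memory as a 20 KB one, which matches the observation that the Google Drive (streaming) action worked fine at 256 MB.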