I’m trying to create a workflow where I upload voice recordings from my phone to a Google Drive folder, push those recordings into an AWS S3 bucket, transcribe them with the OpenAI Whisper API, and then upload the transcription to a new page in my Notion database. These steps work for recordings under 10 minutes, but for anything longer I keep getting an out-of-memory error. This seems odd to me given that a 30-minute recording is at most ~20 MB and my workflow’s memory limit is set at ~4800 MB. Any thoughts on why I may be getting this error? I even delete the variables from memory in the step where I save the file into a variable to upload it to S3.
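For reference, the upload step is roughly this (a simplified sketch; the bucket, key, and `file_url` are placeholders):

```python
import boto3
import requests

s3 = boto3.client("s3")
file_url = "https://example.com/recording.m4a"  # direct download link from the Drive step

# Download the recording into a variable, then push it to S3.
audio_bytes = requests.get(file_url).content  # whole file in memory (~20 MB max)
s3.put_object(Bucket="my-audio-bucket", Key="recordings/audio.m4a", Body=audio_bytes)

# Explicitly drop the buffer once the upload is done.
del audio_bytes
```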
Are you using Python code steps within your workflow by chance?
There’s a known issue with Python in particular where memory leaks can occur. You’ve already taken the step to increase memory, and since that didn’t work for you, I suggest you open a bug report on GitHub.
Then I would try converting your code steps from Python to Node.js to see if that is truly the cause.
Hi @pierce, I am using a Python code step (the step where I upload the recording to an S3 bucket), but the out-of-memory error occurs on my OpenAI transcription step, which is not a Python step. Is there any way to do more debugging and see what’s going on with memory in the workflow as it runs?
Sorry @basil.chatha8, not at this time. This low-level Python bug is at the system level, not the user level.
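About the closest you can get right now is logging from inside your own code steps. In a Python step, something like this (a rough, standard-library-only sketch) prints that step’s peak memory:

```python
import resource

# ru_maxrss is the peak resident set size of this step's process so far
# (reported in kilobytes on Linux).
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak memory so far: {peak_kb / 1024:.1f} MB")
```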
The Python step is the most likely culprit here; I’ve not heard of an OOM error on a Node.js-only workflow. It seems to only happen on workflows that have at least one Python step.
Interesting, yeah, let me try converting that to Node.js and see if that fixes the issue. Will report back.
Sorry for the inconvenience. I also recommend making a copy of your workflow for this Node.js conversion.
That way you completely remove any remnants of the installed Python packages from the production workflow.
@pierce I think the error is actually due to an issue with OpenAI. I’m not having trouble downloading or uploading the audio file; I’m getting the error on the step where I call their Whisper API.
@basil.chatha8 perhaps you can share a screenshot of the exact error message you’re seeing?
I assumed you saw the specific Out of Memory error at the top of the workflow, which is emitted by Pipedream itself.
Do you mean you saw this error in the response from the OpenAI step?
I’m trying to do something similar and running into the same issue.
According to OpenAI, Whisper should be able to handle files as large as 25 MB. But I’m getting the out-of-memory error at the top of my Pipedream workflow even with files under 20 MB.
I’m in the middle of testing, and I’ve found that files around 10 MB seem to work. I don’t yet know whether it’s actually a file-size issue or a file-length issue (even though they say files of any length should work, as long as they’re under 25 MB).
I’ve tried setting my workflow’s memory limit to the max and still got the error, so I think it’s something on OpenAI’s side.
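For reference, my transcription step is basically just the raw Whisper API call, something like this (simplified; the file path is a placeholder):

```python
import os
import requests

OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

# Post the audio file to the Whisper transcription endpoint as a
# multipart upload, with the model name as a form field.
with open("/tmp/recording.m4a", "rb") as f:
    resp = requests.post(
        "https://api.openai.com/v1/audio/transcriptions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        data={"model": "whisper-1"},
        files={"file": f},
    )

resp.raise_for_status()
transcript = resp.json()["text"]
```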
It looks like there is a potential fix in QA. Is that right, @andrewcturing?
Yeah I just merged Andrew’s fix, will let y’all know when that’s deployed and we can see if that helps.
The new version should be out. Chatting in the Slack thread.
Morning. Noob to all this. Having this issue as well. I have zero coding experience and am using @thomasfrank’s tutorial and template for the Audio → Notion workflow.
I’ve adjusted the settings to no avail. Any and all help is greatly appreciated.
Andrew