This topic was automatically generated from Slack. You can find the original thread here.
Hey everyone,
I’m trying to create a workflow where I upload voice recordings from my phone to a Google Drive folder, push those recordings into an AWS S3 bucket, transcribe them with the OpenAI Whisper API, and then upload the transcription to my Notion database. I have these steps working for recordings under ~10 minutes, but anything longer keeps hitting an out-of-memory error. This seems odd to me given that a 30-minute audio recording is at most ~20 MB and my workflow memory limit is set to ~4800 MB. Thoughts on why I might be getting this error? I even delete the variables from memory in the step where I save the files into a variable to upload them to S3.
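In case it helps rule out the Drive → S3 step as the culprit, here's a minimal sketch of that copy done as a stream instead of buffering the whole recording into a variable. This is an illustrative sketch, not the workflow's actual code: it assumes a Node.js code step with `axios`, `@aws-sdk/client-s3`, and `@aws-sdk/lib-storage` available, and the bucket name, key, Drive URL, and region are placeholders.

```typescript
import axios from "axios";
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";

// Placeholder values -- swap in your own bucket, key, and Drive file ID.
const BUCKET = "my-recordings-bucket";
const KEY = "recordings/voice-memo.m4a";
const DRIVE_URL = "https://www.googleapis.com/drive/v3/files/FILE_ID?alt=media";

async function copyDriveFileToS3(accessToken: string): Promise<void> {
  // Request the Drive file as a stream so the full recording is never
  // held in memory as a single Buffer or variable.
  const response = await axios.get(DRIVE_URL, {
    headers: { Authorization: `Bearer ${accessToken}` },
    responseType: "stream",
  });

  // lib-storage's Upload accepts a stream body and performs a multipart
  // upload, so memory use stays around one part at a time rather than
  // the whole file.
  const upload = new Upload({
    client: new S3Client({ region: "us-east-1" }),
    params: { Bucket: BUCKET, Key: KEY, Body: response.data },
  });

  await upload.done();
}
```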
Could you do me a favor and take a look at the OpenAI Transcription action to see if its memory use could be improved? I noticed this too when testing with large files.
Just to confirm, are you hitting the Whisper / Audio API directly, or are you using our built-in Create Transcription action? Either way we should improve our action, but just curious.
Nice, yeah, there should be nothing preventing us from streaming large files to the Whisper API in the Pipedream execution environment, so hopefully we can reduce the needed memory to the minimum for y’all.
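For reference, here's a minimal sketch of what that streaming could look like from a Node.js step, assuming the official `openai` SDK (v4+) and an audio file already downloaded to `/tmp`; the local path is a placeholder and this isn't necessarily how the built-in action is implemented.

```typescript
import fs from "fs";
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const openai = new OpenAI();

async function transcribe(localPath: string): Promise<string> {
  // Passing a ReadStream lets the SDK stream the multipart upload to the
  // Whisper API instead of requiring the whole file as a Buffer in memory.
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream(localPath),
    model: "whisper-1",
  });
  return transcription.text;
}

// Example usage, assuming the recording was saved to /tmp earlier in the workflow.
// const text = await transcribe("/tmp/voice-memo.m4a");
```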