Why am I getting an out-of-memory error when uploading 30-minute voice recordings in my workflow with Google Drive, AWS S3, OpenAI Whisper, and Notion?
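(In case it helps anyone reading later: the usual cause is a step loading the whole recording into memory at once. Below is a minimal sketch of one workaround, assuming a Python step with ffmpeg on the PATH and the `openai` client; `transcribe_long_recording` and the 10-minute chunk length are illustrative, not the actual workflow code:)

```python
# Hypothetical sketch: split a long recording into 10-minute chunks on disk
# so no step has to hold the whole file in memory, then transcribe each chunk.
import glob
import subprocess
import tempfile
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe_long_recording(path: str) -> str:
    with tempfile.TemporaryDirectory() as tmp:
        # -c copy re-muxes without re-encoding, so ffmpeg's own memory use stays small
        subprocess.run(
            ["ffmpeg", "-i", path, "-f", "segment", "-segment_time", "600",
             "-c", "copy", f"{tmp}/chunk%03d.m4a"],
            check=True,
        )
        parts = []
        for chunk in sorted(glob.glob(f"{tmp}/chunk*.m4a")):
            with open(chunk, "rb") as f:
                resp = client.audio.transcriptions.create(model="whisper-1", file=f)
            parts.append(resp.text)
    return " ".join(parts)
```

Chunking also keeps each upload under the Whisper API's 25 MB per-file limit, so it helps on both fronts.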

nice!!

Just tested with the min memory (256MB) and it worked fine

amazing

you ready for a code review on that PR? Happy to review

Wait, nvm, I just tried running the workflow and it exceeded the memory limit.
Is there a difference between testing and running the workflow?

> you ready for a code review on that PR? Happy to review
Yep! The code is ready for review.

yes, sorry: we give the test execution environment ~3GB to facilitate testing (since we get a proportional amount of compute), but that does create issues like this and we need to figure it out
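(A quick way to see the difference from inside a step, sketched here assuming a Python step on Linux; in a container the cgroup limit is what actually binds, so treat this as a rough check:)

```python
# Hypothetical sketch: log the total memory visible to the step, to confirm
# that the test environment (~3GB) differs from the deployed one (256MB).
# Linux-only; in a container the cgroup limit may be lower than this value.
import os

total_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
print(f"total memory visible: {total_bytes / (1024 ** 2):.0f} MB")
```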

Ahh ok, most of the memory usage probably comes from the ffmpeg installation, so I’ll check roughly how much that is
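(One way to measure that, sketched under the assumption of a Python step on Linux; the filenames are placeholders:)

```python
# Hypothetical sketch: measure the peak memory of child processes (e.g. ffmpeg)
# after running a conversion, to see how close the step gets to the 256MB limit.
import resource
import subprocess

subprocess.run(["ffmpeg", "-i", "input.m4a", "output.mp3"], check=True)

# ru_maxrss is reported in kilobytes on Linux and covers terminated children
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"peak child RSS: {peak_kb / 1024:.1f} MB")
```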

Wait nvm, I think I hadn’t clicked on deploy after updating the action. Tested quite a few times with 256MB memory and all executions were successful

so I can’t approve the PR since I opened it :laughing:, but I approve. Added one comment re: handling rate limits. Feel free to approve yourself and merge
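(For reference, a minimal sketch of the kind of rate-limit handling that comment suggests, assuming the v1 `openai` Python client; `transcribe_with_retry` is a hypothetical helper, not code from the PR:)

```python
# Hypothetical sketch: retry the Whisper call with exponential backoff
# when the API returns a 429 rate-limit response.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def transcribe_with_retry(path: str, max_attempts: int = 5):
    for attempt in range(max_attempts):
        try:
            with open(path, "rb") as f:
                return client.audio.transcriptions.create(model="whisper-1", file=f)
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
```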

Ok thanks! I’ll create a new issue for that improvement

yeah that works, not a blocker for this. Thanks for getting this out

@U04Q6GJ2B7D the action is now published; can you both please test your use cases and confirm that it works?

testing now!

getting the following error with both a 55-minute and a 1-hour-20-minute audio file

:point_up:

I had to delete my transcription action and make a brand new one. There wasn’t an update option

But I just successfully transcribed a 90 minute podcast

Deleted that action and created a new one, but I’m still getting the same error