This topic was automatically generated from Slack. You can find the original thread here.
Hi all
Just started using Pipedream and have been very impressed. I’ve set up a workflow to convert an mp3 to a Whisper transcript using OpenAI. The workflow succeeds when run step by step from the editor, but when deployed and triggered via a Google Drive upload it fails on the custom Python step (a rough sketch of what it does is included after the stack trace) with the following error:
Error
Could not execute step
Error: Could not execute step
at Object.execute (/var/task/nano_worker.js:164:28)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Runner.processCell (/var/task/lambda_handler.js:642:16)
at async Runner.runUserCode (/var/task/lambda_handler.js:778:9)
at async Runner.run (/var/task/lambda_handler.js:614:5)
at async Runtime.exports.handler (/var/task/lambda_handler.js:826:22)
Other failures have shown the stack trace where the custom code was failing, but in this case I only see the Pipedream stack. Seems as if a worker thread crashed and killed the process? I can only guess…
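For context, a minimal sketch of what a custom Python step like this usually does — not the poster’s exact code; the `/tmp` path, environment variable name, and use of the v1 `openai` client are illustrative assumptions:

```python
# Illustrative Pipedream Python step that sends an mp3 to OpenAI's Whisper endpoint.
# Assumes an earlier step downloaded the Google Drive upload to /tmp and that an
# OPENAI_API_KEY environment variable is configured for the workflow.
import os
from openai import OpenAI

def handler(pd: "pipedream"):
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Path written by a previous "download file" step; adjust to your workflow.
    audio_path = "/tmp/upload.mp3"

    # Reading and uploading a large mp3 adds memory pressure on top of the Python
    # runtime itself, which matters at the 256MB production default discussed below.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )

    # Export the text so later steps can reference it.
    return {"text": transcript.text}
```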
Yes, we do use worker threads to coordinate the execution of steps. Could you do me a favor and increase the memory limit in your workflow’s Settings?
It’s likely that the VM where your workflow runs is killing the worker on account of memory contention. When you test steps in the builder, we give you an environment with 3GB of memory to facilitate testing, but we default to 256MB in production. So I’d just increase the memory stepwise until you no longer see issues.
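If it helps to confirm that memory is the culprit, you can log the step’s peak resident memory from inside the Python step itself and compare it to the workflow’s memory setting. This is just a sketch, assuming the standard-library `resource` module on a Linux runtime, where `ru_maxrss` is reported in kilobytes:

```python
# Log the step's peak resident set size to help pick a sensible memory setting.
import resource

def handler(pd: "pipedream"):
    # ... do the transcription work here ...

    # ru_maxrss is in kilobytes on Linux.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak_kb / 1024:.1f} MB")

    return {"peak_mb": round(peak_kb / 1024, 1)}
```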
Out of curiosity, is there something different about running each step manually in the Editor that would reduce the memory pressure? Or is it just that running manually gives the VM more breathing room…
We just increase the memory limit while you’re building to facilitate testing (tests are free for you), but reduce it to the default of 256MB when you deploy your workflow, so you only pay for the minimum memory you need when running in production (which you can always increase).