But the error message we get is a bit different: Timeout
The workflow execution timed out. Manage execution limits in workflow settings. Learn more here.
And we can see the total duration:
So… idk. I’m confused. :man-shrugging:
Update: I bumped up the memory from 512 to 768 MB, and it seems the error rate has roughly halved. I will bump it up to 1024 MB now and will continue to monitor.
Update: I guess that did it. No errors at all with 1024 MB of memory. I’m kind of surprised that a simple Python script that receives, transforms, and emits some JSON data would take so much memory, but :shrug::skin-tone-2:. Many thanks to @U05FUC30Q01!
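(For the curious: here’s one way a “simple” JSON transform can quietly eat memory. This is just an illustrative sketch, not my actual step — `handler`, `event["body"]`, and the field names are all made up. If the platform reuses a warm worker between executions, anything stashed at module level keeps growing:)

```python
import json

# Anti-pattern sketch: module-level state survives warm reuse of the worker,
# so anything accumulated here lives on between executions.
_cache = []

def handler(event: dict) -> str:
    records = json.loads(event["body"])
    _cache.extend(records)  # grows without bound -> creeping memory usage

    # The actual per-execution work is small and gets freed normally.
    transformed = [{"id": r.get("id"), "payload": r.get("data")} for r in records]
    return json.dumps(transformed)
```

If the runtime recycled the worker on every run this would be harmless; with warm reuse it climbs slowly, which would also explain why adding memory only stretches out the time between failures instead of eliminating them.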
Posted thread to Discourse: Why Does My Workflow Occasionally Fail with 'Timeout' Error Despite No Steps Being Executed and Auto-Retry on Errors Enabled?
Sad to say, but the error hasn’t completely gone away. The error rate has dropped even further, but it still happens every few days. At the same time, adding more memory to each execution just inflates the number of credits each execution costs. I’m currently within my credit budget, but I’d rather find a real fix than keep throwing memory at the problem. I really do think it’s some kind of memory leak, given that the failure frequency keeps falling as I increase the allocated memory.
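If anyone else wants to check whether it really is a leak, the stdlib `tracemalloc` module can show whether traced memory keeps climbing across executions instead of returning to a baseline. Again just a sketch with made-up names (`handler`, `event["body"]`); the numbers would land in the execution logs:

```python
import json
import tracemalloc

tracemalloc.start()  # start tracking Python allocations when the worker boots

def handler(event: dict) -> str:
    records = json.loads(event["body"])
    result = json.dumps([{"id": r.get("id"), "payload": r.get("data")} for r in records])

    # On a leak, `current` keeps climbing from one warm execution to the next
    # instead of settling back to a baseline after each run.
    current, peak = tracemalloc.get_traced_memory()
    print(f"traced memory: current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")

    # Top allocation sites still alive at this point; a leak shows up as one
    # line whose total grows run over run.
    for stat in tracemalloc.take_snapshot().statistics("lineno")[:5]:
        print(stat)

    return result
```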
Thoughts? (see above)