This topic was automatically generated from Slack. You can find the original thread here.
Hi, I just copied my python code that’s working on an account and deployed it to another account.
But it keeps giving me read ECONNRESET error.
It works if I test it with selected events before deployment.
Could you do me a favor and increase the memory of your workflow in your workflow’s settings, then try again? During testing, the environment we provide you has a larger memory than the env you run in production by default. This makes testing faster, but I’m guessing that step requires more memory than the 256MB default, so the execution fails in prod.
I’d recommend raising to 512, 768, etc, step by step, and replay your event after each increase to find the minimum memory that should run the step.
I took the liberty of browsing this channel and found the same suggestion. And that did it, thanks.
It’s just:
receive an array of objects
read JSON key from attachment
BatchWrite to Firestore
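The three steps above could look roughly like the sketch below (a hypothetical reconstruction, not the actual code from the thread; it assumes the `google-cloud-firestore` client and that each object has an `id` field). The one real constraint worth noting is Firestore’s 500-operation limit per batched write, so large arrays must be chunked:

```python
# Hypothetical sketch of the step: receive objects, write them to
# Firestore in batches. A Firestore WriteBatch accepts at most 500
# operations, so we chunk the input first.

def chunked(items, size=500):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def write_objects(db, objects, collection="items"):
    # db: a firestore.Client(); objects: list of dicts parsed
    # from the attachment's JSON key.
    for chunk in chunked(objects):
        batch = db.batch()
        for obj in chunk:
            ref = db.collection(collection).document(str(obj["id"]))
            batch.set(ref, obj)
        batch.commit()  # one network round-trip per <=500 writes
```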
I don’t think I would have run out of memory if this were on Lambda or Cloud Functions. I feel like it’s consuming more than those, am I right?
Yes, Pipedream workflows will consume slightly more memory than a normal job on a Cloud Functions service, since we run our own code to handle the step-level execution, collect observability / logs, etc.
How many objects are you processing at any given time in that code step? Could you add some print statements at relevant points of the step (e.g. after you instantiate credentials, for each loop, etc.) so we can see where specifically the ECONNRESET happens?
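To make that concrete, here is a minimal instrumentation sketch (all names are hypothetical) showing where print statements could go so the logs reveal how far the step gets before the ECONNRESET:

```python
# Hypothetical sketch: print markers around each phase of the step so
# the workflow logs show the last phase reached before the error.

def handler(objects):
    print(f"step start: {len(objects)} objects")

    # ... instantiate credentials / Firestore client here ...
    print("credentials ready")

    for i, obj in enumerate(objects):
        # ... per-object work, e.g. adding the object to a batch ...
        if i % 100 == 0:
            print(f"processed {i} objects so far")

    print("all objects processed")
    return len(objects)
```

Whichever marker is the last one printed narrows down where the connection reset happens.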
I switched to the REST API once, but now I’m back on Pipedream, running the same code in a different account, and it’s working fine.
Thanks for taking a look.
Let me reach out again if this starts happening again.
It started happening again. I increased the memory up to 1.5GB, but it still fails.
The payload is very small; it runs totally fine with 256MB on Cloud Functions.