How to avoid data store errors and automatically retry instead of manually replaying the workflow?

This topic was automatically generated from Slack. You can find the original thread here.

I keep getting these data store errors. Any idea how I can avoid these? Or at least retry automatically rather than having to manually replay the workflow?

```
Error
Error communicating with data store
at null.createError (/var/task/node_modules/@lambda-v2/component-runtime/node_modules/axios/lib/core/createError.js:16:15)
at null.settle (/var/task/node_modules/@lambda-v2/component-runtime/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/var/task/node_modules/@lambda-v2/component-runtime/node_modules/axios/lib/adapters/http.js:269:11)
at IncomingMessage.emit (events.js:412:35)
at null.endReadableNT (internal/streams/readable.js:1333:12)
at process.processTicksAndRejections (internal/process/task_queues.js:82:21)
(pop_from_queue)
```

I understand that you’re experiencing data store errors. To automatically retry your workflow on errors, you can use the Error Reruns feature, available to customers on the Advanced Plan. Pipedream will retry the workflow from the failed step up to 8 times over a 10-hour span using an exponential backoff strategy. If a step execution succeeds during any retry, execution proceeds to the next step of the workflow. If the step fails on all 8 retries, the final error is thrown, and you should receive an error notification through your standard notification channel.

For more information on error handling and reruns, please see the Pipedream documentation: Settings. If you’re not on the Advanced Plan, you might consider upgrading to access this feature. Additionally, you can visit https://pipedream.com/support for more support options.
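If Error Reruns isn’t available on your plan, one workaround is to wrap data store calls in your own retry logic inside a Node.js code step. Here is a minimal sketch; the `withRetry` helper, retry count, and delay values are illustrative, not a Pipedream API or its platform defaults:

```javascript
// Generic retry wrapper with exponential backoff for transient errors,
// such as "Error communicating with data store".
async function withRetry(fn, { retries = 5, baseDelayMs = 250 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Exponential backoff: 250ms, 500ms, 1000ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Inside a workflow code step you could then call, for example, `await withRetry(() => this.data.get("myKey"))`, assuming a data store prop named `data`.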

Hi, aside from Mario’s response, you can try reducing the number of times you call the data store by batching the data. This might help with your issue.
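One way to batch, sketched below with a hypothetical `pushBatch` helper: accumulate records under a single key, so many small writes become one read plus one write. The helper and the key name are illustrative, not a Pipedream API; `dataStore` stands in for a data store prop such as `this.data`:

```javascript
// Append a batch of items to a single data store key, replacing
// one write per record with one read + one write per batch.
async function pushBatch(dataStore, key, newItems) {
  const existing = (await dataStore.get(key)) ?? [];
  await dataStore.set(key, existing.concat(newItems));
}
```

Fewer calls per workflow run means fewer chances to hit a transient “Error communicating with data store” failure.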