Bug: you've exceeded the default timeout for this workflow?

We still have consistent problems in V2 with this error when we use a cron trigger: "you've exceeded the default timeout for this workflow, see Troubleshooting Common Issues."

None of the steps actually times out, though. The trigger just never starts, so there is no "real" timeout. The error is reported on the cron trigger itself. But this makes no sense: how can the workflow time out if it never starts? Can someone please fix this, or at least look into why it happens?

Update: I think the error ultimately comes down to a data store holdup. I'm guessing that interacting with a large data store slows things down a lot?
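
One way I'm thinking of testing this is to time the data store calls themselves. Here's a rough sketch of a Pipedream Node.js code step (the data prop name and the emails key are placeholders for our actual setup):

export default defineComponent({
  props: {
    // Attach the data store to this step under the "data" prop
    data: { type: "data_store" },
  },
  async run({ steps, $ }) {
    // Time the read to see whether the data store call is the slow part
    const t0 = Date.now();
    const emails = (await this.data.get("emails")) ?? [];
    console.log(`data store get took ${Date.now() - t0} ms`);

    // Time the write as well
    const t1 = Date.now();
    await this.data.set("emails", emails);
    console.log(`data store set took ${Date.now() - t1} ms`);
  },
});

If the logged times grow with the size of the array, that would support the holdup theory.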

@osseonews Would you mind visiting your workflow Settings and sharing the workflow with Pipedream support? I'd like to take a look at the errors to see if we can spot the issue.

Yeah, after investigating this a bit more: there seem to be frequent connection problems with data stores, and those cause the whole workflow to fail. The error message is always "Error communicating with data store".
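
As a stopgap, we may try retrying the data store calls a few times before failing, since the errors look transient. Something like this sketch (withRetry is just a helper I'd write, not a Pipedream API; the data prop and emails key are placeholders):

// Retry a flaky call a few times before giving up
async function withRetry(fn, attempts = 3, delayMs = 500) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Linear backoff before the next attempt
      await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
    }
  }
  throw lastErr;
}

export default defineComponent({
  props: {
    data: { type: "data_store" },
  },
  async run({ steps, $ }) {
    // One transient "Error communicating with data store" no longer
    // fails the whole run; the call is retried a couple of times first.
    const emails = await withRetry(() => this.data.get("emails"));
    return emails;
  },
});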

Hi @osseonews, is it possible you're storing very large objects as values in your Data Store? The Data Store may not be able to retrieve or store an item that's too large.

No, it's not large. We just store customer emails in an array. The data store simply fails once in a while and then works again on the next iteration of the workflow, even with larger arrays. It's not clear why it sometimes fails with the communication error.

Could you add this logging step just before you write the new values of your emails array into the Data Store?

I'm assuming you have an emails variable, but you can replace it with whichever variable you're storing in the Data Store.

console.log(new Blob([JSON.stringify(emails)]).size);

This will tell us the size in bytes of the record. If it errors again, you can find this value in your logs and we can determine the record's size.
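
Put together as a full Node.js code step, it might look something like this (assuming your data store is attached as a data prop and the array is stored under an emails key; adjust to your actual names):

export default defineComponent({
  props: {
    data: { type: "data_store" },
  },
  async run({ steps, $ }) {
    const emails = (await this.data.get("emails")) ?? [];
    // Size in bytes of the JSON-serialized record; check your logs for
    // this value on runs that fail with the communication error.
    console.log(new Blob([JSON.stringify(emails)]).size);
    await this.data.set("emails", emails);
  },
});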