Pipedream Internal Error

Hello all!

Wanted to post as my workflow started throwing a Pipedream Internal Error. I checked around the forums and couldn’t find anything that really helped. @dylburger I saw that broken workflows should be sent to you, is that okay?

Thanks!

@kkukshtel Thanks for reaching out. Out of curiosity, is the workflow working now? I noticed some errors on our backend around the time you raised this issue, so I’m curious if you can reproduce the issue.

If so, can you visit your workflow’s Settings and toggle the option to share the workflow with Pipedream Support? I can take a look at the internal error to see what happened.

Hi Dylan,

Not sure if this is the right place to post, but I have been having the same issue. I have shared the relevant workflow with support (it has also been happening occasionally on some other workflows over the last week, but their code is identical to the one I shared).

Kind regards,
James Roper

Just retried the failed workflow and no dice. I turned on the setting to share it with you all.

Thanks for sharing y’all, I’m looking into this and I’ll keep you updated.

@kkukshtel I modified the code in your workflow slightly to return only the body property of the got response, which should substantially cut down on the total response size. I think that may be related to the error. Can you try triggering the workflow once more?
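For anyone following along, a minimal sketch of that kind of change (this is illustrative, not the actual workflow code): a got response object carries headers, timings, redirect history, and raw request internals alongside the body, so exporting only `res.body` from the step can substantially shrink the payload it passes downstream.

```javascript
// Sketch only: return res.body from the step rather than the whole
// got response object. The extra response metadata (headers, timings,
// request internals) inflates the step's exported payload.
async function fetchBodyOnly(client, url) {
  const res = await client(url, { responseType: "json" });
  return res.body; // drop everything except the body
}

// Stand-in client so the sketch runs without a network call;
// a real step would pass got itself here.
const fakeClient = async () => ({
  body: { id: 1 },
  headers: { "content-type": "application/json" },
  timings: { phases: { total: 42 } },
});

fetchBodyOnly(fakeClient, "https://example.com/api").then((body) => {
  console.log(JSON.stringify(body));
});
```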

@oxfordtutorspipedream Are you still observing the internal errors? I noticed the most recent run of the workflow you shared worked.

Getting 403 Forbidden errors now on the workflow step I think you changed. No longer an internal error, but I'm not sure what would be forbidden here; the step mostly seems like boilerplate.

In your case it looks like you’re uploading files to your endpoint and downloading the file using the Pipedream-provided URL, correct? Those files expire after 30 minutes, so if you replay the event from the UI after that time, you’ll no longer be able to download the uploaded file.

Are you able to upload the files to your endpoint again to trigger the workflow again?
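To make the timing concrete, here's a tiny sketch of the expiry check. The 30-minute window comes from the reply above; the helper and field names are made up for illustration.

```javascript
// Sketch: decide whether a Pipedream-hosted upload URL should still be
// downloadable. The 30-minute window is from the support reply above;
// the function name and arguments are illustrative, not a real API.
const UPLOAD_TTL_MS = 30 * 60 * 1000; // 30 minutes

function uploadStillDownloadable(uploadedAtMs, nowMs = Date.now()) {
  return nowMs - uploadedAtMs < UPLOAD_TTL_MS;
}

// Replaying an event 45 minutes after the upload:
const uploadedAt = Date.now() - 45 * 60 * 1000;
console.log(uploadStillDownloadable(uploadedAt)); // false: expect a 403 on replay
```

In other words, replaying an old event from the UI can surface a 403 even though nothing in the workflow itself changed; re-uploading the file triggers a fresh run with a valid URL.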

Okay yeah that’s exactly what I was thinking. Just tried a fresh send and it looks like it worked! Was the issue that the body being passed around was suddenly too big?

We should normally raise a Function Payload Limit Exceeded error when that happens, but yes, I think it was either that or the response contained some data that happened to throw an exception internally. I’m still hunting it down, but I’m glad that worked.

Hmmm. We suddenly started getting this same error a few days ago on every single webhook trigger. Nothing works. All we get is a cryptic error: "Pipedream Internal Error — Please retry your event or reach out to Pipedream support". We need to retry the trigger multiple times to get it to work. We haven’t changed a thing on these triggers for months; they just started failing out of the blue a few days ago.

Hi @osseonews ,

Can you share the affected workflows with support?

I just double-checked my own HTTP-triggered workflows and they appear to be functional, but I'd like to view the logs and check what's going on.

It’s a private workflow. How do we share it? And it’s happening on all our workflows with webhook HTTP triggers. They just keep failing for no reason; it requires several retries to get them started again, and then they fail again.

Hi @osseonews, you can share workflows with support from the Settings tab of each workflow that has this issue.

That will make the workflow accessible to us, but unfortunately it will not share the original HTTP source, so we’re going to have to make our own version of it and hopefully reproduce the bug.

OK, we just enabled that under Settings for the workflows that failed just now. But this is happening across all our workflows. As we use webhooks to manage everything, this is concerning. Honestly, we never had this problem before with Pipedream; it just started happening a few days ago on existing workflows that never had any issues. We thought maybe it was a network issue or something, but it keeps continuing and degrading. Thanks.

I was thinking maybe this is a “cold start” issue and we should just “warm up” these workflows every few minutes? Thoughts on that?

Hi @osseonews ,

The good news is that I was able to reproduce the issue, which let us escalate the problem; our core engineering team is actively investigating.

Unfortunately it’s not due to a cold start; that would just delay the start of the workflow, not break its execution.

I’ll let you know as soon as the problem is addressed.

Ok, thanks! Please keep me posted. BTW, this happened on quite a few other workflows over the last few days (I've forgotten which ones already), but I guess it’s all the same issue.

Hey, has the engineering team figured out what was going on with these workflows?

Hi @osseonews

One part of a potential fix was deployed yesterday, and we’re watching the results today; hopefully we'll have more information or a permanent fix soon. Will let you know!