pipedream_upload_body=1 makes replays impossible

Hi,

We are using Pipedream to ingest data into our CRM from JSON webhooks. Some of these payloads can be quite large, which is why we are using the pipedream_upload_body=1 setting.

One thing I've realized as a result is that whenever our scripts fail, I can't effectively replay the events, because the AWS signed URL has expired after 30 minutes. Is there a recommended practice or workaround to make those events "replayable", or some other pattern we should follow?
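For context, the download step in our workflows looks roughly like this. It's a minimal sketch assuming Pipedream's Python code-step handler and that the signed URL is exposed at steps.trigger.event.body.raw_body_url; the exact field name may differ in your setup, and requests needs to be importable in the step:

```python
import requests


def handler(pd: "pipedream"):
    # With pipedream_upload_body=1, the trigger event does not contain the raw
    # JSON body; instead it carries a presigned S3 URL pointing at the payload.
    # (Field name assumed here; check your own trigger event.)
    url = pd.steps["trigger"]["event"]["body"]["raw_body_url"]

    # The presigned URL is only valid for ~30 minutes, so this request fails on
    # any replay that happens after the link has expired.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    # Return the parsed payload so downstream steps can use it.
    return resp.json()
```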

@marchoeffl this is a known limitation of the custom upload service, and I hear you on the issue.

Can you tell me more about the reason for the failure in this specific case? For example, if you had a way to automatically retry individual steps on transient errors, would that help? Or was this an error in your code that you'd have to fix before retrying?

There are a couple of scenarios. The one I actually hit is no. 1, but I am more worried about no. 2:

  1. I forgot to add the step that downloads the payload after I switched to upload_body. This is an edge case and mostly down to my own oversight.
  2. I have a bug or an unhandled edge case in the code that processes the downloaded body, and I need to update the code and reprocess the events.

All my workflows now download the body in the second step. If I were able to fix the code and then replay the events from the second step, that would be sufficient. I am not sure a retry of a step without a code change would be enough.
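For now, the workaround I'm leaning towards is to archive the body to our own S3 bucket inside that second step, so a later fix-and-reprocess can read our copy instead of the expired signed URL. A rough sketch only, with the same raw_body_url assumption as above, plus boto3 and AWS credentials available in the step; the bucket name and key scheme are placeholders:

```python
import json
import uuid

import boto3
import requests


def handler(pd: "pipedream"):
    event = pd.steps["trigger"]["event"]
    url = event["body"]["raw_body_url"]  # field name assumed

    # Fetch the payload while the presigned URL is still valid.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    payload = resp.json()

    # Archive a copy in our own bucket so a manual reprocess after the
    # 30-minute window can read this copy instead of the expired URL.
    key = f"webhook-payloads/{uuid.uuid4()}.json"  # placeholder key scheme
    boto3.client("s3").put_object(
        Bucket="my-crm-webhook-archive",  # placeholder bucket name
        Key=key,
        Body=json.dumps(payload).encode("utf-8"),
        ContentType="application/json",
    )

    # Expose both the payload and the archive location to downstream steps.
    return {"payload": payload, "archive_key": key}
```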

Thanks, that detail helps. I opened up a feature request on GitHub to track this. I’d recommend following that for updates!

Hi Dylan,

I ran into the same issue again and now need to figure out how to manually reprocess the data. If extending the expiry is not an option, a way to rerun a workflow from a given step, with the inputs taken from the earlier steps, would solve it as well, since the first step usually just downloads the actual data and then passes it to the steps that run into issues.

Hi @marchoeffl, I hear you. We're discussing this internally as a team, and it's on our backlog to address.