Handling 429 Errors

Hello,

I’m trying to handle 429 errors that I get every once in a while when making API requests to the Amazon Ads API. I’ve been reading your documentation on “Poll the REST API for workflow errors”, and I’d like to send the errors back to the workflow that generated them so it can retry the requests. The workflow is triggered by HTTP requests, so I’m thinking I can send the errors back to it, retry the Amazon API request, and then process the results.

My question is whether all 100 recent errors are sent back to the original workflow, or whether I can catch just the latest errors and send only those back. My original workflow runs every 6 hours, so I’d like to take only the 429 errors from the most recent run and reprocess them.

Any thoughts on whether I can process just those new errors using the 100-most-recent-errors resource, or would I need to go the route of creating a workflow that uses the “emitter_id” and “listener_id” strategy covered in the “Handle errors for one workflow using custom logic” documentation?
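For context, here’s roughly what I had in mind for the polling side. The endpoint path, params, and response field names are just my reading of the docs, so treat them as assumptions:

```javascript
// Sketch of polling recent workflow errors via Pipedream's REST API.
// NOTE: path, params, and response field names are my assumptions from
// reading the "Get Workflow Errors" docs -- verify before relying on them.
import axios from "axios";

const WORKFLOW_ID = "p_XXXXXXX"; // placeholder workflow ID
const { data } = await axios.get(
  `https://api.pipedream.com/v1/workflows/${WORKFLOW_ID}/$errors/event_summaries`,
  {
    headers: { Authorization: `Bearer ${process.env.PIPEDREAM_API_KEY}` },
    params: { expand: "event", limit: 100 },
  }
);

// Keep only errors newer than the last run (every 6 hours) that look like 429s
const cutoff = Date.now() - 6 * 60 * 60 * 1000;
const recent429s = data.data.filter(
  (e) => e.indexed_at_ms >= cutoff && /429/.test(e.event?.error?.msg ?? "")
);
console.log(`Found ${recent429s.length} recent 429 errors`);
```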

Any help/guidance would be very much appreciated.

Thanks,

Tully

Hey All,

No need to respond. I figured it out. I did a little test: I set up the 100-recent-errors feed to send to one of our Slack channels. The workflow that generates the 429 errors ran a little earlier today, and only the two 429 errors from that execution were sent to the channel, which pretty much answered my question. I’ve changed the error-catching workflow to “repost” those 429 errors back into the original workflow so they can be reprocessed.
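In case it helps anyone else, the “repost” step boils down to something like this (the trigger URL is a placeholder, and the error-event field names match my setup, so adjust as needed):

```javascript
import axios from "axios";

export default defineComponent({
  async run({ steps }) {
    // Error event from the errors feed -- field names here are from my
    // setup, so treat them as assumptions if yours differs.
    const event = steps.trigger.event;
    if (!/429/.test(event.error?.msg ?? "")) return; // only repost 429s

    // POST back to the original workflow's HTTP trigger (placeholder URL)
    await axios.post(
      "https://YOUR-ENDPOINT.m.pipedream.net",
      event.original_event // the payload that originally failed
    );
  },
});
```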

Tully

Hi @tully,

Glad to hear you solved it. Just an FYI: for these situations I’m a big fan of using $.flow.rerun, which will rerun a specific HTTP call or bit of Node.js code up to 10 times, with a delay between retries.

This is really useful for compensating for rate-limiting errors, or for an error-prone API that might experience an outage (who doesn’t?).

Here’s the documentation and some short videos showing how to use it:

Hi @pierce,

Thank you very much for the additional info on $.flow.rerun. I’ll definitely look into it.

Tully

Hi @pierce,

I’m finally getting around to using the $.flow.rerun functionality within the Node.js code of my workflows, but I’m getting a bit hung up. Instead of trying to type out my scenario, I recorded a Loom video, which I figured would be much easier to watch.

Any help or guidance would be greatly appreciated!

Tully

Hi @tully,

Sorry, I think we linked you to the Python documentation for $.flow.rerun, not the Node.js version. Here’s the documentation with the video and example in Node.js:

There’s an example in there that makes an axios HTTP call just like the one your Amazon Ads code step is running.
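Adapted to your 429 scenario, the pattern looks roughly like this (the request URL is a placeholder; swap in your Amazon Ads call):

```javascript
import axios from "axios";

export default defineComponent({
  async run({ steps, $ }) {
    const MAX_RETRIES = 3;
    const DELAY_MS = 30 * 1000; // wait 30 seconds between attempts

    try {
      // Placeholder request -- your Amazon Ads call goes here
      const resp = await axios.get("https://example.com/api");
      return resp.data;
    } catch (err) {
      if (err.response?.status === 429) {
        // Pause this step and rerun it later, up to MAX_RETRIES times.
        // Signature: $.flow.rerun(delayMs, context, maxRetries)
        return $.flow.rerun(DELAY_MS, null, MAX_RETRIES);
      }
      throw err; // let non-429 errors fail the workflow as usual
    }
  },
});
```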

Another option, which requires no code changes at all, is to reduce the workflow’s concurrency to 1.

This would only allow 1 instance of your workflow to run at a given time. So if you had a burst of 5 events, the workflow would execute them consecutively instead of trying to execute all 5 at the same time.

Documentation for workflow concurrency settings: Concurrency and Throttling

Hi @pierce,

Thanks for the updated Node.js documentation. I currently have the workflow’s concurrency set to 1. However, I’m typically adding hundreds of rows of data to the Google Sheet to kick off the workflow (today I added nearly 700 rows), so with concurrency at 1, processing all the rows takes nearly 40 minutes. I’d like to cut that time down considerably using $.flow.rerun.

Am I understanding the thought process of using rerun correctly for my scenario? Loom video below with my current level of understanding…haha. I apologize if this is a simple solution that I’m just not catching on to.

Tully

Hi Tully,

You can try playing with the other concurrency and rate-limiting settings in the workflow to fine-tune it to your speed needs.

However, unfortunately that API has dynamic rate limits: Amazon Advertising Advanced Tools Center

Their rate limit isn’t a set amount; it depends on their traffic at that time.

They return a header in their responses telling you how far apart to space your requests.
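When that header is present, you can feed it straight into the rerun delay. Something like this inside the catch block of the sketch above (the Retry-After header name is an assumption based on the common convention; check what the API actually returns):

```javascript
// Inside the catch block of the earlier rerun sketch: use the server's
// hint instead of a fixed delay. Header name is an assumption.
if (err.response?.status === 429) {
  const retryAfterSec = Number(err.response.headers["retry-after"]) || 30;
  return $.flow.rerun(retryAfterSec * 1000, null, MAX_RETRIES);
}
```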

Hey @pierce,

Thanks for the follow-up. Yeah, Amazon’s not the easiest to work with. Fortunately/unfortunately (haha) for us, most of our requests are made to the DSP API, which is separate from the Sponsored Ads and Sponsored Brands APIs. Looking at the headers of the 429 error response, there aren’t any that contain info about the retry wait time. I’m guessing these headers are more specific to the Sponsored Ads and Sponsored Brands APIs.

Based on my workflow and the steps within it, are you thinking that $.flow.rerun won’t be an option to handle the 429 errors and I should instead just focus on the concurrency and workflow throttling settings? Or do you think $.flow.rerun could still be useful?

As always, thanks for all the help and guidance.

Tully

Hi @pierce,

I think I might have figured it out! I went with a try/catch solution to handle the errors, then added some randomness to the delay times. I’d love your thoughts on whether this is an efficient solution or whether I’m overlooking something that would make it better.
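Simplified, the pattern looks like this (the request and the delay bounds are placeholders for my actual values):

```javascript
import axios from "axios";

export default defineComponent({
  async run({ steps, $ }) {
    const MAX_RETRIES = 5;
    try {
      // Placeholder request -- my actual Amazon DSP call goes here
      const resp = await axios.get("https://example.com/api");
      return resp.data;
    } catch (err) {
      if (err.response?.status === 429) {
        // Random delay between 10 and 40 seconds so retried executions
        // don't all hit the API at the same moment
        const jitterMs = (10 + Math.random() * 30) * 1000;
        return $.flow.rerun(jitterMs, null, MAX_RETRIES);
      }
      throw err;
    }
  },
});
```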

Tully