What is the Best Way to Send Bulk HTTP Requests from a Large .CSV File Without Running into Size Limits?

This topic was automatically generated from Slack. You can find the original thread here.

Hi
I am working with a .csv file (12 MB) containing 120,000 lines of transactions.

I need to loop through all of them and send them as bulk HTTP requests with 20 transactions in each.

What is the best way to handle this?

My idea is to parse the CSV and store it in /tmp, but I am afraid of running into size limits…

Please advise :blush:

There is plenty of room in the /tmp folder!

From the docs:

You have 2GB of available space in /tmp to save any file.

You could also use the file store for persistent storage in between executions.
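For the parsing half of that plan, here is a minimal sketch of a Node.js code step that reads a CSV already saved to /tmp. The /tmp/transactions.csv path and the csv-parse package are assumptions; any CSV parser would work:

    import fs from "fs";
    import { parse } from "csv-parse/sync";

    export default defineComponent({
      async run({ steps, $ }) {
        // A 12 MB file fits comfortably within the 2GB /tmp limit
        const raw = fs.readFileSync("/tmp/transactions.csv", "utf8");
        // Parse into row objects keyed by the CSV header line
        const rows = parse(raw, { columns: true, skip_empty_lines: true });
        // ~120,000 rows; batch them here or checkpoint them for later steps
        return rows.length;
      },
    });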

So you could have one single workflow looping over all of your records for ~10 minutes, save what’s left to process in the file store, and then use $.flow.rerun() to reset its timeout timer and continue processing.

This way, that one workflow could continue processing everything for hours (if it takes that long) without timing out.
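Roughly, that pattern could look like the sketch below in a Node.js code step. The remaining.json checkpoint name, the ~4-minute processing budget, and the endpoint URL are assumptions, as is the use of the project File Store helpers at $.files; it also presumes an earlier step has seeded the checkpoint file with the parsed rows:

    import fs from "fs";

    export default defineComponent({
      async run({ steps, $ }) {
        // Pull the checkpoint of unprocessed rows down from the File Store
        const checkpoint = $.files.open("remaining.json");
        await checkpoint.toFile("/tmp/remaining.json");
        const remaining = JSON.parse(fs.readFileSync("/tmp/remaining.json", "utf8"));

        // Send batches of 20 until we approach this run's time budget
        const startedAt = Date.now();
        const BUDGET_MS = 4 * 60 * 1000;
        while (remaining.length > 0 && Date.now() - startedAt < BUDGET_MS) {
          await fetch("https://1234567890.m.pipedream.net", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(remaining.splice(0, 20)),
          });
        }

        // Save what's left back to the File Store, then rerun to keep going
        fs.writeFileSync("/tmp/remaining.json", JSON.stringify(remaining));
        await checkpoint.fromFile("/tmp/remaining.json");
        if (remaining.length > 0) {
          $.flow.rerun(1000, null); // delay in ms, optional context
        }
      },
    });

Each rerun starts from the top of the step, downloads the latest checkpoint, and keeps sending batches until nothing is left.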

Otherwise, you could also trigger another workflow for each batch of 20 (but that will probably use more credits in total).

But we have a use case where we do just that, and the code is pretty simple:

    // `list` is the array of parsed CSV rows; splice(0, 20) removes and
    // returns the next batch of 20 until the array is empty.
    while (list.length > 0) {
      await fetch("https://1234567890.m.pipedream.net", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(list.splice(0, 20)),
      });
    }