This topic was automatically generated from Slack. You can find the original thread here.
Hey 2 questions:
is there any way to force a retry of a failed workflow from the failed step?
I have multiple workflows linked with HTTP requests – but the timeout window for this step is very short (hardcoded at 5 seconds). Is there any way to prevent timeouts / decrease their likelihood? Alternatively, is there another way to pass data between workflows that is not so prone to timeouts?
1. Yes – if you’re on the Advanced Plan, you can enable auto-retry for workflows on errors. If any step in your workflow throws an error, Pipedream will retry the workflow from the failed step, re-running it up to 8 times over a 10-hour span with an exponential backoff strategy. You can enable this feature in your workflow settings.
2. The timeout for HTTP requests sent with $.send.http() is currently hardcoded at 5 seconds. To avoid timeouts, you can try optimizing the target service to respond faster, or use another method to pass data between workflows. One alternative is Pipedream’s built-in key-value store, which lets you store and retrieve data between workflows. This method is less prone to timeouts and can be a more reliable way to pass data between workflows.
There isn’t a built-in way to manually retry a single step within a workflow. However, you can re-run the entire workflow from the beginning by triggering it again. If you need more control over error handling, you can implement error logic in code steps using try / catch statements in Node.js code. This way, you can decide how to handle errors and retry specific steps within your custom code logic.
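As a rough sketch of that try / catch approach inside a Node.js code step (the helper name, retry count, and delays here are illustrative choices, not a Pipedream API):

```
// Illustrative retry helper: wrap a flaky operation in try/catch and
// retry it with exponential backoff before giving up.
async function retryWithBackoff(fn, { retries = 3, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts, rethrow
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Example: a stand-in operation that succeeds on the third attempt
let calls = 0;
const flakyCall = async () => {
  calls++;
  if (calls < 3) throw new Error("temporary failure");
  return "ok";
};

retryWithBackoff(flakyCall).then((result) => {
  console.log(result, "after", calls, "attempts"); // ok after 3 attempts
});
```

You could wrap a single step's logic (for example, an outbound request) in a helper like this to decide per-step how failures are handled.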
To re-trigger the entire workflow from the same row creation in Google Sheets, you can follow these steps:
1. Go to your Pipedream workflow that’s triggered by the Google Sheets “New Row” event.
2. Click on the trigger step (Google Sheets “New Row” event) to open its settings.
3. In the trigger settings, you’ll find the “Test” button. Click on it to send the last event data (the same row creation) to your workflow. This will re-run the entire workflow with the same data as before.
Keep in mind that this method is for testing purposes and replays the same row-creation event. If you want to trigger the workflow with actual new data, you’ll need to create a new row in your Google Sheet.
How do I do this key-value store method? Please give me step-by-step instructions like I’m stupid
Sure, I’ll guide you through using Pipedream’s key-value store to pass data between workflows. Here are the step-by-step instructions:

Step 1: Store data in the first workflow. In the first workflow, where you want to store data, add a new code step and use the following code:

```
const key = "your_key_name"; // Replace with a unique key name
const value = "your_data"; // Replace with the data you want to store
await $store.set(key, value);
```

Step 2: Retrieve data in the second workflow. In the second workflow, where you want to retrieve the stored data, add a new code step and use the following code:

```
const key = "your_key_name"; // Use the same key name as in the first workflow
const value = await $store.get(key);
console.log("Retrieved data:", value);
```

Now the value variable in the second workflow will contain the data stored in the first workflow, and you can use it in subsequent steps as needed. Remember to replace your_key_name and your_data with appropriate values for your use case.
if I store the data in this method – what is the best way to ensure that the next workflow in the sequence is triggered (without using the HTTP request, which times out regularly)
To trigger the next workflow without using an HTTP request, you can use Pipedream’s built-in event emitters. Here’s a step-by-step guide:

Step 1: Emit an event in the first workflow. After storing the data using the key-value store, add a new code step to emit an event:

```
const event = {
  key: "your_key_name", // Use the same key name as when storing data
  data: "your_data", // Replace with the data you want to pass
};
await $send.emit(event);
```

Step 2: Create a new workflow with an Event Source trigger. In Pipedream, create a new workflow and set the trigger to “Event Source”. This trigger listens for events emitted by other workflows.

Step 3: Retrieve the data in the second workflow. In the second workflow, access the event data using steps.trigger.event, then use the key-value store to retrieve the data:

```
const key = steps.trigger.event.key;
const value = await $store.get(key);
console.log("Retrieved data:", value);
```
To reduce the likelihood of an HTTP request timing out, you can try the following:
1. Optimize the target service to respond faster, if possible.
2. Increase the timeout value for the HTTP request. This is not possible with Pipedream’s $.send.http() method, however, as its timeout is hardcoded at 5 seconds.
If timeouts continue to be an issue, you might consider triggering the next workflow with Pipedream’s built-in event emitters, as described in a previous response. That method is less prone to timeouts and can be a more reliable way to pass data between workflows.
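One more workaround, sketched here under the assumption that your workflow can run Node.js code steps (the URL and 30-second timeout are placeholders, and this is not $.send.http() itself): send the request yourself with Node’s built-in fetch and an AbortController, so the timeout is under your control rather than hardcoded:

```
// Illustrative sketch: POST to another workflow's HTTP trigger with a
// caller-controlled timeout, using Node 18+ built-in fetch.
async function triggerNextWorkflow(url, payload, timeoutMs = 30000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
      signal: controller.signal, // aborts the request if the timer fires
    });
    return res.status;
  } finally {
    clearTimeout(timer); // always clean up the timer
  }
}
```

This keeps the receiving workflow’s HTTP trigger, but gives the sender a timeout budget you choose instead of the 5-second limit.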