What is the Updated Status of Pipedream Not Picking Up Missed Tasks After Changing Schedule Frequency?

And about the table snowflake.account_usage.task_history: this is a view, which means something inside Snowflake updates it, probably via an internal task. TASK_HISTORY View | Snowflake Documentation
The advantage of this view is that it lets us fetch data from the last 365 days, while the function only covers the last 7 days. The drawback is that the view is not updated instantly the way the function is.
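For comparison, querying the view might look like the sketch below (MY_TASK is a hypothetical task name, and the 100-row limit is just illustrative):

```sql
-- Sketch: the ACCOUNT_USAGE view keeps up to 365 days of history,
-- but its data is ingested with some latency (not real-time like
-- the INFORMATION_SCHEMA.TASK_HISTORY function).
select *
from snowflake.account_usage.task_history
where name = 'MY_TASK'          -- hypothetical task name
order by completed_time desc
limit 100;
```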

What I can do is create a new source using this view, but I wouldn’t change the current source because that may have side effects. In my view, the better course of action here is to understand why your Snowflake function is not returning the data, and the Snowflake team may be able to help you with this.

Cc:

I have opened a support case with Snowflake regarding the behavior of TABLE(INFORMATION_SCHEMA.TASK_HISTORY()).

Awesome thanks Alex

Posted thread to Discourse: Update on Pipedream not picking up missed tasks after schedule change to 6 hours?

@U02GNNVAGS0 This is what I got from Snowflake support, but I am not sure that it is helpful. I can pass any comments from you to our Snowflake support.

When the TASK_HISTORY function is queried, its arguments are applied first, followed by the WHERE clause.

So when the following is executed,

SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY()) WHERE NAME = 'MY_INFO_SCHEMA_TASK6' ORDER BY COMPLETED_TIME DESC;

This retrieves only the most recent 100 task executions across all tasks, because that is the default RESULT_LIMIT. Only then is the WHERE clause applied, filtering those 100 returned rows down to the ones matching NAME = 'MY_INFO_SCHEMA_TASK6'.

Which would mean it only shows the executions of 'MY_INFO_SCHEMA_TASK6' that happen to fall within those most recent 100 rows.

Instead of using the WHERE clause for filtering on NAME output, can you please try using the TASK_NAME argument?

Ex :

select * from table(information_schema.task_history( task_name=>'MY_INFO_SCHEMA_TASK6'));
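Building on that suggestion, a sketch that filters inside the function arguments so the filter is applied before the row limit rather than after it (the SCHEDULED_TIME_RANGE_START window is an optional addition, not part of support's reply):

```sql
-- Sketch: pushing the filters into the function arguments means the
-- 100-row default RESULT_LIMIT is applied AFTER filtering, not before.
select *
from table(information_schema.task_history(
  task_name => 'MY_INFO_SCHEMA_TASK6',
  scheduled_time_range_start => dateadd('hour', -6, current_timestamp())  -- optional window
))
order by completed_time desc;
```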

That makes sense, thanks for following up. Could you try using that pattern? I think we should adjust the query history SQL in the same way.

Got it! Will do this

Thank you. Please let me know when you have a fix. I would love to test it.

The PR is done. Now it will be reviewed by the weekly reviewer and tested by our QA team before being merged and made available. You can check the status here: [Bugxfix] Snowflake - Failed Task in Schema by vellames · Pull Request #6712 · PipedreamHQ/pipedream · GitHub

I tested it in my account and everything looks good. It seems the problem was happening because you have a lot of tasks running: the function returns only the last 100 executions by default, and since you increased the fetch interval, some of your data fell outside those first 100 rows. I updated the limit to 10000, which is the maximum for this function. This should work properly now.
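For reference, a sketch of what the raised limit might look like in the query (the exact SQL in the PR may differ):

```sql
-- Sketch: RESULT_LIMIT defaults to 100; raising it to the maximum of
-- 10000 keeps executions from being dropped when many tasks run
-- between fetches.
select *
from table(information_schema.task_history(
  result_limit => 10000
))
order by completed_time desc;
```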

Thank you

A few questions. When will the fix be in production?

Would I need to reconfigure triggers?

What version should I be looking for in the config?

It will be deployed a few minutes after the PR status changes to Merged. It is in QA right now, which is the last step before merging. I can’t give you a precise deadline, but I imagine it should be good to go in the next few days. The version is 0.0.6. You don’t need to reconfigure the whole workflow, just update your trigger as mentioned before.

Sorry, you will need to remove and re-add the trigger — there is not currently any way to update the version of sources today, just actions.

Merged! You can test right now. Please let me know how it works for you.