Why are Slack notifications for task failures in our Snowflake databases delayed by 12 hours despite a two-hour trigger check?

This topic was automatically generated from Slack. You can find the original thread here.

Hello, we have some triggers set to send us a Slack message if any task fails in one of our Snowflake databases. There were a handful of failed tasks this morning at 09:22 EST, but the Slack messages didn't end up sending to us until around 12 hours later, at 21:19 EST. The trigger is set to check for task failures every two hours. Anything I should be checking configuration-wise? I didn't see anything obvious that would cause this.

I apologize for the delay you’re experiencing with the Slack messages. To help you better, I need to understand how your workflow is set up. Can you please provide more information about the trigger you’re using and any additional steps in your workflow? This will help me identify any potential issues in the configuration.

The workflow uses a Snowflake trigger, "FAILED TASK IN SCHEMA", with the timer set to run every 2 hours.

Then there's a send_message_public_channel step.
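For anyone triaging a similar delay: a polling source like this typically keeps a cursor (the timestamp of the last failure it has already emitted) and, on each scheduled run, emits only failures newer than that cursor. If the cursor or the timestamps it is compared against are off (for example, stored in the wrong time zone), events can surface hours late. A minimal sketch of that polling logic in Python; all names here are hypothetical and not the actual source implementation:

```python
from datetime import datetime, timedelta, timezone

def poll_failed_tasks(task_history, last_poll_cursor):
    """Emit failures newer than the cursor; return them plus the new cursor.

    task_history: list of (failure_timestamp, task_name) tuples,
                  with timezone-aware datetimes.
    last_poll_cursor: timestamp of the newest failure already emitted.
    """
    new_failures = [
        (ts, name) for ts, name in task_history if ts > last_poll_cursor
    ]
    # Advance the cursor so the same failure is never emitted twice.
    new_cursor = max(
        (ts for ts, _ in task_history), default=last_poll_cursor
    )
    return new_failures, new_cursor

# Example: two failures at 09:22 and 09:25; first poll emits both,
# a second poll with the advanced cursor emits nothing.
base = datetime(2023, 10, 25, 9, 0, tzinfo=timezone.utc)
history = [
    (base + timedelta(minutes=22), "load_orders"),
    (base + timedelta(minutes=25), "load_users"),
]
emitted, cursor = poll_failed_tasks(history, base)
emitted_again, _ = poll_failed_tasks(history, cursor)
```

If a cursor like this were written or read with a time-zone offset baked in, failures could appear to be "in the future" for several polls and only emit once the clock caught up, which would look like the ~12-hour delay described above.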

Did the trigger for your workflow emit an event at the expected time, but the message didn't send until later? Or did the trigger emit the failed task later than you expected?

The trigger did not emit the event at the expected time. Thanks!

A little bit ugly to read with the time-zone differences between the logs and the UI, but the log is in Pacific time in this case.
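When correlating UI timestamps with logs, converting the reported failure time explicitly between zones helps rule out a simple offset misreading. A quick check using Python's standard-library `zoneinfo` (the date and year are assumed here for illustration; note that in late October the US is still on daylight time, so "EST" times are actually EDT):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 09:22 Eastern on Oct 25 (date/year assumed for illustration)
eastern = datetime(2023, 10, 25, 9, 22, tzinfo=ZoneInfo("America/New_York"))

# The same instant as it would appear in Pacific-time logs and in UTC.
pacific = eastern.astimezone(ZoneInfo("America/Los_Angeles"))
utc = eastern.astimezone(ZoneInfo("UTC"))

print(pacific.strftime("%H:%M %Z"))  # 06:22 PDT
print(utc.strftime("%H:%M %Z"))      # 13:22 UTC
```

So a log line around 06:22 Pacific corresponds to the 09:22 Eastern failure; anything roughly 12 hours later in either zone points to a genuine emission delay rather than a display offset.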

Can you help triage this when you get a minute?

I’ll look into this now

Hi, I couldn't seem to reproduce this issue on my side. I'll share this with the component dev to get more insight.

Specifically, I created a task and manually executed it twice, then waited for the source to run. It emitted both events correctly.

For more context, would you mind sharing a screenshot of your source log?

Sure thing

Apologies, I couldn't get to this on Friday, and I don't believe my logs go back to the 25th of October. We might be stuck until this happens again.