This topic was automatically generated from Slack. You can find the original thread here.
How does the row monitoring work for Supabase? I have a source configured to watch for a new row being added. It works initially, but if I delete rows and add new ones, it isn't triggered until the table exceeds its previous max row count.
If I look in the source logs, `db["offset"]` never decreases when deleting rows. It stays at the high watermark / highest row count seen.
It seems like you’re using the Supabase “New Row Added” trigger in your Pipedream workflow. When you delete rows and add new ones, the trigger may not fire until the table reaches a new max row count. To debug this, check the logs and configuration of your event source at https://pipedream.com/sources. If you still face issues or need further assistance, please visit https://pipedream.com/support.
I couldn’t find any specific documentation about the row monitoring behavior you described for Supabase in the provided sources. If you need further assistance or clarification, please visit https://pipedream.com/support.
I apologize for the confusion; my earlier response was incorrect. I couldn’t find specific information about this behavior in the provided sources. You can inspect the logs and configuration of your event source at https://pipedream.com/sources, and visit https://pipedream.com/support for more support options.
I don’t think just using the offset is viable for tables that could see multiple changes during a polling interval. (With polling set to 15 min, deleting 5 rows and adding 5 new ones leaves the offset unchanged.) I think I’ll try out the webhook option that triggers on every insert/update/delete operation.
Thanks Chip - let us know if the webhook event works for your particular use case here, otherwise we can dig in further to how we might be able to improve the polling trigger that you were using earlier.
But I’m curious if you have any preference or thoughts on that topic as a user — is that confusing to you to have the sources published that way? Or does it make sense?
For my use case, I don’t want to miss a row action, so polling probably isn’t the best fit. I’m happy to manage the parsing of an event stream and pick the ones I need. I was testing with polling due to ease of use…
Obviously there’d be a lot more source invocations if it’s firing for every event (like Slack interaction events), but I guess that’s y’all’s concern, since the first 30 sec of a source invocation is free.