Why Did TCPSocket Errors Occur and Cause Our Source to Persistently Hang During Delete Operations?

This topic was automatically generated from Slack. You can find the original thread here.

Hi Team
I’m on an enterprise plan but wasn’t sure where to reach out first, so posting here.

We ran into an issue today where one of our sources consistently hung during a DELETE operation until eventually throwing this error:
Net::ReadTimeout with #<TCPSocket:(closed)>

This started around noon PST. Here are some key observations:
• In the Sources UI, the events panel showed no events listed but indicated that >300 events had occurred in the past 24 hours.
• Despite this, I could see newly created events when manually adding one.
• Pressing “DELETE ALL EVENTS” in the UI did nothing—no feedback, and the ~300 events remained.
• A DELETE API call (https://api.pipedream.com/v1/sources/#{@source_id}/events) consistently hung until it timed out with the above error (we’re using Ruby with HTTParty; a minimal sketch of the call is included below).
We ultimately got things working again by deleting the problematic source and creating a new one in its place, though it’s unclear whether that’s the correct fix or just a temporary workaround.
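For reference, here is a minimal sketch of the DELETE call described above, using HTTParty as in our client. The `delete_source_events` helper and the `PIPEDREAM_API_KEY` environment variable are illustrative names only; the explicit read timeout and the rescue for `Net::ReadTimeout` are there so a hung request surfaces an error instead of stalling the rest of the pipeline:

```ruby
require "httparty"

# Sketch only: delete all events for a Pipedream event source.
# `delete_source_events` and PIPEDREAM_API_KEY are hypothetical names,
# not part of any official client.
def delete_source_events(source_id, api_key: ENV.fetch("PIPEDREAM_API_KEY"))
  HTTParty.delete(
    "https://api.pipedream.com/v1/sources/#{source_id}/events",
    headers: { "Authorization" => "Bearer #{api_key}" },
    read_timeout: 15 # seconds; bounds how long a hung request can block
  )
rescue Net::ReadTimeout => e
  # Surface the timeout rather than letting the calling pipeline hang on it.
  warn "DELETE timed out for source #{source_id}: #{e.message}"
  nil
end
```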

Would be great to connect on this to better understand the TCPSocket errors and what might have caused the source to get into this state—it led to some pipeline downtime on our side. Let me know if I can provide the (now deleted) source_id for further investigation.

Thanks!

Hi , thank you for reaching out.

In order to investigate the issue, would you mind sending an email to support@pipedream.com with your message? We’ll take care of you on that channel.

Yes, as Leo mentioned, please submit a support ticket at Support - Pipedream and provide the source_id for the new trigger. I don’t think we’ll be able to get much information from the deleted one, but feel free to share that one too. Thanks!

Hi , apologies for the downtime this caused in your pipeline. Could you also share the workflow URL that was affected in the support ticket?