Why Do Simple Workflows in Pipedream Consume So Much Memory and Require Extended Timeouts?

This topic was automatically generated from Slack. You can find the original thread here.

I did a search but couldn't find this addressed here.

Problem 1: Does anyone know if there's a way to see why simple workflows consume so much memory? I need to allocate 4 GB of memory to run a workflow that does a few BigQuery queries and then formats a report to Slack. I have a hard time believing this would consume 4 GB if run locally.
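
One pattern worth checking (not from the original thread, just a common culprit): loading the entire BigQuery result set into memory at once. Here is a minimal sketch of a Pipedream Node.js step that streams rows instead, assuming the `@google-cloud/bigquery` client; the query and field names are hypothetical placeholders:

```javascript
import { BigQuery } from "@google-cloud/bigquery";

export default defineComponent({
  async run({ steps, $ }) {
    const bigquery = new BigQuery();
    // Hypothetical query; substitute your own.
    const query =
      "SELECT name, total FROM `my_project.my_dataset.report` LIMIT 100000";

    let rowCount = 0;
    const previewLines = [];

    // createQueryStream() emits rows one at a time instead of buffering
    // the whole result set, which is often what inflates a step's memory.
    await new Promise((resolve, reject) => {
      bigquery
        .createQueryStream(query)
        .on("error", reject)
        .on("data", (row) => {
          rowCount++;
          // Aggregate as you go rather than retaining every row.
          if (previewLines.length < 20) {
            previewLines.push(`${row.name}: ${row.total}`);
          }
        })
        .on("end", resolve);
    });

    return { rowCount, preview: previewLines.join("\n") };
  },
});
```

Streaming keeps roughly one row in memory at a time, so the step's footprint stays fairly flat regardless of result size.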

Problem 2: Same question, but with timeouts. While building the workflow, it consistently runs in less than 10 seconds, but when deployed, it consistently blows through a 30-second timeout.
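
To see where the deployed time actually goes (cold start, query latency, Slack API calls), one simple approach is to log elapsed wall-clock time at each phase of the step. A minimal sketch; the labels and placeholder comments stand in for the real workflow logic:

```javascript
export default defineComponent({
  async run({ steps, $ }) {
    const t0 = Date.now();
    // Log seconds elapsed since the step started, with a label.
    const mark = (label) =>
      console.log(`${label}: +${((Date.now() - t0) / 1000).toFixed(1)}s`);

    mark("start");
    // ... run the BigQuery queries here ...
    mark("queries done");
    // ... format and post the Slack report here ...
    mark("report posted");
  },
});
```

The deployed logs will then show which phase is eating the 30 seconds, rather than just reporting a timeout.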

There’s no way I can use Pipedream for anything scalable if basic workflows consume that kind of memory and need extended timeouts.

Does anyone know of any memory profiling techniques or tools for Pipedream?
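
As far as I know, Pipedream doesn't ship a dedicated memory profiler, but code steps run on Node.js, so the standard `process.memoryUsage()` API works inside a step. A minimal sketch of a diagnostic step you could drop before and after a suspect step:

```javascript
export default defineComponent({
  async run({ steps, $ }) {
    // Snapshot Node.js process memory at this point in the workflow.
    const mb = (bytes) => (bytes / 1024 / 1024).toFixed(1);
    const { rss, heapUsed, heapTotal, external } = process.memoryUsage();
    console.log(
      `rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB ` +
        `heapTotal=${mb(heapTotal)}MB external=${mb(external)}MB`
    );
    return { rss, heapUsed, heapTotal, external };
  },
});
```

Comparing the `rss` figures logged by two such steps brackets the memory attributable to whatever runs between them.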

Check out this page: Troubleshooting Common Issues - Pipedream