How to Allow Team Access to Pipedream Workflow Logs Without Giving Them Account Access?

This topic was automatically generated from Slack. You can find the original thread here.

Hey gang - I’m looking for some solutions for logging workflows. I have a few Pipedream workflows connected to my team’s production Notion databases, and I want to give my team access to workflow logs without actually letting them into the Pipedream account. I’m assuming I’ll need to add logging to each step in Pipedream, then push to an outside third-party tool for my team to visualize it?

I’ve seen tools like Better Stack, but I’m not sure if this is the direction to go.

Ideally, I’d be able to write logic that handles incidents from each step and offers logs that give the team actionable insight (“API call failed, try your request again”, or “critical data missing from input, please add ***** and resubmit”).

Ideally, this should be a native Pipedream feature (but it isn’t yet). :disappointed:

Datadog would be another popular option for this kind of stuff.

Although my favorite option would just be to push everything into Firehose → S3 → Snowflake, and then analyze everything there (mostly because that is how we already handle pretty much everything).

I think Pipedream should offer some native options for this, but also allow us to implement our own custom loggers.

Just like we can implement our own custom code steps or even our own custom components (but in this case it would be a logger component instead of an action or trigger component).

That would give us full control over:

• What exactly we want to log: trigger/event, step parameters/inputs, step response/exports (if any), errors & stack trace (before it bubbles up into the Pipedream runtime), console logs (or other custom logs), and other metadata (such as step details, start & end time, etc.)
• How to handle sensitive data (credentials, PII, etc.)
• Where we want it to go: a single destination, or multiple ones.
• What format we want the data in: JSON, protobuf, binary, etc., or some proprietary format of the API(s) we’re integrating with.
• How to handle logger failures/downtime (maybe save to the data store or file store temporarily?)

And then we’d just need to configure that logger on whichever steps we want to log (or even on entire workflows if we want to log all of the steps).

FYI. :point_up_2:

That is how I imagine the ideal logger implementation. Something flexible, not limited to whatever Pipedream is offering natively.

The Python logging library could also provide some inspiration.

So instead of console.log(), we could also do this.logger.log() (or something like that).

In that sense, the logger could be a configurable app prop for each step. That would be pretty flexible.

But until then, you’ll need to implement all of that manually in code steps (or custom components), which is a lot of overhead to add everywhere. :disappointed:

We’d love nothing more than to ship this! Feel free to track this GitHub issue — we’ll post updates there when we make progress.

The AWS equivalent of this would basically be CloudWatch logs (which can then be integrated with virtually anything else).

This is actually an interesting comment from the GitHub issue:

That might be a good avenue for you to explore. :point_up_2:

Although of course it’s a lot of extra credits just to handle all of those logs (i.e. receive the emit as a workflow trigger, and then send the data somewhere).

Imagine if you want to log all of the steps of a workflow… extra credits = workflow executions * number of steps. :exploding_head:

Ah, although I’m just reading now that there is a REST API to retrieve the events! :open_mouth:

Yes, you can even send them to webhooks.

Although I just saw this in the documentation:

Destination delivery is asynchronous: emits are sent after your workflow finishes.

So I assume that if the workflow fails, that still counts as “finished”.

But probably not in the case of OOM errors, since those executions are just broken?

Can you confirm? :point_up_2:

Correct, but those should still trigger errors via the global / workflow-specific error streams.