To diagnose why your emitted events aren’t appearing, let’s go through a few steps:
1. **Verify the workflow ID:** Ensure that the workflow ID in your `curl` command matches the ID of the emitter workflow. You can confirm the correct ID from the workflow's URL in your browser.
2. **Check the emission code:** Your emission code looks correct, but confirm that `steps.retrieve_file_content.$return_value` returns the expected data. Adding a `console.log(text)` before the `$.send.emit()` call lets you verify the content being emitted.
3. **Inspect the event in Pipedream:** After running the emitter workflow, check the event inspector in the Pipedream UI to confirm the event was actually sent.
4. **Check for errors:** Make sure there are no errors in the workflow execution logs that might prevent the event from being emitted.
5. **Subscription configuration:** If a listener workflow is set up to receive these events, ensure the subscription between the emitter and listener is correctly configured via the Pipedream REST API.
6. **API key and permissions:** Confirm that the API key in your `curl` command has the permissions needed to access the workflow's events.
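To make the emission check concrete, here is a minimal sketch of an emitter code step. The `retrieve_file_content` step name comes from this thread; the `defineComponent` stub exists only so the sketch runs outside Pipedream, where the platform provides it (along with the real `$` object):

```javascript
// Stub so this sketch is self-contained outside Pipedream;
// in a real code step, defineComponent is provided by the platform.
const defineComponent = (config) => config;

const emitStep = defineComponent({
  async run({ steps, $ }) {
    const text = steps.retrieve_file_content.$return_value;
    console.log("about to emit:", text); // verify the payload before emitting
    await $.send.emit({ text });         // emit the event for listeners to consume
    return text;
  },
});
```

In the Pipedream UI, the `console.log` output appears in the step's logs, which is a quick way to confirm the payload before checking the event inspector.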
If you’ve verified these steps and the issue persists, you might want to reach out to Pipedream support for further assistance. You can visit the Pipedream support page (Support - Pipedream) for more support options.
I’m a bot powered by Pipedream and GPT-4. I’m still learning, so please double-check my answers and code! Learn how to build your own.
@andrew Let me explain what I want to achieve with this event.
I have an AWS Lambda running daily on cron. The Lambda processes “news” from RSS feeds using OpenAI and posts messages to Telegram. The key fact here is that the Lambda is entirely stateless: it just fetches fresh feeds and takes the last 24 hours’ worth of news from them, with no state store whatsoever.
Now, one of the RSS feeds is too big and needs special processing. I implemented that processing using two Pipedream workflows: the first processes the feed and submits an OpenAI batch to lower the processing cost; the second processes the batch result. The processing works fine, too.
Now I want the Lambda to fetch the last 24 hours’ worth of processed posts from Pipedream (there is actually just one) the same way it fetches the RSS feeds once a day, so that all the sources are combined into a single Telegram message. The `event_summaries` interface seemed perfect for that.
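For what it's worth, a sketch of what the Lambda side of that could look like, assuming the REST API's `GET /v1/sources/{source_id}/event_summaries` endpoint and an API key in a `PIPEDREAM_API_KEY` environment variable. The `indexed_at_ms` field name and the response shape are assumptions to check against the actual API response:

```javascript
// Hypothetical helper: keep only events indexed in the last 24 hours.
// indexed_at_ms (epoch milliseconds) is an assumed field name.
function filterLast24h(summaries, nowMs = Date.now()) {
  const cutoffMs = nowMs - 24 * 60 * 60 * 1000;
  return summaries.filter((s) => s.indexed_at_ms >= cutoffMs);
}

// Sketch of the daily fetch (endpoint and response shape assumed).
async function fetchProcessedPosts(sourceId) {
  const res = await fetch(
    `https://api.pipedream.com/v1/sources/${sourceId}/event_summaries?expand=event&limit=100`,
    { headers: { Authorization: `Bearer ${process.env.PIPEDREAM_API_KEY}` } }
  );
  if (!res.ok) throw new Error(`Pipedream API returned ${res.status}`);
  const { data } = await res.json();
  return filterLast24h(data);
}
```

This keeps both sides stateless: the Lambda derives the 24-hour window from the clock on each run, exactly as it already does for the RSS feeds.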
Are there any better ways to communicate with third-party services? What is the recommended architecture? Presently the Pipedream part is stateless, too, and I would like to keep it like that if possible.