# Introduction To Pipedream
Source: https://pipedream.com/docs
export const PUBLIC_APPS = '2,700';
Pipedream provides the toolkit to add thousands of integrations to your app and enables you to automate any process.
* [Pipedream Connect](/docs/connect/) makes it easy to add thousands of customer-facing integrations to your app or AI agent in minutes — [get started with the CLI](/docs/connect/quickstart)
* Our [visual builder](/docs/workflows/) lets you build and run complex workflows with code-level control when you need it, and no code when you don’t
The Pipedream platform includes:
* A [serverless runtime](/docs/workflows/building-workflows/code/) and [workflow service](/docs/workflows/building-workflows/)
* SDKs to handle [user authentication](/docs/connect/) for {PUBLIC_APPS}+ APIs
* Source-available [triggers](/docs/workflows/building-workflows/triggers/) and [actions](/docs/workflows/building-workflows/actions/) for [thousands of integrated apps](https://pipedream.com/explore/)
* One-click [OAuth and key-based authentication](/docs/apps/connected-accounts/) for more than {PUBLIC_APPS} APIs (use tokens directly in code or with pre-built actions)
## Getting Started
To get started, [sign up for a free account](https://pipedream.com/auth/signup) (no credit card required):
* **Building integrations for your app?** Follow our [Connect quickstart](/docs/connect/quickstart) to add integrations using the CLI
* **Building workflows?** Follow our [workflow quickstart](/docs/workflows/quickstart) to create your first automation
Learn how to provide Pipedream's documentation as context for your AI assistant or IDE [here](/docs/ai-tooling).
## Use Cases
Pipedream supports use cases from prototype to production and is trusted by more than 1 million developers from startups to Fortune 500 companies.
The platform processes billions of events and is built and [priced](https://pipedream.com/pricing/) for use at scale. [Our team](https://pipedream.com/about) has built internet scale applications and managed data pipelines in excess of 10 million events per second (EPS) at startups and high-growth environments like BrightRoll, Yahoo!, Affirm, and Dropbox.
Our [community](https://pipedream.com/support) uses Pipedream for a wide variety of use cases including:
* AI agents and chatbots
* Workflow builders and SaaS automation
* API orchestration and automation
* Database automations
* Custom notifications and alerting
* Event queueing and concurrency management
* Webhook inspection and routing
* Prototyping and demos
## Source-available
Pipedream maintains a [source-available component registry](https://github.com/PipedreamHQ/pipedream) on GitHub so you can avoid writing boilerplate code for common API integrations. Use components as no code building blocks in workflows, or use them to scaffold code that you can customize. You can also [create a PR to contribute new components](/docs/components/contributing/#contribution-process) via GitHub.
## Contributing
We hope that, by providing a generous free tier, you'll not only get value from Pipedream, but also give back to help us improve the product for the entire community and grow the platform by:
* [Contributing components](/docs/components/contributing/) to the [Pipedream registry](https://github.com/PipedreamHQ/pipedream) or sharing via your own GitHub repo
* Asking and answering questions in our [public community](https://pipedream.com/community/)
* [Reporting bugs](https://pipedream.com/community/c/bugs/9) and [requesting features](https://github.com/PipedreamHQ/pipedream/issues/new?assignees=\&labels=enhancement\&template=feature_request.md\&title=%5BFEATURE%5D+) that help us build a better product
* Following us on [Twitter](https://twitter.com/pipedream), starring our [GitHub repo](https://github.com/PipedreamHQ/pipedream) and subscribing to our [YouTube channel](https://www.youtube.com/c/pipedreamhq)
* Recommending us to your friends and colleagues
Learn about [all the ways you can contribute](https://pipedream.com/contributing).
## Support & Community
If you have any questions or feedback, please [reach out in our community forum](https://pipedream.com/community) or [to our support team](https://pipedream.com/support).
## Service Status
Pipedream operates a status page at [https://status.pipedream.com](https://status.pipedream.com/). That page displays the uptime history and current status of every Pipedream service.
When incidents occur, updates are published to the **#incidents** channel of [Pipedream’s Slack Community](https://pipedream.com/support) and to the [@PipedreamStatus](https://twitter.com/PipedreamStatus) account on Twitter. On the status page itself, you can also subscribe to updates directly.
# Billing Settings
Source: https://pipedream.com/docs/account/billing-settings
You’ll find information on your usage data (for specific [Pipedream limits](/docs/workflows/limits/)) in your [Billing Settings](https://pipedream.com/settings/billing). You can also upgrade to [paid plans](https://pipedream.com/pricing) from this page.
## Subscription
If you’ve already upgraded, you’ll see an option to **Manage Subscription** here, which directs you to your personal Stripe portal. Here, you can change your payment method, review the details of previous invoices, and more.
## Usage
[Credits](/docs/pricing/#credits-and-billing) are Pipedream’s billable unit, and users on the [free plan](/docs/pricing/#free-plan) are limited on the number of daily free credits allocated. The **Usage** section displays a chart of the daily credits across a historical range of time to provide insight into your usage patterns.
Credit usage from [Connect](/docs/connect/) is not yet reflected in this section.
Hover over a specific column in the chart to see the number of credits used on that specific day:
Click on a specific column to see credits for that day, broken out by workflow / source:
Users on the free tier will see the last 30 days of usage in this chart. Users on [paid plans](https://pipedream.com/pricing) will see the cumulative usage tied to their current billing period.
## Compute Budget
Control the maximum number of credits permitted on your account with a **Credit Budget**.
This will restrict your workspace-wide usage to the specified number of [credits](/docs/pricing/#credits-and-billing) on a monthly or daily basis. The compute budget does not apply to credits incurred by [dedicated workers](/docs/workflows/building-workflows/settings/#eliminate-cold-starts) or Pipedream Connect.
To enable this feature, click on the toggle and define your maximum number of credits in the period.
Due to how credits are accrued, there may be cases where your credit usage goes slightly over the cap.
In an example scenario, with a cap set at 20 credits and a long-running workflow that uses 10 credits per run, it’s possible that two concurrent events trigger the workflow, and the cap won’t apply until after the concurrent events are processed.
## Limits
For users on the [Free tier](/docs/pricing/#free-plan), this section displays your usage towards your [credits quota](/docs/workflows/limits/#daily-credits-limit) for the current UTC day.
# User Settings
Source: https://pipedream.com/docs/account/user-settings
You can find important account details, text editor configuration, and more in your [User Settings](https://pipedream.com/user).
## Changing your email
Pipedream sends system emails to the email address tied to your Pipedream login. You can change the email address to which these emails are delivered by modifying the **Email** in your Account Settings. Once changed, an email will be delivered to the new address requesting you verify it.
Pipedream marketing emails may still be sent to the original email address you used when signing up for Pipedream. To change the email address tied to marketing emails, please [reach out to our team](https://pipedream.com/support).
## Two-Factor Authentication
Two-factor authentication (2FA) adds an additional layer of security for your Pipedream account and is recommended for all users.
### Configuring 2FA
1. Open your [Account Settings](https://pipedream.com/user)
2. Click **Configure** under **Two-Factor Authentication**
3. Scan the QR code in an authenticator app like [1Password](https://1password.com/) or Google Authenticator (available on [iOS](https://apps.apple.com/us/app/google-authenticator/id388497605) and [Android](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2\&hl=en_US\&gl=US))
4. If you’re unable to scan the QR code, you can view the setup key to configure 2FA manually in your authenticator app
5. Enter the one-time-password (OTP) from your authenticator app
6. **Save your recovery codes in a secure location**. You’ll need these to access your Pipedream account in the case you lose access to your authenticator app.
Save your recovery codes
If you lose access to your authenticator app and your recovery codes, you will permanently lose access to your Pipedream account. **Pipedream Support cannot recover these accounts.**
### Signing in with 2FA
1. You’ll be prompted to enter your OTP the next time you sign in to Pipedream
2. When prompted, you can enter the OTP from your authenticator app or a recovery code
Using recovery codes
Each recovery code is a one-time-use code, so make sure to generate new recovery codes in your [Account Settings](https://pipedream.com/user) when you need to. All previously generated recovery codes expire when you generate new ones.
2FA is not currently supported with Single Sign On (SSO). Pipedream recommends enabling 2FA with your identity provider.
### Requiring 2-Factor Authentication
Workspaces on the Business plan can [require all workspace members to configure 2FA](/docs/workspaces/#requiring-two-factor-authentication) in order to log in to Pipedream.
If you are a member of any workspace where 2FA is required, you cannot disable 2FA, but you can still reconfigure it in your [account settings](https://pipedream.com/account/) if necessary.
Admins and Owners control 2FA settings
Only workspace owner and admin members can enable or disable 2FA for an entire workspace.
## Pipedream API Key
Pipedream provides a [REST API](/docs/rest-api/) for interacting with Pipedream programmatically. You’ll find your API key here, which you use to [authorize requests to the API](/docs/rest-api/auth/).
You can revoke and regenerate your API key from here at any time.
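As a minimal sketch, you can pass your API key as a Bearer token when calling the REST API. The helper name `buildAuthHeaders` and the `PIPEDREAM_API_KEY` environment variable are hypothetical; the Bearer scheme follows the REST API auth docs linked above.

```javascript
// Build the Authorization header for Pipedream REST API requests.
// buildAuthHeaders is our own helper, not part of any Pipedream SDK.
function buildAuthHeaders(apiKey) {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}

// Example (uncomment and supply a real key to run):
// const res = await fetch("https://api.pipedream.com/v1/users/me", {
//   headers: buildAuthHeaders(process.env.PIPEDREAM_API_KEY),
// });
```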
## Delete Account
You can delete your Pipedream account at any time by visiting your Account Settings and pressing the **Delete your Account** button. Account deletion is immediate and irreversible.
## Application
You can change how the Pipedream app displays data, and basic text editor config, in your [Application Settings](https://pipedream.com/settings/app).
For example, you can:
* Change the clock format to 12-hour or 24-hour mode
* Enable Vim keyboard shortcuts in the Pipedream text editor, or enable word wrap
* Set the number of spaces that will be added in the editor when pressing `Tab`
## Environment Variables
Environment variables allow you to securely store secrets or other config values that you can access in Pipedream workflows via `process.env`. [Read more about environment variables here](/docs/workflows/environment-variables/).
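For example, here's a minimal sketch of reading an environment variable inside a Node.js code step. `MY_API_KEY` is a hypothetical variable name you would define in your settings, and `getRequiredEnv` is our own helper, not a Pipedream API:

```javascript
// Read an environment variable, failing loudly if it isn't set.
// Variables defined in Pipedream are exposed on process.env in code steps.
function getRequiredEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// In a workflow code step you might call:
// const apiKey = getRequiredEnv("MY_API_KEY");
```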
# Privacy Policy
Source: https://pipedream.com/docs/additional-resources/privacy
# Status
Source: https://pipedream.com/docs/additional-resources/status
# Support
Source: https://pipedream.com/docs/additional-resources/support
# Terms of Service
Source: https://pipedream.com/docs/additional-resources/terms
# Using AI tooling with Pipedream
Source: https://pipedream.com/docs/ai-tooling
Enable your AI tools to help integrate Pipedream Connect
Pipedream provides developer tools to make it easy to work with Pipedream Connect alongside whatever AI tools you use. You can use these tools to get instant access to Pipedream's documentation, API references, and code examples directly in your AI assistant or IDE.
## AI contextual menu
Pipedream's documentation includes built-in tools to make it easy to share content with AI assistants:
### Copy a page as Markdown
Press Command + C (Ctrl + C on Windows) on any page in the docs to copy it as Markdown to your clipboard. You can then paste this directly into your AI assistant for context-aware help.
### Send a page to your AI assistant
Click the AI menu in the top right of any page in the docs to send the current page to ChatGPT, Claude, or Perplexity to easily get immediate help with the specific topic you're reading about.
## LLM-optimized documentation
Pipedream's documentation is optimized for AI consumption with automatically generated `llms.txt` and `llms-full.txt` files:
### llms.txt
Similar to how a sitemap helps search engines, `llms.txt` provides a structured index of all the pages in the docs. AI tools can use this to quickly understand our documentation structure and find relevant content.
View it at: [pipedream.com/docs/llms.txt](https://pipedream.com/docs/llms.txt)
### llms-full.txt
For comprehensive context, `llms-full.txt` flattens the entire docs site into a single file. You can provide this URL to AI tools for more accurate responses about Pipedream.
View it at: [pipedream.com/docs/llms-full.txt](https://pipedream.com/docs/llms-full.txt)
## Pipedream Docs MCP Server
For a more integrated development experience, you can add the Pipedream Docs MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) server to your IDE. This enables your AI assistant to directly access and search Pipedream's documentation without leaving your development environment.
If you're looking to use Pipedream MCP to add thousands of tools to the app you're building, see [these docs](/docs/connect/mcp/developers).
### Benefits
Adding the Pipedream Docs MCP server to your development environment provides:
* **Instant docs access**: Your AI assistant can search and reference Pipedream docs without leaving your IDE
* **Faster development**: Reduce context switching between documentation and code
* **Accurate answers**: AI responses are grounded in our official documentation
### Installation
To use the Pipedream Docs MCP server with stdio transport in your IDE:
```bash
npx mint-mcp add pipedream
```
The Pipedream Docs MCP server works with any IDE or AI assistant that supports MCP, including:
* Cursor
* VSCode
* Claude Code
* Claude Desktop
* Any other MCP-compatible IDE
### What you can do
With the Pipedream Docs MCP server enabled, your AI assistant can help you:
* Find the right Connect SDK methods and parameters
* Generate code examples for common integration patterns
* Troubleshoot authentication and API issues
* Discover available endpoints and capabilities
### Example prompts
Here are some example prompts you can use with your AI assistant once the MCP server is configured:
* "How do I list apps in Pipedream Connect?"
* "Show me how to send a Slack message on behalf of my users with Pipedream Connect"
* "How can I add Pipedream MCP to my agent?"
* "How do I deploy a Google Sheets trigger for my end users?"
## Next steps
* Go through the [Pipedream Connect Quickstart](/docs/connect/quickstart) to get started
* Check out the [SDK playground](https://pipedream.com/connect/demo) to see Connect in action
* Use [Pipedream Chat](https://chat.pipedream.com) to see how you can add Pipedream MCP to your AI app
# Integrated Apps
Source: https://pipedream.com/docs/apps
export const PUBLIC_APPS = '2,700';
Pipedream has built-in integrations with more than {PUBLIC_APPS} apps. Since you can [write any code](/docs/workflows/building-workflows/code/nodejs/) on Pipedream, and pass API keys or credentials using [environment variables](/docs/workflows/environment-variables/), you can connect to virtually any service, so the list is not exhaustive.
But Pipedream-integrated apps provide a few benefits:
* You can [connect the app once](/docs/apps/connected-accounts/) and [link that connected account to any step of a workflow](/docs/apps/connected-accounts/#connecting-accounts)
* Pipedream provides [pre-built actions](/docs/components/contributing/#actions) that wrap common operations for the app. You shouldn’t have to write the code to send a message to Slack, or add a new row to a Google Sheet, so actions make that easy. Actions are just code, so you can fork and modify them, or even [publish your own to the Pipedream community](/docs/components/contributing/).
* [You have access to your API keys and access tokens in code steps](/docs/workflows/building-workflows/code/nodejs/auth/), so you can write any code to authorize custom requests to these apps.
## Premium Apps
The vast majority of integrated apps on Pipedream are free to use in your workflows across any plan. However, in order to use any of the below apps in an active workflow, your workspace will need to have access to [Premium Apps](https://pipedream.com/pricing):
* [ActiveCampaign](https://pipedream.com/apps/activecampaign)
* [ADP](https://pipedream.com/apps/adp)
* [Amazon Advertising](https://pipedream.com/apps/amazon_advertising)
* [Asana](https://pipedream.com/apps/asana)
* [AWS](https://pipedream.com/apps/aws)
* [Azure OpenAI Service](https://pipedream.com/apps/azure-openai-service)
* [BigCommerce](https://pipedream.com/apps/bigcommerce)
* [Cisco Webex](https://pipedream.com/apps/cisco-webex)
* [Cisco Webex (Custom App)](https://pipedream.com/apps/cisco-webex-custom-app)
* [Close](https://pipedream.com/apps/close)
* [Cloudinary](https://pipedream.com/apps/cloudinary)
* [Customer.io](https://pipedream.com/apps/customer-io)
* [Datadog](https://pipedream.com/apps/datadog)
* [dbt Cloud](https://pipedream.com/apps/dbt)
* [ERPNext](https://pipedream.com/apps/erpnext)
* [Exact](https://pipedream.com/apps/exact)
* [Freshdesk](https://pipedream.com/apps/freshdesk)
* [Google Cloud](https://pipedream.com/apps/google-cloud)
* [Gorgias](https://pipedream.com/apps/gorgias-oauth)
* [HubSpot](https://pipedream.com/apps/hubspot)
* [Intercom](https://pipedream.com/apps/intercom)
* [Jira](https://pipedream.com/apps/jira)
* [Jira Service Desk](https://pipedream.com/apps/jira-service-desk)
* [Klaviyo](https://pipedream.com/apps/klaviyo)
* [Linkedin](https://pipedream.com/apps/linkedin)
* [Linkedin Ads](https://pipedream.com/apps/linkedin-ads)
* [Mailchimp](https://pipedream.com/apps/mailchimp)
* [Mailgun](https://pipedream.com/apps/mailgun)
* [MongoDB](https://pipedream.com/apps/mongodb)
* [Outreach](https://pipedream.com/apps/outreach)
* [PagerDuty](https://pipedream.com/apps/pagerduty)
* [Pinterest](https://pipedream.com/apps/pinterest)
* [Pipedrive](https://pipedream.com/apps/pipedrive)
* [Pipefy](https://pipedream.com/apps/pipefy)
* [Propeller](https://pipedream.com/apps/propeller)
* [Quickbooks](https://pipedream.com/apps/quickbooks)
* [Rebrandly](https://pipedream.com/apps/rebrandly)
* [ReCharge](https://pipedream.com/apps/recharge)
* [Salesforce (REST API)](https://pipedream.com/apps/salesforce_rest_api)
* [Segment](https://pipedream.com/apps/segment)
* [SendinBlue](https://pipedream.com/apps/sendinblue)
* [ServiceNow](https://pipedream.com/apps/servicenow)
* [ShipStation](https://pipedream.com/apps/shipstation)
* [Shopify](https://pipedream.com/apps/shopify)
* [Snowflake](https://pipedream.com/apps/snowflake)
* [Stripe](https://pipedream.com/apps/stripe)
* [Twilio SendGrid](https://pipedream.com/apps/sendgrid)
* [WhatsApp Business](https://pipedream.com/apps/whatsapp-business)
* [WooCommerce](https://pipedream.com/apps/woocommerce)
* [Xero Accounting](https://pipedream.com/apps/xero_accounting_api)
* [Zendesk](https://pipedream.com/apps/zendesk)
* [Zoom Admin](https://pipedream.com/apps/zoom_admin)
* [Zoho Books](https://pipedream.com/apps/zoho_books)
* [Zoho CRM](https://pipedream.com/apps/zoho_crm)
* [Zoho People](https://pipedream.com/apps/zoho_people)
* [Zoho SalesIQ](https://pipedream.com/apps/zoho_salesiq)
Missing an integration? If we don’t have an integration for an app that you’d like to see, please [let us know](https://pipedream.com/support) or [contribute it to the source available Pipedream registry](/docs/components/contributing/).
**Check out the full list of integrated apps [here](https://pipedream.com/apps).**
# App Partners
Source: https://pipedream.com/docs/apps/app-partners
export const PUBLIC_APPS = '2,700';
By integrating your app with Pipedream, your users will be able to connect your app with over {PUBLIC_APPS} supported apps on our platform. You gain access and exposure to a community of over 800,000 developers, and can spend more time building your product and less time navigating app integrations.
## Benefits of Integrating With Pipedream
* **End-to-End Development:** Pipedream handles the entire development process of the integration, from managing authentication and setting up an official Pipedream OAuth client (where applicable) to final QA.
* **Custom Triggers and Actions:** We will create up to three triggers and three actions, based on the methods available within your APIs.
* **Extensive Developer Exposure:** Your app will be accessible to a large and growing community of developers.
* **Dedicated App Page:** Control and customize your dedicated app page to market your app to potential users, and share example use cases or workflow templates that can be built to help users get started.
## Integration Process
Integrating with Pipedream is a straightforward process:
1. To get started, Pipedream requires an account for testing and development, along with API documentation. We will build the initial integration and run test requests to verify the connection.
2. Our team will build no-code triggers and actions that make the most sense from a workflow development perspective - if you have specific triggers and actions in mind to start with, we’ll start there.
3. Our QA team will thoroughly test the no-code components to ensure that they work as intended, and then we will release and announce the completed integration in our public Slack with over 5,000 active members.
## Get Started
Are you ready to integrate with Pipedream? [Contact our integrations team](https://pipedream.com/support) today to get started.
# Connected Accounts
Source: https://pipedream.com/docs/apps/connected-accounts
export const PD_EGRESS_IP_RANGE = '44.223.89.56/29';
export const PUBLIC_APPS = '2,700';
Pipedream provides native integrations for [{PUBLIC_APPS}+ APIs](https://pipedream.com/apps). Once you connect an account, you can
* [Link that account to any step of a workflow](/docs/apps/connected-accounts/#connecting-accounts), using the associated credentials to make API requests to any service.
* [Manage permissions](/docs/apps/connected-accounts/#managing-connected-accounts), limiting access to sensitive accounts
Pipedream handles OAuth for you, ensuring you always have a fresh access token to authorize requests, and [credentials are tightly-secured](/docs/privacy-and-security/#third-party-oauth-grants-api-keys-and-environment-variables).
If you use an existing secrets store, or manage credentials in a database, you can also [pass those to Pipedream at runtime](/docs/apps/external-auth/) instead of connecting accounts in the UI.
## Supported Apps
Pipedream supports [{PUBLIC_APPS}+ apps](https://pipedream.com/apps), and we’re adding more every day.
If you don’t see an integration for a service you need, you can [request the integration here](/docs/apps/connected-accounts/#requesting-a-new-app-or-service), or [use environment variables](/docs/workflows/environment-variables/) to manage custom credentials.
## Types of Integrations
### OAuth
For services that support OAuth, Pipedream operates an OAuth application that mediates access to the service so you don’t have to maintain your own app, store refresh and access tokens, and more.
When you connect an account, you’ll see a new window open where you authorize the Pipedream application to access data in your account. Pipedream stores the OAuth refresh token tied to your authorization grant, automatically generating access tokens you can use to authorize requests to the service’s API. You can [access these tokens in code steps](/docs/workflows/building-workflows/code/nodejs/auth/).
### Key-based
We also support services that use API keys or other long-lived tokens to authorize requests.
For those services, you’ll have to create your keys in the service itself, then add them to your connected accounts in Pipedream.
For example, if you add a new connected account for **Sendgrid**, you’ll be asked to add your Sendgrid API key.
## Connecting accounts
This section discusses connecting **your own account** within the Pipedream UI. If you’re looking to use the connected accounts for your customers, check out the [Connect docs](/docs/connect/).
### From an action
Prebuilt actions that connect to a specific service require you to connect your account for that service before you run your workflow. Click the **Connect \[APP]** button to get started.
Depending on the integration, this will either:
* Open the OAuth flow for the target service, prompting you to authorize Pipedream to access your account, or
* Open a modal asking for your API credentials for key-based services
If you’ve already connected an account for this app, you’ll also see a list of existing accounts to select from.
### From the HTTP Request action
Craft a custom HTTP request in a workflow with a connected account *without code*.
In a new step, select the **Send any HTTP Request** to start a new HTTP Request action.
Then, within the new HTTP request, open the **Authorization Type** dropdown and choose **Select an app**:
This will open a new prompt to select an app to connect with. Once you select an app, the HTTP request will be updated with the correct headers to authenticate with that app’s API.
Once you connect the selected app account, Pipedream will automatically include your account’s authentication keys in the request headers and update the URL to match the selected service.
Now you can modify the request path, method, body or query params to perform an action on the endpoint with your authenticated account.
### From a code step
You can connect accounts to code steps by using an `app` prop. Refer to the [connecting apps in Node.js documentation](/docs/workflows/building-workflows/code/nodejs/auth/).
For example, you can connect to Slack from Pipedream (via their OAuth integration), and use the access token Pipedream generates to authorize requests:
```javascript
import { WebClient } from '@slack/web-api';
// Sends a message to a Slack Channel
export default defineComponent({
props: {
slack: {
type: 'app',
app: 'slack'
}
},
async run({ steps, $ }) {
const web = new WebClient(this.slack.$auth.oauth_access_token)
return await web.chat.postMessage({
text: "Hello, world!",
channel: "#general",
})
}
});
```
## Managing Connected Accounts
Visit your [Accounts Page](https://pipedream.com/accounts) to see a list of all your connected accounts.
On this page you can:
* Connect your account for any integrated app
* [View and manage access](/docs/apps/connected-accounts/#access-control) for your connected accounts
* Delete a connected account
* Reconnect an account
* Change the nickname associated with an account
You’ll also see some data associated with these accounts:
* For many OAuth apps, we’ll list the scopes for which you’ve granted Pipedream access
* The workflows that are using the account
### Connecting a new account
1. Visit [https://pipedream.com/accounts](https://pipedream.com/accounts)
2. Click the **Connect an app** button at the top-right.
3. Select the app you’d like to connect.
### Reconnecting an account
If you encounter errors in a step that appear to be related to credentials or authorization, you can reconnect your account:
1. Visit [https://pipedream.com/accounts](https://pipedream.com/accounts)
2. Search for your account
3. Click on the *…* next to your account, on the right side of the page
4. Select the option to **Reconnect** your account
## Access Control
**New connected accounts are private by default** and can only be used by the person who added them.
Connected accounts created prior to August 2023 were accessible to all workspace members by default. You can [restrict access](/docs/apps/connected-accounts/#managing-access) at any time.
### Managing access
* Find the account on the Accounts page and click the 3 dots on the far right of the row
* Select **Manage Access**
* You may be prompted to reconnect your account first to verify ownership of the account
* You can enable access to the entire workspace or individual members
### Collaborating with others
Even if a workspace member doesn’t have access to a private connected account, you can still collaborate together on the same workflows.
Workspace members who don’t have access to a connected account **can perform the following actions** on workflows:
* Reference step exports
* Inspect prop inputs, step logs, and errors
* Test any step, so they can effectively develop and debug workflows end to end
Workspace members who do **not** have access to a given connected account **cannot modify prop inputs or edit any code** with that account.
To make changes to steps that are locked in read-only mode, you can:
* Ask the account owner to [grant access](/docs/apps/connected-accounts/#managing-access)
* Click **More Actions** and change the connected account to one that you have access to (note that this may remove some prop configurations)
### Access
Access to connected accounts is enforced at the step-level within workflows and is designed with security and control in mind.
When you connect an account in Pipedream, you are the owner of that connected account, and you always have full access. You can:
* Manage access
* Delete
* Reconnect
* Add to any step or trigger
For connected accounts that are **not** shared with other workspace members:
| Operation | Workspace Owner & Admin | Other Members |
| -------------------------------------------------- | ----------------------- | ------------- |
| View on [Accounts](https://pipedream.com/accounts) | ✅ | ❌ |
| Add to a new trigger or step | ❌ | ❌ |
| Modify existing steps | ❌ | ❌ |
| Test existing steps                                | ✅                       | ✅             |
| Manage access | ✅ | ❌ |
| Reconnect | ✅ | ❌ |
| Delete | ✅ | ❌ |
For connected accounts that **are** shared with other workspace members:
| Operations | Workspace Owner & Admin | Other Members |
| -------------------------------------------------- | ----------------------- | ------------- |
| View on [Accounts](https://pipedream.com/accounts) | ✅ | ✅ |
| Add to a new trigger or step | ✅ | ✅ |
| Modify existing steps | ✅ | ✅ |
| Test existing steps                                | ✅                       | ✅             |
| Manage access | ✅ | ❌ |
| Reconnect | ✅ | ❌ |
| Delete | ✅ | ❌ |
### FAQ
#### Why isn’t my connected account showing up in the legacy workflow builder?
In order to use a connected account in the legacy (v1) workflow builder, the account must be shared with the entire workspace. Private accounts are accessible in the latest version of the workflow builder.
#### What is the “Owner” column?
The owner column on the Accounts page indicates who in the workspace originally connected the account (that is the only person who has permissions to manage access).
#### Why is there no “Owner” for certain connected accounts?
Accounts that were connected before August 2023 don’t have an owner associated with them, and are shared with the entire workspace. In order to manage access for any of those accounts, we’ll first prompt you to reconnect.
#### How can I restrict access to a connected account shared with the workspace?
See above for info on [managing access](/docs/apps/connected-accounts/#managing-access).
#### Can I still work with other people on a single workflow, even if I don’t want them to have access to my connected account?
Yes, see the section on [collaborating with others](/docs/apps/connected-accounts/#collaborating-with-others).
## Accessing credentials via API
You can access credentials for any connected account via API, letting you build services anywhere and use Pipedream to handle auth. See [the guide for accessing credentials via API](/docs/connect/api-reference/list-accounts) for more details.
## Passing external credentials at runtime
If you use a secrets store like [HashiCorp Vault](https://www.vaultproject.io/) or something else, or if you store credentials in a database, you can retrieve these secrets at runtime and pass them to any step. [See the full guide here](/docs/apps/external-auth/).
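As an illustrative sketch of that pattern: the helper below builds the HTTP request for reading a secret from HashiCorp Vault's KV v2 engine. The `secret` mount name, the env var names, and the helper itself are assumptions; adapt them to your own Vault setup.

```javascript
// Hypothetical helper: build the HTTP request for reading a secret from
// HashiCorp Vault's KV v2 engine. The "secret" mount and env var names
// below are assumptions -- adjust them for your Vault configuration.
function buildVaultRequest(vaultAddr, secretPath, token) {
  return {
    url: `${vaultAddr}/v1/secret/data/${secretPath}`,
    headers: { "X-Vault-Token": token },
  };
}

// In a Pipedream Node.js step you might then fetch the secret and return
// it so later steps can reference it, along these lines:
//   const { url, headers } = buildVaultRequest(
//     process.env.VAULT_ADDR, "my-api", process.env.VAULT_TOKEN,
//   );
//   const res = await fetch(url, { headers });
//   const body = await res.json();
//   return body.data.data; // KV v2 nests secret values under data.data
```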
## Connecting to apps with IP restrictions
If you’re connecting to an app that enforces IP restrictions, you may need to whitelist the Pipedream API’s IP addresses:
{PD_EGRESS_IP_RANGE}
These IP addresses apply to **app connections only**, not to workflows or other Pipedream services. To whitelist requests from Pipedream workflows, [use VPCs](/docs/workflows/vpc/).
## Account security
[See our security docs](/docs/privacy-and-security/#third-party-oauth-grants-api-keys-and-environment-variables) for details on how Pipedream secures your connected accounts.
## Requesting a new app or service
1. Visit [https://pipedream.com/support](https://pipedream.com/support)
2. Scroll to the bottom, where you’ll see a Support form.
3. Select **App / Integration questions** and submit the request.
# OAuth Clients
Source: https://pipedream.com/docs/apps/oauth-clients
By default, OAuth apps in Pipedream use our official OAuth client. When you connect an account for these apps, you grant Pipedream the requested permissions (scopes) during the OAuth authorization flow.
Pipedream apps solve for a broad range of use cases, which means our OAuth client may request a broader set of scopes than your specific use case requires. To define the exact scope of access you’d like to grant, you can configure a custom OAuth client.
## Configuring custom OAuth clients
For example, if you want to use a custom OAuth client for GitHub, you’ll need to locate [their documentation](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app) and create an OAuth app in your developer settings.
Open the [OAuth Clients page in your Pipedream account](https://pipedream.com/@/accounts/oauth-clients) and click **New OAuth Client**.
Choose the app you need. If you can’t find what you’re looking for, feel free to [submit an integration request](https://pipedream.com/support).
* **Name:** Give the OAuth client a name so it’s easy to identify
* **Description:** Optionally add a brief description for additional context
* **Client ID and Secret**: Paste these values from the app’s settings that you’re configuring (the client secret is sensitive – we’ll encrypt and hide it from the UI)
* **Redirect URI:** Copy this Redirect URI and paste it into the app’s settings
* **Scopes:** We’ll list the scopes from Pipedream’s official OAuth client by default. Add or remove scopes as needed based on your use case.
And finally, click **Save**.
Make sure to include all the scopes you need based on your use case. You can modify the scopes later (you’ll need to reconnect your account for changes to take effect). Refer to the app’s API documentation for information on what scopes you’ll need.
## Connecting your account with a custom OAuth client
Once you’ve created the OAuth client, anyone in your workspace can connect their account:
Now you’re ready to use the connected account in any workflow, just like any other account in Pipedream:
### Limitations
* The vast majority of OAuth apps in Pipedream support custom OAuth clients. However, due to the unique integration requirements for certain apps, custom OAuth clients are not supported in **triggers** for these apps (custom OAuth clients work in actions and code steps): [Discord](https://pipedream.com/apps/discord/), [Dropbox](https://pipedream.com/apps/dropbox/), [Slack](https://pipedream.com/apps/slack/), and [Zoom](https://pipedream.com/apps/zoom/).
# Installing The CLI
Source: https://pipedream.com/docs/cli/install
## macOS
### Homebrew
```bash
brew tap pipedreamhq/pd-cli
brew install pipedreamhq/pd-cli/pipedream
```
### From source
Run the following command:
```bash
curl https://cli.pipedream.com/install | sh
```
This will automatically download and install the `pd` CLI to your Mac. You can also [download the macOS build](https://cli.pipedream.com/darwin/amd64/latest/pd.zip), unzip that archive, and place the `pd` binary somewhere in [your `PATH`](https://opensource.com/article/17/6/set-path-linux).
If this returns a permissions error, you may need to run:
```bash
curl https://cli.pipedream.com/install | sudo sh
```
If you encounter the error `bad CPU type in executable: pd`, you will need to install Rosetta 2 on your Mac by running the following command:
```
softwareupdate --install-rosetta
```
## Linux
Download the [CLI build](/docs/cli/install/#cli-builds) for your architecture below. Unzip that archive, and place the `pd` binary somewhere in [your `PATH`](https://opensource.com/article/17/6/set-path-linux).
## Windows (native)
[Download the CLI build for Windows](https://cli.pipedream.com/windows/amd64/latest/pd.zip). Unzip that archive, save `pd.exe` in Program Files, and [add its file path to `Path` in your system environment variables](https://www.architectryan.com/2018/03/17/add-to-the-path-on-windows-10/). Use `pd.exe` in a terminal that supports ANSI colors, like the [Windows Terminal](https://github.com/microsoft/terminal).
## Windows (WSL)
Download the appropriate [Linux CLI build](/docs/cli/install/#cli-builds) for your architecture. Unzip that archive, and place the `pd` binary somewhere in [your `PATH`](https://opensource.com/article/17/6/set-path-linux).
## CLI Builds
Pipedream publishes the following builds of the CLI. If you need to use the CLI on another OS or architecture, [please reach out](https://pipedream.com/support/).
| Operating System | Architecture | link |
| ---------------- | ------------ | ----------------------------------------------------------------- |
| Linux | amd64 | [download](https://cli.pipedream.com/linux/amd64/latest/pd.zip) |
| Linux | 386 | [download](https://cli.pipedream.com/linux/386/latest/pd.zip) |
| Linux | arm | [download](https://cli.pipedream.com/linux/arm/latest/pd.zip) |
| Linux | arm64 | [download](https://cli.pipedream.com/linux/arm64/latest/pd.zip) |
| macOS | amd64 | [download](https://cli.pipedream.com/darwin/amd64/latest/pd.zip) |
| Windows | amd64 | [download](https://cli.pipedream.com/windows/amd64/latest/pd.zip) |
## Community Libraries
Please note that Pipedream does not verify the correctness or security of these community libraries. Use them at your own risk.
### Nix
The `pd` binary is available via a Nix flake [here](https://github.com/planet-a-ventures/pipedream-cli).
## Help
Run `pd` to see a list of all commands, or `pd help COMMAND` to display help docs for a specific command.
See the [CLI reference](/docs/cli/reference/) for detailed usage and examples for each command.
# Logging Into The CLI
Source: https://pipedream.com/docs/cli/login
To start using the Pipedream CLI, you’ll need to link it to your Pipedream account. If you don’t have a Pipedream account, you can sign up from the CLI.
## Existing Pipedream account
If you already have a Pipedream account, run
```
pd login
```
This will open up a new window in your default browser. If you’re already logged into your Pipedream account in this browser, this will immediately link the CLI to your account, writing your API key for that account to your [`pd` config file](/docs/cli/reference/#cli-config-file).
Otherwise, you’ll be asked to log in.
Once you’re done, go back to your shell and you should see confirmation that your account is linked:
```
> pd login
Logged in as dylburger (dylan@pipedream.com)
```
Then [follow this guide](/docs/cli/reference/#creating-a-profile-for-a-workspace) to learn how to find your workspace ID and associate it with a `pd` profile.
## Signing up for Pipedream via the CLI
If you haven’t signed up for a Pipedream account, you can create an account using the CLI:
```
pd signup
```
This will open up a new window in your default browser. You’ll be asked to sign up for Pipedream here. Once you do, your account will be linked to the CLI, writing your API key for that account to your [`pd` config file](/docs/cli/reference/#cli-config-file).
Once you’re done, go back to your shell and you should see confirmation that your account is linked:
```
> pd signup
Logged in as dylburger (dylan@pipedream.com)
```
## Logging out of the CLI
You can log out of the CLI by running:
```
pd logout
```
This will remove your API key from the [`pd` config file](/docs/cli/reference/#cli-config-file).
## Using the CLI to manage multiple accounts
If you have multiple Pipedream accounts, you can use [profiles](/docs/cli/reference/#profiles) to ensure the CLI can manage resources for each.
# CLI Reference
Source: https://pipedream.com/docs/cli/reference
## Installing the CLI
[See the CLI installation docs](/docs/cli/install/) to learn how to install the CLI for your OS / architecture.
## Command Reference
Run `pd` to see a list of all commands with basic usage info, or run `pd help COMMAND` to display help docs for a specific command.
We’ve also documented each command below, with usage examples for each.
### General Notes
Everywhere you can refer to a specific component as an argument, you can use the component’s ID *or* its name slug. For example, to retrieve details about a specific source using `pd describe`, you can use either of the following commands:
```
> pd describe dc_abc123
id: dc_abc123
name: http
endpoint: https://myendpoint.m.pipedream.net
> pd describe http
Searching for sources matching http
id: dc_abc123
name: http
endpoint: https://myendpoint.m.pipedream.net
```
### `pd delete`
Deletes an event source. Run:
```
pd delete SOURCE_ID_OR_NAME
```
Run `pd list so` to display a list of your event sources.
### `pd deploy`
Deploy an event source from local or remote code.
Running `pd deploy`, without any arguments, brings up an interactive menu asking you to select a source. This list of sources is retrieved from the registry of public sources [published to GitHub](https://github.com/PipedreamHQ/pipedream/tree/master/components).
When you select a source, we’ll deploy it and start listening for new events.
You can also deploy a specific source via the source’s `key` (defined in the component file for the source):
```
pd deploy http-new-requests
```
or author a component locally and deploy that local file:
```
pd deploy http.js
```
[Read more about authoring your own event sources](/docs/components/contributing/sources-quickstart/).
### `pd describe`
Display the details for a source: its id, name, and other configuration details:
```
pd describe SOURCE_ID_OR_NAME
```
### `pd dev`
`pd dev` allows you to interactively develop a source from a local file. `pd dev` will link your local file with the deployed component and watch your local file for changes. When you save changes to your local file, your component will automatically be updated on Pipedream.
```
pd dev FILE_OR_NAME
```
If you quit `pd dev` and want to link the same deployed source to your local file, you can pass the deployed component ID using the `--dc` flag:
```
pd dev --dc SOURCE_ID FILE_OR_NAME
```
### `pd events`
Returns historical events sent to a source, and streams emitted events directly to the CLI.
```
pd events SOURCE_ID
```
By default, `pd events` prints (up to) the last 10 events sent to your source.
```
pd events -n 100 SOURCE_ID_OR_NAME
```
`pd events -n N` retrieves the last `N` events sent to your source. We store the last 100 events sent to a source, so you can retrieve a max of 100 events using this command.
```
pd events -f SOURCE_ID_OR_NAME
```
`pd events -f` connects to the [SSE stream tied to your source](/docs/workflows/data-management/destinations/sse/) and displays events as the source produces them.
```
pd events -n N -f SOURCE_ID_OR_NAME
```
You can combine the `-n` and `-f` options to list historical events *and* follow the source for new events.
### `pd help`
Displays help for any command. Run `pd help events`, `pd help describe`, etc.
### `pd init`
Generate new app and component files from templates.
#### `pd init app`
Creates a directory and [an app file](/docs/components/contributing/guidelines/#app-files) from a template.
```
# Creates google_calendar/ directory and google_calendar.mjs file
pd init app google_calendar
```
#### `pd init action`
Creates a new directory and [a component action](/docs/components/contributing/#actions) from a template.
```
# Creates add-new-event/ directory and add-new-event.mjs file
pd init action add-new-event
```
#### `pd init source`
Creates a new directory and [an event source](/docs/workflows/building-workflows/triggers/) from a template.
```
# Creates cancelled-event/ directory and cancelled-event.mjs file
pd init source cancelled-event
```
You can attach [database](/docs/components/contributing/api/#db), [HTTP](/docs/components/contributing/api/#http), or [Timer](/docs/components/contributing/api/#timer) props to your template using the following flags:
| Prop type | Flag |
| --------- | --------- |
| Database | `--db` |
| HTTP | `--http` |
| Timer | `--timer` |
For example, running:
```
pd init source cancelled-event --db --http --timer
```
will include the following props in your new event source:
```javascript
props: {
db: "$.service.db",
http: "$.interface.http",
timer: "$.interface.timer",
}
```
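For reference, a minimal source using those props might look like the following sketch. This is an illustration only, not the exact file `pd init` generates; the name, key, and summary text are hypothetical.

```javascript
// Minimal illustrative source -- not the exact file `pd init` generates
const source = {
  name: "Cancelled Event",
  key: "cancelled-event",
  version: "0.0.1",
  type: "source",
  props: {
    db: "$.service.db",         // built-in key-value store
    http: "$.interface.http",   // unique HTTP endpoint for this source
    timer: "$.interface.timer", // invokes run() on a schedule
  },
  run(event) {
    // Emit the incoming event so it can be inspected or trigger workflows
    this.$emit(event, { summary: "demo event" });
  },
};
// In a real component file you would `export default source;`
```

The `run()` method fires on each HTTP request or timer tick, and each `$emit` produces one event on the source's stream.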
### `pd list`
Lists Pipedream resources running in your account. Running `pd list` without any arguments prompts you to select the type of resource you’d like to list.
You can also list specific resource types directly:
```
pd list components
```
```
pd list streams
```
`sources` and `streams` have shorter aliases, too:
```
pd list so
```
```
pd list st
```
### `pd login`
Log in to Pipedream CLI and persist API key locally. See [Logging into the CLI](/docs/cli/login/) for more information.
### `pd logout`
Unsets the local API key tied to your account.
Running `pd logout` without any arguments removes the default API key from your [config file](/docs/cli/reference/#cli-config-file).
You can remove the API key for a specific profile by running:
```
pd logout -p PROFILE
```
### `pd logs`
Event sources produce logs that can be useful for troubleshooting issues with that source. `pd logs` displays logs for a source.
Running `pd logs SOURCE_ID_OR_NAME` connects to the [SSE logs stream tied to your source](/docs/workflows/building-workflows/triggers/), displaying new logs as the source produces them.
Any errors thrown by the source will also appear here.
### `pd publish`
To publish an action, use the `pd publish` command.
```
pd publish FILENAME
```
For example:
```
pd publish my-action.js
```
### `pd signup`
Sign up for Pipedream via the CLI and persist your API key locally. See the docs on [Signing up for Pipedream via the CLI](/docs/cli/login/#signing-up-for-pipedream-via-the-cli) for more information.
### `pd unpublish`
Unpublish a component you’ve published to your account. If you publish a source or action that you no longer need, you can unpublish it by component `key`:
```
pd unpublish COMPONENT_KEY
```
### `pd update`
Updates the code, props, or metadata for an event source.
If you deployed a source from Github, for example, someone might publish an update to that source, and you may want to run the updated code.
```
pd update SOURCE_ID_OR_NAME \
--code https://github.com/PipedreamHQ/pipedream/blob/master/components/http/sources/new-requests/new-requests.js
```
You can change the name of a source:
```
pd update SOURCE_ID_OR_NAME --name NEW_NAME
```
You can deactivate a source if you want to stop it from running:
```
pd update SOURCE_ID_OR_NAME --deactivate
```
or activate a source you previously deactivated:
```
pd update SOURCE_ID_OR_NAME --activate
```
## Profiles
Profiles allow you to work with multiple, named Pipedream accounts via the CLI.
### Creating a new profile
When you [log in to the CLI](/docs/cli/login/), the CLI writes the API key for that account to your config file, in the `api_key` field:
```
api_key = abc123
```
You can set API keys for other, named profiles, too. Run
```
pd login -p PROFILE
```
`PROFILE` can be any string of shell-safe characters that you’d like to use to identify this new profile. The CLI opens up a browser asking you to log in to your target Pipedream account, then writes the API key to a section of the config file under this profile:
```
[your_profile]
api_key = def456
```
You can also run `pd signup -p PROFILE` if you’d like to sign up for a new Pipedream account via the CLI and set a named profile for that account.
### Creating a profile for a workspace
If you’re working with resources in a [workspace](/docs/workspaces/), you’ll need to add an `org_id` to your profile.
1. [Retrieve your workspace’s ID](/docs/workspaces/#finding-your-workspaces-id)
2. Open up your [Pipedream config file](/docs/cli/reference/#cli-config-file) and create a new [profile](/docs/cli/reference/#profiles) with the following information:
```
[profile_name]
api_key =
org_id =
```
When using the CLI, pass `--profile PROFILE_NAME` when running any command. For example, if you named your profile `workspace`, you’d run this command to publish a component:
```
pd publish file.js --profile workspace
```
### Using profiles
You can set a profile on any `pd` command by setting the `-p` or `--profile` flag. For example, to list the sources in a specific account, run:
```
pd list sources --profile PROFILE
```
## Version
To get the current version of the `pd` CLI, run
```
pd --version
```
## Auto-upgrade
The CLI is configured to check for new versions automatically. This ensures you’re always running the most up-to-date version.
## CLI config file
The `pd` config file contains your Pipedream API keys (tied to your default account, or other [profiles](/docs/cli/reference/#profiles)) and other configuration used by the CLI.
If the `XDG_CONFIG_HOME` env var is set, the config file will be found in `$XDG_CONFIG_HOME/pipedream`.
Otherwise, it will be found in `$HOME/.config/pipedream`.
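The lookup described above can be sketched as follows. This is an illustration of the documented behavior, not the CLI’s actual source code:

```javascript
// Sketch of the documented config-file lookup (not the CLI's actual code):
// prefer $XDG_CONFIG_HOME/pipedream, fall back to $HOME/.config/pipedream
function pipedreamConfigDir(env) {
  return env.XDG_CONFIG_HOME
    ? `${env.XDG_CONFIG_HOME}/pipedream`
    : `${env.HOME}/.config/pipedream`;
}
```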
## Analytics
Pipedream tracks CLI usage data to report errors and usage stats. We use this data exclusively for the purpose of internal analytics (see [our privacy policy](https://pipedream.com/privacy) for more information).
If you’d like to opt-out of CLI analytics, set the `PD_CLI_DO_NOT_TRACK` environment variable to `true` or `1`.
# Overview
Source: https://pipedream.com/docs/components
export const PUBLIC_APPS = '2,700';
## What are Components?
Components are [Node.js modules](/docs/components/contributing/api/#component-structure) that run on Pipedream's serverless infrastructure. They can use Pipedream managed auth for [{PUBLIC_APPS}+ apps](https://pipedream.com/explore) (for both OAuth and key-based APIs) and [use most npm packages](/docs/components/contributing/api/#using-npm-packages) with no `npm install` or `package.json` required.
Components are most commonly used as the building blocks of Pipedream workflows, but they can also be used like typical serverless functions. You can explore curated components for popular apps in Pipedream's [Marketplace](https://pipedream.com/explore) and [GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components) or you can author and share your own.
Our TypeScript component API is in **beta**. If you're interested in developing TypeScript components and providing feedback, [see our TypeScript docs](/docs/components/contributing/typescript/).
## Component Types
Pipedream supports two types of components — [sources](#sources) and [actions](#actions).
### Sources
[Sources](/docs/workflows/building-workflows/triggers/) must be instantiated and they run as independent resources on Pipedream. They are commonly used as workflow triggers (but can also be used as standalone serverless functions).
**Capabilities**
* Accept user input on deploy via `props`
* [Trigger](/docs/components/contributing/api/#interface-props) on HTTP requests, timers, cron schedules, or manually
* Emit events that can be inspected, can trigger Pipedream [workflows](/docs/workflows/building-workflows/), and can be consumed in your own app via [API](/docs/rest-api/)
* Store and retrieve state using the [built-in key-value store](/docs/components/contributing/api/#db)
* Use any of Pipedream's built-in [deduping strategies](/docs/components/contributing/api/#dedupe-strategies)
* Deploy via Pipedream's UI, CLI or API
**Example**
The [New Files (Instant)](https://github.com/PipedreamHQ/pipedream/blob/master/components/google_drive/sources/new-files-instant/new-files-instant.mjs) source for Google Drive is a prebuilt component in Pipedream's registry that can be deployed in seconds and emits an event every time a new file is added to the user's Google Drive, and can also be configured to watch for changes to a specific folder within that drive. Each new event that is emitted can be used to trigger a workflow.
### Actions
Actions are components that may be used as steps in a workflow. Unlike sources, actions cannot run independently (outside of a workflow).
**Capabilities**
* Accept user input via `props`
* May `return` JSON serializable data
**Example**
The Add Single Row action for Google Sheets is a prebuilt component in Pipedream's registry that can be added to a workflow and configured in seconds, letting users send workflow data to Google Sheets without having to learn the Google Sheets API.
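As a rough sketch of the shape of an action (the name, key, and prop below are hypothetical; see the quickstart guides linked later in this page for real examples), a component accepting user input and returning JSON might look like:

```javascript
// Hypothetical minimal action: accepts a string prop and returns JSON
const action = {
  name: "Action Demo",
  key: "action_demo_sketch",
  version: "0.0.1",
  type: "action",
  props: {
    message: { type: "string", label: "Message" }, // user input
  },
  run() {
    // The return value becomes the step's exported data in a workflow
    return { message: this.message, length: this.message.length };
  },
};
// In a real component file you would `export default action;`
```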
## Using Components
Components may be instantiated or added to workflows via Pipedream's UI.
* Sources may be instantiated and consumed via [UI](https://pipedream.com/sources/new), [CLI](/docs/cli/reference/#pd-deploy) or API
* Actions may be added to [workflows](https://pipedream.com/new)
### Using Private Actions
Private action components published from the [CLI](/docs/cli/reference/#pd-publish) or from a [Node.js Code Step](/docs/workflows/building-workflows/code/nodejs/sharing-code/) are available for use across your workflows.
To use a published action, add a new step to your workflow and click **My Actions**. Your privately published action components will appear in this list.

### Using Private Sources
Private source components deployed from your account via the [CLI](/docs/cli/reference/#pd-deploy) will automatically create a new Source in your account with the prop configuration you specified.
Then in the workflow builder, when creating the trigger, select the *Existing* sources tab in the upper right corner to select your deployed source:

You can also deploy new instances of a source from the [Components dashboard](https://pipedream.com/components). To deploy a new instance of a source, click the menu on the right hand side and select **Create source**.

## Developing Components
Develop components locally using your preferred code editor (and maintain your code in your own GitHub repo) and deploy or publish using Pipedream's [CLI](/docs/cli/reference/#pd-deploy).
* Sources may be deployed directly from local code or published to your account and instantiated via Pipedream's UI
* Actions may only be published — published actions may be added to workflows via Pipedream's UI
Published components are only available to your own account by default. If published to a team account, the component (source or action) may be discovered and selected by any member of the team.
### Prerequisites
* A free [Pipedream](https://pipedream.com) account
* A free [GitHub](https://github.com) account
* Basic proficiency with Node.js or JavaScript
* Pipedream [CLI](/docs/cli/reference/)
Finally, the target app must be integrated with Pipedream. You can explore all apps supported by Pipedream in the [marketplace](https://pipedream.com/explore). If your app is not listed, please [create a GitHub issue](https://github.com/PipedreamHQ/pipedream/issues/new?assignees=\&labels=app%2C+enhancement\&template=app---service-integration.md\&title=%5BAPP%5D) to request it and [reach out](https://pipedream.com/community/c/dev/11) to our team to let us know that you're blocked on source or action development.
### Quickstart Guides
* [Sources](/docs/components/contributing/sources-quickstart/)
* [Actions](/docs/components/contributing/actions-quickstart/)
### Component API Reference
After getting familiar with source/action development using the quickstart guides, check out [the Component API Reference](/docs/components/contributing/api/) and [examples on GitHub](https://github.com/PipedreamHQ/pipedream/tree/master/components) to learn more.
## Managing Privately Published Components
Components published to your workspace are available in the [Components](https://pipedream.com/components) section of the dashboard.
Your private components published from the CLI or from Node.js code steps are listed here.
### Unpublishing Privately Published Components
To unpublish components belonging to your workspace, open the menu on the right hand side of the component details and select **Unpublish Component**.
A prompt will open to confirm the action, click **Confirm** to unpublish your action.

Unpublishing a component is a permanent action, please be careful to ensure you still have access to the source code.
## Sharing Components
Contribute to the Pipedream community by publishing and sharing new components, and contributing to the maintenance of existing components.
### Verified Components
Pipedream maintains a source-available registry of components (sources and actions) that have been curated for the community. Registered components are verified by Pipedream through the [GitHub PR process](/docs/components/contributing/#contribution-process) and:
* Can be trusted by end users
* Follow consistent patterns for usability
* Are supported by Pipedream if issues arise
Registered components also appear in the Pipedream marketplace and are listed in Pipedream's UI when building workflows.
### Community Components
Developers may create, deploy and share components that do not conform to these guidelines, but they will not be eligible to be listed in the curated registry (e.g., they may be hosted in a GitHub repo). If you develop a component that does not adhere to these guidelines, but you believe there is value to the broader community, please [reach out in our community forum](https://pipedream.com/community/c/dev/11).
# Pipedream Registry
Source: https://pipedream.com/docs/components/contributing
When developing workflows with pre-built actions and triggers, under the hood you’re using [components](/docs/components/contributing/) from the [Pipedream Registry GitHub repository](https://github.com/PipedreamHQ/pipedream).
Components contributed to the [Pipedream Registry GitHub repository](https://github.com/PipedreamHQ/pipedream) are published to the [Pipedream marketplace](https://pipedream.com/apps) and are listed in the Pipedream UI when building workflows.
**What is a component?**
If you haven’t yet, we recommend starting with our Component Development Quickstart Guides for [sources](/docs/components/contributing/sources-quickstart/) and [actions](/docs/components/contributing/actions-quickstart/) to learn how to build components and privately publish them to your account.
## Registry Components Structure
All Pipedream registry components live in [this GitHub repo](https://github.com/PipedreamHQ/pipedream) under the [`components`](https://github.com/PipedreamHQ/pipedream/tree/master/components) directory.
Every integrated app on Pipedream has a corresponding directory that defines the actions and sources available for that app. Below is a simplified version of the [Airtable app directory](https://github.com/PipedreamHQ/pipedream/tree/master/components/airtable) within the registry:
```
airtable/
├── README.md
├── airtable.app.mjs
├── package.json
├── actions/
│   └── get-record/
│       └── get-record.mjs
├── node_modules/
│   └── ...here be dragons
└── sources/
    └── new-records/
        └── new-records.mjs
```
In the example above, the `components/airtable/actions/get-record/get-record.mjs` component is published as the **Get Record** action under the **Airtable** app within the workflow builder in Pipedream.
**The repository is missing the app directory I’d like to add components for**
You can request to have new apps integrated into Pipedream.
Once the Pipedream team integrates the app, we’ll create a directory for the app in the [`components`](https://github.com/PipedreamHQ/pipedream/tree/master/components) directory of the GitHub repo.
## Contribution Process
Anyone from the community can build [sources](/docs/workflows/building-workflows/triggers/) and [actions](/docs/components/contributing/#actions) for integrated apps.
To submit new components or update existing components:
1. Fork the public [Pipedream Registry GitHub repository](https://github.com/PipedreamHQ/pipedream).
2. Create a new component in the corresponding app’s directory under `components` (if applicable).
3. [Create a PR for the Pipedream team to review](https://github.com/PipedreamHQ/pipedream/compare).
4. Address any feedback provided by Pipedream based on the best practice [Component Guidelines & Patterns](/docs/components/contributing/guidelines/).
5. Once the review is complete and approved, Pipedream will merge the PR to the `master` branch
6. The component will be available for use within workflows for all Pipedream developers! 🎉
### Component Development Discussion
Join the discussion with other Pipedream component developers at the [#contribute channel](https://pipedream-users.slack.com/archives/C01E5KCTR16) in Slack or [on Discourse](https://pipedream.com/community/c/dev/11).
**Not sure what to build?**
Need inspiration? Check out [sources](https://github.com/PipedreamHQ/pipedream/issues?q=is%3Aissue+is%3Aopen+%5BSOURCE%5D+in%3Atitle) and [actions](https://github.com/PipedreamHQ/pipedream/issues?q=is%3Aissue+is%3Aopen+%5BACTION%5D+in%3Atitle+) requested by the community!
## Reference Components
The following components are references for developing sources and actions for Pipedream’s registry.
### Reference Sources
| Name | App | Type |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | -------------------------------- |
| [New Card](https://github.com/PipedreamHQ/pipedream/blob/master/components/trello/sources/new-card/new-card.mjs) | Trello | Webhook |
| [New or Modified Files](https://github.com/PipedreamHQ/pipedream/blob/master/components/google_drive/sources/new-or-modified-files/new-or-modified-files.mjs) | Google Drive | Webhook + Polling |
| [New Submission](https://github.com/PipedreamHQ/pipedream/blob/master/components/jotform/sources/new-submission/new-submission.mjs) | Jotform | Webhook (with no unique hook ID) |
### Reference Actions
| Name | App |
| -------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- |
| [Add Multiple Rows](https://github.com/PipedreamHQ/pipedream/blob/master/components/google_sheets/actions/add-multiple-rows/add-multiple-rows.mjs) | Google Sheets |
| [Send Message](https://github.com/PipedreamHQ/pipedream/blob/master/components/discord_bot/actions/send-message/send-message.mjs) | Discord |
| [Append Text](https://github.com/PipedreamHQ/pipedream/blob/master/components/google_docs/actions/append-text/append-text.mjs) | Google Docs |
| [`GET` request](https://github.com/PipedreamHQ/pipedream/blob/master/components/http/actions/get-request/get-request.mjs) | HTTP |
# Quickstart: Action Development
Source: https://pipedream.com/docs/components/contributing/actions-quickstart
## Overview
This document is intended for developers who want to author and edit [Pipedream Actions](/docs/components/contributing/#actions). After completing this quickstart, you’ll understand how to:
* Develop Pipedream components
* Publish private actions and use them in workflows
* Use props to capture user input
* Update an action
* Use npm packages
* Use Pipedream managed auth for a 3rd party app
## Prerequisites
* Create a free account at [https://pipedream.com](https://pipedream.com)
* Download and install the [Pipedream CLI](/docs/cli/install/)
* Once the CLI is installed, [link your Pipedream account](/docs/cli/login/#existing-pipedream-account) to the CLI by running `pd login` in your terminal
> **NOTE:** See the [CLI reference](/docs/cli/reference/) for detailed usage and examples beyond those covered below.
## Walkthrough
We recommend that you complete the examples below in order.
**hello world! (\~5 minutes)**
* Develop a `hello world!` action
* Publish it (private to your account) using the Pipedream CLI
* Add it to a workflow and run it
**hello \[name]! (\~5 minutes)**
* Capture user input using a `string` prop
* Publish a new version of your action
* Update the action in your workflow
**Use an npm Package (\~5 mins)**
* Require the `axios` npm package
* Make a simple API request
* Export data returned by the API from your action
**Use Managed Auth (\~10 mins)**
* Use Pipedream managed OAuth for GitHub with the `octokit` npm package
* Connect your GitHub account to the action in a Pipedream workflow
* Retrieve details for a repo and return them from the action
### hello world!
The following code represents a simple component that can be published as an action ([learn more](/docs/components/contributing/api/) about the component structure). When used in a workflow, it will export `hello world!` as the return value for the step.
```javascript
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.1",
type: "action",
props: {},
async run() {
return `hello world!`;
},
};
```
To get started, save the code to a local `.js` file (e.g., `action.js`) and run the following CLI command:
```
pd publish action.js
```
The CLI will publish the component as an action in your account with the key `action_demo`. **The key must be unique across all components in your account (sources and actions). If it’s not unique, the existing component with the matching key will be updated.**
The CLI output should look similar to this:
```
sc_v4iaWB Action Demo 0.0.1 just now action_demo
```
To test the action:
1. Open Pipedream in your browser
2. Create a new workflow with a **Schedule** trigger
3. Click the **+** button to add a step to your workflow
4. Click on **My Actions** and then select the **Action Demo** action to add it to your workflow.
5. Deploy your workflow
6. Click **RUN NOW** to execute your workflow and action
You should see `hello world!` returned as the value for `steps.action_demo.$return_value`.
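Later steps in the same workflow can read that export. As a sketch (assuming a follow-on Node.js code step, where `action_demo` is the step name from the key above), the value is available on the `steps` object:

```javascript
// Sketch of a follow-on Node.js code step (not part of the action itself).
// Pipedream passes a `steps` object to workflow code steps; each earlier
// step's return value is available at steps.<step_name>.$return_value.
const component = {
  async run({ steps }) {
    return `upstream said: ${steps.action_demo.$return_value}`;
  },
};
export default component;
```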
Keep the browser tab open. We’ll return to this workflow in the rest of the examples as we update the action.
### hello \[name]!
Next, let’s update the component to capture some user input. First, add a `string` [prop](/docs/components/contributing/api/#props) called `name` to the component.
```javascript
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.1",
type: "action",
props: {
name: {
type: "string",
label: "Name",
}
},
async run() {
return `hello world!`
},
}
```
Next, update the `run()` function to reference `this.name` in the return value.
```javascript
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.1",
type: "action",
props: {
name: {
type: "string",
label: "Name",
},
},
async run() {
return `hello ${this.name}!`;
},
};
```
Finally, update the component version to `0.0.2`. If you fail to update the version, the CLI will throw an error.
```javascript
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.2",
type: "action",
props: {
name: {
type: "string",
label: "Name",
},
},
async run() {
return `hello ${this.name}!`;
},
};
```
Save the file and run the `pd publish` command again to update the action in your account.
```
pd publish action.js
```
The CLI will update the component in your account with the key `action_demo`. You should see something like this:
```
sc_Egip04 Action Demo 0.0.2 just now action_demo
```
Next, let’s update the action in the workflow from the previous example and run it.
1. Hover over the action in your workflow; you should see an update icon at the top right. Click the icon to update the action to the latest version, then save the workflow. If you don’t see the icon, verify that the CLI successfully published the update or try refreshing the page.
2. After saving the workflow, you should see an input field appear. Enter a value for the `Name` input (e.g., `foo`).
3. Deploy the workflow and click **RUN NOW**
You should see `hello foo!` (or the value you entered for `Name`) as the value returned by the step.
### Use an npm Package
Next, we’ll update the component to get data from the Star Wars API using the `axios` npm package. To use a package, just `import` it. Here we use the platform-wrapped version of `axios` from `@pipedream/platform`:
```javascript
import { axios } from "@pipedream/platform";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.2",
type: "action",
props: {
name: {
type: "string",
label: "Name",
},
},
async run() {
return `hello ${this.name}!`;
},
};
```
To use most npm packages on Pipedream, just `import` or `require` them — there is no `package.json` or `npm install` required.
Then, update the `run()` method to:
* Make a request to the following endpoint for the Star Wars API: `https://swapi.dev/api/people/1/`
* Reference the `name` field of the payload returned by the API
```javascript
import { axios } from "@pipedream/platform";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.2",
type: "action",
props: {
name: {
type: "string",
label: "Name",
},
},
async run({ $ }) {
const data = await axios($, {
url: "https://swapi.dev/api/people/1/",
});
return `hello ${data.name}!`;
},
};
```
Note that the `axios` export from `@pipedream/platform` resolves to the response body directly, so `data.name` reads the `name` field with no extra `.data` step. Next, remove the `name` prop, since we’re no longer using it.
```javascript
import { axios } from "@pipedream/platform";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.2",
type: "action",
props: {},
async run({ $ }) {
const data = await axios($, {
url: "https://swapi.dev/api/people/1/",
});
return `hello ${data.name}!`;
},
};
```
Finally, update the version to `0.0.3`. If you fail to update the version, the CLI will throw an error.
```javascript
import { axios } from "@pipedream/platform";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
props: {},
async run({ $ }) {
const data = await axios($, {
url: "https://swapi.dev/api/people/1/",
});
return `hello ${data.name}!`;
},
};
```
Save the file and run the `pd publish` command again to update the action in your account.
```
pd publish action.js
```
The CLI will update the component in your account with the key `action_demo`. You should see something like this:
```
sc_ZriKEn Action Demo 0.0.3 1 second ago action_demo
```
Follow the steps in the previous example to update and run the action in your workflow. You should see `hello Luke Skywalker!` as the return value for the step.
### Use Managed Auth
For the last example, we’ll use Pipedream managed auth to retrieve data from the GitHub API (which uses OAuth for authentication) and return it. First, remove the line that imports `axios` and clear the `run()` function from the last example. Your code should look like this:
```javascript
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
async run() {},
};
```
Next, import the `Octokit` client from GitHub’s `@octokit/rest` npm package:
```javascript
import { Octokit } from "@octokit/rest";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
async run() {},
};
```
Then add an [app prop](/docs/components/contributing/api/#app-props) to use Pipedream managed auth with this component. For this example, we’ll add an app prop for Github:
```javascript
import { Octokit } from "@octokit/rest";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
props: {
github: {
type: "app",
app: "github",
},
},
async run() {},
};
```
The value for the `app` property is the app’s name slug in Pipedream. This isn’t currently discoverable, but it will be surfaced on app pages in the [Pipedream Marketplace](https://pipedream.com/explore) in the near future. In the meantime, if you need to know how to reference an app, please [reach out](https://pipedream.com/community).
Next, update the `run()` method to get a repo from GitHub and return it. For this example, we’ll pass static values to get the `pipedreamhq/pipedream` repo. Notice that we’re passing the `oauth_access_token` in the authorization header by referencing the `$auth` property of the app prop — `this.github.$auth.oauth_access_token`. You can discover how to reference auth tokens in the **Authentication Strategy** section for each app in the [Pipedream Marketplace](https://pipedream.com/explore).
```javascript
import { Octokit } from "@octokit/rest";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
props: {
github: {
type: "app",
app: "github",
},
},
async run() {
const octokit = new Octokit({
auth: this.github.$auth.oauth_access_token,
});
return (
await octokit.rest.repos.get({
owner: `pipedreamhq`,
repo: `pipedream`,
})
).data;
},
};
```
To help users understand what’s happening with each action step, we recommend surfacing a brief summary with `$summary` ([read more](/docs/components/contributing/api/#actions) about exporting data using `$.export`).
```javascript
import { Octokit } from "@octokit/rest";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.3",
type: "action",
props: {
github: {
type: "app",
app: "github",
},
},
async run({ $ }) {
const octokit = new Octokit({
auth: this.github.$auth.oauth_access_token,
});
const { data } = await octokit.rest.repos.get({
owner: `pipedreamhq`,
repo: `pipedream`,
});
$.export("$summary", `Successfully fetched info for \`${data.full_name}\``);
return data;
},
};
```
Finally, update the version to `0.0.4`. If you fail to update the version, the CLI will throw an error.
```javascript
import { Octokit } from "@octokit/rest";
export default {
name: "Action Demo",
description: "This is a demo action",
key: "action_demo",
version: "0.0.4",
type: "action",
props: {
github: {
type: "app",
app: "github",
},
},
async run({ $ }) {
const octokit = new Octokit({
auth: this.github.$auth.oauth_access_token,
});
const { data } = await octokit.rest.repos.get({
owner: `pipedreamhq`,
repo: `pipedream`,
});
$.export("$summary", `Successfully fetched info for \`${data.full_name}\``);
return data;
},
};
```
Save the file and run the `pd publish` command again to update the action in your account.
```
pd publish action.js
```
The CLI will update the component in your account with the key `action_demo`. You should see something like this:
```
sc_k3ia53 Action Demo 0.0.4 just now action_demo
```
Follow the steps in the earlier example to update the action in your workflow (you may need to save your workflow after refreshing the action). You should now see a prompt to connect your GitHub account to the step.
Select an existing account or connect a new one, then deploy your workflow and click **RUN NOW**. You should see the results returned by the action.
## What’s Next?
You’re ready to start authoring and publishing actions on Pipedream! You can also check out the [detailed component reference](/docs/components/contributing/api/#component-api) at any time!
If you have any questions or feedback, please [reach out](https://pipedream.com/community)!
# Component API Reference
Source: https://pipedream.com/docs/components/contributing/api
export const CONFIGURED_PROPS_SIZE_LIMIT = '64KB';
Our TypeScript component API is in **beta**. If you’re interested in developing TypeScript components and providing feedback, [see our TypeScript docs](/docs/components/contributing/typescript/).
This document was created to help developers author and use [Pipedream components](/docs/components/contributing/). Not only can you develop [sources](/docs/components/contributing/sources-quickstart/) (workflow triggers) and [actions](/docs/components/contributing/actions-quickstart/) using the component API, but you can also develop [Node.js steps](/docs/workflows/building-workflows/code/nodejs/) right in your workflows, without leaving your browser! You can publish components to your account for private use, or [contribute them to the Pipedream registry](/docs/components/contributing/) for anyone to run.
While sources and actions share the same core component API, they differ in both how they’re used and written, so certain parts of the component API apply only to one or the other. [This section of the docs](/docs/components/contributing/api/#differences-between-sources-and-actions) explains the core differences. When this document uses the term “component”, the corresponding feature applies to both sources and actions. If a specific feature applies to only sources *or* actions, the correct term will be used.
If you have any questions about component development, please reach out [in our community](https://pipedream.com/community/c/dev/11).
## Overview
### What is a component?
Components are Node.js modules that run on Pipedream’s serverless infrastructure.
* Trigger Node.js code on HTTP requests, timers, cron schedules, or manually
* Emit data on each event to inspect it. Trigger Pipedream hosted workflows or access it outside of Pipedream via API
* Accept user input on deploy via [CLI](/docs/cli/reference/#pd-deploy), [API](/docs/rest-api/), or [UI](https://pipedream.com/sources)
* Connect to [2,700+ apps](https://pipedream.com/apps) using Pipedream managed auth
* Use most npm packages with no `npm install` or `package.json` required
* Store and retrieve state using the [built-in key-value store](/docs/components/contributing/api/#db)
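As a sketch, here is a minimal source combining a few of these features (the key-value store via `$.service.db`, plus `$emit`); the key and names are illustrative:

```javascript
// Sketch: a source that keeps a counter in the built-in key-value store
// ($.service.db exposes get/set on this.db) and emits it on each run.
const component = {
  name: "Counter Demo",
  key: "counter_demo",
  version: "0.0.1",
  props: {
    db: "$.service.db",
  },
  async run() {
    const count = (this.db.get("count") ?? 0) + 1;
    this.db.set("count", count);
    this.$emit({ count });
  },
};
export default component;
```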
### Quickstarts
To help you get started, we created a step-by-step walkthrough for developing both [sources](/docs/components/contributing/sources-quickstart/) and [actions](/docs/components/contributing/actions-quickstart/). We recommend starting with those docs and using the API reference below as you develop.
### Differences between sources and actions
Sources and actions share the same component API. However, certain features of the API only apply to one or the other:
* Actions are defined with `type: "action"` ([see the docs on the `type` property](/docs/components/contributing/api/#component-structure)). Sources don’t require a `type` property; components without one are considered sources.
* Sources emit events [using `this.$emit`](/docs/components/contributing/api/#emit), which trigger linked workflows. Any features associated with emitting events (e.g., [dedupe strategies](/docs/components/contributing/api/#dedupe-strategies)) can only be used with sources. Actions [return data using `return` or `$.export`](/docs/components/contributing/api/#returning-data-from-steps), which is made available to future steps of the associated workflow.
* Sources have access to [lifecycle hooks](/docs/components/contributing/api/#lifecycle-hooks), which are often required to configure the source to listen for new events. Actions do not have access to these lifecycle hooks.
* Actions have access to [a special `$` variable](/docs/components/contributing/api/#actions), passed as a parameter to the `run` method. This variable exposes functions that allow you to send data to destinations, export data from the action, return HTTP responses, and more.
* Sources can be developed iteratively using `pd dev`. Actions currently cannot (please follow [this issue](https://github.com/PipedreamHQ/pipedream/issues/1437) to be notified of updates).
* You use `pd deploy` to deploy sources to your account. You use `pd publish` to publish actions, making them available for use in workflows.
* You can attach [interfaces](/docs/components/contributing/api/#interface-props) (like HTTP endpoints, or timers) to sources. This defines how the source is invoked. Actions do not have interfaces, since they’re run step-by-step as a part of the associated workflow.
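To make the contrast concrete, here’s a sketch of a minimal source (names are illustrative): it attaches a timer interface, emits events instead of returning data, and opts into a dedupe strategy. None of these apply to actions.

```javascript
// Sketch: a minimal source (contrast with actions, which return data).
// The timer interface invokes run() on a schedule; dedupe: "unique" drops
// events whose id has already been seen.
const component = {
  name: "Source Demo",
  key: "source_demo",
  version: "0.0.1",
  type: "source",
  dedupe: "unique",
  props: {
    timer: {
      type: "$.interface.timer",
      default: { intervalSeconds: 60 * 15 }, // every 15 minutes
    },
  },
  async run() {
    const ts = Date.now();
    this.$emit(
      { message: "hello world!" },
      { id: ts, summary: "demo event", ts },
    );
  },
};
export default component;
```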
### Getting Started with the CLI
Several examples below use the Pipedream CLI. To install it, [follow the instructions for your OS / architecture](/docs/cli/install/).
See the [CLI reference](/docs/cli/reference/) for detailed usage and examples beyond those covered below.
### Example Components
You can find hundreds of example components in the `components/` directory of the [`PipedreamHQ/pipedream` repo](https://github.com/PipedreamHQ/pipedream).
## Component API
### Component Structure
Pipedream components export an object with the following properties:
```javascript
export default {
name: "",
key: "",
type: "",
version: "",
description: "",
props: {},
methods: {},
hooks: {
async activate() {},
async deactivate() {},
async deploy() {},
},
dedupe: "",
async run(event) {
this.$emit(event);
},
};
```
| Property | Type | Required? | Description |
| ------------- | -------- | ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `name` | `string` | required | The name of the component, a string which identifies components deployed to users’ accounts. This name will show up in the Pipedream UI, in CLI output (for example, from `pd list` commands), etc. It will also be converted to a unique slug on deploy to reference a specific component instance (it will be auto-incremented if not unique within a user account). |
| `key` | `string` | recommended | The `key` uniquely identifies a component within a namespace. The default namespace for components is your account. When publishing components to the Pipedream registry, the `key` must be unique across registry components and should follow the pattern: `app_name_slug`-`slugified-component-name` |
| `type` | `string` | required | When publishing an action, `type: "action"` is required. When publishing a source, use `type: "source"`. |
| `version` | `string` | required | The component version. There are no constraints on the version, but [semantic versioning](https://semver.org/) is required for any components published to the [Pipedream registry](/docs/components/contributing/guidelines/). |
| `description` | `string` | recommended | The description will appear in the Pipedream UI to aid in discovery and to contextualize instantiated components |
| `props` | `object` | optional | [Props](/docs/components/contributing/api/#props) are custom attributes you can register on a component. When a value is passed to a prop attribute, it becomes a property on that component instance. You can reference these properties in component code using `this` (e.g., `this.propName`). |
| `methods` | `object` | optional | Define component methods for the component instance. They can be referenced via `this` (e.g., `this.methodName()`). |
| `hooks` | `object` | optional (sources only) | [Hooks](/docs/components/contributing/api/#hooks) are functions that are executed when specific component lifecycle events occur. |
| `dedupe` | `string` | optional (sources only) | You may specify a [dedupe strategy](/docs/components/contributing/api/#dedupe-strategies) to be applied to emitted events |
| `run` | `method` | required | Each time a component is invoked (for example, via HTTP request), [its `run` method](/docs/components/contributing/api/#run) is called. The event that triggered the component is passed to `run`, so that you can access it within the method. Events are emitted using `this.$emit()`. |
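As a quick illustration of `methods`, functions defined there are callable on `this` from `run()` (a sketch; names are illustrative):

```javascript
// Sketch: calling a component method from run() via `this`.
// Pipedream makes everything under `methods` available on `this`.
const component = {
  name: "Methods Demo",
  key: "methods_demo",
  version: "0.0.1",
  type: "action",
  methods: {
    greet(name) {
      return `hello ${name}!`;
    },
  },
  async run() {
    return this.greet("world");
  },
};
export default component;
```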
### Props
Props are custom attributes you can register on a component. When a value is passed to a prop attribute, it becomes a property on that component instance. You can reference these properties in component code using `this` (e.g., `this.propName`).
| Prop Type | Description |
| ------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------- |
| [User Input](/docs/components/contributing/api/#user-input-props) | Enable components to accept input on deploy |
| [Interface](/docs/components/contributing/api/#interface-props) | Attaches a Pipedream interface to your component (e.g., an HTTP interface or timer) |
| [Service](/docs/components/contributing/api/#service-props) | Attaches a Pipedream service to your component (e.g., a key-value database to maintain state) |
| [App](/docs/components/contributing/api/#app-props) | Enables managed auth for a component |
| [Data Store](/docs/workflows/data-management/data-stores/#using-data-stores-in-code-steps) | Provides access to a Pipedream [data store](/docs/workflows/data-management/data-stores/) |
| [HTTP Request](/docs/components/contributing/api/#http-request-prop) | Enables components to execute HTTP requests based on user input |
| [Alert](/docs/components/contributing/api/#alert-prop) | Renders an informational alert in the prop form to help users configure the source or action |
#### User Input Props
User input props allow components to accept input on deploy. When deploying a component, users will be prompted to enter values for these props, setting the behavior of the component accordingly.
##### General
**Definition**
```javascript
props: {
myPropName: {
type: "",
label: "",
description: "",
options: [], // OR async options() {} to return dynamic options
optional: true || false,
propDefinition: [],
default: "",
secret: true || false,
    min: 0, // integer props only
    max: 0, // integer props only
disabled: true || false,
hidden: true || false
},
},
```
| Property | Type | Required? | Description |
| ---------------- | ------------------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `type` | `string` | required | Value must be set to a valid `PropType` (see below). Suffix with `[]` (e.g. `string[]`) to denote array of that type (if supported). |
| `label` | `string` | optional | A friendly label to show to user for this prop. If a label is not provided, the `propName` is displayed to the user. |
| `description` | `string` | optional | Displayed near the prop input. Typically used to contextualize the prop or provide instructions to help users input the correct value. Markdown is supported. |
| `options` | `string[]` or `object[]` or `method` | optional | Provide an array to display options to a user in a drop down menu. **`[]` Basic usage** Array of strings. E.g., `['option 1', 'option 2']` **`object[]` Define Label and Value** `[{ label: 'Label 1', value: 'label1'}, { label: 'Label 2', value: 'label2'}]` **`method` Dynamic Options** You can generate options dynamically (e.g., based on real-time API requests with pagination). See configuration details below. |
| `useQuery` | `boolean` | optional | Use in conjunction with **Dynamic Options**. If set to `true`, the prop accepts a real-time query that can be used by the `options` method to obtain results according to that query. |
| `optional` | `boolean` | optional | Set to `true` to make this prop optional. Defaults to `false`. |
| `propDefinition` | `[]` | optional | Re-use a prop defined in an app file. When you include a prop definition, the prop will inherit values for all the properties listed here. However, you can override those values by redefining them for a given prop instance. See **propDefinitions** below for usage. |
| `default` | `string` | optional | Define a default value if the field is not completed. Can only be defined for optional fields (required fields require explicit user input). |
| `secret` | `boolean` | optional | If set to `true`, this field will hide your input in the browser like a password field, and its value will be encrypted in Pipedream’s database. The value will be decrypted when the component is run in [the execution environment](/docs/privacy-and-security/#execution-environment). Defaults to `false`. Only allowed for `string` props. |
| `min`            | `integer`                            | optional  | Minimum allowed integer value. Only allowed for `integer` props. |
| `max`            | `integer`                            | optional  | Maximum allowed integer value. Only allowed for `integer` props. |
| `disabled` | `boolean` | optional | Set to `true` to disable usage of this prop. Defaults to `false`. |
| `hidden` | `boolean` | optional | Set to `true` to hide this field. Defaults to `false`. |
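Putting several of these fields together, here is a sketch of a component using `secret`, `optional`, `default`, `min`/`max`, and the `string[]` array type (all names and values are illustrative):

```javascript
// Sketch: an action exercising optional, default, secret, min/max, and
// the string[] array variant. All names here are illustrative.
const component = {
  name: "Props Demo",
  key: "props_demo",
  version: "0.0.1",
  type: "action",
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true, // masked in the UI and encrypted at rest
    },
    limit: {
      type: "integer",
      label: "Limit",
      optional: true,
      default: 10, // defaults are only allowed on optional props
      min: 1,
      max: 100,
    },
    tags: {
      type: "string[]", // array variant of the string type
      label: "Tags",
      optional: true,
    },
  },
  async run() {
    return { limit: this.limit, tags: this.tags };
  },
};
export default component;
```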
**Prop Types**
| Prop Type | Array Supported | Supported in Sources? | Supported in Actions? | Custom properties |
| ------------------- | --------------- | --------------------- | --------------------- | ------------------------------------------------------------------------------------------------------- |
| `app` | | ✓ | ✓ | See [App Props](/docs/components/contributing/api/#app-props) below |
| `boolean` | ✓ | ✓ | ✓ | |
| `integer` | ✓ | ✓ | ✓ | - `min` (`integer`): Minimum allowed integer value. - `max` (`integer`): Maximum allowed integer value. |
| `string` | ✓ | ✓ | ✓ | - `secret` (`boolean`): Whether to treat the value as a secret. |
| `object` | | ✓ | ✓ | |
| `any` | | | ✓ | |
| `$.interface.http` | | ✓ | | |
| `$.interface.timer` | | ✓ | | |
| `$.service.db` | | ✓ | | |
| `data_store` | | | ✓ | |
| `http_request` | | | ✓ | |
| `alert` | | ✓ | ✓ | See [Alert Prop](/docs/components/contributing/api/#alert-prop) below |
**Usage**
| Code | Description | Read Scope | Write Scope |
| ----------------- | ---------------------------------------- | ------------------------- | --------------------------------------------------------------------------------------- |
| `this.myPropName` | Returns the configured value of the prop | `run()` `hooks` `methods` | n/a (input props may only be modified on component deploy or update via UI, CLI or API) |
**Example**
Following is an example source that demonstrates how to capture user input via a prop and emit it on each event:
```javascript
export default {
name: "User Input Prop Example",
version: "0.1",
props: {
msg: {
type: "string",
label: "Message",
description: "Enter a message to `console.log()`",
},
},
async run() {
this.$emit(this.msg);
},
};
```
To see more examples, explore the [curated components in Pipedream’s GitHub repo](/docs/components/contributing/api/#example-components).
##### Advanced Configuration
##### Async Options (example)
Async options allow users to select prop values that can be programmatically-generated (e.g., based on a real-time API response).
```javascript
async options({
page,
prevContext,
query,
}) {},
```
| Property | Type | Required? | Description |
| ------------- | --------- | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `options()`   | `method`  | optional  | Typically returns an array of values matching the prop type (e.g., `string`) or an array of objects that define the `label` and `value` for each option. The `page` and `prevContext` input parameter names are reserved for pagination (see below). When using `prevContext` for pagination, it must return an object with an `options` array and a `context` object with a `nextPageToken` key. E.g., `{ options, context: { nextPageToken }, }` |
| `page` | `integer` | optional | Returns a `0` indexed page number. Use with APIs that accept a numeric page number for pagination. |
| `prevContext` | `string` | optional | Returns a string representing the context for the previous `options` execution. Use with APIs that accept a token representing the last record for pagination. |
| `query` | `string` | optional | Returns a string with the user input if the prop has the `useQuery` property set to `true`. Use with APIs that return items based on a query or search parameter. |
Following is an example source demonstrating the usage of async options:
```javascript
export default {
name: "Async Options Example",
version: "0.1",
props: {
msg: {
type: "string",
label: "Message",
description: "Select a message to `console.log()`",
async options() {
// write any node code that returns a string[] or object[] (with label/value keys)
return ["This is option 1", "This is option 2"];
},
},
},
async run() {
this.$emit(this.msg);
},
};
```
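For APIs that paginate with tokens, return `options` along with a `context` object; that context comes back as `prevContext` on the next call. A sketch, where `listItems` is a hypothetical app method standing in for a real paginated API request:

```javascript
// Sketch: async options with token-based pagination. `listItems` is a
// hypothetical method standing in for a real paginated API call; the
// returned context is passed back as prevContext on the next invocation.
const itemsProp = {
  type: "string",
  label: "Item",
  async options({ prevContext }) {
    const { items, nextPageToken } = await this.listItems({
      pageToken: prevContext.nextPageToken,
    });
    return {
      options: items.map((item) => ({ label: item.name, value: item.id })),
      context: { nextPageToken },
    };
  },
};
export default itemsProp;
```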
##### Prop Definitions (example)
Prop definitions enable you to reuse props that are defined in another object. A common use case is to enable re-use of props that are defined for a specific app.
```javascript
props: {
myPropName: {
propDefinition: [
app,
"propDefinitionName",
inputValues
]
},
},
```
| Property | Type | Required? | Description |
| -------------------- | -------- | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `propDefinition` | `array` | optional | An array of options that define a reference to a `propDefinitions` within the `propDefinitions` for an `app` |
| `app` | `object` | required | An app object |
| `propDefinitionName` | `string` | required | The name of a specific `propDefinition` defined in the corresponding `app` object |
| `inputValues` | `object` | optional | Values to pass into the prop definition. To reference values from previous props, use an arrow function. E.g.,: `c => ({ variableName: c.previousPropName })` [See these docs](/docs/components/contributing/api/#referencing-values-from-previous-props) for more information. |
Following is an example source that demonstrates how to use `propDefinitions`.
```javascript
const rss = {
type: "app",
app: "rss",
propDefinitions: {
urlDef: {
type: "string",
label: "RSS URL",
description: "Enter a URL for an RSS feed.",
},
},
};
export default {
name: "Prop Definition Example",
description: `This component captures an RSS URL and logs it`,
version: "0.1",
props: {
rss,
url: { propDefinition: [rss, "urlDef"] },
},
async run() {
console.log(this.url);
},
};
```
##### Referencing values from previous props
When you define a prop in an app file, and that prop depends on the value of another prop, you’ll need to pass the value of the previous props in a special way. Let’s review an example from [Trello](https://trello.com), a task manager.
You create Trello *boards* for new projects. Boards contain *lists*. For example, this **Active** board contains two lists:
In Pipedream, users can choose from lists on a specific board:
Both **Board** and **Lists** are defined in the Trello app file:
```javascript
board: {
type: "string",
label: "Board",
async options(opts) {
const boards = await this.getBoards(this.$auth.oauth_uid);
const activeBoards = boards.filter((board) => board.closed === false);
return activeBoards.map((board) => {
return { label: board.name, value: board.id };
});
},
},
lists: {
type: "string[]",
label: "Lists",
optional: true,
async options(opts) {
const lists = await this.getLists(opts.board);
return lists.map((list) => {
return { label: list.name, value: list.id };
});
},
}
```
In the `lists` prop, notice how `opts.board` references the board. You can pass `opts` to the prop’s `options` method when you reference `propDefinitions` in specific components:
```javascript
board: { propDefinition: [trello, "board"] },
lists: {
propDefinition: [
trello,
"lists",
(configuredProps) => ({ board: configuredProps.board }),
],
},
```
`configuredProps` contains the props the user previously configured (the board). This allows the `lists` prop to use it in the `options` method.
##### Dynamic props
Some prop definitions must be computed dynamically, after the user configures another prop. We call these **dynamic props**, since they are rendered on-the-fly. This technique is used in [the Google Sheets **Add Single Row** action](https://github.com/PipedreamHQ/pipedream/blob/master/components/google_sheets/actions/add-single-row/add-single-row.mjs), which we’ll use as an example below.
First, determine the prop whose selection should render dynamic props. In the Google Sheets example, we ask the user whether their sheet contains a header row. If it does, we display header fields as individual props:
To load dynamic props, the header prop must have the `reloadProps` field set to `true`:
```javascript
hasHeaders: {
type: "string",
label: "Does the first row of the sheet have headers?",
description: "If the first row of your document has headers we'll retrieve them to make it easy to enter the value for each column.",
options: [
"Yes",
"No",
],
reloadProps: true,
},
```
When a user chooses a value for this prop, Pipedream runs the `additionalProps` component method to render props:
```javascript
async additionalProps() {
const sheetId = this.sheetId?.value || this.sheetId;
const props = {};
if (this.hasHeaders === "Yes") {
const { values } = await this.googleSheets.getSpreadsheetValues(sheetId, `${this.sheetName}!1:1`);
if (!values[0]?.length) {
      throw new ConfigurationError("Could not find a header row. Please either add headers and click \"Refresh fields\" or adjust the action configuration to continue.");
}
for (let i = 0; i < values[0]?.length; i++) {
props[`col_${i.toString().padStart(4, "0")}`] = {
type: "string",
label: values[0][i],
optional: true,
};
}
} else if (this.hasHeaders === "No") {
props.myColumnData = {
type: "string[]",
label: "Values",
description: "Provide a value for each cell of the row. Google Sheets accepts strings, numbers and boolean values for each cell. To set a cell to an empty value, pass an empty string.",
};
}
return props;
},
```
The signature of this function is:
```javascript
async additionalProps(previousPropDefs)
```
where `previousPropDefs` is the full set of props (the component's static props merged with the result of the previous `additionalProps` call). When the function executes, `this` is bound as it is in the `run` method, so you can access the values of the props as currently configured and call any `methods`. The return value of `additionalProps` replaces the result of any previous call, and that return value is merged with the static props to define the final set of props.
Following is an example that demonstrates how to use `additionalProps` to dynamically change a prop’s `disabled` and `hidden` properties:
```javascript
async additionalProps(previousPropDefs) {
if (this.myCondition === "Yes") {
previousPropDefs.myPropName.disabled = true;
previousPropDefs.myPropName.hidden = true;
} else {
previousPropDefs.myPropName.disabled = false;
previousPropDefs.myPropName.hidden = false;
}
return previousPropDefs;
},
```
Dynamic props can have any one of the following prop types:
* `app`
* `boolean`
* `integer`
* `string`
* `object`
* `any`
* `$.interface.http`
* `$.interface.timer`
* `data_store`
* `http_request`
#### Interface Props
Interface props are infrastructure abstractions provided by the Pipedream platform. They declare how a source is invoked — via HTTP request, run on a schedule, etc. — and therefore define the shape of the events it processes.
| Interface Type | Description |
| ------------------------------------------------- | --------------------------------------------------------------- |
| [Timer](/docs/components/contributing/api/#timer) | Invoke your source on an interval or based on a cron expression |
| [HTTP](/docs/components/contributing/api/#http) | Invoke your source on HTTP requests |
#### Timer
To use the timer interface, declare a prop whose value is the string `$.interface.timer`:
**Definition**
```javascript
props: {
myPropName: {
type: "$.interface.timer",
default: {},
},
}
```
| Property | Type | Required? | Description |
| --------- | -------- | --------- | ------------------------------------------------------------------------------------------------------------------------- |
| `type` | `string` | required | Must be set to `$.interface.timer` |
| `default` | `object` | optional | **Define a default interval** `{ intervalSeconds: 60, },` **Define a default cron expression** `{ cron: "0 0 * * *", },` |
**Usage**
| Code | Description | Read Scope | Write Scope |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | ------------------------------------------------------------------------------------------- |
| `this.myPropName` | Returns the type of interface configured (e.g., `{ type: '$.interface.timer' }`) | `run()` `hooks` `methods` | n/a (interface props may only be modified on component deploy or update via UI, CLI or API) |
| `event` | Returns an object with the execution timestamp and interface configuration (e.g., `{ "timestamp": 1593937896, "interval_seconds": 3600 }`) | `run(event)` | n/a (interface props may only be modified on source deploy or update via UI, CLI or API) |
**Example**
Following is a basic example of a source that is triggered by a `$.interface.timer` and has a `default` defined as a cron expression.
```javascript
export default {
name: "Cron Example",
version: "0.1",
props: {
timer: {
type: "$.interface.timer",
default: {
cron: "0 0 * * *", // Run job once a day
},
},
},
async run() {
console.log("hello world!");
},
};
```
Following is an example source that’s triggered by a `$.interface.timer` and has a `default` interval defined.
```javascript
export default {
name: "Interval Example",
version: "0.1",
props: {
timer: {
type: "$.interface.timer",
default: {
intervalSeconds: 60 * 60 * 24, // Run job once a day
},
},
},
async run() {
console.log("hello world!");
},
};
```
#### HTTP
To use the HTTP interface, declare a prop whose value is the string `$.interface.http`:
```javascript
props: {
myPropName: {
type: "$.interface.http",
customResponse: true, // optional: defaults to false
},
}
```
**Definition**
| Property | Type | Required? | Description |
| --------- | -------- | --------- | ------------------------------------------------------------------------------------------------------------ |
| `type` | `string` | required | Must be set to `$.interface.http` |
| `customResponse` | `boolean` | optional | Set to `true` to issue a custom HTTP response from your code using the `respond()` method (defaults to `false`) |
**Usage**
| Code | Description | Read Scope | Write Scope |
| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- | ------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `this.myPropName` | Returns an object with the unique endpoint URL generated by Pipedream (e.g., `{ endpoint: 'https://abcde.m.pipedream.net' }`) | `run()` `hooks` `methods` | n/a (interface props may only be modified on source deploy or update via UI, CLI or API) |
| `event` | Returns an object representing the HTTP request (e.g., `{ method: 'POST', path: '/', query: {}, headers: {}, bodyRaw: '', body: {}, }`) | `run(event)` | The shape of `event` corresponds with the HTTP request you make to the endpoint generated by Pipedream for this interface |
| `this.myPropName.respond()` | Returns an HTTP response to the client (e.g., `this.http.respond({status: 200})`). | n/a | `run()` |
###### Responding to HTTP requests
The HTTP interface exposes a `respond()` method that lets your source issue HTTP responses. You can call `this.http.respond()` from the `run()` method of a source to respond to the client. In this case, you should also set `customResponse: true` on the prop.
| Property | Type | Required? | Description |
| --------- | -------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `status` | `integer` | required | An integer representing the HTTP status code. Return `200` to indicate success. Standard status codes range from `100` - `599` |
| `headers` | `object` | optional | Return custom key-value pairs in the HTTP response |
| `body` | `string` `object` `buffer` | optional | Return a custom body in the HTTP response. This can be any string, object, or Buffer. |
###### HTTP Event Shape
Following is the shape of the event passed to the `run()` method of your source:
```javascript
{
method: 'POST',
path: '/',
query: {},
headers: {},
bodyRaw: '',
  body: {}
}
```
**Example**
Following is an example source that’s triggered by `$.interface.http` and returns `{ 'msg': 'hello world!' }` in the HTTP response. On deploy, Pipedream will generate a unique URL for this source:
```javascript
export default {
name: "HTTP Example",
version: "0.0.1",
props: {
http: {
type: "$.interface.http",
customResponse: true,
},
},
async run(event) {
this.http.respond({
status: 200,
body: {
msg: "hello world!",
},
headers: {
"content-type": "application/json",
},
});
console.log(event);
},
};
```
#### Service Props
| Service | Description |
| ------- | ---------------------------------------------------------------------------------------------------- |
| *DB* | Provides access to a simple, component-specific key-value store to maintain state across executions. |
##### DB
**Definition**
```javascript
props: {
myPropName: "$.service.db",
}
```
**Usage**
| Code | Description | Read Scope | Write Scope |
| ----------------------------------- | -------------------------------------------------------------------------------------------- | ------------------------------------- | -------------------------------------- |
| `this.myPropName.get('key')` | Method to get a previously set value for a key. Returns `undefined` if a key does not exist. | `run()` `hooks` `methods` | Use the `set()` method to write values |
| `this.myPropName.set('key', value)` | Method to set a value for a key. Values must be JSON-serializable data. | Use the `get()` method to read values | `run()` `hooks` `methods` |
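Following is a minimal, illustrative sketch of a source that uses the db service to count its own executions. The prop name and counting logic are examples only; on Pipedream, the platform injects the key-value store at runtime.

```javascript
// Illustrative source that maintains a counter across executions.
// In a component file, this object would be the default export.
const component = {
  name: "DB Service Example",
  version: "0.0.1",
  props: {
    db: "$.service.db",
  },
  async run() {
    // get() returns undefined on the first run, since the key was never set
    const count = this.db.get("count") ?? 0;
    // Values must be JSON-serializable
    this.db.set("count", count + 1);
    console.log(`This source has run ${count + 1} time(s)`);
  },
};
```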
#### App Props
App props are normally defined in an [app file](/docs/components/contributing/guidelines/#app-files), separate from individual components. See [the `components/` directory of the pipedream GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components) for example app files.
**Definition**
```javascript
props: {
myPropName: {
type: "app",
app: "",
    propDefinitions: {},
methods: {},
},
},
```
| Property | Type | Required? | Description |
| ----------------- | -------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `type` | `string` | required | Value must be `app` |
| `app` | `string` | required | Value must be set to the name slug for an app registered on Pipedream. [App files](/docs/components/contributing/guidelines/#app-files) are programmatically generated for all integrated apps on Pipedream. To find your app’s slug, visit the `components` directory of [the Pipedream GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components), find the app file (the file that ends with `.app.mjs`), and find the `app` property at the root of that module. If you don’t see an app listed, please [open an issue here](https://github.com/PipedreamHQ/pipedream/issues/new?assignees=\&labels=app%2C+enhancement\&template=app---service-integration.md\&title=%5BAPP%5D). |
| `propDefinitions` | `object` | optional | An object that contains objects with predefined user input props. See the section on User Input Props above to learn about the shapes that can be defined and how to reference in components using the `propDefinition` property |
| `methods` | `object` | optional | Define app-specific methods. Methods can be referenced within the app object context via `this` (e.g., `this.methodName()`) and within a component via `this.myAppPropName` (e.g., `this.myAppPropName.methodName()`). |
**Usage**
| Code | Description | Read Scope | Write Scope |
| --------------------------------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------- | ----------- |
| `this.$auth` | Provides access to OAuth tokens and API keys for Pipedream managed auth | **App Object:** `methods` | n/a |
| `this.myAppPropName.$auth` | Provides access to OAuth tokens and API keys for Pipedream managed auth | **Parent Component:** `run()` `hooks` `methods` | n/a |
| `this.methodName()` | Execute a common method defined for an app within the app definition (e.g., from another method) | **App Object:** `methods` | n/a |
| `this.myAppPropName.methodName()` | Execute a common method defined for an app from a component that includes the app as a prop | **Parent Component:** `run()` `hooks` `methods` | n/a |
**Note:** The specific `$auth` keys supported for each app will be published in the near future.
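To make the shape concrete, here is a hedged sketch of an app file. The `example_app` slug, base URL, and `getItem` method are hypothetical; real app files in the repo linked above follow this structure.

```javascript
// Hypothetical app file; "example_app" is not a real registered app slug.
const app = {
  type: "app",
  app: "example_app",
  propDefinitions: {
    itemId: {
      type: "string",
      label: "Item ID",
      description: "The ID of the item to fetch",
    },
  },
  methods: {
    _baseUrl() {
      return "https://api.example.com/v1";
    },
    async getItem(id) {
      // this.$auth holds the connected account's credentials
      const res = await fetch(`${this._baseUrl()}/items/${id}`, {
        headers: { Authorization: `Bearer ${this.$auth.oauth_access_token}` },
      });
      return res.json();
    },
  },
};
```

A component that declares this app as a prop (e.g., `myAppPropName`) could then call `this.myAppPropName.getItem(id)` from its `run()` method.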
#### HTTP Request Prop
**Usage**
| Code | Description | Read Scope | Write Scope |
| --------------------------- | ------------------------------------- | ---------- | ----------------- |
| `this.myPropName.execute()` | Execute an HTTP request as configured | n/a | `run()` `methods` |
**Example**
Following is an example action that demonstrates how to accept an HTTP request configuration as input and execute the request when the component is run:
```javascript
export default {
name: "HTTP Request Example",
version: "0.0.1",
props: {
httpRequest: {
type: "http_request",
label: "API Request",
default: {
method: "GET",
url: "https://jsonplaceholder.typicode.com/posts",
}
},
},
async run() {
const { data } = await this.httpRequest.execute();
return data;
},
};
```
For more examples, see the [docs on making HTTP requests with Node.js](/docs/workflows/building-workflows/code/nodejs/http-requests/#send-a-get-request-to-fetch-data).
#### Alert Prop
Sometimes you may need to surface contextual information to users within the prop form. This might be information that’s not directly related to a specific prop, so it doesn’t make sense to include in a prop description, but rather, it may be related to the overall configuration of the prop form.
**Usage**
| Property | Type | Required? | Description |
| ----------- | -------- | --------- | ---------------------------------------------------------------------------------------------------------------- |
| `type` | `string` | required | Set to `alert` |
| `alertType` | `string` | required | Determines the color and UI presentation of the alert prop. Can be one of `info`, `neutral`, `warning`, `error`. |
| `content` | `string` | required | Determines the text that is rendered in the alert. Both plain text and markdown are supported. |
```javascript
export default defineComponent({
props: {
alert: {
type: "alert",
alertType: "info",
content: "Admin rights on the repo are required in order to register webhooks. In order to continue setting up your source, configure a polling interval below to check for new events.",
}
},
})
```
Refer to GitHub’s component sources in the `pipedream` repo for an [example implementation](https://github.com/PipedreamHQ/pipedream/blob/b447d71f658d10d6a7432e8f5153bbda56ba9810/components/github/sources/common/common-flex.mjs#L27).
#### Limits on props
When a user configures a prop with a value, it can hold at most {CONFIGURED_PROPS_SIZE_LIMIT} of data. Consider this when accepting large input in these fields (such as a base64 string).
The {CONFIGURED_PROPS_SIZE_LIMIT} limit applies only to static values entered as raw text. In workflows, users can pass expressions (referencing data in a prior step). In that case, the prop value is simply the text of the expression, for example `{{steps.nodejs.$return_value}}`, which is well below the limit. These expressions are evaluated at runtime, and their results are subject to [different limits](/docs/workflows/limits/).
### Methods
You can define helper functions within the `methods` property of your component. You have access to these functions within the [`run` method](/docs/components/contributing/api/#run), or within other methods.
Methods can be accessed using `this.`. For example, a `random` method:
```javascript
methods: {
random() {
return Math.random()
},
}
```
can be run like so:
```javascript
const randomNum = this.random();
```
### Hooks
```javascript
hooks: {
async deploy() {},
async activate() {},
async deactivate() {},
},
```
| Property | Type | Required? | Description |
| ------------ | -------- | --------- | ----------------------------------------------------- |
| `deploy` | `method` | optional | Executed each time a component is deployed |
| `activate` | `method` | optional | Executed each time a component is deployed or updated |
| `deactivate` | `method` | optional | Executed each time a component is deactivated |
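A common pattern, sketched below with a hypothetical `exampleApp` prop and `createWebhook`/`deleteWebhook` methods, is to register a webhook subscription in `activate` and remove it in `deactivate`:

```javascript
// Illustrative hooks that manage a webhook subscription.
// "example_app" and its createWebhook/deleteWebhook methods are hypothetical.
const component = {
  props: {
    exampleApp: {
      type: "app",
      app: "example_app",
    },
    db: "$.service.db",
    http: "$.interface.http",
  },
  hooks: {
    async activate() {
      // Runs on deploy and on every update: (re)create the subscription
      const { id } = await this.exampleApp.createWebhook(this.http.endpoint);
      this.db.set("hookId", id);
    },
    async deactivate() {
      // Runs before updates and on delete: clean up the subscription
      await this.exampleApp.deleteWebhook(this.db.get("hookId"));
    },
  },
  async run(event) {
    this.$emit(event.body, { summary: "New webhook event" });
  },
};
```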
### Dedupe Strategies
> **IMPORTANT:** To use a dedupe strategy, you must emit an `id` as part of the event metadata (dedupe strategies are applied to the submitted `id`)
| Strategy | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `unique` | Pipedream maintains a cache of 100 emitted `id` values. Events with `id` values that are not in the cache are emitted, and the `id` value is added to the cache. After 100 events, `id` values are purged from the cache based on the order received (first in, first out). A common use case for this strategy is an RSS feed which typically does not exceed 100 items |
| `greatest` | Pipedream caches the largest `id` value (must be numeric). Only events with larger `id` values are emitted, and the cache is updated to match the new, largest value. |
| `last`     | Pipedream caches the `id` associated with the last emitted event. When new events arrive, only those that occur after the event with the matching `id` value are emitted. If no `id` values match, all events are emitted. |
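For example, a source using the `unique` strategy sets `dedupe` at the top level of the component and passes each item's `id` in the emit metadata. The feed items below are illustrative:

```javascript
// Illustrative source using the "unique" dedupe strategy.
const component = {
  name: "Dedupe Example",
  version: "0.0.1",
  dedupe: "unique", // Pipedream dedupes on the id passed to this.$emit()
  async run() {
    const items = [
      { guid: "item-1", title: "First post" },
      { guid: "item-2", title: "Second post" },
    ];
    for (const item of items) {
      // Events whose id is already in the dedupe cache are not re-emitted
      this.$emit(item, { id: item.guid, summary: item.title });
    }
  },
};
```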
### Run
Each time a component is invoked, its `run` method is called. Sources are invoked by their [interface](/docs/components/contributing/api/#interface-props) (for example, via HTTP request). Actions are run when their parent workflow is triggered.
You can reference `this` within the `run` method. `this` refers to the component, and provides access to [props](/docs/components/contributing/api/#props), [methods](/docs/components/contributing/api/#methods), and more.
#### Sources
When a source is invoked, the event that triggered the source is passed to `run`, so that you can access it within the method:
```javascript
async run(event) {
console.log(event)
}
```
##### \$emit
`this.$emit()` is a method in scope for the `run` method of a source:
```javascript
this.$emit(event, {
id,
name,
summary,
ts,
});
```
| Property | Type | Required? | Description |
| --------- | ---------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `event` | JSON serializable data | optional | The data to emit as the event |
| `id` | `string` or `number` | Required if a dedupe strategy is applied | A value to uniquely identify this event. Common `id` values may be a 3rd party ID, a timestamp, or a data hash |
| `name` | `string` | optional | The name of the “channel” you’d like to emit the event to. By default, events are emitted to the `default` channel. If you set a different channel here, listening sources or workflows can subscribe to events on this channel, running the source or workflow only on events emitted to that channel. |
| `summary` | `string` | optional | Define a summary to customize the data displayed in the events list to help differentiate events at a glance |
| `ts` | `integer` | optional | Accepts an epoch timestamp in **milliseconds**. If you submit a timestamp, events will automatically be ordered and emitted from oldest to newest. If using the `last` dedupe strategy, the value cached as the `last` event for an execution will correspond to the event with the newest timestamp. |
Following is a basic example that emits an event on each component execution.
```javascript
export default {
name: "this.$emit() example",
description: "Deploy and run this component manually via the Pipedream UI",
async run() {
this.$emit({ message: "hello world!" });
},
};
```
##### Logs
You can view logs produced by a source’s `run` method in the **Logs** section of the [Pipedream source UI](https://pipedream.com/sources), or using the `pd logs` CLI command:
```
pd logs <deployed-component-name>
```
##### Events
If the `run` method emits events using `this.$emit`, you can access the events in the **EVENTS** section of the Pipedream UI for the component, or using the `pd events` CLI command:
```
pd events <deployed-component-name>
```
#### Actions
When an action is run in a workflow, Pipedream passes an object with a `$` variable that gives you access to special functions, outlined below:
```javascript
async run({ $ }) {
// You have access to $ within your action
}
```
##### Returning data from steps
By default, variables declared within an action are scoped to that action. To return data from a step, you have two options: 1) use the `return` keyword, or 2) use `$.export` to return a named export from a step.
**`return`**
Use `return` to return data from an action:
```javascript
async run({ $ }) {
return "data"
}
```
When you use `return`, the exported data will appear at `steps.[STEP NAME].$return_value`. For example, if you ran the code above in a step named `nodejs`, you’d reference the returned data using `steps.nodejs.$return_value`.
**`$.export`**
You can also use `$.export` to return named exports from an action. `$.export` takes the name of the export as the first argument, and the value to export as the second argument:
```javascript
async run({ $ }) {
$.export("name", "value")
}
```
When your workflow runs, you’ll see the named exports appear below your step, with the data you exported. You can reference these exports in other steps using `steps.[STEP NAME].[EXPORT NAME]`.
##### Returning HTTP responses with `$.respond`
`$.respond` lets you issue HTTP responses from your workflow. [See the full `$.respond` docs for more information](/docs/workflows/building-workflows/triggers/#customizing-the-http-response).
```javascript
async run({ $ }) {
$.respond({
status: 200,
body: "hello, world"
})
}
```
##### Ending steps early with `return $.flow.exit`
`return $.flow.exit` terminates the entire workflow. It accepts a single argument: a string that describes why the workflow was terminated, which is displayed in the Pipedream UI.
```javascript
async run({ $ }) {
return $.flow.exit("reason")
}
```
##### `$.summary`
`$.summary` is used to surface brief, user-friendly summaries about what happened when an action step succeeds. For example, when [adding items to a Spotify playlist](https://github.com/PipedreamHQ/pipedream/blob/master/components/spotify/actions/add-items-to-playlist/add-items-to-playlist.mjs#L51):
Example implementation:
```javascript
const data = [1, 2];
const playlistName = "Cool jams";
$.export(
"$summary",
`Successfully added ${data.length} ${
data.length == 1 ? "item" : "items"
} to "${playlistName}"`
);
```
##### `$.send`
`$.send` allows you to send data to [Pipedream destinations](/docs/workflows/data-management/destinations/).
**`$.send.http`**
[See the HTTP destination docs](/docs/workflows/data-management/destinations/http/#using-sendhttp-in-component-actions).
**`$.send.email`**
[See the Email destination docs](/docs/workflows/data-management/destinations/email/#using-sendemail-in-component-actions).
**`$.send.s3`**
[See the S3 destination docs](/docs/workflows/data-management/destinations/s3/#using-sends3-in-component-actions).
**`$.send.emit`**
[See the Emit destination docs](/docs/workflows/data-management/destinations/emit/#using-sendemit-in-component-actions).
**`$.send.sse`**
[See the SSE destination docs](/docs/workflows/data-management/destinations/sse/#using-sendsse-in-component-actions).
##### `$.context`
`$.context` exposes [the same properties as `steps.trigger.context`](/docs/workflows/building-workflows/triggers/#stepstriggercontext), and more. Action authors can use it to get context about the calling workflow and the execution.
All properties from [`steps.trigger.context`](/docs/workflows/building-workflows/triggers/#stepstriggercontext) are exposed, as well as:
| Property | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `deadline` | An epoch millisecond timestamp marking the point when the workflow is configured to [timeout](/docs/workflows/limits/#time-per-execution). |
| `JIT` | Stands for “just in time” (environment). `true` if the user is testing the step, `false` if the step is running in production. |
| `run` | An object containing metadata about the current run number. See [the docs on `$.flow.rerun`](/docs/workflows/building-workflows/triggers/#stepstriggercontext) for more detail. |
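As an illustrative sketch, an action might read `$.context` to skip work when the execution is close to its timeout (the 5-second threshold below is arbitrary):

```javascript
// Illustrative action that checks remaining execution time via $.context.
const action = {
  async run({ $ }) {
    const msRemaining = $.context.deadline - Date.now();
    // Bail out early rather than get killed mid-request (arbitrary threshold)
    if (msRemaining < 5000) {
      return $.flow.exit("Not enough time remaining");
    }
    return { msRemaining, isTest: $.context.JIT };
  },
};
```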
### Environment variables
[Environment variables](/docs/workflows/environment-variables/) are not accessible within sources or actions directly. Since components can be used by anyone, you cannot guarantee that a user will have a specific variable set in their environment.
In sources, you can use [`secret` props](/docs/components/contributing/api/#props) to reference sensitive data.
In actions, you’ll see a list of your environment variables in the object explorer when selecting a variable to pass to a step:
### Using npm packages
To use an npm package in a component, just `import` or `require` it. There is no `package.json` or `npm install` required.
```javascript
import axios from "axios";
```
When you deploy a component, Pipedream downloads the latest versions of these packages and bundles them with your deployment.
Some packages, such as those that rely on large dependencies or unbundled binaries, may not work on Pipedream. Please [reach out](https://pipedream.com/support) if you encounter a specific issue.
#### Referencing a specific version of a package
*This currently applies only to sources*.
If you’d like to use a *specific* version of a package in a source, you can add that version in the `require` string, for example: `require("axios@0.19.2")`. Moreover, you can pass the same version specifiers that npm and other tools allow to specify allowed [semantic version](https://semver.org/) upgrades. For example:
* To allow for future patch version upgrades, use `require("axios@~0.20.0")`
* To allow for patch and minor version upgrades, use `require("axios@^0.20.0")`
## Managing Components
Sources and actions are developed and deployed in different ways, given the different functions they serve in the product.
* [Managing Sources](/docs/components/contributing/api/#managing-sources)
* [Managing Actions](/docs/components/contributing/api/#managing-actions)
### Managing Sources
#### CLI - Development Mode
***
The easiest way to develop and test sources is with the `pd dev` command. `pd dev` deploys a local file, attaches it to a component, and automatically updates the component on each local save. To deploy a new component with `pd dev`, run:
```
pd dev <filename>
```
To attach to an existing deployed component, run:
```
pd dev --dc <deployed-component-id> <filename>
```
#### CLI - Deploy
##### From Local Code
To deploy a source via CLI, use the `pd deploy` command.
```
pd deploy <filename>
```
E.g.,
```
pd deploy my-source.js
```
##### From Pipedream Github Repo
You can explore the components available to deploy in [Pipedream’s GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components).
```
pd deploy <key>
```
E.g.,
```
pd deploy http-new-requests
```
##### From Any URL
```
pd deploy <url>
```
E.g.,
```bash
pd deploy https://raw.githubusercontent.com/PipedreamHQ/pipedream/master/components/http/sources/new-requests/new-requests.js
```
#### CLI - Update
View the [CLI command reference](/docs/cli/reference/#command-reference).
#### CLI - Delete
View the [CLI command reference](/docs/cli/reference/#command-reference).
#### UI - Deploy
You can find and deploy curated components at [https://pipedream.com/sources/new](https://pipedream.com/sources/new), or you can deploy code via the UI using the following URL patterns.
##### From Pipedream Github Repo
```
https://pipedream.com/sources?action=create&key=<key>
```
E.g.,
```
https://pipedream.com/sources?action=create&key=http-new-requests
```
##### From Any URL
```
https://pipedream.com/sources?action=create&url=<url-encoded-url>
```
E.g.,
```
https://pipedream.com/sources?action=create&url=https%3A%2F%2Fraw.githubusercontent.com%2FPipedreamHQ%2Fpipedream%2Fmaster%2Fcomponents%2Fhttp%2Fhttp.js
```
#### UI - Update
You can update the code and props for a component from the **Configuration** tab for a source in the Pipedream UI.
#### UI - Delete
You can delete a component via the UI at [https://pipedream.com/sources](https://pipedream.com/sources).
#### API
See the [REST API docs](/docs/rest-api/).
### Managing Actions
#### CLI - Publish
To publish an action, use the `pd publish` command.
```
pd publish FILENAME
```
E.g.,
```
pd publish my-action.js
```
## Source Lifecycle
### Lifecycle hooks
Pipedream sources support the following hooks. The code for these hooks is defined within the component. Learn more about the [component structure](/docs/components/contributing/api/#component-structure) and [hook usage](/docs/components/contributing/api/#hooks).
#### `deploy`
The `deploy()` hook is automatically invoked by Pipedream when a source is deployed. A common use case for the deploy hook is to create webhook subscriptions when the source is deployed, but you can run any Node.js code within the `deploy` hook. To learn more about the `deploy()` hook, refer to the [API documentation](/docs/components/contributing/api/#hooks).
#### `activate`
The `activate()` hook is automatically invoked by Pipedream when a source is deployed or updated. For example, this hook will be run when users update component props, so you can run code here that handles those changes. To learn more about defining a custom `activate()` hook, refer to the [API documentation](/docs/components/contributing/api/#hooks).
#### `deactivate`
The `deactivate()` hook is automatically invoked by Pipedream when a source is updated or deleted. A common use case for the deactivate hook is to automatically delete a webhook subscription when a component is deleted, but you can run any Node.js code within the `deactivate` hook. To learn more about the `deactivate()` hook, refer to the [API documentation](/docs/components/contributing/api/#hooks).
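Put together, a webhook-backed source typically wires these hooks up as sketched below. This is a minimal sketch: `createHook` and `deleteHook` are hypothetical app methods, not a specific app's API.

```javascript
// Minimal sketch of a webhook source's lifecycle hooks.
// `createHook`/`deleteHook` are hypothetical stand-ins for app methods.
const source = {
  props: {
    http: "$.interface.http",
    db: "$.service.db",
  },
  hooks: {
    async deploy() {
      // Runs once on source creation; often used to emit sample historical events
    },
    async activate() {
      // Runs on deploy and on every update: (re)create the webhook subscription
      const { id } = await this.createHook(this.http.endpoint);
      this.db.set("hookId", id);
    },
    async deactivate() {
      // Runs on update and delete: clean up the webhook subscription
      await this.deleteHook(this.db.get("hookId"));
    },
  },
};
export default source;
```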
### States
#### Saved Component
A saved component is non-instantiated component code that has previously been deployed to Pipedream. Each saved component has a unique saved component ID. Saved components cannot be invoked directly; they must first be deployed.
#### Deployed Component
A deployed component is an instance of a saved component that can be invoked. Deployed components can be active or inactive. On deploy, Pipedream instantiates a saved component and invokes the `activate()` hook.
#### Deleted Component
On delete, Pipedream invokes the `deactivate()` hook and then deletes the deployed component instance.
### Operations
#### Deploy
On deploy, Pipedream creates an instance of a saved component and invokes the optional `deploy()` and `activate()` hooks. A unique deployed component ID is generated for the component.
You can deploy a component via the CLI, UI or API.
#### Update
On update, Pipedream invokes the optional `deactivate()` hook, updates the code and props for a deployed component, and then invokes the optional `activate()` hook. The deployed component ID is not changed by an update operation.
#### Delete
On delete, Pipedream invokes the optional `deactivate()` hook and deletes the component instance.
## Source Event Lifecycle
The event lifecycle applies to deployed sources. Learn about the [source lifecycle](/docs/components/contributing/api/#source-lifecycle).
### Diagram
### Triggering Sources
Sources are triggered when you manually run them (e.g., via the **RUN NOW** button in the UI) or when one of their [interfaces](/docs/components/contributing/api/#interface-props) is triggered. Pipedream sources currently support **HTTP** and **Timer** interfaces.
When a source is triggered, the `run()` method of the component is executed. Standard output and errors are surfaced in the **Logs** tab.
### Emitting Events from Sources
Sources can emit events via `this.$emit()`. If you define a [dedupe strategy](/docs/components/contributing/api/#dedupe-strategies) for a source, Pipedream automatically dedupes the events you emit.
> **TIP:** if you want to use a dedupe strategy, be sure to pass an `id` for each event. Pipedream uses this value for deduping purposes.
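For example, a source might wrap its emits in a method like this (a sketch; the Tweet field names are illustrative):

```javascript
// Sketch: emitting an event with an `id` so the dedupe strategy can key on it.
const source = {
  dedupe: "unique",
  methods: {
    emitTweet(tweet) {
      this.$emit(tweet, {
        id: tweet.id_str,          // used by the dedupe strategy
        summary: tweet.full_text,  // shown in the event list UI
        ts: Date.parse(tweet.created_at),
      });
    },
  },
};
export default source;
```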
### Consuming Events from Sources
Pipedream makes it easy to consume events via:
* The UI
* Workflows
* APIs
* CLI
#### UI
When you navigate to your source [in the UI](https://pipedream.com/sources), you’ll be able to select and inspect the most recent 100 events (i.e., an event bin). For example, if you send requests to a simple HTTP source, you will be able to inspect the events (i.e., a request bin).
#### Workflows
[Trigger hosted Node.js workflows](/docs/workflows/building-workflows/) on each event. Integrate with 2,700+ apps including Google Sheets, Discord, Slack, AWS, and more!
#### API
Events can be retrieved using the [REST API](/docs/rest-api/) or [SSE stream tied to your component](/docs/workflows/data-management/destinations/sse/). This makes it easy to retrieve data processed by your component from another app. Typically, you’ll want to use the [REST API](/docs/rest-api/) to retrieve events in batch, and connect to the [SSE stream](/docs/workflows/data-management/destinations/sse/) to process them in real time.
#### CLI
Use the `pd events` command to retrieve the last 10 events via the CLI:
```
pd events -n 10
```
# Components Guidelines & Patterns
Source: https://pipedream.com/docs/components/contributing/guidelines
For a component to be accepted into the Pipedream registry, it should follow the guidelines below. These guidelines help ensure components are high quality and intuitive for both Pipedream users and component developers to use and extend.
Questions about best practices?
Join the discussion with fellow Pipedream component developers at the [#contribute channel](https://pipedream-users.slack.com/archives/C01E5KCTR16) in Slack or [on Discourse](https://pipedream.com/community/c/dev/11).
## Local Checks
When submitting pull requests, the new code will run through a series of automated checks like linting the code. If you want to run those checks locally for quicker feedback you must have [pnpm](https://pnpm.io/) installed and run the following commands at the root of the project:
1. To install all the project’s dependencies (only needed once):
```
pnpm install
```
2. To install all required dependencies:
```
npx pnpm install -r
```
3. To run the linter checks against your code (assuming that your changes are located at `components/foo` for example):
```bash
npx eslint components/foo
```
4. Optionally, you can automatically fix any linter issues by running the following command:
```bash
npx eslint --fix components/foo
```
Keep in mind that not all issues can be automatically fixed by the linter since they could alter the behaviour of the code.
## General
### Components Should Be ES Modules
The Node.js community has started publishing [ESM-only](https://flaviocopes.com/es-modules/) packages that do not work with [CommonJS modules](https://nodejs.org/docs/latest/api/modules.html#modules_modules_commonjs_modules). This means you must `import` the package. You can’t use `require`.
You also cannot mix ESM with CJS. This will **not** work:
```javascript
// ESM
import axios from "axios";
// CommonJS - this should be `export default`
module.exports = {
// ...
}
```
Therefore, all components should be written as ES modules:
```javascript
import axios from "axios";
export default {
//...
}
```
**You’ll need to use [the `.mjs` file extension](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules#aside_%E2%80%94_.mjs_versus_.js) for any components written as ES modules**.
You’ll notice that many of the existing components are written as CommonJS modules. Please fix these and submit a pull request as you refactor related code. For example, if you’re developing new Spotify actions, and you notice the existing event sources use CommonJS, change them to ESM:
1. Rename the file extension from `.js` to `.mjs` using `git mv` (e.g. `git mv source.js source.mjs`).
2. Change all `require` statements to `import`s.
3. Change instances of `module.exports` to `export default`.
### Component Scope
Create components to address specific use cases whenever possible. For example, when a user subscribes to a Github webhook to listen for “star” activity, events can be generated when users star or unstar a repository. The “New Star” source filters events for only new star activity so the user doesn’t have to.
There may be cases where it’s valuable to create a generic component that provides users with broad latitude (e.g., see the [custom webhook](https://github.com/PipedreamHQ/pipedream/blob/master/components/github/sources/custom-webhook-events) event source for GitHub). However, as a general heuristic, we found that tightly scoped components are easier for users to understand and use.
### Required Metadata
Registry [components](/docs/components/contributing/api/#component-structure) require a unique `key` and `version`, and a friendly `name` and `description`. Action components require a `type` field to be set to `action` (sources will require a type to be set in the future). Action components require the description to include a link to the relevant documentation in the following format: `[See the documentation](https://public-api.com)`
```javascript
export default {
key: "google_drive-new-shared-drive",
name: "New Shared Drive",
description: "Emits a new event any time a shared drive is created.",
version: "0.0.1",
};
```
### Component Key Pattern
When publishing components to the Pipedream registry, the `key` must be unique across registry components and should follow the pattern:
`app_name_slug`-`slugified-component-name`
**Source** keys should use past tense verbs that describe the event that occurred (e.g., `linear_app-issue-created-instant`). For **action** keys, use active verbs to describe the action that will occur, (e.g., `linear_app-create-issue`).
### Versioning
When you first publish a component to the registry, set its version to `0.0.1`.
Pipedream registry components try to follow [semantic versioning](https://semver.org/). From their site:
Given a version number `MAJOR.MINOR.PATCH`, increment the:
1. `MAJOR` version when you make incompatible API changes,
2. `MINOR` version when you add functionality in a backwards compatible manner, and
3. `PATCH` version when you make backwards compatible bug fixes.
When you’re developing actions locally, and you’ve incremented the version in your account multiple times, make sure to set it to the version it should be at in the registry prior to submitting your PR. For example, when you add an action to the registry, the version should be `0.0.1`. If the action was at version `0.1.0` and you’ve fixed a bug, change it to `0.1.1` when committing your final code.
If you update a file, you must increment the versions of all components that import or are affected by the updated file.
### Folder Structure
Registry components are organized by app in the `components` directory of the `pipedreamhq/pipedream` repo.
```
/components
/[app-name-slug]
/[app-name-slug].app.mjs
/actions
/[action-name-slug]
/[action-name-slug].mjs
/sources
/[source-name-slug]
/[source-name-slug].mjs
```
* The name of each app folder corresponds with the name slug for each app
* The app file should be in the root of the app folder (e.g., `/components/[app_slug]/[app_slug].app.mjs`)
* Components for each app are organized into `/sources` and `/actions` subfolders
* Each component should be placed in its own subfolder (with the name of the folder and the name of the `.mjs` file equivalent to the slugified component name). For example, the path for the “Search Mentions” source for Twitter is `/components/twitter/sources/search-mentions/search-mentions.mjs`.
* Aside from `app_slug`, words in folder and file names are separated by dashes (-) (i.e., in kebab case)
* Common files (e.g., `common.mjs`, `utils.mjs`) must be placed within a common folder: `/common/common.mjs`.
You can explore examples in the [components directory](https://github.com/PipedreamHQ/pipedream/tree/master/components).
#### Using APIs vs Client Libraries
If the app has a well-supported [Node.js client library](/docs/components/contributing/api/#using-npm-packages), feel free to use that instead of manually constructing API requests.
### `package.json`
Each app should have a `package.json` in its root folder. If one doesn’t exist, run `npm init` in the app’s root folder and customize the file using [this `package.json`](https://github.com/PipedreamHQ/pipedream/blob/55236b3aa993cbcb545e245803d8654c6358b0a2/components/stripe/package.json) as a template.
Each time you change the code for an app file, or change the dependencies for any app component, modify the package `version`.
Save any dependencies in the component app directory:
```bash
npm i --save package
npm i --save-dev package
```
#### Error-Handling and Input Validation
When you use the SDK of a popular API, the SDK might raise clear errors to the user. For example, if the user is asked to pass an email address, and that email address doesn’t validate, the library might raise that in the error message.
But other libraries will *not* raise clear errors. In these cases, you may need to `throw` your own custom error that wraps the error from the API / lib. [See the Airtable components](https://github.com/PipedreamHQ/pipedream/blob/9e4e400cda62335dfabfae384d9224e04a585beb/components/airtable/airtable.app.js#L70) for an example of custom error-handling and input validation.
In general, **imagine you are a user troubleshooting an issue. Is the error easy-to-understand? If not, `throw` a better error**.
### `README` files
New actions and sources should include `README.md` files within the same directory to describe how to use the action or source to users.
Here’s an example `README.md` structure:
```md
# Overview
# Example Use Cases
# Getting Started
# Troubleshooting
```
These sections will appear within the corresponding app, source and action page, along with any subheadings and content.
Here’s an example of an [app `README.md` within the `discord` component on the Pipedream registry](https://github.com/PipedreamHQ/pipedream/blob/master/components/discord/README.md). That same content is rendered within the [Pipedream integration page for the Discord app](https://pipedream.com/apps/discord).
You can add additional subheadings to each of the top level `Overview`, `Example Use Cases`, `Getting Started` and `Troubleshooting` headings:
```md
# Overview
## Limitations
Perhaps there are some limitations about the API that users should know about.
# Example Use Cases
1. Sync data in real time
2. Automate tedious actions
3. Introduce A.I. into the workflow
# Getting Started
## Generating an API Key
Instructions on how to generate an API key from within the service's dashboard.
# Troubleshooting
## Required OAuth Scopes
Please take note, you'll need to have sufficient privileges in order to complete
authentication.
```
Only these three top level headings `Overview`, `Getting Started` and `Troubleshooting` will appear within the corresponding App Marketplace page. All other headings will be ignored.
#### Pagination
When making API requests, handle pagination to ensure all data/events are processed. Moreover, if the underlying account generates a large amount of data, paginating through the entire collection of records might cause out-of-memory or timeout issues (or both!), so as a rule of thumb the pagination logic should:
* Be encapsulated as a [generator](https://mzl.la/37z6Sh6) so that the component can start processing records after the very first API call. As an example, you can check the [Microsoft OneDrive methods](https://github.com/PipedreamHQ/pipedream/tree/master/components/microsoft_onedrive/microsoft_onedrive.app.mjs) to list files.
* Accept a “next token/page/ID” whenever possible, so that API calls do not retrieve the entire collection of records during every execution but rather from a recent point in time. The `scanDeltaItems` generator method in the example above follows this pattern.
* Persist the last page number, token or record ID right after processing, so that following executions of the component process new records to minimize the amount of duplicate events, execution time and delayed events. Following the same Microsoft OneDrive example, check the `processEvent` method [in this component](https://github.com/PipedreamHQ/pipedream/tree/master/components/microsoft_onedrive/sources/new-file/new-file.mjs) for an example.
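The points above can be sketched as an async generator that accepts a cursor and yields records page by page. `listPage` below is a hypothetical stand-in for a single API call returning `{ items, nextCursor }`:

```javascript
// Sketch of cursor-based pagination as an async generator. `listPage`
// is a hypothetical function making one API call for one page.
async function* paginate(listPage, cursor = null) {
  do {
    const { items, nextCursor } = await listPage(cursor);
    for (const item of items) {
      yield item; // the caller can start processing after the very first call
    }
    cursor = nextCursor;
  } while (cursor);
}
export { paginate };
```

Callers can persist the last yielded cursor or record ID after processing, so the next execution resumes from a recent point rather than re-fetching the entire collection.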
#### Capturing Sensitive Data
If users are required to enter sensitive data, always use [secret](/docs/components/contributing/api/#general) props.
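For example, a string prop collecting an API key would set `secret: true` so the value is masked in the UI (a minimal sketch):

```javascript
// Sketch: a prop holding sensitive data marked as secret.
const props = {
  apiKey: {
    type: "string",
    label: "API Key",
    secret: true, // masks the value in the Pipedream UI
  },
};
export default props;
```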
### Promoting Reusability
#### App Files
App files contain components that declare the app and include prop definitions and methods that may be reused across components. App files should adhere to the following naming convention: `[app_name_slug].app.mjs`. If an app file does not exist for your app, please [reach out](https://pipedream.com/community/c/dev/11).
##### Prop Definitions
Whenever possible, reuse existing [prop definitions](/docs/components/contributing/api/#prop-definitions-example).
If a prop definition does not exist and you are adding an app-specific prop that may be reused in future components, add it as a prop definition to the app file. Prop definitions will also be surfaced for apps in the Pipedream marketplace.
##### Methods
Whenever possible, reuse [methods](/docs/components/contributing/api/#methods) defined in the app file. If you need to use an API for which a method is not defined and it may be used in future components, define a new method in the app file.
Use the [JSDoc](https://jsdoc.app/about-getting-started.html) pattern for lightweight documentation of each method in the app file. Provide a description and define `@param` and `@returns` block tags (with default values if applicable — e.g., `[foo=bar]`). This data will both help with reusability and will be surfaced in documentation for apps in the Pipedream marketplace. For example:
```javascript
export default {
  methods: {
    /**
     * Get the most recently liked Tweets for a user
     *
     * @param {Object} opts - An object representing the configuration options
     * for this method
     * @param {String} opts.screenName - The user's Twitter screen name (e.g.,
     * `pipedream`)
     * @param {Number} [opts.count=200] - The maximum number of Tweets to
     * return
     * @param {String} [opts.tweetMode=extended] - Use the default of
     * `extended` to return non-truncated Tweets
     * @returns {Array} Array of most recent Tweets liked by the specified user
     */
    async getLikedTweets(opts = {}) {
      const { screenName, count = 200, tweetMode = "extended" } = opts;
      const { data } = await this._makeRequest({
        url: "https://api.twitter.com/1.1/favorites/list.json",
        params: {
          screen_name: screenName,
          count,
          tweet_mode: tweetMode,
        },
      });
      return data;
    },
  },
};
```
#### Testing
Pipedream does not currently support unit tests to validate that changes to app files are backwards compatible with existing components. Therefore, if you make changes to an app file that may impact other sources, you must currently test potentially impacted components to confirm their functionality is not negatively affected. We expect to support a testing framework in the future.
### Common Files (Optional)
An optional pattern to improve reusability is to use a `common` module to abstract elements that are used across multiple components. The trade-off with this approach is that it increases complexity for end-users who have the option of customizing the code for components within Pipedream. When using this approach, the general pattern is:
* The `.app.mjs` module contains the logic related to making the actual API calls (e.g. calling `axios.get`, encapsulate the API URL and token, etc).
* The `common.mjs` module contains logic and structure that is not specific to any single component. Its structure is equivalent to a component, except that it doesn’t define attributes such as `version`, `dedupe`, `key`, `name`, etc (those are specific to each component). It defines the main logic/flow and relies on calling its methods (which might not be implemented by this component) to get any necessary data that it needs. In OOP terms, it would be the equivalent of a base abstract class.
* The component module of each action would inherit/extend the `common.mjs` component by setting additional attributes (e.g. `name`, `description`, `key`, etc) and potentially redefining any inherited methods.
* Common files (e.g., `common.mjs`, `utils.mjs`) must be placed within a common folder: `/common/common.mjs`.
See [Google Drive](https://github.com/PipedreamHQ/pipedream/tree/master/components/google_drive) for an example of this pattern. When using this approach, prop definitions should still be maintained in the app file.
Please note that the name `common` is just a convention and depending on each case it might make sense to name any common module differently. For example, the [AWS sources](https://github.com/PipedreamHQ/pipedream/tree/master/components/aws) contains a `common` directory instead of a `common.mjs` file, and the directory contains several modules that are shared between different event sources.
## Props
As a general rule of thumb, we should strive to incorporate all relevant options from a given API as props.
### Labels
Use [prop](/docs/components/contributing/api/#user-input-props) labels to customize the name of a prop or propDefinition (independent of the variable name in the code). The label should mirror the name users of an app are familiar with; i.e., it should mirror the equivalent label in the app’s UI. This applies to usage in labels, descriptions, etc. E.g., the Twitter API property for search keywords is “q”, but its label is set to “Search Term”.
### Descriptions
Include a description for [props](/docs/components/contributing/api/#user-input-props) to help the user understand what they need to do. Use Markdown as appropriate to improve the clarity of the description or instructions. When using Markdown:
* Enclose sample input values in backticks (`` ` ``)
* Refer to other props using **bold** by surrounding with double asterisks (\*\*)
* Use Markdown links with descriptive text rather than displaying a full URL.
* If the description isn’t self-explanatory, link to the API docs of the relevant method to further clarify how the prop works. When the value of the prop is complex (for example, an object with many properties), link to the section of the API docs that include details on this format. Users may pass values from previous steps using expressions, so they’ll need to know how to structure the input data.
Examples:
* The async option to select an Airtable Base is self-explanatory so includes no description:
* The “Search Term” prop for Twitter includes a description that helps the user understand what values they can enter, with specific values highlighted using backticks and links to external content.
### Optional vs Required Props
Use optional [props](/docs/components/contributing/api/#user-input-props) whenever possible to minimize the input fields required to use a component.
For example, the Twitter search mentions source only requires that a user connect their account and enter a search term. The remaining fields are optional for users who want to filter the results, but they do not require any action to activate the source:
### Default Values
Provide [default values](/docs/components/contributing/api/#user-input-props) whenever possible. NOTE: the best default for a source doesn’t always map to the default recommended by the app. For example, Twitter defaults search results to an algorithm that balances recency and popularity. However, the best default for the use case on Pipedream is recency.
### Async Options
Avoid asking users to enter ID values. Use [async options](/docs/components/contributing/api/#async-options-example) (with label/value definitions) so users can make selections from a drop down menu. For example, Todoist identifies projects by numeric IDs (e.g., 12345). The async option to select a project displays the name of the project as the label, so that’s the value the user sees when interacting with the source (e.g., “My Project”). The code referencing the selection receives the numeric ID (12345).
Async options should also support [pagination](/docs/components/contributing/api/#async-options-example) (so users can navigate across multiple pages of options for long lists). See [Hubspot](https://github.com/PipedreamHQ/pipedream/blob/a9b45d8be3b84504dc22bb2748d925f0d5c1541f/components/hubspot/hubspot.app.mjs#L136) for an example of offset-based pagination. See [Twitter](https://github.com/PipedreamHQ/pipedream/blob/d240752028e2a17f7cca1a512b40725566ea97bd/components/twitter/twitter.app.mjs#L200) for an example of cursor-based pagination.
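A paginated async-options prop can be sketched as follows; `listProjects` is a hypothetical app method that returns one page of records:

```javascript
// Sketch: an async-options prop with offset-based pagination.
// `listProjects` is a hypothetical app method returning one page of records.
const projectProp = {
  type: "string",
  label: "Project",
  async options({ page }) {
    const pageSize = 20;
    const projects = await this.listProjects({
      offset: page * pageSize, // `page` starts at 0 and increments as the user pages
      limit: pageSize,
    });
    return projects.map(({ id, name }) => ({
      label: name, // what the user sees in the drop down
      value: id,   // what the component code receives
    }));
  },
};
export default projectProp;
```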
### Dynamic Props
[Dynamic props](/docs/components/contributing/api/#dynamic-props) can improve the user experience for components. They let you render props in Pipedream dynamically, based on the value of other props, and can be used to collect more specific information that can make it easier to use the component. See the Google Sheets example in the linked component API docs.
### Interface & Service Props
In the interest of consistency, use the following naming patterns when defining [interface](/docs/components/contributing/api/#interface-props) and [service](/docs/components/contributing/api/#service-props) props in source components:
| Prop | **Recommended Prop Variable Name** |
| ------------------- | ---------------------------------- |
| `$.interface.http` | `http` |
| `$.interface.timer` | `timer` |
| `$.service.db` | `db` |
Use getters and setters when dealing with `$.service.db` to avoid potential typos and leverage encapsulation (e.g., see the [Search Mentions](https://github.com/PipedreamHQ/pipedream/blob/master/components/twitter/sources/search-mentions/search-mentions.mjs#L83-L88) event source for Twitter).
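A minimal sketch of that pattern, assuming a `db` prop bound to `$.service.db` and a hypothetical `lastProcessedId` key:

```javascript
// Sketch: encapsulating `$.service.db` access behind getters/setters
// so the key name lives in exactly one place.
const methods = {
  _getLastProcessedId() {
    return this.db.get("lastProcessedId");
  },
  _setLastProcessedId(id) {
    this.db.set("lastProcessedId", id);
  },
};
export default methods;
```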
## Source Guidelines
These guidelines are specific to [source](/docs/workflows/building-workflows/triggers/) development.
### Webhook vs Polling Sources
Create subscription webhook sources (vs polling sources) whenever possible. Webhook sources receive/emit events in real-time and typically use less compute time from the user’s account. Note: In some cases, it may be appropriate to support webhook and polling sources for the same event. For example, Calendly supports subscription webhooks for their premium users, but non-premium users are limited to the REST API. A webhook source can be created to emit new Calendly events for premium users, and a polling source can be created to support similar functionality for non-premium users.
### Source Name
Source name should be a singular, title-cased name and should start with “New” (unless emits are not limited to new items). Name should not be slugified and should not include the app name. NOTE: Pipedream does not currently distinguish real-time event sources for end-users automatically. The current pattern to identify a real-time event source is to include “(Instant)” in the source name. E.g., “New Search Mention” or “New Submission (Instant)”.
### Source Description
Enter a short description that provides more detail than the name alone. Typically starts with “Emit new”. E.g., “Emit new Tweets that match your search criteria”.
### Emit a Summary
Always [emit a summary](/docs/components/contributing/api/#emit) for each event. For example, the summary for each new Tweet emitted by the Search Mentions source is the content of the Tweet itself.
If no sensible summary can be identified, submit the event payload in string format as the summary.
### Deduping
Use built-in [deduping strategies](/docs/components/contributing/api/#dedupe-strategies) whenever possible (`unique`, `greatest`, `last`) vs developing custom deduping code. Develop custom deduping code if the existing strategies do not support the requirements for a source.
### Surfacing Test Events
In order to provide users with source events that they can immediately reference when building their workflow, we should implement two strategies whenever possible:
#### Emit Events on First Run
* Polling sources should always emit events on the first run (see the [Spotify: New Playlist](https://github.com/PipedreamHQ/pipedream/blob/master/components/spotify/sources/new-playlist/new-playlist.mjs) source as an example)
* Webhook-based sources should attempt to fetch existing events in the `deploy()` hook during source creation (see the [Jotform: New Submission](https://github.com/PipedreamHQ/pipedream/blob/master/components/jotform/sources/new-submission/new-submission.mjs) source)
*Note – make sure to emit the most recent events (considering pagination), and limit the count to no more than 50 events.*
#### Include a Static Sample Event
There are times where there may not be any historical events available (think about sources that emit less frequently, like “New Customer” or “New Order”, etc). In these cases, we should include a static sample event so users can see the event shape and reference it while building their workflow, even if it’s using fake data.
To achieve this, follow these steps:
1. Copy the JSON output from the source’s emit (what you get from `steps.trigger.event`) and **make sure to remove or scrub any sensitive or personal data** (you can also copy this from the app’s API docs)
2. Add a new file called `test-event.mjs` in the same directory as the component source and export the JSON event via `export default` ([example](https://github.com/PipedreamHQ/pipedream/blob/master/components/jotform/sources/new-submission/test-event.mjs))
3. In the source component code, make sure to import that file as `sampleEmit` ([example](https://github.com/PipedreamHQ/pipedream/blob/master/components/jotform/sources/new-submission/new-submission.mjs#L2))
4. And finally, export the `sampleEmit` object ([example](https://github.com/PipedreamHQ/pipedream/blob/master/components/jotform/sources/new-submission/new-submission.mjs#L96))
This will render a “Generate Test Event” button in the UI for users to emit that sample event:
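A `test-event.mjs` file is just a module exporting one scrubbed event object. A sketch, with fabricated placeholder fields:

```javascript
// test-event.mjs: a scrubbed, static sample of one emitted event.
// All field values below are fabricated placeholders, not real data.
const sampleEvent = {
  id: "evt_001",
  status: "new",
  created_at: "2024-01-01T00:00:00Z",
};
export default sampleEvent;
```

The source file then imports it (`import sampleEmit from "./test-event.mjs";`) and includes `sampleEmit` as a key on the exported component object.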
### Polling Sources
#### Default Timer Interval
As a general heuristic, set the default timer interval to 15 minutes. However, you may set a custom interval (greater or less than 15 minutes) if appropriate for the specific source. Users may also override the default value at any time.
For polling sources in the Pipedream registry, the default polling interval is set as a global config. Individual sources can access that default within the props definition:
```javascript
import { DEFAULT_POLLING_SOURCE_TIMER_INTERVAL } from "@pipedream/platform";
export default {
props: {
timer: {
type: "$.interface.timer",
default: {
intervalSeconds: DEFAULT_POLLING_SOURCE_TIMER_INTERVAL,
},
},
},
// rest of component...
}
```
#### Rate Limit Optimization
When building a polling source, cache the most recently processed ID or timestamp using `$.service.db` whenever the API accepts a `since_id` or “since timestamp” (or equivalent). Some apps (e.g., Github) do not count requests that do not return new results against a user’s API quota.
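A sketch of this caching pattern, assuming a hypothetical `listItems` app method that accepts a `sinceId` parameter:

```javascript
// Sketch: poll only for records newer than the last processed ID,
// then persist the new high-water mark. `listItems` is hypothetical.
const source = {
  methods: {
    async processNewItems() {
      const sinceId = this.db.get("sinceId");
      const items = await this.listItems({ sinceId });
      for (const item of items) {
        this.$emit(item, {
          id: item.id,
          summary: `New item ${item.id}`,
        });
      }
      if (items.length) {
        // Persist the last ID so the next run skips already-seen records
        this.db.set("sinceId", items[items.length - 1].id);
      }
    },
  },
};
export default source;
```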
If the service has a well-supported Node.js client library, it’ll often build in retries for issues like rate limits, so using the client lib (when available) should be preferred. In the absence of that, [Bottleneck](https://www.npmjs.com/package/bottleneck) can be useful for managing rate limits. 429s should be handled with exponential backoff (instead of just letting the error bubble up).
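Handling 429s with exponential backoff can be sketched as a small wrapper. This is not a specific library's API: `request` is any function that throws errors carrying a `status` property.

```javascript
// Sketch: retry a request with exponential backoff on HTTP 429 errors.
async function withBackoff(request, { retries = 3, baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await request();
    } catch (err) {
      // Only retry rate-limit errors, and only up to `retries` times
      if (err.status !== 429 || attempt >= retries) throw err;
      const delay = baseMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
export { withBackoff };
```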
### Webhook Sources
#### Hooks
[Hooks](/docs/components/contributing/api/#hooks) are methods that are automatically invoked by Pipedream at different stages of the [component lifecycle](/docs/components/contributing/api/#source-lifecycle). Webhook subscriptions are typically created when components are instantiated or activated via the `activate()` hook, and deleted when components are deactivated or deleted via the `deactivate()` hook.
#### Helper Methods
Whenever possible, create methods in the app file to manage [creating and deleting webhook subscriptions](/docs/components/contributing/api/#hooks).
| **Description** | **Method Name** |
| --------------------------------------- | --------------- |
| Method to create a webhook subscription | `createHook()` |
| Method to delete a webhook subscription | `deleteHook()` |
#### Storing the 3rd Party Webhook ID
After subscribing to a webhook, save the ID for the hook returned by the 3rd party service to the `$.service.db` for a source using the key `hookId`. This ID will be referenced when managing or deleting the webhook. Note: some apps may not return a unique ID for the registered webhook (e.g., Jotform).
#### Signature Validation
Subscription webhook components should always validate the incoming event signature if the source app supports it.
#### Shared Secrets
If the source app supports shared secrets, implement support transparent to the end user. Generate and use a GUID for the shared secret value, save it to a `$.service.db` key, and use the saved value to validate incoming events.
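A minimal sketch of this pattern; the header name and storage key below are illustrative, not a specific app's convention:

```javascript
import { randomUUID } from "node:crypto";

// Sketch: generate a shared secret when the hook is created, then
// validate it on each incoming event. The header name is a stand-in.
const methods = {
  generateSecret() {
    const secret = randomUUID();
    this.db.set("secret", secret);
    return secret; // pass this to the app when creating the webhook
  },
  isValidEvent(event) {
    return event.headers["x-webhook-secret"] === this.db.get("secret");
  },
};
export default methods;
```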
## Action Guidelines
### Action Name
Like a [source name](/docs/components/contributing/guidelines/#source-name), an action name should be a singular, title-cased name, should not be slugified, and should not include the app name.
As a general pattern, articles are not included in the action name. For example, instead of “Create a Post”, use “Create Post”.
#### Use `@pipedream/platform` axios for all HTTP Requests
By default, the standard `axios` package doesn’t return useful debugging data to the user when it `throw`s errors on HTTP 4XX and 5XX status codes. This makes it hard for the user to troubleshoot the issue.
Instead, [use `@pipedream/platform` axios](/docs/workflows/building-workflows/http/#platform-axios).
#### Return JavaScript Objects
When you `return` data from an action, it’s exposed as a [step export](/docs/workflows/#step-exports) for users to reference in future steps of their workflow. Return JavaScript objects in all cases, unless there’s a specific reason not to.
For example, some APIs return XML responses. If you return XML from the step, it’s harder for users to parse and reference in future steps. Convert the XML to a JavaScript object, and return that, instead.
### “List” Actions
#### Return an Array of Objects
To simplify using results from “list”/“search” actions in future steps of a workflow, return an array of the items being listed rather than an object with a nested array. [See this example for Airtable](https://github.com/PipedreamHQ/pipedream/blob/cb4b830d93e1495d8622b0c7dbd80cd3664e4eb3/components/airtable/actions/common-list.js#L48-L63).
#### Handle Pagination
For actions that return a list of items, the common use case is to retrieve all items. Handle pagination within the action to remove the complexity of needing to paginate from users. We may revisit this in the future and expose the pagination / offset params directly to the user.
In some cases, it may be appropriate to limit the number of API requests made or records returned in an action. For example, some Twitter actions optionally limit the number of API requests that are made per execution (using a [`maxRequests` prop](https://github.com/PipedreamHQ/pipedream/blob/cb4b830d93e1495d8622b0c7dbd80cd3664e4eb3/components/twitter/twitter.app.mjs#L52)) to avoid exceeding Twitter’s rate limits. [See the Airtable components](https://github.com/PipedreamHQ/pipedream/blob/e2bb7b7bea2fdf5869f18e84644f5dc61d9c22f0/components/airtable/airtable.app.js#L14) for an example of using a `maxRecords` prop to optionally limit the maximum number of records to return.
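As a sketch of paginating within an action, the helper below keeps requesting pages until the API stops returning a cursor or an optional cap is hit. `fetchPage` and its `{ items, nextCursor }` return shape are hypothetical:

```javascript
// Hypothetical pagination helper: `fetchPage(cursor)` returns
// { items, nextCursor }, where a falsy nextCursor means no more pages.
async function paginateAll(fetchPage, maxRecords = Infinity) {
  const results = [];
  let cursor;
  do {
    const { items, nextCursor } = await fetchPage(cursor);
    for (const item of items) {
      // Respect an optional cap, like the maxRecords prop described above
      if (results.length >= maxRecords) return results;
      results.push(item);
    }
    cursor = nextCursor;
  } while (cursor);
  return results;
}
```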
### Use `$.summary` to Summarize What Happened
[Describe what happened](/docs/components/contributing/api/#returning-data-from-steps) when an action succeeds by following these guidelines:
* Use plain language and provide helpful and contextually relevant information (especially the count of items)
* Whenever possible, use names and titles instead of IDs
* Basic structure: *Successfully \[action performed (like added, removed, updated)] “\[relevant destination]”*
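For example, an action might export its summary like this (the action name, data, and sheet name are illustrative; `$.export("$summary", ...)` is the documented way to set the summary):

```javascript
// Illustrative action sketch: the name, rows, and sheet name are hypothetical.
const action = {
  name: "Add Rows",
  async run({ $ }) {
    const rows = [{ id: 1 }, { id: 2 }]; // pretend these came from the API
    // Plain language, includes the item count, uses the sheet name (not its ID)
    $.export("$summary", `Successfully added ${rows.length} rows to "My Sheet"`);
    return rows;
  },
};
```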
### Don’t Export Data You Know Will Be Large
Browsers can crash when users load large exports (many MBs of data). When you know the content being returned is likely to be large (e.g., files), don’t export the full content. Consider writing the data to the `/tmp` directory and exporting a reference to the file.
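As a sketch (the helper name is hypothetical), write the content to `/tmp` and export only a reference:

```javascript
import { promises as fs } from "fs";
import { join } from "path";

// Hypothetical helper: write large content to /tmp and return a small
// reference object instead of exporting the content itself.
async function exportLargeContent(filename, contents) {
  const filePath = join("/tmp", filename);
  await fs.writeFile(filePath, contents);
  // Export only the path and size, not the (potentially huge) contents
  return { filePath, size: Buffer.byteLength(contents) };
}
```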
## Miscellaneous
* Use camelCase for all props, method names, and variables.
## Database Components
Pipedream supports a special category of apps called [“databases”](/docs/workflows/data-management/databases/), such as [MySQL](https://github.com/PipedreamHQ/pipedream/tree/master/components/mysql), [PostgreSQL](https://github.com/PipedreamHQ/pipedream/tree/master/components/postgresql), [Snowflake](https://github.com/PipedreamHQ/pipedream/tree/master/components/snowflake), etc. Components tied to these apps offer unique features *as long as* they comply with some requirements. The most important features are:
1. A built-in SQL editor that allows users to input a SQL query to be run against their DB
2. Proxied execution of commands against a DB, which guarantees that such requests are always being made from the same range of static IPs (see the [shared static IPs docs](/docs/workflows/data-management/databases/#send-requests-from-a-shared-static-ip))
When dealing with database components, the Pipedream runtime performs certain actions internally to make these features work. For this reason, these components must implement specific interfaces that allow the runtime to properly interact with their code. These interfaces are usually defined in the [`@pipedream/platform`](https://github.com/PipedreamHQ/pipedream/tree/master/platform) package.
### SQL Editor
This code editor is rendered specifically for props of type `sql`, and it uses (whenever possible) the underlying database’s schema information to provide auto-complete suggestions. Each database engine not only has its own SQL dialect, but also its own way of inspecting the schemas and table information it stores. For this reason, each app file must implement the logic that’s applicable to the target engine.
To support the schema retrieval, the app file must implement a method called `getSchema` that takes no parameters, and returns a data structure with a format like this:
```javascript
{
  users: { // The entries at the root correspond to table names
    metadata: {
      rowCount: 100,
    },
    schema: {
      id: { // The entries under `schema` correspond to column names
        columnDefault: null,
        dataType: "number",
        isNullable: false,
        tableSchema: "public",
      },
      email: {
        columnDefault: null,
        dataType: "varchar",
        isNullable: false,
        tableSchema: "public",
      },
      dateOfBirth: {
        columnDefault: null,
        dataType: "varchar",
        isNullable: true,
        tableSchema: "public",
      },
    },
  },
}
```
The [`lib/sql-prop.ts`](https://github.com/PipedreamHQ/pipedream/blob/master/platform/lib/sql-prop.ts) file in the `@pipedream/platform` package defines the schema format and the signature of the `getSchema` method. You can also check out existing examples in the [MySQL](https://github.com/PipedreamHQ/pipedream/blob/master/components/mysql/mysql.app.mjs), [PostgreSQL](https://github.com/PipedreamHQ/pipedream/blob/master/components/postgresql/postgresql.app.mjs) and [Snowflake](https://github.com/PipedreamHQ/pipedream/blob/master/components/snowflake/snowflake.app.mjs) components.
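One common approach is to query the engine’s `information_schema` (or equivalent) and fold the flat rows into the nested structure above. The helper below is an illustrative sketch of that folding step only; `rowCount` would typically come from separate count queries:

```javascript
// Hypothetical helper: folds flat information_schema-style rows into the
// nested { table: { metadata, schema: { column: {...} } } } shape.
function buildSchema(rows) {
  const schema = {};
  for (const row of rows) {
    const { tableName, columnName, ...columnProps } = row;
    // Create the table entry on first sight; rowCount comes from a
    // separate count query in a real implementation
    schema[tableName] ??= { metadata: { rowCount: 0 }, schema: {} };
    schema[tableName].schema[columnName] = columnProps;
  }
  return schema;
}
```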
### Shared Static IPs
When a user runs a SQL query against a database, the request is proxied through a separate internal service that’s guaranteed to always use the same range of static IPs when making outbound requests. This is important for users that have their databases protected behind a firewall, as they can whitelist these IPs to allow Pipedream components to access their databases.
To make this work, the app file must implement the interface defined in the [`lib/sql-proxy.ts`](https://github.com/PipedreamHQ/pipedream/blob/master/platform/lib/sql-proxy.ts) file in the `@pipedream/platform` package. This interface defines the following methods:
1. **`getClientConfiguration`**: This method takes no parameters and returns an object that can be fed directly to the database’s client library to initialize/establish a connection to the database. This guarantees that both the component and the proxy service use the same connection settings, **so make sure the component uses this method when initializing the client**.
2. **`executeQuery`**: This method takes a query object and returns the result of executing the query against the database. The Pipedream runtime will replace this method with a call to the proxy service, so **every component must make use of this method in order to support this feature**.
3. **`proxyAdapter`**: This method allows the proxy service to take the arguments passed to the `executeQuery` method and transform them into a “generic” query object that the service can then use. The expected format looks something like this:
```javascript
{
  query: "SELECT * FROM users WHERE id = ?",
  params: [42],
}
```
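A hedged sketch of two of these methods in an app file. The `$auth` field names and the argument shapes are illustrative (only `proxyAdapter`’s output shape is prescribed above); `executeQuery` would use the configuration from `getClientConfiguration` to run the query with the engine’s client library:

```javascript
// Illustrative sketch of sql-proxy interface methods; field names on
// $auth and the executeQuery argument shape are assumptions.
const methods = {
  getClientConfiguration() {
    // Return connection settings that can be fed directly to the DB client
    const { host, port, user, password, database } = this.$auth;
    return { host, port, user, password, database };
  },
  proxyAdapter(args = {}) {
    // Normalize executeQuery's arguments into the generic query object
    return { query: args.query, params: args.params ?? [] };
  },
};
```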
You can check out these example pull requests that allowed components to support this proxy feature:
* [#11201 (MySQL)](https://github.com/PipedreamHQ/pipedream/pull/11201)
* [#11202 (PostgreSQL)](https://github.com/PipedreamHQ/pipedream/pull/11202)
* [#12511 (Snowflake)](https://github.com/PipedreamHQ/pipedream/pull/12511)
# Quickstart: Source Development
Source: https://pipedream.com/docs/components/contributing/sources-quickstart
This document is intended for a technical audience (including those interested in learning how to author and edit components). After completing this quickstart, you will understand how to:
* Deploy components to Pipedream using the CLI
* Invoke a component manually, or on a schedule or HTTP request
* Maintain state across component executions
* Emit deduped events using the `unique` and `greatest` strategies
* Use Pipedream managed OAuth for an app
* Use npm packages in components
We recommend that you execute the examples in order — each one builds on the concepts and practices of earlier examples.
## Quickstart Examples
**Hello World! (\~10 minutes)**
* Deploy a `hello world!` component using the Pipedream CLI and invoke it manually
* Use `$.service.db` to maintain state across executions
* Use `$.interface.timer` to invoke a component on a schedule
* Use `$.interface.http` to invoke code on HTTP requests
**Emit new RSS items on a schedule (\~10 mins)**
* Use the `rss-parser` npm package to retrieve an RSS feed and emit each item
* Display a custom summary for each emitted item in the event list
* Use the `unique` deduping strategy so we only emit new items from the RSS feed
* Add a timer interface to run the component on a schedule
**Poll for new GitHub issues (\~10 mins)**
* Use Pipedream managed OAuth with GitHub’s API to retrieve issues for a repo
* Use the `greatest` deduping strategy to only emit new issues
## Prerequisites
**Step 1.** Create a free account at [https://pipedream.com](https://pipedream.com). Just sign in with your Google or GitHub account.
**Step 2.** [Download and install the Pipedream CLI](/docs/cli/install/).
**Step 3.** Once the CLI is installed, [link your Pipedream account to the CLI](/docs/cli/login/#existing-pipedream-account):
```
pd login
```
See the [CLI reference](/docs/cli/reference/) for detailed usage and examples beyond those covered below.
## CLI Development Mode
The examples in this guide use the `pd dev` command, which deploys your code in “development mode”: the CLI attaches to the deployed component and watches your local file for changes. When you save changes to your local file, your component is automatically updated on Pipedream (the alternative is to run `pd deploy` and then `pd update` for each change).
If your `pd dev` session is terminated and you need to re-attach to a deployed component, run the following command.
```
pd dev [--dc <deployed-component-id>] <file>
```
For example, if you’re building a new source at `components/sources/my-source.mjs`, then pass the fully qualified path to `pd dev`:
```
pd dev components/sources/my-source.mjs
```
If you need to update a deployed instance of a source, pass its ID to the `--dc` argument to update it with new source code:
```
pd dev --dc dc_123456 components/sources/my-source.mjs
```
## Hello World!
Here is a simple component that will emit an event with a payload of `{ message: "hello world!" }` on each execution.
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  async run() {
    this.$emit({ message: "hello world!" });
  },
};
```
To deploy and run it, save the code to a local `.js` file (e.g., `source.js`) and run the following CLI command:
```
pd dev source.js
```
The CLI will deploy your code in development mode: it attaches to the deployed component and watches your local file for changes. When you save changes to your local file, your component is automatically updated on Pipedream.
You should see the following output:
```
$ pd dev source.js
watch:add | source.js
Configuring props...
Deploying...
Attached to deployed component: https://pipedream.com/sources/dc_v3uXKz/configuration
Ready for updates!
```
Open the URL returned by the CLI (`https://pipedream.com/sources/dc_v3uXKz` in the sample output above) to view your source in Pipedream’s UI.
Then click **RUN NOW** to invoke your source. Your event will appear in real-time, and you can select it to inspect the emitted data.
### Maintain state across executions
Next, we’ll use Pipedream’s `db` service to track the number of times the component is invoked.
First, we’ll assign `$.service.db` to a prop so we can reference it in our code via `this`.
```javascript
props: {
  db: "$.service.db",
},
```
Then we’ll update the `run()` method to:
* Retrieve the value for the `count` key (using the `get()` method of `$.service.db`)
* Display the count in the event summary (event summaries are displayed in the event list next to the event time)
* Increment `count` and save the updated value to `$.service.db` using the `set()` method
```javascript
let count = this.db.get("count") || 1;
this.$emit(
  { message: "hello world!" },
  {
    summary: `Execution #${count}`,
  }
);
this.db.set("count", ++count);
```
Here’s the updated code:
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    db: "$.service.db",
  },
  async run() {
    let count = this.db.get("count") || 1;
    this.$emit(
      { message: "hello world!" },
      {
        summary: `Execution #${count}`,
      }
    );
    this.db.set("count", ++count);
  },
};
```
Save the changes to your local file. Your component on Pipedream should automatically update. Return to the Pipedream UI and press **RUN NOW**. You should see the execution count appear in the event list.
### Invoke your code on a schedule
Next, we’ll update our component so it runs on a schedule. To do that, we’ll use Pipedream’s `timer` interface and we’ll set the default execution interval to 15 minutes by adding the following code to props:
```javascript
timer: {
  type: "$.interface.timer",
  default: {
    intervalSeconds: 15 * 60,
  },
},
```
Here’s the updated code:
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    db: "$.service.db",
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: 15 * 60,
      },
    },
  },
  async run() {
    let count = this.db.get("count") || 1;
    this.$emit(
      { message: "hello world!" },
      {
        summary: `Execution #${count}`,
      }
    );
    this.db.set("count", ++count);
  },
};
```
Save the changes to your file (your component on Pipedream should automatically update), then return to the Pipedream UI and **reload the page**. You should now see the timer settings in the summary and a countdown to the next execution (you can still run your component manually). Your component will now run every 15 minutes.
**Note**: if you’d like to change the schedule of your deployed component, visit the **Configuration** tab in the Pipedream UI and change the schedule accordingly. Changing the value of `intervalSeconds` within the component’s code will not change the schedule of the running instance of the component. You can also set one value as the default `intervalSeconds` in the component’s code, but run
```
pd dev --prompt-all
```
to set a different schedule than the default specified in the code.
### Invoke your code on HTTP requests
Next, we’ll update our component to run on HTTP requests instead of a timer. To do that, we’ll just replace the `timer` interface with an `http` interface.
```javascript
http: {
  type: "$.interface.http",
  customResponse: true,
},
```
In addition, we’ll update the function signature to pass in the HTTP event so we can reference it in our code:
```javascript
async run(event) { }
```
Finally, let’s update the `run()` method to use `event` to both echo back the request body in the HTTP response and emit it as the event payload.
```javascript
this.http.respond({
  status: 200,
  body: event.body,
  headers: {
    "Content-Type": event.headers["Content-Type"],
  },
});
this.$emit(event.body, {
  summary: `Execution #${count}`,
});
```
Here’s the updated code:
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    db: "$.service.db",
    http: {
      type: "$.interface.http",
      customResponse: true,
    },
  },
  async run(event) {
    let count = this.db.get("count") || 1;
    this.http.respond({
      status: 200,
      body: event.body,
      headers: {
        "Content-Type": event.headers["Content-Type"],
      },
    });
    this.$emit(event.body, {
      summary: `Execution #${count}`,
    });
    this.db.set("count", ++count);
  },
};
```
Save the changes to your file and your component on Pipedream should automatically update.
Return to the Pipedream UI and refresh the page. Instead of the countdown timer, you will now see a unique URL generated by Pipedream. Copy the URL, then update and execute the curl command below to invoke your component. This command passes `{ message: "hello world!" }` as the request body; it will be emitted as the event (similar to the earlier examples) and echoed back in the HTTP response.
```sh
curl -d '{ "message": "hello world!" }' \
-H "Content-Type: application/json" \
"INSERT-YOUR-ENDPOINT-URL-HERE"
```
## Emit new RSS items on a schedule (\~10 mins)
Next, let’s cover some real-world examples starting with RSS. Continue editing the same file, but start with the following scaffolding for this example.
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  async run() {},
};
```
### Emit items in an RSS Feed
**Note:** The code for the examples below was adapted from the samples provided in the readme for the `rss-parser` package at [https://www.npmjs.com/package/rss-parser](https://www.npmjs.com/package/rss-parser). To use most npm packages on Pipedream, just `import` them — there is no `package.json` or `npm install` required.
To parse the RSS feed, we’ll use the `rss-parser` npm package.
```javascript
import Parser from "rss-parser";
let parser = new Parser();
```
Then, update the `run()` method to:
* Parse the feed at `https://lorem-rss.herokuapp.com/feed` (it’s important you use this feed — a new item is added every minute, so it will help us test deduplication)
* Loop through the array of returned RSS items and emit each one
```javascript
let feed = await parser.parseURL("https://lorem-rss.herokuapp.com/feed");
feed.items.forEach((item) => {
  this.$emit(item);
});
```
Here’s the updated code:
```javascript
import Parser from "rss-parser";
let parser = new Parser();

export default {
  name: "Source Demo",
  description: "This is a demo source",
  async run() {
    let feed = await parser.parseURL("https://lorem-rss.herokuapp.com/feed");
    feed.items.forEach((item) => {
      this.$emit(item);
    });
  },
};
```
Save the changes to your file, and then refresh your source in the Pipedream UI and click **RUN NOW**. You should see 10 events emitted. Each event corresponds with an RSS item. You can select each event to inspect it.
### Add an optional summary for each emitted event
Next, we’ll add a summary for each event. The summary is displayed in the event list and makes it easy to differentiate events at a glance in the list. For this example, let’s emit the `title` as the summary for each RSS item. To do that, we add a metadata object to `this.$emit()`.
Add summary to emit metadata…
```javascript
this.$emit(item, {
  summary: item.title,
});
```
Here’s the updated code:
```javascript
import Parser from "rss-parser";
let parser = new Parser();

export default {
  name: "Source Demo",
  description: "This is a demo source",
  async run() {
    let feed = await parser.parseURL("https://lorem-rss.herokuapp.com/feed");
    feed.items.forEach((item) => {
      this.$emit(item, {
        summary: item.title,
      });
    });
  },
};
```
Save the changes to your file and then click **RUN NOW** in the Pipedream UI. You should again see 10 events emitted, but this time each event should have a corresponding summary in the event list.
### Only emit new items in the RSS Feed
In the previous examples, we always emit any data that is returned. However, we are emitting duplicate events — e.g., if you invoke the component twice, you will see the same events emitted twice.
Pipedream provides built-in deduplication strategies to make it easy to emit new events only. For this example, we’ll use the `unique` strategy. This strategy caches the last 100 event `id` values, and only emits events with `id` values that are not contained in that cache.
To dedupe with the `unique` strategy, we need to first declare it:
```javascript
dedupe: "unique",
```
And then we need to pass an `id` value in the metadata for `this.$emit()` for Pipedream to use for deduping:
```javascript
this.$emit(item, {
  summary: item.title,
  id: item.guid,
});
```
Here’s the updated code:
```javascript
import Parser from "rss-parser";
let parser = new Parser();

export default {
  name: "Source Demo",
  description: "This is a demo source",
  dedupe: "unique",
  async run() {
    let feed = await parser.parseURL("https://lorem-rss.herokuapp.com/feed");
    feed.items.forEach((item) => {
      this.$emit(item, {
        summary: item.title,
        id: item.guid,
      });
    });
  },
};
```
Save the changes to your file and then click **RUN NOW** in the Pipedream UI. Similar to previous executions, you should see 10 events emitted. Now, run the component **again**. This time you should see at most **one** event emitted (an event is only emitted if a new item was added to the RSS feed since the last run). If no new events were emitted, wait \~1 minute and try again.
### Add a timer interface to invoke the component on a schedule
Now we’re ready to add a timer to our component to check for new RSS items automatically. Similar to the **hello world!** example above, we’ll add a timer prop, and we’ll set the default interval to 5 minutes:
```javascript
props: {
  timer: {
    type: "$.interface.timer",
    default: {
      intervalSeconds: 60 * 5,
    },
  },
},
```
Here’s the updated code:
```javascript
import Parser from "rss-parser";
let parser = new Parser();

export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: 60 * 5,
      },
    },
  },
  dedupe: "unique",
  async run() {
    let feed = await parser.parseURL("https://lorem-rss.herokuapp.com/feed");
    feed.items.forEach((item) => {
      this.$emit(item, {
        id: item.guid,
        summary: item.title,
      });
    });
  },
};
```
**Save** your component then return to the UI and reload the page. You should see the updated configuration on your summary card and a countdown to the next execution. You can still click **RUN NOW** to execute your source manually.
## Use managed auth to pull data from GitHub (\~10 mins)
In the last example, we were able to retrieve data to emit without any authentication. Now we’ll use Pipedream managed auth to retrieve and emit data from the GitHub API (which uses OAuth for authentication). Similar to the last example, continue editing the same file, but start with the following scaffolding:
```javascript
export default {
  name: "Source Demo",
  description: "This is a demo source",
  async run() {},
};
```
### Get issues for a repo
First, import `axios` so we can make a request to the GitHub REST API:
```javascript
import { axios } from "@pipedream/platform";
```
Next, let’s add an **app prop**, which will enable us to use Pipedream managed auth with this component. For this example, we’ll add GitHub:
```javascript
props: {
  github: {
    type: "app",
    app: "github",
  },
},
```
**IMPORTANT: The CLI will prompt you to select a connected account (or connect a new one) when you deploy (or update) this component.**
**Note:** The value for the `app` property is the name slug for the app in Pipedream. This is not currently discoverable, but it will be in the near future. For the time being, if you want to know how to reference an app, please reach out on our public Slack.
Finally, we’ll update the `run()` method to fetch issues from GitHub using `axios` and emit them. Notice that we’re passing the `oauth_access_token` in the authorization header by referencing the app prop: `this.github.$auth.oauth_access_token`. Again, it’s important that you stick with the `pddemo/demo` repo shown in the example below so you can test the next dedupe strategy.
```javascript
async run() {
  const data = await axios(this, {
    method: "get",
    headers: {
      Authorization: `Bearer ${this.github.$auth.oauth_access_token}`,
    },
    url: `https://api.github.com/repos/pddemo/demo/issues`,
  });
  data.forEach((issue) => {
    this.$emit(issue);
  });
}
```
Here’s the updated code.
```javascript
import { axios } from "@pipedream/platform";

export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    github: {
      type: "app",
      app: "github",
    },
  },
  async run() {
    const data = await axios(this, {
      method: "get",
      headers: {
        Authorization: `Bearer ${this.github.$auth.oauth_access_token}`,
      },
      url: `https://api.github.com/repos/pddemo/demo/issues`,
    });
    data.forEach((issue) => {
      this.$emit(issue);
    });
  },
};
```
Next, save your changes and go to the terminal where you ran `pd dev`, then **follow the CLI prompts to select a connected account for GitHub (or connect a new one)**. Then load the Pipedream UI, and click **RUN NOW**. Your component should emit 30 issues.
### Dedupe the events
In the RSS example, we deduped the emitted events using the `unique` strategy. The limitation of that strategy is that it only maintains uniqueness for the last 100 items. Since GitHub issues have increasing numeric IDs, we can use the `greatest` strategy to filter for new issues.
To use this strategy, we first have to declare it.
```javascript
dedupe: "greatest",
```
Then, we need to pass the numeric ID for each issue to `this.$emit()`. We can also add a summary and a timestamp (based on the date/time when the issue was created). Note: when you add a timestamp, Pipedream will automatically emit events from oldest to newest.
```javascript
data.forEach((issue) => {
  this.$emit(issue, {
    id: issue.id,
    summary: `ISSUE ${issue.number}: ${issue.title}`,
    ts: issue.created_at && +new Date(issue.created_at),
  });
});
```
Here is the updated code.
```javascript
import { axios } from "@pipedream/platform";

export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    github: {
      type: "app",
      app: "github",
    },
  },
  dedupe: "greatest",
  async run() {
    const data = await axios(this, {
      method: "get",
      headers: {
        Authorization: `Bearer ${this.github.$auth.oauth_access_token}`,
      },
      url: `https://api.github.com/repos/pddemo/demo/issues`,
    });
    data.forEach((issue) => {
      this.$emit(issue, {
        id: issue.id,
        summary: `ISSUE ${issue.number}: ${issue.title}`,
        ts: issue.created_at && +new Date(issue.created_at),
      });
    });
  },
};
```
Save, load the Pipedream UI, and click **RUN NOW**. You should see 30 issues emitted, now with summaries. When you click **RUN NOW** again, only new issues will be emitted (if there are any).
### Add a timer to run on a schedule
As the final step of this walk-through, we’ll update our component to check for new issues every 15 minutes. To do that, we’ll add a timer prop.
```javascript
timer: {
  type: "$.interface.timer",
  default: {
    intervalSeconds: 15 * 60,
  },
},
```
Here’s the updated code.
```javascript
import { axios } from "@pipedream/platform";

export default {
  name: "Source Demo",
  description: "This is a demo source",
  props: {
    github: {
      type: "app",
      app: "github",
    },
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: 15 * 60,
      },
    },
  },
  dedupe: "greatest",
  async run() {
    const data = await axios(this, {
      method: "get",
      headers: {
        Authorization: `Bearer ${this.github.$auth.oauth_access_token}`,
      },
      url: `https://api.github.com/repos/pddemo/demo/issues`,
    });
    data.forEach((issue) => {
      this.$emit(issue, {
        id: issue.id,
        summary: `ISSUE ${issue.number}: ${issue.title}`,
        ts: issue.created_at && +new Date(issue.created_at),
      });
    });
  },
};
```
Save and reload your source in the Pipedream UI. You should now see a countdown timer to the next execution.
## What’s Next?
You’re ready to start authoring and deploying components on Pipedream! You can also check out the [detailed component reference](/docs/components/contributing/api/) at any time!
If you have any questions or feedback, please join our [public Slack](https://pipedream.com/support).
# TypeScript Components
Source: https://pipedream.com/docs/components/contributing/typescript
🎉 Calling all TypeScript developers 🎉
TypeScript components are in **beta**, and we’re looking for feedback. Please see our list of [known issues](/docs/components/contributing/typescript/#known-issues), start writing TypeScript components, and give us feedback in [our community](https://pipedream.com/support).
During the beta, the `@pipedream/types` package and other TypeScript configuration in the `pipedream` repo is subject to change.
## Why TypeScript?
Most Pipedream components in [the registry](https://github.com/PipedreamHQ/pipedream) are written in Node.js. Writing components in TypeScript can reduce bugs and speed up development, with very few changes to your code.
If you haven’t written TypeScript, start with [this tutorial](https://www.typescriptlang.org/docs/handbook/typescript-from-scratch.html).
## Quickstart
If you’ve never developed Pipedream components before, [start here](/docs/components/contributing/).
### Developing TypeScript components in the `PipedreamHQ/pipedream` registry
1. [Fork and clone the repo](https://github.com/PipedreamHQ/pipedream).
2. Run `pnpm install` to install dependencies.
3. See [the RSS sources and actions](https://github.com/PipedreamHQ/pipedream/tree/master/components/rss) for example `tsconfig.json` configuration and TypeScript components. If the app you’re working with doesn’t yet have a `tsconfig.json` file, copy the file from the RSS component directory and modify accordingly.
4. In the RSS examples, you’ll see how we use the `defineApp`, `defineAction`, and `defineSource` methods from the `@pipedream/types` package. This lets us strictly-type `this` in apps and components. See [the TypeScript docs on `ThisType`](https://www.typescriptlang.org/docs/handbook/utility-types.html#thistypetype) for more detail on this pattern.
5. Before you publish components to Pipedream, you’ll need to compile your TypeScript to JavaScript. Run:
```bash
npm run build
```
The build process prints the compiled JS files to your console and writes them to the `dist` directory.
For example, if you compile a TypeScript file at `pipedream/components/rss/sources/new-item-in-feed/new-item-in-feed.ts`, the corresponding JS file will be produced at `pipedream/components/rss/dist/sources/new-item-in-feed/new-item-in-feed.js`.
6. Use [the Pipedream CLI](/docs/cli/reference/) to `pd publish` or `pd dev` the JavaScript components emitted by step 5, passing the full path to the compiled file.
```
pd publish pipedream/components/rss/dist/sources/new-item-in-feed/new-item-in-feed.js
```
**Don’t forget to use the `dist` directory.** If you attempt to deploy the TypeScript component directly, you’ll receive a 500 error from the publish endpoint. Instead, deploy the JavaScript file produced in the `dist` directory as described in step 5.
7. If it doesn’t exist in the app directory, add a `.gitignore` file that ignores the following files. Commit only `*.ts` files to Git, not compiled `*.js` files.
```
*.js
*.mjs
dist
```
### Developing TypeScript components in your own application
First, install the `@pipedream/types` package:
```bash
# npm
npm i --save-dev @pipedream/types
# yarn
yarn add --dev @pipedream/types
```
You’ll need a minimal configuration to compile TypeScript components in your application. In the Pipedream registry, we use this setup:
* The `tsconfig.json` in the root of the repo contains [references](https://www.typescriptlang.org/docs/handbook/project-references.html) to component app directories. For example, the root config provides a reference to the `components/rss` directory, which contains its own `tsconfig.json` file.
* `npm run build` compiles the TypeScript in all directories listed in `references`.
* The `tsconfig.json` in each component app directory contains the app-specific TypeScript configuration.
* The GitHub Actions in `.github/workflows` compile and publish our components.
See [the RSS sources and actions](https://github.com/PipedreamHQ/pipedream/tree/master/components/rss) for an example app configuration.
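As a rough sketch, an app-level `tsconfig.json` in this kind of setup might look like the following (the field values here are illustrative, not the registry’s actual config):

```json
{
  "extends": "../../tsconfig.json",
  "compilerOptions": {
    "outDir": "./dist",
    "rootDir": "."
  },
  "include": ["**/*.ts"]
}
```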
## Known issues
We welcome PRs in [the `PipedreamHQ/pipedream` repo](https://github.com/PipedreamHQ/pipedream), where we store all sources and actions, the `@pipedream/types` package, these docs, and other Pipedream code. Here are a few known issues during the **beta**:
* `this` is strictly-typed within `methods`, `run`, `hooks`, and everywhere you have access to `this` in [the component API](/docs/components/contributing/api/). But this typing can be improved. For example, we don’t yet map props to their appropriate TypeScript type (everything is typed with `any`).
* The compile -> publish lifecycle hasn’t been fully-automated when you’re developing in the `pipedream` repo. Currently, you have to run `npm run build` from the repo root, then use the `pd` CLI to publish components after compilation. It would be nice to run `tsc-watch` and have that compile and publish the new version of the component using the `--onSuccess` flag, publishing any sources or actions accordingly.
* We should add a linter (like `dtslint`) to all TypeScript components. Currently, `dtslint` is configured only for the `@pipedream/types` package.
## `@pipedream/types`
See the `types` directory of [the `PipedreamHQ/pipedream` repo](https://github.com/PipedreamHQ/pipedream) for Pipedream types. We welcome PRs!
## Giving feedback during the beta
We welcome any feedback, bug reports, or improvements. Please reach out or submit PRs [in our Slack, Discourse and GitHub communities](https://pipedream.com/support).
# Overview
Source: https://pipedream.com/docs/connect
export const PUBLIC_APPS = '2,700';
**Connect provides a developer toolkit that lets you add {PUBLIC_APPS}+ integrations to your app or AI agent.** You can build AI agents, chatbots, workflow builders, [and much more](/docs/connect/use-cases/), all in a few minutes. You have full, code-level control over how these integrations work in your app. You focus on your product; Pipedream handles the integrations.
## Demos
Check out [Pipedream MCP](/docs/connect/mcp/developers/) in our **[demo chat app](https://chat.pipedream.com)** or explore the [Connect SDK](/docs/connect/components/) in our **[playground](https://pipedream.com/connect/demo)**.
## Managed auth
* Handle authorization or accept API keys on behalf of your users, for any of Pipedream’s [{PUBLIC_APPS}+ APIs](https://pipedream.com/apps)
* Use the [Client SDK](https://github.com/PipedreamHQ/pipedream/tree/master/packages/sdk) or [Connect Link](/docs/connect/managed-auth/quickstart/#or-use-connect-link) to accept auth in minutes
* Ship new integrations quickly with Pipedream’s approved OAuth clients, or use your own
## Make requests on behalf of your users
* Use [Pipedream’s MCP server](/docs/connect/mcp/developers/) to provide your AI agent 10,000+ tools from {PUBLIC_APPS}+ APIs
* Add our [entire registry](https://github.com/PipedreamHQ/pipedream/tree/master/components) of [pre-built tools and triggers](/docs/connect/components/) from {PUBLIC_APPS}+ APIs to your SaaS app or workflow builder
* Send custom API requests with the [Connect proxy](/docs/connect/api-proxy/), without ever handling customer credentials yourself
* Develop and deploy complex multi-step [workflows](/docs/connect/workflows/) in our best-in-class [visual builder](/docs/workflows/building-workflows/)
## Use cases
Pipedream Connect lets you build any API integration into your product in minutes. Our customers build:
* **AI products**: Talk to any AI API or LLM, interacting with your users or running AI-driven asynchronous tasks
* **In-app messaging**: Send messages to Slack, Discord, Microsoft Teams, or any app directly from your product
* **CRM syncs**: Sync data between your app and Salesforce, HubSpot, or any CRM
* **Spreadsheet integrations**: Sync data between your app and Google Sheets, Airtable, or any spreadsheet
[and much more](/docs/connect/use-cases/).
## Getting started
Visit [the Connect quickstart](/docs/connect/quickstart/) to build your first integration.
## Plans and pricing
**Connect is entirely free to get started and use in development mode**. Once you’re ready to ship to production, check out our [pricing page](https://pipedream.com/pricing?plan=Connect) for the latest info.
## Security
Pipedream takes the security of our products seriously. See [details on Connect security](/docs/privacy-and-security/#pipedream-connect) and [our general security docs](/docs/privacy-and-security/). Please send us any questions or [suspected vulnerabilities](/docs/privacy-and-security/#reporting-a-vulnerability). You can also get a copy of our [SOC 2 Type 2 report](/docs/privacy-and-security/#soc-2), [sign HIPAA BAAs](/docs/privacy-and-security/#hipaa), and get information on other practices and controls.
### Storing user credentials, token refresh
All credentials and tokens are sent to Pipedream securely over HTTPS, and encrypted at rest. [See our security docs on credentials](/docs/privacy-and-security/#third-party-oauth-grants-api-keys-and-environment-variables) for more information.
### How to secure your Connect apps
* **Secure all secrets** — Secure your Pipedream OAuth client credentials, and especially any user credentials. Never expose secrets in your client-side code. Make all requests to Pipedream’s API and third-party APIs from your server-side code.
* **Use HTTPS** — Always use HTTPS to secure your connections between your client and server. Requests to Pipedream’s API will be automatically redirected to HTTPS.
* **Use secure, session-based auth between your client and server** — authorize all requests from your client to your server using a secure, session-based auth mechanism. Use well-known identity providers with services like [Clerk](https://clerk.com/), [Firebase](https://firebase.google.com/), or [Auth0](https://auth0.com/) to securely generate and validate authentication tokens. The same follows for Pipedream workflows — if you trigger Pipedream workflows from your client or server, validate all requests in the workflow before executing workflow code.
* **Secure your workflows** — See our [standard security practices](/docs/privacy-and-security/best-practices/) for recommendations on securing your Pipedream workflows.
## Glossary of terms
* **App**: GitHub, Notion, Slack, Google Sheets, and more. The app is the API you want your users to connect to in your product. See the [full list here](https://pipedream.com/apps).
* **Developer**: This is probably you, the Pipedream customer who’s developing an app and wants to use Connect to make API requests on behalf of your end users.
* **End User**: Your customer or user, whose data you want to access on their behalf. End users are identified via the `external_user_id` param in the Connect SDK and API.
* **Connected Account**: The account your end user connects. [Read more about connected accounts](/docs/apps/connected-accounts/).
* **OAuth Client**: This is admittedly a bit of an overloaded term and refers both to [custom OAuth clients](/docs/connect/managed-auth/oauth-clients/) you create in Pipedream to use when your end users authorize access to their account, as well as [OAuth clients to authenticate to Pipedream’s API](/docs/rest-api/auth/#oauth).
# Connect API Proxy
Source: https://pipedream.com/docs/connect/api-proxy
export const PUBLIC_APPS = '2,700';
Pipedream Connect provides a proxy API that you can use to send authenticated requests to any integrated API on behalf of your users. This is useful in a few scenarios:
1. You need code-level control and you want to use [Pipedream’s OAuth](/docs/connect/managed-auth/oauth-clients/#using-pipedream-oauth) instead of [your own OAuth client](/docs/connect/managed-auth/oauth-clients/#using-a-custom-oauth-client)
2. There isn’t a [pre-built tool](/docs/connect/components/) (action) for the app, or you need to modify the request
3. You want to avoid storing end user credentials in your app
## Overview
The Connect proxy enables you to interface with any integrated API and make authenticated requests on behalf of your users, without dealing with OAuth or storing end user credentials.
1. You send a request to the proxy and identify the end user you want to act on behalf of
2. The proxy sends the request to the upstream API and dynamically inserts your end user’s auth credentials
3. The proxy returns the response from the upstream API back to you
Before getting started with the Connect proxy, make sure you’ve already gone through the [managed auth quickstart](/docs/connect/managed-auth/quickstart/) for Pipedream Connect.
## Getting started
You can send requests to the Connect proxy using either the Pipedream SDK with a fetch-style interface or the Pipedream REST API.
### Prerequisites
* A [Pipedream OAuth client](/docs/connect/api-reference/authentication) to make authenticated requests to Pipedream’s API
* Connect [environment](/docs/connect/managed-auth/environments/) (ex, `production` or `development`)
* The [external user ID](/docs/connect/api-reference/introduction) for your end user (ex, `abc-123`)
* The [account ID](/docs/connect/api-reference/list-accounts) for your end user’s connected account (ex, `apn_1234567`)
Refer to the full Connect API [here](/docs/connect/api-reference/).
### Authenticating on behalf of your users
One of the core benefits of using the Connect API Proxy is not having to deal with storing or retrieving sensitive credentials for your end users.
Since Pipedream has {PUBLIC_APPS}+ integrated apps, we know how the upstream APIs are expecting to receive access tokens or API keys. When you send a request to the proxy, Pipedream will look up the corresponding connected account for the relevant user, and **automatically insert the authorization credentials in the appropriate header or URL param**.
### Sending requests
When making requests to the Connect Proxy, you must provide the following parameters:
#### Request parameters
**URL**
* The URL of the API you want to call (ex, `https://slack.com/api/chat.postMessage`)
* When using the REST API, this should be a URL-safe Base64 encoded string (ex, `aHR0cHM6Ly9zbGFjay5jb20vYXBpL2NoYXQucG9zdE1lc3NhZ2U`)
**For apps with dynamic domains** (like Zendesk, Zoho, GitLab), you should use relative paths in your proxy requests. Pipedream automatically resolves the correct domain based on the user’s connected account. See [When to use relative vs full URLs](/docs/connect/api-proxy/#when-to-use-relative-vs-full-urls) for details.
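When calling the REST API directly, you can produce the URL-safe Base64 string with a few lines of code. A minimal sketch (Node.js, using the built-in `Buffer`):

```typescript
// Encode a target URL as URL-safe Base64 (no padding) for REST proxy calls
function encodeProxyUrl(url: string): string {
  return Buffer.from(url, "utf-8")
    .toString("base64")
    .replace(/\+/g, "-") // '+' -> '-'
    .replace(/\//g, "_") // '/' -> '_'
    .replace(/=+$/, ""); // strip padding
}

console.log(encodeProxyUrl("https://slack.com/api/chat.postMessage"));
// → aHR0cHM6Ly9zbGFjay5jb20vYXBpL2NoYXQucG9zdE1lc3NhZ2U
```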
**HTTP method**
* Use the HTTP method required by the upstream API
**Body**
* Optionally include a body to send to the upstream API
**Headers**
* If using the REST API, include the `Authorization` header with your Pipedream OAuth access token (`Bearer {access_token}`)
* Headers that contain the prefix `x-pd-proxy` will get forwarded to the upstream API
#### Examples
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";

const client = new PipedreamClient({
  projectEnvironment: {development | production},
  projectId: {your_pipedream_project_id},
  clientId: {your_oauth_client_id},
  clientSecret: {your_oauth_client_secret}
});

const resp = await client.proxy.post(
  {
    external_user_id: "{external_user_id}", // The external user ID for your end user
    account_id: "{account_id}", // The account ID for your end user (ex, apn_1234567)
    url: "https://slack.com/api/chat.postMessage", // Include any query params you need; no need to Base64 encode the URL if using the SDK
    headers: {
      hello: "world!" // Include any headers you need to send to the upstream API
    },
    body: {
      text: "hello, world",
      channel: "C03NA8B4VA9"
    },
  }
)

// Parse and return the data you need
console.log(resp);
```
```sh cURL
# First, obtain an OAuth access token to authenticate to the Pipedream API
curl -X POST https://api.pipedream.com/v1/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "client_credentials",
    "client_id": "{your_oauth_client_id}",
    "client_secret": "{your_oauth_client_secret}"
  }'

# The response will include an access_token. Use it in the Authorization header below.

curl -X POST "https://api.pipedream.com/v1/connect/{your_project_id}/proxy/{url_safe_base64_encoded_url}?external_user_id={external_user_id}&account_id={apn_xxxxxxx}" \
  -H "Authorization: Bearer {access_token}" \
  -H "x-pd-environment: {development | production}" \
  -d '{
    "text": "hello, world",
    "channel": "C03NA8B4VA9"
  }'

# Parse and return the data you need
```
## Allowed domains
The vast majority of apps in Pipedream work with the Connect Proxy. To check whether an app is supported and which domains are allowed, use the SDK’s `client.apps.list()` method or the [`/apps` REST API](/docs/rest-api/#list-apps).
### Understanding the Connect object
Each app in the `/apps` API response includes a `connect` object:
```json
{
  "id": "app_1Z2hw1",
  "name_slug": "gitlab",
  "name": "GitLab",
  // ...other fields...
  "connect": {
    "proxy_enabled": true,
    "allowed_domains": ["gitlab.com"],
    "base_proxy_target_url": "https://{{custom_fields.base_api_url}}"
  }
}
```
| Field | Description |
| ----------------------- | ------------------------------------------------------------------------------------- |
| `proxy_enabled` | Whether the app supports the Connect Proxy |
| `allowed_domains` | Domains you can send requests to when using full URLs |
| `base_proxy_target_url` | The base URL for proxy requests, may contain placeholders for account-specific values |
### When to use relative vs full URLs
The format of `base_proxy_target_url` determines whether you should use a relative path or full URL:
#### Apps with static domains
If `base_proxy_target_url` is a standard URL (e.g., `https://slack.com`), you can use either:
* **Full URL**: `https://slack.com/api/chat.postMessage`
* **Relative path**: `/api/chat.postMessage`
#### Apps with dynamic domains
If `base_proxy_target_url` contains placeholders like `{{custom_fields.base_api_url}}`, you **must** use relative paths. This applies to:
* Self-hosted instances (GitLab)
* Apps with account-specific subdomains (Zendesk, Zoho)
For these apps, Pipedream resolves the actual domain from the user’s connected account at runtime.
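A simple way to apply this rule in code — a sketch, where the placeholder check is an assumption based on the `{{...}}` format shown above:

```typescript
// If base_proxy_target_url contains a {{...}} placeholder, the app uses
// dynamic domains, so proxy requests must use relative paths
function requiresRelativePath(baseProxyTargetUrl: string): boolean {
  return baseProxyTargetUrl.includes("{{");
}

console.log(requiresRelativePath("https://slack.com")); // false
console.log(requiresRelativePath("https://{{custom_fields.base_api_url}}")); // true
```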
### Examples
```typescript Slack (static domain)
// Both work
await client.proxy.post({
  external_user_id: "user-123",
  account_id: "apn_1234567",
  url: "https://slack.com/api/chat.postMessage",
  body: {...}
})

await client.proxy.post({
  account_id: "apn_1234567",
  external_user_id: "user-123",
  url: "/api/chat.postMessage",
  body: {...}
})
```
```typescript GitLab (dynamic domain)
// Must use relative path
await client.proxy.get({
  account_id: "apn_1234567",
  external_user_id: "user-123",
  url: "/api/v4/projects", // Pipedream resolves to the end user's GitLab instance
})
```
### Discovering app support programmatically
```typescript SDK
const apps = await client.apps.list()
// Filter for apps that support the proxy
const proxyEnabledApps = apps.filter(app => app.connect?.proxy_enabled)
```
```bash REST API
curl https://api.pipedream.com/v1/apps \
  -H "Authorization: Bearer {access_token}"
```
Filter the response for apps where `connect.proxy_enabled` is `true`.
## Restricted headers
The following headers are not allowed when making requests through the Connect API Proxy. Requests that include these headers will be rejected with a `400` error:
* `ACCEPT-ENCODING`
* `ACCESS-CONTROL-REQUEST-HEADERS`
* `ACCESS-CONTROL-REQUEST-METHOD`
* `CONNECTION`
* `CONTENT-LENGTH`
* `COOKIE`
* `DATE`
* `DNT`
* `EXPECT`
* `HOST`
* `KEEP-ALIVE`
* `ORIGIN`
* `PERMISSIONS-POLICY`
* `REFERER`
* `TE`
* `TRAILER`
* `TRANSFER-ENCODING`
* `UPGRADE`
* `VIA`
* `NOTE`
* Headers starting with `PROXY-`
* Headers starting with `SEC-`
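To fail fast before a request ever reaches the proxy, you can mirror this list client-side. A sketch (the entries are transcribed from the list above, checked case-insensitively):

```typescript
// Headers the Connect proxy rejects with a 400 error
const RESTRICTED_HEADERS = new Set([
  "accept-encoding", "access-control-request-headers", "access-control-request-method",
  "connection", "content-length", "cookie", "date", "dnt", "expect", "host",
  "keep-alive", "origin", "permissions-policy", "referer", "te", "trailer",
  "transfer-encoding", "upgrade", "via", "note",
]);

function isRestrictedHeader(name: string): boolean {
  const n = name.toLowerCase();
  return RESTRICTED_HEADERS.has(n) || n.startsWith("proxy-") || n.startsWith("sec-");
}

console.log(isRestrictedHeader("Cookie")); // true
console.log(isRestrictedHeader("Sec-Fetch-Mode")); // true
console.log(isRestrictedHeader("x-pd-proxy-foo")); // false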
## Limits
* The Connect Proxy limits API requests to 1,000 requests per 5 minutes per project. Requests that surpass this limit will receive a `429` response.
* The maximum timeout for a request is 30 seconds. Requests that take longer than 30 seconds will be terminated, and Pipedream will return a `504` error to the caller.
Please [let us know](https://pipedream.com/support) if you need higher limits.
# Authentication
Source: https://pipedream.com/docs/connect/api-reference/authentication
The Pipedream Connect API uses OAuth to authenticate requests.
**We use OAuth** for a few reasons:
✅ OAuth clients are tied to the Pipedream workspace, administered by workspace admins \
✅ Tokens are short-lived \
✅ OAuth clients support scopes, limiting access to specific operations \
✅ Limit access to specific Pipedream projects (coming soon!)
Since API requests are meant to be made server-side, and since grants are not tied to individual end users, all OAuth clients are [**Client Credentials** applications](https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/).
### Creating an OAuth client
1. Visit the [API settings](https://pipedream.com/settings/api) for your Pipedream workspace.
2. Click the **New OAuth Client** button.
3. Name your client and click **Create**.
4. Copy the client secret. **It will not be accessible again**. Click **Close**.
5. Copy the client ID from the list.
### How to get an access token
In the client credentials model, you exchange your OAuth client ID and secret for an access token. Then you use the access token to make API requests.
Pipedream offers [TypeScript](https://www.npmjs.com/package/@pipedream/sdk) and [Python](https://pypi.org/project/pipedream) SDKs, which abstract the process of generating and refreshing access tokens.
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";

const client = new PipedreamClient({
  clientId: "YOUR_CLIENT_ID",
  clientSecret: "YOUR_CLIENT_SECRET",
  projectEnvironment: "YOUR_PROJECT_ENVIRONMENT",
  projectId: "YOUR_PROJECT_ID"
});

await client.accounts.retrieve("account_id");
```
```python Python
from pipedream import Pipedream

pd = Pipedream(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    project_id="YOUR_PROJECT_ID",
    project_environment="YOUR_PROJECT_ENVIRONMENT"
)

await pd.accounts.retrieve("account_id")
```
You can also manage this token refresh process yourself, using the `/oauth/token` API endpoint:
```bash
curl https://api.pipedream.com/v1/oauth/token \
  -H 'Content-Type: application/json' \
  -d '{ "grant_type": "client_credentials", "client_id": "", "client_secret": "" }'
```
Access tokens expire after 1 hour. Store access tokens securely, server-side.
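If you manage tokens yourself, cache the token and refresh it before the one-hour expiry. A sketch of one approach, with the token fetcher injected so it works with any HTTP client (the 60-second refresh margin is an arbitrary choice, not a Pipedream requirement):

```typescript
interface TokenResponse {
  access_token: string;
  expires_in: number; // seconds until expiry, as returned by /oauth/token
}

// Caches a client-credentials token and refreshes it 60s before expiry
class TokenCache {
  private token?: string;
  private expiresAt = 0;

  constructor(private fetchToken: () => Promise<TokenResponse>) {}

  async get(now: number = Date.now()): Promise<string> {
    if (!this.token || now >= this.expiresAt) {
      const { access_token, expires_in } = await this.fetchToken();
      this.token = access_token;
      this.expiresAt = now + (expires_in - 60) * 1000;
    }
    return this.token;
  }
}
```

Pass a `fetchToken` that POSTs to `/v1/oauth/token` as in the curl example above; every API call then goes through `cache.get()` instead of requesting a new token.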
### Revoking a client secret
1. Visit your workspace’s [API settings](https://pipedream.com/settings/api).
2. Click the **…** button to the right of the OAuth client whose secret you want to revoke, then click **Rotate client secret**.
3. Copy the new client secret. **It will not be accessible again**.
### OAuth security
See [the OAuth section of the security docs](/docs/privacy-and-security/#pipedream-rest-api-security-oauth-clients) for more information on how Pipedream secures OAuth credentials.
# Configure action prop
Source: https://pipedream.com/docs/connect/api-reference/configure-action-prop
post /v1/connect/{project_id}/actions/configure
# Configure component prop
Source: https://pipedream.com/docs/connect/api-reference/configure-component-prop
post /v1/connect/{project_id}/components/configure
# Configure trigger prop
Source: https://pipedream.com/docs/connect/api-reference/configure-trigger-prop
post /v1/connect/{project_id}/triggers/configure
# Create Connect token
Source: https://pipedream.com/docs/connect/api-reference/create-connect-token
post /v1/connect/{project_id}/tokens
Create a Connect token for a user
Your app will initiate the account connection flow for your end users in your frontend. To securely scope connection to a specific end user, on your server, **you retrieve a short-lived token for that user**, and return that token to your frontend.
See [the Connect tokens docs](/docs/connect/managed-auth/tokens/) for more information.
When using the Connect API to make requests from a client environment like a browser, you must specify the **allowed origins** for the token. Otherwise, this field is optional. This is a list of URLs that are allowed to make requests with the token. For example:
```json
{
  "allowed_origins": ["https://myapp.com"]
}
```
# Delete account
Source: https://pipedream.com/docs/connect/api-reference/delete-account
delete /v1/connect/{project_id}/accounts/{account_id}
# Delete accounts by app
Source: https://pipedream.com/docs/connect/api-reference/delete-accounts-by-app
delete /v1/connect/{project_id}/apps/{app_id}/accounts
# Delete deployed trigger
Source: https://pipedream.com/docs/connect/api-reference/delete-deployed-trigger
delete /v1/connect/{project_id}/deployed-triggers/{trigger_id}
# Delete external user
Source: https://pipedream.com/docs/connect/api-reference/delete-external-user
delete /v1/connect/{project_id}/users/{external_user_id}
# Deploy trigger
Source: https://pipedream.com/docs/connect/api-reference/deploy-trigger
post /v1/connect/{project_id}/triggers/deploy
# Get deployed trigger
Source: https://pipedream.com/docs/connect/api-reference/get-deployed-trigger
get /v1/connect/{project_id}/deployed-triggers/{trigger_id}
# Introduction
Source: https://pipedream.com/docs/connect/api-reference/introduction
Pipedream provides a TypeScript SDK and a REST API to interact with the Connect service. You'll find examples using the SDK and the REST API in multiple languages below.
## REST API base URL
Pipedream Connect resources are scoped to [projects](/docs/projects/), so you'll need to pass [the project's ID](/docs/projects/#finding-your-projects-id) as a part of the base URL or when initializing the SDK client:
```curl HTTP (cURL)
https://api.pipedream.com/v1/connect/{project_id}
```
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";

const client = new PipedreamClient({
  clientId: "YOUR_CLIENT_ID",
  clientSecret: "YOUR_CLIENT_SECRET",
  projectEnvironment: "YOUR_PROJECT_ENVIRONMENT",
  projectId: "YOUR_PROJECT_ID"
});
```
```python Python
from pipedream import Pipedream

pd = Pipedream(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    project_id="YOUR_PROJECT_ID",
    project_environment="YOUR_PROJECT_ENVIRONMENT"
)
```
## SDK Installation
Pipedream provides SDKs in both [TypeScript](#typescript-sdk) and [Python](#python-sdk).
### TypeScript SDK
The TypeScript SDK can be used in both browser and server environments, enabling you to build full-stack applications with Pipedream Connect.
#### npm
To install [the SDK](https://www.npmjs.com/package/@pipedream/sdk) from npm, run:
```sh
npm i --save @pipedream/sdk
```
### Python SDK
The Python SDK is for server-side use. Install it to build backend services, APIs, and scripts that interact with Pipedream Connect as part of your full-stack application.
#### pip
To install [the SDK](https://pypi.org/project/pipedream/) from pip, run:
```sh
pip install pipedream
```
## SDK Usage
### TypeScript SDK (server)
Most of your interactions with the Connect API will likely happen on the server, to protect API requests and user credentials. You'll use the SDK in [your frontend](/docs/connect/api-reference/introduction) to let users connect accounts. Once connected, you'll use the SDK on the server to retrieve credentials, invoke workflows on their behalf, and more.
[Create a Pipedream OAuth client](/docs/connect/api-reference/authentication) and instantiate the SDK with your client ID and secret:
```js
import { PipedreamClient } from "@pipedream/sdk";

// These secrets should be saved securely and passed to your environment
const client = new PipedreamClient({
  clientId: "{oauth_client_id}",
  clientSecret: "{oauth_client_secret}",
  projectId: "{project_id}",
  projectEnvironment: "development", // or "production"
});

// The client provides methods to interact with the Connect API
```
### TypeScript SDK (browser)
You'll primarily use the browser SDK to let your users securely connect apps from your frontend. Here, you:
1. [Create a short-lived token on your server](/docs/connect/api-reference/create-connect-token)
2. Initiate auth with that token to securely connect an account for a specific user
Here's a Next.js example [from our quickstart](/docs/connect/managed-auth/quickstart/):
```js
import { PipedreamClient } from "@pipedream/sdk"
// Example from our Next.js app
import { serverConnectTokenCreate } from "./server"

const { token, expires_at } = await serverConnectTokenCreate({
  external_user_id: externalUserId // The end user's ID in your system
});

export default function Home() {
  const client = new PipedreamClient()

  function connectAccount() {
    client.connectAccount({
      app: appSlug, // Pass the name slug of the app you want to integrate
      oauthAppId: appId, // For OAuth apps, pass the OAuth app ID; omit this param to use Pipedream's OAuth client or for key-based apps
      token, // The token you received from your server above
      onSuccess: ({ id: accountId }) => {
        console.log(`Account successfully connected: ${accountId}`)
      }
    })
  }

  return (
    <button onClick={connectAccount}>Connect your account</button>
  )
}
```
## Environment
Most API endpoints require an [environment](/docs/connect/managed-auth/environments/) parameter. This lets you specify the environment (`production` or `development`) where resources will live in your project.
Always set the environment when you create the SDK client:
```javascript
import { PipedreamClient } from "@pipedream/sdk";

const client = new PipedreamClient({
  clientId: "your-oauth-client-id",
  clientSecret: "your-oauth-client-secret",
  projectId: "your-project-id",
  projectEnvironment: "development" // change to "production" for production environment
});
```
or pass the `X-PD-Environment` header in HTTP requests:
```sh
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/tokens \
  -H "Content-Type: application/json" \
  -H "X-PD-Environment: development" \
  -H "Authorization: Bearer {access_token}" \
  -d '{
    "external_user_id": "your-external-user-id"
  }'
```
## External users
When you use the Connect API, you'll pass an `external_user_id` parameter when initiating account connections and retrieving credentials. This is your user's ID, in your system — whatever you use to uniquely identify them.
Pipedream associates this ID with user accounts, so you can retrieve credentials for a specific user, and invoke workflows on their behalf.
External User IDs are limited to 250 characters.
Read more about [external users](/docs/connect/managed-auth/users/).
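Since a malformed ID only fails at request time, it’s worth validating up front. A trivial sketch enforcing the 250-character limit noted above:

```typescript
// External user IDs must be non-empty and at most 250 characters
function isValidExternalUserId(id: string): boolean {
  return id.length > 0 && id.length <= 250;
}

console.log(isValidExternalUserId("abc-123")); // true
console.log(isValidExternalUserId("x".repeat(251))); // false
```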
# List accounts
Source: https://pipedream.com/docs/connect/api-reference/list-accounts
get /v1/connect/{project_id}/accounts
To retrieve credentials for OAuth apps (Slack, Google Sheets, etc), **the connected account must be using [your own OAuth client](/docs/connect/managed-auth/oauth-clients/#using-a-custom-oauth-client)**.
Never return user credentials to the client
# List actions
Source: https://pipedream.com/docs/connect/api-reference/list-actions
get /v1/connect/{project_id}/actions
# List app categories
Source: https://pipedream.com/docs/connect/api-reference/list-app-categories
get /v1/connect/app_categories
# List apps
Source: https://pipedream.com/docs/connect/api-reference/list-apps
get /v1/connect/apps
# List components
Source: https://pipedream.com/docs/connect/api-reference/list-components
get /v1/connect/{project_id}/components
# List deployed triggers
Source: https://pipedream.com/docs/connect/api-reference/list-deployed-triggers
get /v1/connect/{project_id}/deployed-triggers
# List trigger events
Source: https://pipedream.com/docs/connect/api-reference/list-trigger-events
get /v1/connect/{project_id}/deployed-triggers/{trigger_id}/events
# List trigger webhooks
Source: https://pipedream.com/docs/connect/api-reference/list-trigger-webhooks
get /v1/connect/{project_id}/deployed-triggers/{trigger_id}/webhooks
# List trigger workflows
Source: https://pipedream.com/docs/connect/api-reference/list-trigger-workflows
get /v1/connect/{project_id}/deployed-triggers/{trigger_id}/pipelines
# List triggers
Source: https://pipedream.com/docs/connect/api-reference/list-triggers
get /v1/connect/{project_id}/triggers
# Rate Limits
Source: https://pipedream.com/docs/connect/api-reference/rate-limits
## Pipedream rate limits
The following rate limits apply to all Connect API endpoints:
| Endpoint | Limit | Scope |
| --------------------------- | ----------------------- | ----------------- |
| `POST /token` | 100 requests/minute | Per external user |
| `GET \| DELETE /accounts/*` | 200 requests/minute | Per project |
| `POST /components/*` | 100 requests/minute | Per project |
| `GET /components/*` | 3000 requests/5 minutes | Per project |
| `GET \| POST /proxy` | 1000 requests/5 minutes | Per project |
Component endpoints include `/components`, `/actions`, and `/triggers`.
Need higher limits? [Contact our support team](https://pipedream.com/support) to discuss your requirements.
## Custom rate limits
You can set custom rate limits for your users to control their usage of the Connect API and prevent runaway requests or abuse.
### Create rate limit
Create a rate limit by specifying a time window and maximum requests allowed within that window. The API returns a `rate_limit_token` that you include in subsequent Connect API requests.
```http
POST /rate_limits
```
#### Parameters
| Parameter | Type | Description |
| --------------------- | ------- | ----------------------------------- |
| `window_size_seconds` | integer | Time window duration in seconds |
| `requests_per_window` | integer | Maximum requests allowed per window |
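To make the two parameters concrete: `window_size_seconds: 10` with `requests_per_window: 1000` allows up to 1,000 requests in each 10-second window. A sketch of fixed-window semantics (an illustration of the parameters, not Pipedream’s actual enforcement code):

```typescript
// Fixed-window rate limiter: up to requestsPerWindow calls per window
class FixedWindowLimiter {
  private windowStart = 0;
  private count = 0;

  constructor(
    private windowSizeSeconds: number,
    private requestsPerWindow: number,
  ) {}

  allow(nowMs: number): boolean {
    if (nowMs - this.windowStart >= this.windowSizeSeconds * 1000) {
      this.windowStart = nowMs; // start a new window
      this.count = 0;
    }
    return ++this.count <= this.requestsPerWindow;
  }
}

const limiter = new FixedWindowLimiter(10, 1000);
console.log(limiter.allow(0)); // true
```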
#### Example request
```bash
# First, obtain an OAuth access token
curl -X POST https://api.pipedream.com/v1/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "client_credentials",
    "client_id": "{oauth_client_id}",
    "client_secret": "{oauth_client_secret}"
  }'

# Create the rate limit
curl -X POST https://api.pipedream.com/v1/connect/rate_limits \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access_token}" \
  -d '{
    "window_size_seconds": 10,
    "requests_per_window": 1000
  }'
```
#### Response
```json
{
  "token": "CiKpqRdTmNwLfhzSvYxBjAkMnVbXuQrWeZyHgPtJsDcEvFpLnE"
}
```
### Using rate limit tokens
Include the `rate_limit_token` in the `x-pd-rate-limit` header for all Connect API requests:
```bash
curl -X POST "https://api.pipedream.com/v1/connect/{project_id}/actions/run" \
  -H "Authorization: Bearer {access_token}" \
  -H "Content-Type: application/json" \
  -H "x-pd-rate-limit: {rate_limit_token}" \
  -d '{
    "external_user_id": "user123",
    "id": "gitlab-list-commits",
    "configured_props": {
      "gitlab": {
        "authProvisionId": "apn_kVh9AoD"
      },
      "projectId": 45672541,
      "refName": "main"
    }
  }'
```
# Reload action props
Source: https://pipedream.com/docs/connect/api-reference/reload-action-props
post /v1/connect/{project_id}/actions/props
# Reload component props
Source: https://pipedream.com/docs/connect/api-reference/reload-component-props
post /v1/connect/{project_id}/components/props
# Reload trigger props
Source: https://pipedream.com/docs/connect/api-reference/reload-trigger-props
post /v1/connect/{project_id}/triggers/props
# Retrieve account
Source: https://pipedream.com/docs/connect/api-reference/retrieve-account
get /v1/connect/{project_id}/accounts/{account_id}
To retrieve credentials for OAuth apps (Slack, Google Sheets, etc), **the connected account must be using [your own OAuth client](/docs/connect/managed-auth/oauth-clients/#using-a-custom-oauth-client)**.
Never return user credentials to the client
# Retrieve action
Source: https://pipedream.com/docs/connect/api-reference/retrieve-action
get /v1/connect/{project_id}/actions/{component_id}
# Retrieve app
Source: https://pipedream.com/docs/connect/api-reference/retrieve-app
get /v1/connect/apps/{app_id}
# Retrieve component
Source: https://pipedream.com/docs/connect/api-reference/retrieve-component
get /v1/connect/{project_id}/components/{component_id}
# Retrieve trigger
Source: https://pipedream.com/docs/connect/api-reference/retrieve-trigger
get /v1/connect/{project_id}/triggers/{component_id}
# Run action
Source: https://pipedream.com/docs/connect/api-reference/run-action
post /v1/connect/{project_id}/actions/run
# SDK Migration Guide
Source: https://pipedream.com/docs/connect/api-reference/sdk-migration
Safely migrate from Pipedream's TypeScript SDK v1.x to v2.x
**The Pipedream SDK v2.x is now available with significant improvements.** While v1.x continues to be supported, we recommend upgrading to v2.x for new projects to take advantage of improved TypeScript support, new features, and ongoing updates.
## Overview
The Pipedream SDK v2.x introduces significant improvements including:
* **Full TypeScript support** with comprehensive type definitions
* **Namespaced methods** for better organization (e.g., `client.actions.run()`)
* **Improved pagination support** for large data sets
* **`snake_case` parameter naming** for consistency with the REST API
## Migration Resources
For detailed migration instructions and examples, refer to the migration guide:
Step-by-step instructions, code examples, and migration strategies for upgrading from v1.x to v2.x
## Key Breaking Changes from v1.x to v2.x
The migration guide covers these major breaking changes:
* **Client initialization**: New `PipedreamClient` class
* **Method namespacing**: Actions, accounts, and other methods are now namespaced
* **Parameter naming**: CamelCase parameters converted to snake\_case
## Migration Support
If you encounter issues during migration:
1. Consult the [migration guide](https://github.com/PipedreamHQ/pipedream-sdk-typescript/blob/main/MIGRATE.md) for detailed examples
2. Check the [API / SDK documentation](/docs/connect/api-reference/introduction) for v2.x usage patterns
3. Join our [community](https://join.slack.com/t/pipedream-users/shared_invite/zt-36p4ige2d-9CejV713NlwvVFeyMJnQPw) for additional support
The migration guide includes options for incremental migration, allowing you to upgrade your codebase gradually rather than all at once.
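To make the parameter-naming change concrete: a v1-style params object like `{ externalUserId: "user-123" }` becomes `{ external_user_id: "user-123" }` in v2.x. The converter below is purely illustrative (the SDK does not do this for you; renaming your call sites directly is the recommended path):

```typescript
// Convert a camelCase key to snake_case, e.g. externalUserId -> external_user_id
function camelToSnake(key: string): string {
  return key.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toLowerCase();
}

// Convert a v1-style params object (top-level keys only) to v2 naming
function toV2Params(params: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(params).map(([k, v]) => [camelToSnake(k), v]),
  );
}

const v2 = toV2Params({ externalUserId: "user-123", configuredProps: {} });
```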
# Update deployed trigger
Source: https://pipedream.com/docs/connect/api-reference/update-deployed-trigger
put /v1/connect/{project_id}/deployed-triggers/{trigger_id}
Configured props you define when updating a deployed trigger will overwrite previously configured props.
# Update trigger webhooks
Source: https://pipedream.com/docs/connect/api-reference/update-trigger-webhooks
put /v1/connect/{project_id}/deployed-triggers/{trigger_id}/webhooks
# Update trigger workflows
Source: https://pipedream.com/docs/connect/api-reference/update-trigger-workflows
put /v1/connect/{project_id}/deployed-triggers/{trigger_id}/pipelines
# Pre-built tools for your app or agent
Source: https://pipedream.com/docs/connect/components
Pipedream Connect provides APIs to embed pre-built tools ([triggers and actions](/docs/components/contributing/)) directly in your application or AI agent, enabling access to 10,000+ built-in API operations. Enable [your end users](/docs/connect/api-reference/introduction) to configure, deploy, and invoke Pipedream triggers and actions for more than 2,700 APIs.
## What are triggers and actions?
In Pipedream, we call triggers and actions [components](/docs/components/contributing/), which are self-contained, executable units of code. Your end users configure the inputs, and the components produce a result that's exported as output. These components are developed and maintained by Pipedream and our community, and their source code is available in our [public GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components).
Check out the [SDK playground](https://pipedream.com/connect/demo) to see the SDK in action. You can also [run it locally and explore the code](https://github.com/PipedreamHQ/pipedream-connect-examples/tree/master/connect-react-demo).
## Implementation
You have two options for implementing Connect components in your application:
1. Use the `pipedream` [backend SDK](#backend-sdk) with your own frontend
2. Use the `connect-react` [frontend SDK](#connect-react-sdk) with Pipedream's pre-built frontend components
### Backend SDK
Use the Pipedream server SDK to handle all Connect operations on your backend, then build your own frontend UI. This gives you full control over the user experience.
```typescript Backend
import { PipedreamClient } from '@pipedream/sdk';
// Initialize with OAuth credentials
const client = new PipedreamClient({
projectEnvironment: "development",
clientId: process.env.PIPEDREAM_CLIENT_ID,
clientSecret: process.env.PIPEDREAM_CLIENT_SECRET,
projectId: process.env.PIPEDREAM_PROJECT_ID,
});
// Run a pre-built action on behalf of your user
const result = await client.actions.run({
id: "slack-send-message-to-channel",
external_user_id: "user-123",
configured_props: {
slack: {
authProvisionId: "apn_abc123", // User's connected account
},
channel: "#general",
text: "Hello from my app!",
},
});
```
```javascript Your frontend
// Your frontend makes requests to your backend API
async function sendSlackMessage() {
const response = await fetch('/api/pipedream/run-action', {
method: 'POST',
body: JSON.stringify({
actionId: 'slack-send-message-to-channel',
configuredProps: {
channel: '#general',
text: 'Hello from my app!',
}
})
});
return response.json();
}
```
### Connect React SDK
Use Pipedream's pre-built React components to abstract the complexity of building a frontend form interface.
```typescript Backend (actions.ts)
"use server";
import { PipedreamClient } from "@pipedream/sdk";
const client = new PipedreamClient({
projectEnvironment: "development", // or "production"
projectId: process.env.PIPEDREAM_PROJECT_ID,
clientId: process.env.PIPEDREAM_CLIENT_ID,
clientSecret: process.env.PIPEDREAM_CLIENT_SECRET,
});
// Generate connect tokens for frontend requests
export async function fetchToken(opts: { externalUserId: string }) {
return await client.tokens.create({
external_user_id: opts.externalUserId,
allowed_origins: JSON.parse(process.env.PIPEDREAM_ALLOWED_ORIGINS || "[]"),
});
}
```
```typescript Frontend (page.tsx)
import { PipedreamClient } from '@pipedream/sdk';
import { FrontendClientProvider, ComponentFormContainer } from '@pipedream/connect-react';
import { fetchToken } from './actions';
function App() {
const userId = "user-123";
const client = new PipedreamClient({
projectEnvironment: "development",
externalUserId: userId,
tokenCallback: fetchToken, // Backend function to generate connect tokens
});
  return (
    <FrontendClientProvider client={client}>
      {/* componentKey shown here is just an example */}
      <ComponentFormContainer componentKey="gitlab-list-commits" />
    </FrontendClientProvider>
  );
}
```
## Getting started
The following guide walks through using the **backend SDK or REST API** to manually discover apps, list components, and configure them. If you're using the **Connect React SDK**, the `ComponentFormContainer` handles these steps automatically.
Refer to the [Connect API docs](/docs/connect/api-reference/) for the full API reference. Below is a quickstart with a few specific examples.
You can skip steps 1 and 2 if you already know the component you want to use or if you'd prefer to [pass a natural language prompt to Pipedream's component search API](/docs/rest-api/#search-for-registry-components).
Before sending requests to the API, make sure to [authenticate using a Pipedream OAuth client](/docs/connect/api-reference/authentication):
```typescript TypeScript
// Initialize the Pipedream SDK client
import { PipedreamClient } from "@pipedream/sdk";
const client = new PipedreamClient({
projectEnvironment: "development", // or "production"
clientId: "{oauth_client_id}",
clientSecret: "{oauth_client_secret}",
projectId: "{your_project_id}"
});
```
```sh HTTP (cURL)
# Get an access token for the REST API
curl -X POST https://api.pipedream.com/v1/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "{your_oauth_client_id}",
"client_secret": "{your_oauth_client_secret}"
}'
```
All subsequent examples assume that you've either initialized the SDK client or have a valid access token.
To find the right trigger or action to configure and run, first find the app. In this example, we'll search for `gitlab`.
```typescript TypeScript
const apps = await client.apps.list({ q: "gitlab" });
// Parse and return the data you need
```
```sh HTTP (cURL)
curl "https://api.pipedream.com/v1/apps?q=gitlab" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access_token}"
# Parse and return the data you need
```
Here's the response:
```json
{
"page_info": {
"total_count": 1,
"count": 1,
"start_cursor": "Z2l0bGFi",
"end_cursor": "Z2l0bGFi"
},
"data": [
{
"id": "app_1Z2hw1",
"name_slug": "gitlab",
"name": "GitLab",
"auth_type": "oauth",
"description": "GitLab is the most comprehensive AI-powered DevSecOps Platform.",
"img_src": "https://assets.pipedream.net/s.v0/app_1Z2hw1/logo/orig",
"custom_fields_json": "[{\"name\":\"base_api_url\",\"label\":\"Base API URL\",\"description\":\"The Base API URL defaults to `gitlab.com`. If you are using self-hosted Gitlab, enter your base url here, e.g. `gitlab.example.com`\",\"default\":\"gitlab.com\",\"optional\":null}]",
"categories": [
"Developer Tools"
],
"featured_weight": 5000,
"connect": {
"proxy_enabled": true,
"allowed_domains": ["gitlab.com"],
"base_proxy_target_url": "https://{{custom_fields.base_api_url}}"
}
}
]
}
```
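From this response you'll typically keep the `name_slug`, which is the identifier you pass as the `app` parameter in the next step. A sketch of pulling that out (the interface below only types the fields used here):

```typescript
// Minimal shape of the fields we use from GET /v1/apps (abbreviated)
interface AppSearchResponse {
  data: { id: string; name_slug: string; name: string; auth_type: string }[];
}

// Pick the app whose name_slug matches the one you're looking for
function findApp(resp: AppSearchResponse, slug: string) {
  return resp.data.find((app) => app.name_slug === slug);
}

// Abbreviated version of the response shown above
const resp: AppSearchResponse = {
  data: [
    { id: "app_1Z2hw1", name_slug: "gitlab", name: "GitLab", auth_type: "oauth" },
  ],
};

const gitlab = findApp(resp, "gitlab");
```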
Once you've found the app you want to use, you can list its triggers and/or actions. Here we'll list the actions for GitLab, passing the `name_slug` (`gitlab`) as the `app` parameter.
```typescript TypeScript
const components = await client.actions.list({ app: "gitlab" });
// Parse and return the data you need
```
```bash HTTP (cURL)
curl "https://api.pipedream.com/v1/connect/{project_id}/actions?app=gitlab" \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}"
# Parse and return the data you need
```
Here's the response:
```json
{
"page_info": {
"total_count": 20,
"count": 10,
"start_cursor": "c2NfbHlpRThkQQ",
"end_cursor": "c2NfQjVpTkJBTA"
},
"data": [
{
"name": "List Commits",
"version": "0.0.2",
"key": "gitlab_developer_app-list-commits"
},
{
"name": "Update Issue",
"version": "0.0.1",
"key": "gitlab_developer_app-update-issue"
},
{
"name": "Update Epic",
"version": "0.0.1",
"key": "gitlab_developer_app-update-epic"
},
{
"name": "Search Issues",
"version": "0.0.1",
"key": "gitlab_developer_app-search-issues"
},
{
"name": "List Repo Branches",
"version": "0.0.1",
"key": "gitlab_developer_app-list-repo-branches"
},
{
"name": "Get Repo Branch",
"version": "0.0.1",
"key": "gitlab_developer_app-get-repo-branch"
},
{
"name": "Get Issue",
"version": "0.0.1",
"key": "gitlab_developer_app-get-issue"
},
{
"name": "Create issue",
"version": "0.0.1",
"key": "gitlab_developer_app-create-issue"
},
{
"name": "Create Epic",
"version": "0.0.1",
"key": "gitlab_developer_app-create-epic"
},
{
"name": "Create Branch",
"version": "0.0.1",
"key": "gitlab_developer_app-create-branch"
}
]
}
```
Now that you've found the components you want to use, you can proceed to configure and execute them:
* **For actions**: See the [Actions guide](/docs/connect/components/actions) to learn how to configure props, handle dynamic props, and invoke actions
* **For triggers**: See the [Triggers guide](/docs/connect/components/triggers) to learn how to deploy event sources and native triggers
* **Need help?**: Check the [Troubleshooting guide](/docs/connect/components/troubleshooting) for common issues and solutions
# Executing Actions
Source: https://pipedream.com/docs/connect/components/actions
Actions are components that perform a task by taking input parameters and producing a result. They can be invoked on-demand to interact with third-party APIs on behalf of your users.
## Retrieving a component's definition
To configure and run a component for your end users, you need to understand the component's definition. Now that you have the component's key from the previous step, you can retrieve its structure from the Pipedream API. See the [component structure](/docs/components/contributing/api/#component-structure) section in our docs for more details.
As an example, the following API call will return the structure of the **List Commits** action for GitLab:
```typescript TypeScript
const component = await client.components.retrieve("gitlab-list-commits");
// Parse and return the data you need
```
```bash HTTP (cURL)
curl "https://api.pipedream.com/v1/connect/{project_id}/components/gitlab-list-commits" \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}"
# Parse and return the data you need
```
The response will contain the component's structure, including its user-friendly name, version, and most importantly, the configuration options the component accepts (also known as [props](/docs/components/contributing/api/#props) or "properties"). Here's an example of the response for the component in the example above:
```json
{
"data": {
"name": "List Commits",
"version": "0.0.3",
"key": "gitlab-list-commits",
"configurable_props": [
{
"name": "gitlab",
"type": "app",
"app": "gitlab"
},
{
"name": "projectId",
"type": "integer",
"label": "Project ID",
"description": "The project ID, as displayed in the main project page",
"remoteOptions": true
},
{
"name": "refName",
"type": "string",
"label": "Branch Name",
"description": "The name of the branch",
"remoteOptions": true,
"optional": true
},
{
"name": "max",
"type": "integer",
"label": "Max Results",
"description": "Max number of results to return. Default value is `100`",
"optional": true,
"default": 100
}
]
}
}
```
Using this information, you can now drive the configuration of the component for your end users, as described in the next section.
## Configuring action props
Component execution on behalf of your end users requires a few preliminary steps, focused on getting the right input parameters (aka [props](/docs/workflows/building-workflows/using-props/)) to the component.
Configuring each prop for a component often involves an API call to retrieve the possible values, unless the values that a prop can take are static or free-form. The endpoint is accessible at:
```bash
POST /v1/connect/{project_id}/components/configure
```
Typically, the options for a prop are linked to a specific user's account. Each of these props implements an `options` method that retrieves the necessary options from the third-party API, formats them, and returns them in the response for the end user to select, such as listing Slack channels or Google Sheets spreadsheets.
The payload for the configuration API call must contain the following fields:
1. `external_user_id`: the ID of your user on your end
2. `id`: the component's unique ID (aka **key**)
3. `prop_name`: the name of the prop you want to configure
4. `configured_props`: an object containing the props that have already been configured. The initial configuration call must contain the ID of the account (aka `authProvisionId`) that your user has connected to the target app (see [this section](/docs/connect/managed-auth/quickstart/) for more details on how to create these accounts).
We'll use the [**List Commits** component for GitLab](https://github.com/PipedreamHQ/pipedream/blob/master/components/gitlab/actions/list-commits/list-commits.mjs#L4) as an example, illustrating a call that retrieves the options for the `projectId` prop of that component:
```typescript TypeScript
const resp = await client.components.configureProps({
external_user_id: "abc-123",
id: "gitlab-list-commits",
prop_name: "projectId",
configured_props: {
gitlab: {
authProvisionId: "apn_kVh9AoD",
}
}
});
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/components/configure \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "gitlab-list-commits",
"prop_name": "projectId",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
}
}
}'
# Parse and return the data you need
```
The response contains the possible values (and their human-readable labels when applicable) for the prop, as well as any possible errors that might have occurred. The response for the request above would look like this:
```json
{
"observations": [],
"context": null,
"options": [
{
"label": "jverce/foo-massive-10231-1",
"value": 45672541
},
{
"label": "jverce/foo-massive-10231",
"value": 45672514
},
{
"label": "jverce/foo-massive-14999-2",
"value": 45672407
},
{
"label": "jverce/foo-massive-14999-1",
"value": 45672382
},
{
"label": "jverce/foo-massive-14999",
"value": 45672215
},
{
"label": "jverce/gitlab-development-kit",
"value": 21220953
},
{
"label": "jverce/gitlab",
"value": 21208123
}
],
"errors": [],
"timings": {
"api_to_sidekiq": 1734043172355.1042,
"sidekiq_received": 1734043172357.867,
"sidekiq_to_lambda": 1734043172363.6812,
"sidekiq_done": 1734043173461.6406,
"lambda_configure_prop_called": 1734043172462,
"lambda_done": 1734043173455
},
"stringOptions": null
}
```
Fields inside `configured_props` are written in camel case because they refer to the names of props as they appear in the component's code; they are not attributes that the API itself consumes.
You configure props one-by-one, making a call to the component configuration API for each new prop. Subsequent prop configuration calls will be identical to the one above:
1. Add the prop you currently want to configure as the `prop_name`
2. Include the names and values of all previously-configured props in the `configured_props` object. Keep this object in your app's local state, add a prop once you or the end user selects a value, and pass it to the `configured_props` API param.
For example, to retrieve the configuration options for the `refName` prop:
```json
{
"async_handle": "IyvFeE5oNpYd",
"external_user_id": "demo-34c13d13-a31e-4a3d-8b63-0ac954671095",
"id": "gitlab-list-commits",
"prop_name": "refName",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_oOhaBlD"
},
"projectId": 21208123
}
}
```
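The incremental flow above amounts to keeping a `configured_props` object in local state and layering in each value as it's selected. A minimal sketch, using the prop names and values from the example (the helper itself is illustrative, not part of the SDK):

```typescript
// Accumulated prop values, keyed by prop name as it appears in the component
type ConfiguredProps = Record<string, unknown>;

// Return a new object so UI state updates stay immutable
function addConfiguredProp(
  props: ConfiguredProps,
  name: string,
  value: unknown,
): ConfiguredProps {
  return { ...props, [name]: value };
}

// Start with the connected account, then add each selection in turn
let configuredProps: ConfiguredProps = {
  gitlab: { authProvisionId: "apn_oOhaBlD" },
};
configuredProps = addConfiguredProp(configuredProps, "projectId", 21208123);
// configuredProps now matches the configured_props payload shown above
```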
## Configuring dynamic props
The set of props that a component can accept might not be static, and may change depending on the values of prior props. Props that behave this way are called [dynamic props](/docs/components/contributing/api/#dynamic-props), and they need to be configured in a different way. Props that are dynamic will have a `reloadProps` attribute set to `true` in the component's definition.
After configuring a dynamic prop, the set of subsequent props must be recomputed (or reloaded), which is possible using the following API call:
```sh
POST /v1/connect/{project_id}/components/props
```
The payload is similar to the one used for the configuration API, but it excludes the `prop_name` field since the goal of this call is to reload and retrieve the new set of props, not to configure a specific one.
Using the [Add Single Row action for Google Sheets](https://pipedream.com/apps/google-sheets/actions/add-single-row) as an example, the request payload would look like this:
```json
{
"async_handle": "PL41Yf3PuX61",
"external_user_id": "demo-25092fa8-86c0-4d46-86c9-9dc9bde3b964",
"id": "google_sheets-add-single-row",
"configured_props": {
"googleSheets": {
"authProvisionId": "apn_V1hMoE7"
},
"sheetId": "1BfWjFF2dTW_YDiLISj5N9nKCUErShgugPS434liyytg"
}
}
```
In this case, the `sheetId` prop is dynamic, and so after configuring it, the set of props must be reloaded. The response will contain the new set of props and their definition, similar to when the [component information was first retrieved](/docs/connect/components/#retrieving-a-components-definition). The response will also contain an ID that can be used to reference the new set of props in subsequent configuration calls. If this ID is not provided, the set of props will be based on the definition of the component that was retrieved initially.
To illustrate, the response for the request above would look like this:
```json
{
"observations": [],
"errors": [],
"dynamicProps": {
"id": "dyp_6xUyVgQ",
"configurableProps": [
{
"name": "googleSheets",
"type": "app",
"app": "google_sheets"
},
{
"name": "drive",
"type": "string",
"label": "Drive",
"description": "Defaults to `My Drive`. To select a [Shared Drive](https://support.google.com/a/users/answer/9310351) instead, select it from this list.",
"optional": true,
"default": "My Drive",
"remoteOptions": true
},
{
"name": "sheetId",
"type": "string",
"label": "Spreadsheet",
"description": "The Spreadsheet ID",
"useQuery": true,
"remoteOptions": true,
"reloadProps": true
},
{
"name": "worksheetId",
"type": "string[]",
"label": "Worksheet(s)",
"description": "Select a worksheet or enter a custom expression. When referencing a spreadsheet dynamically, you must provide a custom expression for the worksheet.",
"remoteOptions": true,
"reloadProps": true
},
{
"name": "hasHeaders",
"type": "boolean",
"label": "Does the first row of the sheet have headers?",
"description": "If the first row of your document has headers, we'll retrieve them to make it easy to enter the value for each column. Note: When using a dynamic reference for the worksheet ID (e.g. `{{steps.foo.$return_value}}`), this setting is ignored.",
"reloadProps": true
},
{
"name": "myColumnData",
"type": "string[]",
"label": "Values",
"description": "Provide a value for each cell of the row. Google Sheets accepts strings, numbers and boolean values for each cell. To set a cell to an empty value, pass an empty string."
}
]
}
}
```
## Invoking an action
At the end of the configuration process for an action, you'll end up with a payload that you can use to invoke the action. The payload is similar to the one used for configuring a prop, with the exception of the `prop_name` attribute (because we're not configuring any props at this point):
```json
{
"async_handle": "xFfBakdTGTkI",
"external_user_id": "abc-123",
"id": "gitlab-list-commits",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
},
"projectId": 45672541,
"refName": "main"
}
}
```
To run the action with this configuration, simply send it as the request payload to the following endpoint:
```typescript TypeScript
const resp = await client.actions.run({
external_user_id: "abc-123",
id: "gitlab-list-commits",
configured_props: {
gitlab: {
authProvisionId: "apn_kVh9AoD",
},
projectId: 45672541,
refName: "main"
}
});
// Parse and return the data you need
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/actions/run \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "gitlab-list-commits",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
},
"projectId": 45672541,
}
}'
# Parse and return the data you need
```
The output of executing the action will be a JSON object containing the following fields:
1. `exports`: all the named exports produced by the action, like when calling [`$.export` in a Node.js](/docs/workflows/building-workflows/code/nodejs/#using-export) component.
2. `os`: a list of observations produced by the action (e.g. logs, errors, etc).
3. `ret`: the return value of the action, if any.
4. When using [File Stash](/docs/connect/components/files/) to sync local files, the response will also include a `stash` property with file information.
The following (abbreviated) example shows the output of running the action above:
```json
{
"exports": {
"$summary": "Retrieved 1 commit"
},
"os": [],
"ret": [
{
"id": "387262aea5d4a6920ac76c1e202bc9fd0841fea5",
"short_id": "387262ae",
"created_at": "2023-05-03T03:03:25.000+00:00",
"parent_ids": [],
"title": "Initial commit",
"message": "Initial commit",
"author_name": "Jay Vercellone",
"author_email": "nope@pipedream.com",
"authored_date": "2023-05-03T03:03:25.000+00:00",
"committer_name": "Jay Vercellone",
"committer_email": "nope@pipedream.com",
"committed_date": "2023-05-03T03:03:25.000+00:00",
"trailers": {},
"extended_trailers": {},
"web_url": "https://gitlab.com/jverce/foo-massive-10231-1/-/commit/387262aea5d4a6920ac76c1e202bc9fd0841fea5"
}
]
}
```
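Unpacking those fields in your backend is straightforward. A sketch, typing only the fields described above and using an abbreviated version of the sample output:

```typescript
// Minimal shape of an action run response (abbreviated)
interface RunActionResponse {
  exports: Record<string, unknown>;
  os: unknown[];
  ret?: unknown;
}

// Pull the human-readable summary and the returned commits
function unpackRun(resp: RunActionResponse) {
  return {
    summary: resp.exports["$summary"],
    commits: (resp.ret as any[]) ?? [],
  };
}

const sample: RunActionResponse = {
  exports: { $summary: "Retrieved 1 commit" },
  os: [],
  ret: [{ id: "387262aea5d4a6920ac76c1e202bc9fd0841fea5", title: "Initial commit" }],
};

const { summary, commits } = unpackRun(sample);
```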
## Special Prop Types
### SQL Prop
The `sql` prop is a specialized prop type used for interacting with SQL databases. It enables developers to build applications that can:
* Execute custom SQL queries
* Introspect database schemas
* Support prepared statements
This prop type is used by these database actions:
* `postgresql-execute-custom-query`
* `snowflake-execute-sql-query`
* `mysql-execute-raw-query`
* `microsoft_sql_server-execute-raw-query`
* `azure_sql-execute-raw-query`
* `turso-execute-query`
#### Configuration
When configuring these actions, you'll need to provide:
1. Database app type and auth (e.g., `postgresql` in this example)
2. A `sql` prop with the following structure:
```js
const configuredProps = {
postgresql: {
authProvisionId: "apn_xxxxxxx"
},
sql: {
auth: {
app: "postgresql" // Database type -- must match the app prop name
},
query: "select * from products limit 1",
params: [] // Optional array of parameters for prepared statements
}
}
```
#### Using prepared statements
You can use prepared statements by including placeholders in your query and providing the parameter values in the `params` array. Different database systems use different placeholder syntax:
* **PostgreSQL** uses `$1`, `$2`, `$3`, etc. for numbered parameters
* **Snowflake**, **MySQL**, **Azure SQL**, **Microsoft SQL Server**, and **Turso** use `?` for positional parameters
```js PostgreSQL example
const configuredProps = {
postgresql: {
authProvisionId: "apn_xxxxxxx"
},
sql: {
auth: {
app: "postgresql"
},
query: "select * from products where name = $1 and price > $2 limit 1",
params: ["foo", 10.99] // Values to replace $1 and $2 placeholders
}
}
```
```js MySQL example
const configuredProps = {
mysql: {
authProvisionId: "apn_xxxxxxx"
},
sql: {
auth: {
app: "mysql"
},
query: "select * from products where name = ? and price > ? limit 1",
params: ["foo", 10.99] // Values to replace the ? placeholders
}
}
```
Using prepared statements helps prevent SQL injection attacks by separating the SQL command structure from the data values being used, and is strongly recommended.
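Before invoking one of these actions, it can be useful to sanity-check that the number of placeholders in the query matches the number of values in `params`. This illustrative check handles both placeholder styles described above:

```typescript
// PostgreSQL uses numbered placeholders ($1, $2, ...); the other supported
// databases use positional ?. Count placeholders so you can verify
// params.length matches before sending the request.
function countPlaceholders(
  query: string,
  dialect: "postgresql" | "positional",
): number {
  if (dialect === "postgresql") {
    // $1 may be referenced more than once, so count distinct numbers
    return new Set(query.match(/\$\d+/g) ?? []).size;
  }
  return (query.match(/\?/g) ?? []).length;
}

const pgCount = countPlaceholders(
  "select * from products where name = $1 and price > $2 limit 1",
  "postgresql",
);
const qCount = countPlaceholders(
  "select * from products where name = ? and price > ? limit 1",
  "positional",
);
```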
#### Retrieving database schema information
By retrieving the database schema, developers can:
* Provide database structure to AI agents for accurate SQL generation
* Build native SQL editors with autocomplete for tables and columns
* Validate queries against the actual database schema before execution
You can call `configureComponent` on the `sql` prop to retrieve database schema information:
```typescript TypeScript
const resp = await client.components.configureProps({
external_user_id: externalUserId,
prop_name: "sql",
id: "postgresql-execute-custom-query",
configured_props: {
postgresql: {
authProvisionId: accountId
},
},
});
```
The response includes a `context.dbInfo` object containing detailed schema information for all tables in the database:
```json
{
"context": {
"dbInfo": {
"products": {
"metadata": {},
"schema": {
"id": {
"tableName": "products",
"columnName": "id",
"isNullable": "NO",
"dataType": "integer",
"columnDefault": "nextval('products_id_seq'::regclass)"
},
"name": {
"tableName": "products",
"columnName": "name",
"isNullable": "NO",
"dataType": "character varying",
"columnDefault": null
},
"description": {
"tableName": "products",
"columnName": "description",
"isNullable": "YES",
"dataType": "text",
"columnDefault": null
},
"price": {
"tableName": "products",
"columnName": "price",
"isNullable": "NO",
"dataType": "numeric",
"columnDefault": null
},
"created_at": {
"tableName": "products",
"columnName": "created_at",
"isNullable": "YES",
"dataType": "timestamp with time zone",
"columnDefault": "CURRENT_TIMESTAMP"
}
}
}
}
}
}
```
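One common use of `context.dbInfo` is flattening it into a list of column descriptions, e.g. for autocomplete entries or an AI prompt. A sketch over an abbreviated version of the response above (the type covers only the fields used here):

```typescript
// Abbreviated shape of context.dbInfo: table -> { schema: { column -> meta } }
type DbInfo = Record<
  string,
  { schema: Record<string, { dataType: string; isNullable: string }> }
>;

// Flatten the schema into "table.column (type)" strings
function describeColumns(dbInfo: DbInfo): string[] {
  return Object.entries(dbInfo).flatMap(([table, { schema }]) =>
    Object.entries(schema).map(
      ([col, meta]) => `${table}.${col} (${meta.dataType})`,
    ),
  );
}

const dbInfo: DbInfo = {
  products: {
    schema: {
      id: { dataType: "integer", isNullable: "NO" },
      name: { dataType: "character varying", isNullable: "NO" },
    },
  },
};

const columns = describeColumns(dbInfo);
```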
# Using Custom Tools
Source: https://pipedream.com/docs/connect/components/custom-tools
You can write and publish your own components to your Pipedream workspace to use with Connect. This complements the public actions available in Pipedream’s global registry.
## Overview
Custom tools are Node.js modules that you develop using the [Pipedream Components API](/docs/components/contributing/api/) and publish to your Pipedream workspace. Once published, they become available across [all Connect APIs](/docs/connect/api-reference/list-components), including the list, retrieve, run endpoints, etc.
Publishing custom tools is available to Pipedream customers on the [Business plan](https://pipedream.com/pricing?plan=Enterprise).
## Creating custom tools
Custom tools use the same development workflow as standard Pipedream components:
1. **Write your component** using the [Pipedream Components API](/docs/components/contributing/api/)
2. **Follow component guidelines** outlined in the [component development docs](/docs/components/contributing/guidelines/)
3. **Use the Pipedream CLI** to publish your component with a Connect-specific flag
Custom tools are [actions](/docs/components/contributing/actions-quickstart/). Check out the actions quickstart guide for step-by-step development instructions. Support for custom sources (triggers) is coming soon.
## Publishing for Connect
To make your custom components available in Connect, use the [`pd publish`](/docs/cli/reference/#pd-publish) command with the `--connect-environment` flag:
```sh
pd publish my-custom-action.mjs --connect-environment development
```
### Environments
The `--connect-environment` flag accepts two values:
* **`development`**: makes the component available to your Pipedream workspace in Connect in development
* **`production`**: makes the component available to your Pipedream workspace in Connect in production
Components published to `development` will only be available in the development environment, and vice versa for `production`.
## Using custom tools
Once published, your custom tools appear alongside public components in Connect APIs:
* **List endpoints**: Custom tools are included in component listing responses
* **Retrieve endpoints**: You can fetch details about your custom components
* **Run endpoints**: Execute your custom tools with the same API calls used for public components
### Referencing custom tools
Custom tools are identified with a `~/` prefix in their component ID. For example, if you publish a component with the key `google_sheets-update-file`, it will appear in Connect APIs as `~/google_sheets-update-file`.
When making API calls to list, retrieve, or run custom tools, use the prefixed ID:
```bash
# List all components (includes custom tools with ~/ prefix)
GET /v1/components
# Retrieve a specific custom tool
GET /v1/components/~/google_sheets-update-file
# Run a custom tool
POST /v1/components/~/google_sheets-update-file/run
```
The Connect API treats custom tools identically to public components, ensuring a consistent integration experience.
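Since only the ID differs, a couple of tiny helpers can keep the `~/` prefix handling out of the rest of your integration code. These helpers are illustrative, not part of the SDK:

```typescript
// Custom tools are namespaced with a "~/" prefix in Connect APIs.
// Convert a published component key to the ID used by the list,
// retrieve, and run endpoints.
function toCustomToolId(key: string): string {
  return key.startsWith("~/") ? key : `~/${key}`;
}

// Distinguish custom tools from public registry components
function isCustomTool(id: string): boolean {
  return id.startsWith("~/");
}

const id = toCustomToolId("google_sheets-update-file");
```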
### Custom tools in MCP
Custom actions that you publish are automatically exposed as tools in the [Pipedream MCP server](/docs/connect/mcp/developers/) for the relevant app.
## Example workflow
Here’s a typical workflow for creating and using a custom tool:
1. **Develop locally** using your preferred editor
2. **Test your component** using local testing for actions
3. **Publish to Connect** with the appropriate environment flag
4. **Integrate via Connect APIs** in your application
Test your custom tools in your application directly or [run Pipedream’s SDK playground](https://github.com/PipedreamHQ/pipedream-connect-examples/tree/master/connect-react-demo#pipedream-components-demo-react) locally with your Pipedream credentials.
## Best practices
* **Follow naming conventions**: Use clear, descriptive names for your tools
* **Include proper documentation**: Add helpful descriptions and prop labels for easier configuration
* **Test thoroughly**: Validate your components work as expected before publishing to production
* **Version management**: Update [component versions](/docs/components/contributing/guidelines/#versioning) when making changes
* **Environment separation**: Use development environment for testing, production for live integrations
## Getting help
For component development questions, visit [Pipedream Support](https://pipedream.com/support). For Connect-specific integration help, refer to the [Connect docs](/docs/connect/api-reference/).
# Working With Files
Source: https://pipedream.com/docs/connect/components/files
Pipedream provides a file storage system, File Stash, that allows you to store and retrieve files from tool executions via Connect. When a trigger or action downloads files to the `/tmp` directory in Pipedream’s execution environment, you can sync these files with File Stash, making them accessible outside of Pipedream.
## File Stash
When you execute an action via Connect that downloads files to the `/tmp` directory, those files normally only exist within Pipedream’s execution environment. With File Stash syncing, you can now make these files available via presigned URLs that can be accessed from anywhere.
### How it works
1. Files created in `/tmp` during execution are synced with File Stash when you pass `stashId` in the action execution payload
2. Each file is assigned a presigned URL that remains valid for 30 minutes
3. These URLs allow anyone with the link to download the file directly
## When to use stashId
You should pass the `stashId` parameter when the component's `stash` attribute indicates that a `stashId` is `"required"` or `"optional"`. You can find the `stash` indicator in the response from the [Retrieve component](/docs/connect/api-reference/retrieve-component) or [List components](/docs/connect/api-reference/list-components) endpoint.
Typically, if `stash` is `"required"`, the component outputs a file to `/tmp`. If `stash` is `"optional"`, the component may accept either a local file path (in which case `stashId` is required) or a remote file URL (in which case `stashId` can be omitted).
You can also use these heuristics to determine when to use `stashId`:
1. You’re working with actions that download or generate files in `/tmp`, such as actions with “file”, “upload”, or “download” in their name
2. You’re working with actions that have a `filePath` or `filename` prop
3. You want to make files accessible outside of Pipedream’s execution environment
To generate a new stash ID, you can pass:
* An empty string (`""`)
* The string `"NEW"`
* Boolean `true`
To reuse an existing stash ID (valid for 24 hours), pass the previously returned `stashId` value. This allows you to reference multiple files from the same stash.
Avoid passing `stashId` unnecessarily, as it will slightly increase response time.
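The guidance above can be condensed into a small helper, assuming you've already fetched the component's `stash` attribute from the List or Retrieve components endpoints. The function and option names here are illustrative, not part of the SDK:

```javascript
// Hypothetical helper: decide whether to pass a stashId when running an action.
// `stash` is the component's stash attribute ("required", "optional", or absent);
// `usesLocalFilePath` indicates the user's input is a /tmp path rather than a remote URL.
function shouldPassStashId(stash, { usesLocalFilePath = false } = {}) {
  if (stash === "required") return true;        // component writes a file to /tmp
  if (stash === "optional") return usesLocalFilePath; // local path needs a stash; remote URL doesn't
  return false; // omit stashId to avoid the extra response-time overhead
}
```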
## API Reference
### Running actions with File Stash
To enable File Stash when running an action, use the `stash_id` parameter (`stashId` in the SDK) in your request:
| Parameter | Type | Description |
| ---------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `stash_id` | string | (Optional) The key that identifies the external user-specific File Stash. If set to `""` (or `"NEW"` or `true`), Pipedream will generate a stash ID for you. If omitted, Pipedream will not sync files in `/tmp` with File Stash. |
```js Node.js
const resp = await client.actionRun({
externalUserId: "abc-123",
actionId: "google_drive-download-file",
configuredProps: {
googleDrive: {
authProvisionId: "apn_gyhLaz3"
},
fileId: {
"__lv": {
"label": "important files > mcp-hot.jpg",
"value": "16nlbFcgtgZkxLLMT2DcnBrEeQXQSriLs"
}
},
filePath: "/tmp/mcp-hot.jpg"
},
stashId: "" // An empty string will generate a new stash ID
});
// The response contains file URLs in $filestash_uploads
console.log(resp.exports.$filestash_uploads);
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/actions/run \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "google_drive-download-file",
"configured_props": {
"googleDrive": {
"authProvisionId": "apn_jyhKbx4"
},
"fileId": {
"__lv": {
"label": "important files > mcp-hot.jpg",
"value": "16nlbFcgtgZkxLLMT2DcnBrEeQXQSriLw"
}
},
"filePath": "/tmp/mcp.png"
},
"stash_id": ""
}'
```
### Response
The response includes a `stashId` and a `$filestash_uploads` export with information about the files that were downloaded to `/tmp` and then synced to File Stash:
```json
{
"exports": {
"$summary": "Successfully downloaded the file, \"mcp.png\"",
"$filestash_uploads": [
{
"localPath": "/tmp/mcp.png",
"s3Key": "1day/proj_lgsqAAJ/exu_x1iK86/d4ffb5b1081d3aacd2929f23f270268c/u/mcp.png",
"get_url": "https://pipedream-file-stash-production.s3.us-east-1.amazonaws.com/1day/proj_lgsqAAJ/exu_x1iK86/d4ffb5b1081d3aacd2929f23f270267d/u/mcp.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA7H7CJKGDWMN4VXXZ%2F20250606%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250606T204541Z&X-Amz-Expires=1800&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEK4%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLWVhc3QtMiJIMEYCIQE3PJZwefZb%2Gp6tWNV8JJtZXG4%2FE%2BOQrFzcdkbPtk6iDwIhBPyFmL9yztlGE0Ub4QFMJzLf2ln2%2C3CYu4r%2FlHsnikmiKsAEDHaQBxoNOTA2MDM3MTgyNzIxIhyHFEjyyQ4ssQm4e5JqoATBID6Ipfe91co6nOiB18cTsrrCi4GyxWW6seujHJ0UmvSrsagD%2FJzyoirNNYdX1pwk9anJkCFEGUSz61DPgZfYGsjIC6jMWD6es%2Gvn3z%2FKFJxIaCVlDJTSkmzOOZcyzFwzGNgqKarSD1P63vySsH7LfsocM4GQKfH1KbHYKkX4GIIEAcL9T9JYU7j3zQcOE2uNpF%2BZ1fVQ8Yg0stYhMIUzSy1fLNS1CRHvejU793PSgJoKrZq8zICQFz3yL5ZxWqfrT%2BxGSZKsSH0iEOKVKq7MK0cdxrVJJsgyzl6ixiIsDKhwgmA0PhT6kvZOof0XyozdJjPAN33v2XSx%2F4BD3MrDonk4d%2F8vweQubfrOwangOPG8USZo31PXvdf8AXnx5rqVmFUL3etUsdPO2NzF6K%2B8bXNHfwgROMVG54tVGhxAX80OuflLN9lhPq%2B0%2BKS0cIC%2BpG9RNk4iToz1IFP9OWQaJPgOjOf90cPQgYfOV%2F%2FqIR9133NtKBzksB%2F%2F%2Bu1M6HS8MAfhF%2BAf9vpT%2FjvTlJhcvtiqyCzGz4TqJzxzIlFRv1dSyS08U82C7rVgOKpNWwDDqB1IjqeAZFap6tFP3s5apixPvipUERd8c8%2F9izz4%2Bz%2BD0f3Gn%2BQIRuToKQpPp%2FKfJZ15g4Mu6H4s7s7Nsr4znzdT2SOlWGi%2Bw%2FrIKi47vJfA4MKwTlW9K8e%2FsmhzHkB9LEqU7Km%2Fk36Qw8KaNwgY6nAFw%2BP4e8vTHE2MyMAZ2GiwvdlE4%2BNPtJAX4L%2BrabrgxnAHgqR0xB%2B3rNI5b62zaMrUZCm7T28Fec%2BA2x16PFLw4yUUv8UksV3r0H3J9dO6%2FrORTxYz0UYq8aiARGvg8kcjOGJ72Q5wv%2B48Up8r39coHlyACOQdd6Lg4HsohStWgeDJV0LKXru6RkNmM3FJWcWUqOy8oZxgaWe%2F%2BBAo%3D&X-Amz-Signature=c9hd88df7hfg40dh5060e47gcde639h5c3615gf77f60e9bgc90d44dh095636f"
}
]
},
"os": [],
"ret": {
"fileMetadata": {
"name": "mcp.png",
"mimeType": "image/png"
},
"filePath": "/tmp/mcp.png"
},
"stashId": "d4ffb5b1081d3aacd2929f23f270237d"
}
```
Each file in the `$filestash_uploads` array includes:
* `localPath`: The path to the file in the `/tmp` directory where it was downloaded or created
* `s3Key`: The unique key for the file in the Pipedream File Stash after being synced from `/tmp`
* `get_url`: A presigned URL that allows downloading the file for 30 minutes
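Assuming the response shape above, a small helper can collect the presigned URLs for your app or agent to consume (the helper name is illustrative):

```javascript
// Hypothetical helper: extract presigned download URLs from an action run
// response that includes the $filestash_uploads export
function getFileDownloadUrls(resp) {
  const uploads = resp?.exports?.["$filestash_uploads"] ?? [];
  // Return the /tmp path alongside its 30-minute presigned URL
  return uploads.map(({ localPath, get_url }) => ({ localPath, url: get_url }));
}
```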
## Usage Examples
### Reusing a stash ID
You can reuse a stash ID across multiple action runs to maintain a reference to previously downloaded files. This is particularly useful when you need to:
* Process multiple files across different actions in a sequence
* Keep a reference to files for later use in your app or agent
* Build a collection of files over time
* Ensure files downloaded in one action are accessible in subsequent actions
To reuse a stash ID, simply pass the same `stashId` value to subsequent action runs:
```js Node.js
// First action run - download a file from Google Drive
const firstResponse = await client.actionRun({
externalUserId: "abc-123",
actionId: "google_drive-download-file",
configuredProps: {
googleDrive: {
authProvisionId: "apn_gyhLaz3"
},
fileId: "1xyz123",
filePath: "/tmp/report1.pdf"
},
stashId: "" // Generate a new stash ID
});
const stashId = firstResponse.stashId;
// Second action run - use the same file in another action (e.g., upload to Dropbox)
const secondResponse = await client.actionRun({
externalUserId: "abc-123",
actionId: "dropbox-upload-file",
configuredProps: {
dropbox: {
authProvisionId: "apn_mmhHPgj"
},
path: "/",
name: "uploaded-report.pdf",
filePath: "/tmp/report1.pdf" // Same file path as in the first action
},
stashId: stashId // Reuse the stash ID to maintain access to the file
});
// The file downloaded in the first action is available to the second action
```
```sh HTTP (cURL)
# First request with new stash
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/actions/run \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "google_drive-download-file",
"configured_props": {
"googleDrive": {
"authProvisionId": "apn_gyhLaz3"
},
"fileId": "1W6ZssXLvVE-YN8rRbQlqggCpdIF-gdh1",
"filePath": "/tmp/myfile.txt"
},
"stash_id": "NEW"
}'
# Get the stash_id from the response (e.g., "abcd1234")
# Second request using the same stash
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/actions/run \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "dropbox-upload-file",
"configured_props": {
"dropbox": {
"authProvisionId": "apn_mmhHPgj"
},
"path": "/",
"name": "my-upload.txt",
"filePath": "/tmp/myfile.txt"
},
"stash_id": "abcd1234"
}'
```
### Common multi-action file workflows
A typical workflow involving files across multiple actions might look like:
1. Download a file from an external service to `/tmp`
2. Process or transform the file
3. Upload the file to another service
For this to work reliably, you need to use the same `stashId` across all actions to ensure that files downloaded or created in one action remain accessible to subsequent actions, even though each action runs in its own isolated environment.
## File Storage Duration
Files in File Stash are automatically deleted after 24 hours. The presigned URLs remain valid for 30 minutes from the time they are generated.
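If you cache presigned URLs or stash IDs in your app, you may want to re-run the action (reusing the same `stashId` while it's valid) once a URL ages past its window. A sketch using the documented validity periods; the timestamps are tracked client-side, since the API doesn't return expiry times:

```javascript
// Validity windows documented above; track generation/creation times
// in your own app state if you cache URLs or stash IDs
const URL_TTL_MS = 30 * 60 * 1000;        // presigned URLs: 30 minutes
const STASH_TTL_MS = 24 * 60 * 60 * 1000; // stashed files: 24 hours

function isPresignedUrlExpired(generatedAtMs, nowMs = Date.now()) {
  return nowMs - generatedAtMs >= URL_TTL_MS;
}

function isStashExpired(createdAtMs, nowMs = Date.now()) {
  return nowMs - createdAtMs >= STASH_TTL_MS;
}
```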
# Deploying Triggers
Source: https://pipedream.com/docs/connect/components/triggers
Triggers (also called sources) are different from actions: they are not invoked directly by end users, but by events that happen on a third-party service. For example, the "New File" source for Google Drive is triggered every time a new file is created in a specific folder in Google Drive, and then emits an event for you to consume.
In short, actions are invoked manually on demand, while sources are deployed once and then run automatically whenever the event they're listening for occurs.
## Categories of triggers
There are two categories of triggers you can deploy on behalf of your end users:
* [App-based event sources](/docs/connect/components/triggers/#app-based-event-sources)
* [Native triggers](/docs/connect/components/triggers/#native-triggers)
Refer to the [full Connect API reference](/docs/connect/api-reference/deploy-trigger) to list, retrieve, delete, and manage triggers for your user.
### App-based event sources
* Listen for events that occur in other systems: for example, when [a new file is added to Google Drive](https://pipedream.com/apps/google-drive/triggers/new-files-instant) or when [a new contact is created in HubSpot](https://pipedream.com/apps/hubspot/triggers/new-or-updated-contact)
* Deploying these triggers requires that your customers first connect their account using [Pipedream Connect Managed Auth](/docs/connect/managed-auth/quickstart/), since the triggers are deployed on their behalf using their connected account
* Refer to the [quickstart above](/docs/connect/components/triggers/#deploying-a-source) to get started
#### Handling test events
* Many event sources attempt to retrieve a small set of historical events on deploy to provide visibility into the event shape for end users and developers
* Exposing real test events makes it easier to consume the event in downstream systems without requiring users to trigger real events ([more info](/docs/components/contributing/guidelines/#surfacing-test-events))
* However, this results in emitting those events to the listening webhook immediately, which may not always be ideal, depending on your use case
* If you'd like to avoid emitting historical events, you can deploy a trigger without defining a `webhook_url`, then [update the listening webhooks for the deployed trigger](/docs/connect/api-reference/update-trigger-webhooks) after roughly a minute
### Native triggers
* You can also deploy native triggers, which don't require any authentication from your end users, so **you should skip the account connection process when configuring these triggers**
* Because these triggers don't use a connected account from your end users, APIs to deploy and manage them are slightly different (see below)
## Deploying a source
Because sources are driven by events that happen on a third-party service, their semantics differ from actions. Once a source is configured, it must be deployed to start listening for events. When deploying a source, you can define either a webhook URL or a Pipedream workflow ID to consume those events.
Deploying a source is done by sending a payload similar to the one used for running an action, with the addition of the webhook URL or workflow ID. Using the **New Issue (Instant)** source for Gitlab as an example, the payload would look something like this:
```json
{
"external_user_id": "abc-123",
"id": "gitlab-new-issue",
"prop_name": "http",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
},
"projectId": 45672541
},
"webhook_url": "https://events.example.com/gitlab-new-issue"
}
```
Deploy a source for your users:
```typescript TypeScript
const deployedTrigger = await client.triggers.deploy({
external_user_id: "abc-123",
id: "gitlab-new-issue",
configured_props: {
gitlab: {
authProvisionId: "apn_kVh9AoD",
},
projectId: 45672541,
},
webhook_url: "https://events.example.com/gitlab-new-issue"
});
const {
id: triggerId, // The unique ID of the deployed trigger
name: triggerName, // The name of the deployed trigger
owner_id: userId, // The unique ID in Pipedream of your user
} = deployedTrigger;
// Parse and return the data you need
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/components/triggers/deploy \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "gitlab-new-issue",
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
},
    "projectId": 45672541
},
"webhook_url": "https://events.example.com/gitlab-new-issue"
}'
# Parse and return the data you need
```
If the source deployment succeeds, the response will contain information about the state of the source, including all of the component's props metadata and their values. It will also contain the source's name, creation date, owner, and, most importantly, its unique ID, which you can use to manage the source later (e.g., delete it). The response for the request above would look like this:
```json
{
"data": {
"id": "dc_dAuGmW7",
"owner_id": "exu_oedidz",
"component_id": "sc_3vijzQr",
"configurable_props": [
{
"name": "gitlab",
"type": "app",
"app": "gitlab"
},
{
"name": "db",
"type": "$.service.db"
},
{
"name": "http",
"type": "$.interface.http",
"customResponse": true
},
{
"name": "projectId",
"type": "integer",
"label": "Project ID",
"description": "The project ID, as displayed in the main project page",
"remoteOptions": true
}
],
"configured_props": {
"gitlab": {
"authProvisionId": "apn_kVh9AoD"
},
"db": {
"type": "$.service.db"
},
"http": {
"endpoint_url": "https://xxxxxxxxxx.m.pipedream.net"
},
"projectId": 45672541
},
"active": true,
"created_at": 1734028283,
"updated_at": 1734028283,
"name": "My first project - exu_oedidz",
"name_slug": "my-first-project---exu-oedidz-2"
}
}
```
In the example above, the source ID is `dc_dAuGmW7`, which can be used to delete, retrieve, or update the source in the future.
Refer to the [full Connect API reference](/docs/connect/api-reference/list-components) for additional endpoints and examples.
## Native trigger examples
### HTTP Webhook
Generate a unique HTTP webhook URL for your end users to configure in any other upstream service.
```typescript TypeScript
const deployedTrigger = await client.triggers.deploy({
external_user_id: "abc-123",
id: "http-new-requests",
webhook_url: "https://events.example.com/http-new-requests"
});
const {
id: triggerId, // The unique ID of the deployed trigger
endpoint_url: endpointUrl, // The endpoint URL to return to the user
} = deployedTrigger;
// Parse and return the data you need
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/components/triggers/deploy \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "http-new-requests",
"webhook_url": "https://events.example.com/http-new-requests"
}'
# Parse and return the data you need
```
#### Example response
```json
{
"id": "hi_zbGHMx",
"key": "xxxxxxxxxx",
"endpoint_url": "http://xxxxxxxxxx.m.pipedream.net",
"custom_response": true,
"created_at": 1744508049,
"updated_at": 1744508049
}
```
### Schedule
Deploy a timer to act as a cron job that will emit an event on a custom schedule you or your users define.
#### Configured props
`cron` (**object**)
When defining schedules, you can pass one of the following:
* `intervalSeconds`: Define the frequency in seconds
* `cron`: Define a custom cron schedule and optionally define the `timezone`. For example:
```json
"cron": {
"cron": "0 * * * *",
"timezone": "America/Los_Angeles" // optional, defaults to UTC
}
```
```typescript TypeScript
const deployedTrigger = await client.triggers.deploy({
external_user_id: "abc-123",
id: "schedule-custom-interval",
configured_props: {
"cron": {
"intervalSeconds": 900
}
},
webhook_url: "https://events.example.com/schedule-custom-interval"
});
const {
id: triggerId, // The unique ID of the deployed trigger
} = deployedTrigger;
// Parse and return the data you need
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/components/triggers/deploy \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "schedule-custom-interval",
"configured_props": {
"cron": {
"intervalSeconds": 900
}
},
"webhook_url": "https://events.example.com/schedule-custom-interval"
}'
# Parse and return the data you need
```
#### Example response
```json
{
"id": "ti_aqGTJ2",
"interval_seconds": 900,
"cron": null,
"timezone": "UTC",
"schedule_changed_at": 1744508391,
"created_at": 1744508391,
"updated_at": 1744508391
}
```
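If you accept schedule input from your own users, a small builder can produce the `cron` prop in either of the shapes shown above. This helper is illustrative, not part of the SDK:

```javascript
// Hypothetical helper: build the `cron` configured prop from user input.
// Pass either an interval in seconds, or a cron expression with an optional timezone.
function buildCronProp({ intervalSeconds, cron, timezone } = {}) {
  if (intervalSeconds != null) {
    return { cron: { intervalSeconds } };
  }
  if (cron) {
    // timezone is optional and defaults to UTC on Pipedream's side
    return { cron: { cron, ...(timezone ? { timezone } : {}) } };
  }
  throw new Error("Provide intervalSeconds or a cron expression");
}

// Usage: spread the result into configured_props when deploying the trigger
buildCronProp({ intervalSeconds: 900 }); // { cron: { intervalSeconds: 900 } }
```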
### New emails received
Generate a unique email address for your customers to emit events to.
```typescript TypeScript
const deployedTrigger = await client.triggers.deploy({
external_user_id: "abc-123",
id: "email-new-email",
webhook_url: "https://events.example.com/email-new-email"
});
const {
id: triggerId, // The unique ID of the deployed trigger
email_address: emailAddress, // The unique email address to return to the user
} = deployedTrigger;
// Parse and return the data you need
```
```sh HTTP (cURL)
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/components/triggers/deploy \
-H "Content-Type: application/json" \
-H "X-PD-Environment: {environment}" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "abc-123",
"id": "email-new-email",
"webhook_url": "https://events.example.com/email-new-email"
}'
# Parse and return the data you need
```
#### Example response
```json
{
"id": "ei_QaJTb0",
"email_address": "xxxxxxxxxx@upload.pipedream.net",
"created_at": 1744499847,
"updated_at": 1744499847
}
```
# Troubleshooting
Source: https://pipedream.com/docs/connect/components/troubleshooting
Common issues and solutions when working with Pipedream Connect components.
## Referencing the app prop in configured props payload
If you encounter an error like `Cannot read properties of undefined (reading 'oauth_access_token')`, it's likely caused by an incorrect reference to the app prop in your `configured_props` payload.
For example, using `google_sheets` instead of `googleSheets`, or `stripe` when the prop is actually named `app`. Always use the exact app prop name as returned by the component definition.
The app prop name can be found in the component's definition under `configurable_props`:
```javascript
"configurable_props": [
{
"name": "googleSheets", // Use this exact name in your payload
"type": "app",
"app": "google_sheets"
},
...
]
```
## Passing dynamic props ID
When working with components that use dynamic props, you must track and pass the `dynamicPropsId` in your API calls. After calling the API to reload props as described in the [Configure dynamic props](/docs/connect/components/actions/#configure-dynamic-props) section, you'll receive a response containing a `dynamicProps.id` value that looks like `dyp_6xUyVgQ`.
This ID must be included in subsequent API calls to `runAction` or `deployTrigger`. Failing to include it can result in errors like:
```json
{
"name": "Error",
"message": "undefined is not an array or an array-like"
}
```
or
```json
{
"title": "TypeError",
"detail": "Cannot read properties of undefined (reading 'endpoint')"
}
```
For example, after receiving the dynamic props ID from the reload props call, include it in your action execution:
```js
// First, reload props for a component with dynamic props
const { dynamicProps } = await client.reloadProps({ … });
// Then use the dynamicProps.id when running the action
const resp = await client.runAction({
externalUserId: "abc-123",
actionId: "google_sheets-add-single-row",
dynamicPropsId: dynamicProps.id, // Must include this
configuredProps: {
googleSheets: {
authProvisionId: account.id,
},
sheetId: "1BfWjFF2dTW_YDiLISj5N9nKCUErShgugPS434liyytg",
worksheetId: "Sheet1",
// ... other configured props
}
});
```
Remember to maintain this ID in your application state while the user is configuring the component, and include it in all subsequent API calls related to that particular configuration.
## Checking source logs for deployed triggers
If a deployed trigger isn't emitting events as expected, you can examine the source logs to get a better sense of what's happening.
Use the following URL to access logs and view emitted events:
```html
https://pipedream.com/sources/{dcid}
```
Replace `{dcid}` with your deployed component ID (e.g., `dc_dAuGmW7`).
The sources UI contains three tabs:
* **Events**: Lists emitted events from the deployed trigger that will be sent to the subscribed webhook or workflow. This helps you verify that events are being properly processed and understand their structure.
* **Logs**: Displays execution history for the trigger. For polling sources, this shows each time the trigger checks for updates. For webhook-based instant sources, it shows each time the source receives an event from the upstream API. This tab is especially useful for troubleshooting when events aren't being emitted as expected.
* **Configuration**: Provides a read-only view of the deployed source's code and configuration. While you can't modify settings for deployed triggers that belong to external users here, this tab offers insight into how the trigger is configured.
This UI view is currently in beta and has some limitations. Some UI elements may appear unpolished, and the configuration tab has limited functionality.
# Connect Link
Source: https://pipedream.com/docs/connect/managed-auth/connect-link
Connect Link provides a Pipedream-hosted link that lets you connect third-party accounts without any frontend implementation in your app.
## When to use Connect Link
If you aren’t able to execute JavaScript or open an iFrame in your frontend, or you want to send users a URL to connect accounts via email or SMS, use Connect Link. That URL opens a Pipedream-hosted page that guides the user through the account connection flow. The URL is scoped to the specific end user, and expires after 4 hours.
## How to generate a link
See [the Connect quickstart](/docs/connect/managed-auth/quickstart/) for a full tutorial for getting Connect up and running.
Here’s a quick overview of how to generate a Connect Link URL:
1. First, [generate a token](/docs/connect/managed-auth/quickstart/#generate-a-short-lived-token) for your users.
2. Extract the `connect_link_url` from the token response.
3. Before returning the URL to your user, add an `app` parameter to the end of the query string:
```html
https://pipedream.com/_static/connect.html?token={token}&connectLink=true&app={appSlug}
```
4. Redirect your users to this URL, or send it to them via email, SMS, and more.
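Steps 2 and 3 above can be sketched with the standard `URL` API (the function name is hypothetical):

```javascript
// Hypothetical helper: append the app slug to the connect_link_url
// extracted from the token creation response
function buildConnectLinkUrl(connectLinkUrl, appSlug) {
  const url = new URL(connectLinkUrl);
  url.searchParams.set("app", appSlug);
  return url.toString();
}
```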
**To test this code, check out this workflow:** [https://pipedream.com/new?h=tch\_4RfjXN](https://pipedream.com/new?h=tch_4RfjXN)
## Success and error redirect URLs
To automatically redirect users somewhere after they complete the connection flow (or if an error occurs), define the `success_redirect_uri` and `error_redirect_uri` parameters during token creation. [See the API docs](/docs/connect/api-reference/create-connect-token) for details.
In the absence of these URLs, Pipedream will redirect the user to a Pipedream-hosted success or error page at the end of the connection flow.
# Project Configuration
Source: https://pipedream.com/docs/connect/managed-auth/customization
By default, your end users will see a primarily Pipedream-branded experience when they connect their account. To customize this screen to highlight your application, you can configure your [app’s name](/docs/connect/managed-auth/customization/#application-name), [support email](/docs/connect/managed-auth/customization/#support-email), and [logo](/docs/connect/managed-auth/customization/#logo) in the Pipedream UI.
## Customizing your application details
Open your project in the Pipedream UI: [https://pipedream.com/projects](https://pipedream.com/projects)
1. Once you’ve opened your project, click the **Connect** tab in the left sidebar
2. From there, select the **Configuration** tab
### Application name
By default, your end users will see:
> We use Pipedream to connect your account
Enter the name of your application that you’d like to show instead, so it reads:
> \{Application Name} uses Pipedream to connect your account
### Support email
In the case of any errors during the account connection flow, by default your users will see:
> Connection failed. Please retry or contact support.
To give your end users an email address to seek support, enter your support email. We’ll display it:
> Connection failed. Please retry or contact support [help@example.com](mailto:help@example.com).
### Logo
By default we’ll show Pipedream’s logo alongside the app your user is connecting to. If you’d like to show your own logo instead, upload it here.
# Environments
Source: https://pipedream.com/docs/connect/managed-auth/environments
Pipedream Connect projects support two environments: `development` and `production`. Connected accounts and credentials stored in one environment remain separate from the other.
You can use all of the Connect features in `development` mode **on any plan**. **[Visit the pricing page](https://pipedream.com/pricing?plan=Connect)** to select the right plan when you’re ready to ship your app to production.
## Development mode
Development mode provides access to all Connect features while you’re building and testing your integration with the following constraints:
* **Maximum of 10 external users**: The development environment is limited to 10 unique external user IDs. If you exceed this limit, you’ll need to [delete some existing users](/docs/connect/managed-auth/users/#deleting-users) before adding new ones.
* **Must be signed in to pipedream.com**: When connecting an account in development mode, you must be signed in to pipedream.com in the same browser where you’re connecting your account.
* **Personal testing only**: Development mode is intended for your own accounts during testing and development, not for your real end users.
The `development` environment is not intended for production use with your customers. When you’re ready to launch, you should transition to `production`.
## How to specify the environment
You specify the environment when [creating a new Connect token](/docs/connect/api-reference/create-connect-token) with the Pipedream SDK or API. When users successfully connect their account, Pipedream saves the account credentials (API key, access token, etc.) for that `external_user_id` in the specified environment.
Always set the environment when you create the SDK client:
```js
import { PipedreamClient } from "@pipedream/sdk";
const client = new PipedreamClient({
projectEnvironment: "development", // change to "production" when running in production
clientId: "your-oauth-client-id",
clientSecret: "your-oauth-client-secret",
projectId: "proj_xxxxxxx"
});
```
or pass the `x-pd-environment` header in HTTP requests:
```sh
curl -X POST https://api.pipedream.com/v1/connect/{project_id}/tokens \
-H "Content-Type: application/json" \
-H "x-pd-environment: development" \
-H "Authorization: Bearer {access_token}" \
-d '{
"external_user_id": "your-external-user-id"
}'
```
## Shipping Connect to production
When you’re ready to ship to production:
1. Visit the [pricing page](https://pipedream.com/pricing?plan=Connect) to enable production access
2. Update your environment to `production` in your SDK client configuration and/or API calls
Using Connect in production doesn’t have any user limits and doesn’t require that the end user is signed in to pipedream.com like the development environment does.
# OAuth Clients
Source: https://pipedream.com/docs/connect/managed-auth/oauth-clients
When connecting an account for any OAuth app via Pipedream Connect, we’ll default to using Pipedream’s official OAuth client, which enables you to quickly get up and running. [Read more about OAuth clients in Pipedream here](/docs/apps/oauth-clients/).
## Using Pipedream OAuth
There are two types of apps in Pipedream:
1. **Key-based**: These apps require static credentials, like API keys. Pipedream stores these credentials securely and exposes them via API.
2. **OAuth**: These apps require OAuth authorization. Pipedream manages the OAuth flow for these apps, ensuring you always have a fresh access token for requests.
For any OAuth app that supports it, **you can always use your own client.** Your ability to use Pipedream’s OAuth clients in production depends on the use case. See below for details.
| Operation | Details | Pipedream | Custom |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | --------- | ------ |
| Retrieve user credentials | [Fetch the credentials](/docs/connect/api-reference/list-accounts) for your end user from Pipedream’s API to use in your app | ❌ | ✅ |
| Invoke workflows | [Trigger any Pipedream workflow](/docs/connect/workflows/) and use the connected account of your end users | ❌ | ✅ |
| Embed prebuilt tools | [Run any action and deploy any trigger](/docs/connect/components/) directly from your AI agent or app | ✅ | ✅ |
| Proxy API requests | [Write custom code to interface with any integrated API](/docs/connect/api-proxy/) while avoiding dealing with user auth | ✅ | ✅ |
## Using a custom OAuth client
1. Follow the steps [here](/docs/apps/oauth-clients/#configuring-custom-oauth-clients) to create an OAuth client in Pipedream.
2. When connecting an account via the [frontend SDK](/docs/connect/managed-auth/quickstart/#use-the-pipedream-sdk-in-your-frontend), make sure to include the `oauthAppId` in `pd.connectAccount()`.
3. If using [Connect Link](/docs/connect/managed-auth/quickstart/#or-use-connect-link), make sure to include the `oauthAppId` in the URL.
### Finding your OAuth app ID
[Create your OAuth client in Pipedream](https://pipedream.com/@/accounts/oauth-clients) then click the arrow to the left of the client name to expand the details.
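If you're using Connect Link, the `oauthAppId` can be appended to the URL as a query parameter. Here's a minimal sketch; the base URL and `oa_abc123` client ID are placeholders, so use the `connect_link_url` returned when you create a token and the ID from your OAuth clients page:

```typescript
// Sketch: attach a custom OAuth client ID to a Connect Link URL.
// Both the base URL and "oa_abc123" are placeholders.
function withCustomOAuthClient(connectLinkUrl: string, oauthAppId: string): string {
  const url = new URL(connectLinkUrl);
  url.searchParams.set("oauthAppId", oauthAppId);
  return url.toString();
}

const link = withCustomOAuthClient(
  "https://pipedream.com/_static/connect.html?token=ctok_abc123&app=slack",
  "oa_abc123",
);
console.log(link);
```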
# Managed Auth Quickstart
Source: https://pipedream.com/docs/connect/managed-auth/quickstart
export const PUBLIC_APPS = '2,700';
Pipedream Connect is the easiest way for your users to connect to [{PUBLIC_APPS}+ APIs](https://pipedream.com/apps), **right in your product**. You can build in-app messaging, CRM syncs, AI agents, [and much more](/docs/connect/use-cases/), all in a few minutes.
## Visual overview
Here’s a high-level overview of how Connect works with your app:
Here’s how Connect sits in your frontend and backend, and communicates with Pipedream’s API:
## Getting started
We’ll walk through these steps below with an interactive demo that lets you see and execute the code directly in the docs.
You’ll need to do two things to add Pipedream Connect to your app:
1. [Connect to the Pipedream API from your server](/docs/connect/managed-auth/quickstart/#generate-a-short-lived-token). This lets you make secure calls to the Pipedream API to initiate the account connection flow and retrieve account credentials.
2. [Add the Pipedream SDK to your frontend](/docs/connect/managed-auth/quickstart/#connect-your-users-account) or redirect your users to [a Pipedream-hosted URL](/docs/connect/managed-auth/connect-link/) to start the account connection flow.
If you’re building your own app, you’ll need to provide these credentials to the environment, or retrieve them from your secrets store:
```env
# Used to authorize requests to the Pipedream API
PIPEDREAM_CLIENT_ID=your_client_id
PIPEDREAM_CLIENT_SECRET=your_client_secret
PIPEDREAM_ENVIRONMENT=development
PIPEDREAM_PROJECT_ID=your_project_id
```
1. Open an existing Pipedream project or create a new one at [pipedream.com/projects](https://pipedream.com/projects)
2. Click the **Settings** tab, then copy your **Project ID**
Pipedream uses OAuth to authorize requests to the REST API. To create an OAuth client:
1. Visit the [API settings](https://pipedream.com/settings/api) for your workspace
2. Create a new OAuth client and note the client ID and secret
You’ll need these when configuring the SDK and making API requests.
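Under the hood, the SDK exchanges your OAuth client for an API access token using a standard client-credentials flow. A hedged sketch of that request (treat the exact endpoint and field names as an assumption and prefer the SDK in practice):

```typescript
// Hedged sketch: the client-credentials exchange the SDK performs to
// authorize REST API requests. Fallback values are placeholders.
const tokenRequest = {
  url: "https://api.pipedream.com/v1/oauth/token",
  method: "POST" as const,
  body: {
    grant_type: "client_credentials",
    client_id: process.env.PIPEDREAM_CLIENT_ID ?? "your_client_id",
    client_secret: process.env.PIPEDREAM_CLIENT_SECRET ?? "your_client_secret",
  },
};
console.log(tokenRequest.body.grant_type);
```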
To securely initiate account connections for your users, you’ll need to generate a short-lived token for your users and use that in the [account connection flow](/docs/connect/managed-auth/quickstart/#connect-your-users-account). See [the docs on Connect tokens](/docs/connect/managed-auth/tokens/) for a general overview of why we need to create tokens and scope them to end users.
Check out the code below and **try it yourself**:
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";
// This code runs on your server
const client = new PipedreamClient({
projectEnvironment: process.env.PIPEDREAM_ENVIRONMENT, // "development" or "production"
clientId: process.env.PIPEDREAM_CLIENT_ID,
clientSecret: process.env.PIPEDREAM_CLIENT_SECRET,
projectId: process.env.PIPEDREAM_PROJECT_ID
});
// Create a token for a specific user
const { token, expires_at, connect_link_url } = await client.tokens.create({
external_user_id: "YOUR_USER_ID", // Replace with your user's ID
});
```
Once you have a token, return it to your frontend to start the account connection flow for the user, or redirect them to a Pipedream-hosted URL with [Connect Link](/docs/connect/managed-auth/quickstart/#or-use-connect-link).
Refer to the API docs for the [full set of parameters you can pass](/docs/connect/api-reference/create-connect-token) in the `ConnectTokenCreate` call.
You have two options when connecting an account for your user:
1. [Use the Pipedream SDK](/docs/connect/managed-auth/quickstart/#use-the-pipedream-sdk-in-your-frontend) in your frontend
2. [Use Connect Link](/docs/connect/managed-auth/quickstart/#or-use-connect-link) to deliver a hosted URL to your user
Use this method when you want to handle the account connection flow yourself, in your app. For example, you might want to show a **Connect Slack** button in your app that triggers the account connection flow.
First, install the [Pipedream SDK](https://www.npmjs.com/package/@pipedream/sdk) in your frontend:
```bash
npm i --save @pipedream/sdk
```
When the user connects an account in your product, [pass the token from your backend](/docs/connect/managed-auth/quickstart/#generate-a-short-lived-token) and call `connectAccount`. This opens a Pipedream iFrame that guides the user through the account connection.
Try the interactive demo below to connect an account after generating a token in the previous step:
```typescript Google Sheets
import { PipedreamClient } from "@pipedream/sdk"
// This code runs in the frontend using the token from your server
export default function Home() {
function connectAccount() {
const client = new PipedreamClient()
client.connectAccount({
app: "google_sheets",
token: "{connect_token}",
onSuccess: (account) => {
// Handle successful connection
console.log(`Account successfully connected: ${account.id}`)
},
onError: (err) => {
// Handle connection error
console.error(`Connection error: ${err.message}`)
}
})
}
return (
)
}
```
The GitHub, Notion, Gmail, and OpenAI examples are identical to the one above; only the `app` slug changes (`github`, `notion`, `gmail`, `openai`).
Use this option when you can’t execute JavaScript or open an iFrame in your environment (e.g. mobile apps) and instead want to share a URL with your end users.
The Connect Link URL opens a Pipedream-hosted page, guiding users through the account connection process. The URL is specific to the user and expires after 4 hours.
After generating a token in the [step above](/docs/connect/managed-auth/quickstart/#generate-a-short-lived-token), you can use the resulting Connect Link URL. Try it below:
Make sure to add the `app` parameter to the end of the URL to specify the app.
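For example, given a `connect_link_url` from the token creation step (the token value below is a placeholder), the `app` parameter can be appended like this:

```typescript
// Sketch: append the app's name slug to the Connect Link URL before
// sharing it with your user. The base URL is illustrative.
const connectLinkUrl =
  "https://pipedream.com/_static/connect.html?token=ctok_abc123";
const url = new URL(connectLinkUrl);
url.searchParams.set("app", "google_sheets"); // the app's name slug
console.log(url.toString());
```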
Check out the [full API docs](/docs/connect/api-reference/create-connect-token) for all parameters you can pass when creating tokens, including setting redirect URLs for success or error cases.
Now that your users have connected an account, you can use their auth in one of a few ways:
1. [Expose 10k+ tools](/docs/connect/components/) to your AI app or agent and call them on behalf of your customers
2. [Send custom requests](/docs/connect/api-proxy/) to any of the {PUBLIC_APPS}+ APIs using the Connect API proxy
3. [Use Pipedream’s visual workflow builder](/docs/connect/workflows/) to define complex logic to run on behalf of your users
4. [Embed Pipedream components directly in your app](/docs/connect/components/) to run actions and triggers on their behalf
* Test end to end in [development](/docs/connect/managed-auth/environments/)
* Ship to production!
# Connect Tokens
Source: https://pipedream.com/docs/connect/managed-auth/tokens
When you initiate account connection for your end users, you must either:
1. Generate a secure, short-lived token scoped to the end user, or
2. Use the [Connect Link](/docs/connect/managed-auth/connect-link/) feature to generate a URL that guides the user through the account connection flow without any frontend work on your side.
Here, we’ll show you how to generate tokens for your users, return them to your frontend, and pass them to the account connection flow.
Use tokens when you want to handle the account connection flow yourself, in your app’s UI. For example, you might want to show a **Connect Slack** button in your app that triggers the account connection flow for Slack, or launch the flow in a modal.
Connect tokens currently have a 4-hour expiry, and can only be used once.
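Because tokens are short-lived and single-use, a backend that caches them should check both conditions before reuse. A hedged sketch (the `StoredToken` shape is illustrative, not an SDK type):

```typescript
// Sketch: track whether a cached Connect token is still usable.
// Tokens expire after 4 hours and can only be used once.
interface StoredToken {
  token: string;
  expiresAt: string; // expires_at from the tokens.create response
  used: boolean;
}

function isTokenUsable(t: StoredToken, now: Date = new Date()): boolean {
  return !t.used && Date.parse(t.expiresAt) > now.getTime();
}

const fresh: StoredToken = {
  token: "ctok_abc123", // placeholder
  expiresAt: "2030-01-01T00:00:00Z",
  used: false,
};
console.log(isTokenUsable(fresh, new Date("2029-12-31T23:00:00Z")));
```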
## Creating a token
See docs on [the `/tokens` endpoint](/docs/connect/api-reference/create-connect-token) to create new tokens.
## Webhooks
When you generate a token, you can specify a `webhook_uri` where Pipedream will deliver updates on the account connection flow. This is useful if you want to update your UI based on the status of the account connection flow, get a log of errors, and more.
[See the webhooks docs](/docs/connect/managed-auth/webhooks/) for more information.
## Tokens are scoped to end users and environments
When you [create a new Connect token](/docs/connect/api-reference/create-connect-token), you pass an `external_user_id` and an `environment`. See the docs on [environments](/docs/connect/managed-auth/environments/) for more information on passing environment in the SDK and API.
Tokens are scoped to this user and environment. When the user successfully connects an account with that token, it will be saved for that `external_user_id` in the specified environment.
# Troubleshooting
Source: https://pipedream.com/docs/connect/managed-auth/troubleshooting
Below are some common errors when connecting your users’ accounts via Pipedream Connect.
### Error creating a Connect token
> Error creating token: Error: Failed to obtain OAuth token: Response Error: 401 Unauthorized
Authorization to the Pipedream API failed when creating the Connect token. Double-check the client ID or secret for your [Pipedream OAuth client](/docs/connect/api-reference/authentication).
### Error connecting an account
Most errors when connecting an account are related to the [Connect token](/docs/connect/managed-auth/tokens/), which Pipedream validates from the Connect iFrame.
#### Common errors
> This link has expired. Please request a new one from the app developer.
> This session has expired. Please refresh the page to try again.
#### Troubleshooting steps
Pipedream typically returns an explicit error message in the HTTP response of the token validation network call directly from the iFrame in the client. To check for errors, start the account connection flow in a browser and open the developer console to view the network requests.
Filter for requests to
```bash
https://api.pipedream.com/v1/connect/tokens
```
and check the response for error messages.
#### Token validation errors
> The Pipedream Connect token is invalid. Please generate a new one and try again.
Connect tokens expire and can only be used once. Generate a new token and try again.
> App not found. Please check your app id.
Double-check the app slug you’re passing [when connecting your user’s account](/docs/connect/managed-auth/quickstart/#connect-your-users-account).
### Connection failed. Please retry or contact support.
The user may have closed the OAuth popup window without completing authorization.
If you’re still having trouble or hitting an error that isn’t listed here, [get in touch with us](https://pipedream.com/support). We’d love to help.
# Users
Source: https://pipedream.com/docs/connect/managed-auth/users
To view or delete your users’ connected accounts:
1. Open your project in Pipedream
2. Click the **Connect** tab on the left
3. Click the **Users** tab at the top
You’ll see a list of all users, their connected accounts, and the option to delete any accounts from the UI. You can also retrieve and delete all your users via the API ([see the docs for reference](/docs/connect/api-reference/)).
## Connecting multiple accounts
Users can connect multiple accounts for many different apps, or for the same app (e.g., I can connect my Notion and Gmail accounts, as well as accounts for multiple Slack workspaces).
When retrieving account information [from the API](/docs/connect/api-reference/list-accounts), you can filter by `external_user_id` and / or `app` to retrieve information for a specific user and / or app.
When running workflows on behalf of an end user, right now you can only use a single account for a given app. If there are multiple connected accounts for that app, **Pipedream will use the most recently created account**. See more info on running workflows for your users [here](/docs/connect/workflows/).
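The selection rule above can be sketched client-side. The account objects here use a simplified subset of the fields returned by the accounts API:

```typescript
// Sketch: replicate Pipedream's rule of using the most recently
// created connected account for a given app.
interface ConnectedAccount {
  id: string;
  app: string; // the app's name slug, e.g. "slack"
  created_at: string; // ISO timestamp
}

function mostRecentAccount(
  accounts: ConnectedAccount[],
  app: string,
): ConnectedAccount | undefined {
  return accounts
    .filter((a) => a.app === app)
    .sort((a, b) => Date.parse(b.created_at) - Date.parse(a.created_at))[0];
}

const accounts: ConnectedAccount[] = [
  { id: "apn_1", app: "slack", created_at: "2024-01-01T00:00:00Z" },
  { id: "apn_2", app: "slack", created_at: "2024-06-01T00:00:00Z" },
  { id: "apn_3", app: "notion", created_at: "2024-03-01T00:00:00Z" },
];
console.log(mostRecentAccount(accounts, "slack")?.id);
```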
## Deleting accounts
You can delete an individual connected account or an entire user and all associated accounts and resources from the UI or via the API.
### Deleting individual connected accounts
If you need more granular control, you can delete specific connected accounts instead of removing the entire user.
#### From the UI
1. Open the project in Pipedream
2. Navigate to the **Users** tab under **Connect**
3. Find the user whose accounts you want to manage
4. View all connected accounts for that user in the expanded section
5. Click the **Delete** button next to the specific account you want to remove
This allows for more granular control over which integrated services remain accessible to your users.
#### Via the API
You can delete specific connected accounts programmatically:
```bash
curl -X DELETE "https://api.pipedream.com/v1/connect/{project_id}/accounts/{account_id}" \
-H "Authorization: Bearer {access_token}"
```
For complete API details including TypeScript and Node.js examples, [refer to the API reference](/docs/connect/api-reference/delete-account).
### Deleting users
When you delete a user, all of their connected accounts and deployed resources (if any) are permanently removed from Pipedream. There are two ways to delete users:
#### From the UI
1. Open the project in Pipedream
2. Navigate to the **Users** tab under **Connect**
3. Locate the user you wish to delete
4. Click the **Delete User** button from the overflow menu at the end of the row
Deleting a user is permanent and cannot be undone. All connected accounts for this user will be permanently removed.
#### Via the API
You can delete a user programmatically using the Pipedream API:
```bash
curl -X DELETE "https://api.pipedream.com/v1/connect/{project_id}/users/{external_user_id}" \
-H "Authorization: Bearer {access_token}"
```
For complete API details including TypeScript and Node.js examples, see the [API reference](/docs/connect/api-reference/delete-external-user).
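Both delete endpoints follow the same path pattern. A small sketch of building these URLs from the curl examples above (the IDs are placeholders):

```typescript
// Sketch: build the Connect API delete URLs. encodeURIComponent guards
// against external user IDs that contain special characters.
const API_BASE = "https://api.pipedream.com/v1/connect";

function deleteAccountUrl(projectId: string, accountId: string): string {
  return `${API_BASE}/${projectId}/accounts/${encodeURIComponent(accountId)}`;
}

function deleteUserUrl(projectId: string, externalUserId: string): string {
  return `${API_BASE}/${projectId}/users/${encodeURIComponent(externalUserId)}`;
}

console.log(deleteUserUrl("proj_abc123", "user 42"));
```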
# Connect Webhooks
Source: https://pipedream.com/docs/connect/managed-auth/webhooks
When you [generate a Connect token](/docs/connect/managed-auth/quickstart/#generate-a-short-lived-token), you can pass a `webhook_uri` parameter. Pipedream will send a POST request to this URL when the user completes the connection flow, or if an error occurs at any point. [See the API docs](/docs/connect/api-reference/create-connect-token) for details.
## Webhook events
* `CONNECTION_SUCCESS` - Sent when the user successfully connects their account
* `CONNECTION_ERROR` - Sent when an error occurs during the connection flow
## Webhook payload
### Successful connection
Please note that user credentials are not sent in the webhook request. To retrieve credentials, use the [Connect API to fetch the account](/docs/connect/api-reference/retrieve-account) using the `account.id` provided in the webhook payload.
```json
{
"event": "CONNECTION_SUCCESS",
"connect_token": "abc123",
"environment": "production",
"connect_session_id": 123,
"account": {
"id": "apn_abc123",
"name": "My Slack workspace",
"external_id": "U123456",
"healthy": true,
"dead": false,
"app": {
"id": "app_abc123",
"name_slug": "slack",
"name": "Slack",
"auth_type": "oauth",
"description": "Slack is a channel-based messaging platform",
"img_src": "https://assets.pipedream.net/icons/slack.svg",
"custom_fields_json": [],
"categories": "Communication",
},
"created_at": "2021-09-01T00:00:00Z",
"updated_at": "2021-09-01T00:00:00Z",
}
}
```
### Error
```json
{
"event": "CONNECTION_ERROR",
"connect_token": "abc123",
"environment": "production",
"connect_session_id": 123,
"error": "You've hit your limit on the number of external users you can connect."
}
```
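A minimal handler for these two payloads might look like the sketch below. The types cover only the fields used here; remember that credentials are never included in the payload, so fetch the account separately by `account.id`:

```typescript
// Sketch: dispatch on the two Connect webhook event types.
type ConnectWebhook =
  | {
      event: "CONNECTION_SUCCESS";
      connect_token: string;
      account: { id: string; name: string };
    }
  | { event: "CONNECTION_ERROR"; connect_token: string; error: string };

function handleConnectWebhook(payload: ConnectWebhook): string {
  if (payload.event === "CONNECTION_SUCCESS") {
    // Retrieve credentials via the Connect API using payload.account.id
    return `connected:${payload.account.id}`;
  }
  return `error:${payload.error}`;
}

console.log(
  handleConnectWebhook({
    event: "CONNECTION_SUCCESS",
    connect_token: "abc123",
    account: { id: "apn_abc123", name: "My Slack workspace" },
  }),
);
```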
# MCP Servers
Source: https://pipedream.com/docs/connect/mcp
export const PUBLIC_APPS = '2,700';
Pipedream offers dedicated MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) servers for all of our {PUBLIC_APPS}+ integrated apps. These servers enable AI assistants like Claude or ChatGPT to securely access and interact with thousands of APIs through a standardized communication protocol, performing real-world tasks using your or your users’ accounts.
Pipedream’s MCP servers are powered by [Pipedream Connect](https://pipedream.com/docs/connect) and include:
* Access to {PUBLIC_APPS}+ apps and APIs through a consistent interface
* Over 10,000 pre-built tools
* Fully-managed OAuth and secure credential storage
User credentials are encrypted at rest and all requests are made through Pipedream’s servers, never directly exposing credentials to AI models. Read more about how we protect user credentials [here](/docs/privacy-and-security/#third-party-oauth-grants-api-keys-and-environment-variables).
## Available MCP servers
Pipedream provides MCP servers for all our [supported apps](https://mcp.pipedream.com/). Each app has its own dedicated MCP server with tools specific to that API. For example:
* **[Slack](https://mcp.pipedream.com/app/slack)**: Send messages, manage channels, create reminders, and more
* **[GitHub](https://mcp.pipedream.com/app/github)**: Create issues, manage pull requests, search repositories
* **[Google Sheets](https://mcp.pipedream.com/app/google-sheets)**: Read and write data, format cells, create charts
Explore the full list of available MCP servers at [mcp.pipedream.com](https://mcp.pipedream.com).
## Getting started
You can use Pipedream’s MCP servers in two ways:
1. **[As a developer](/docs/connect/mcp/developers/)**: Host your own MCP servers for your application or organization
2. **[As an end user](/docs/connect/mcp/users/)**: Connect your accounts through our hosted MCP servers at [mcp.pipedream.com](https://mcp.pipedream.com)
**Try out Pipedream MCP in our chat app at [chat.pipedream.com](https://chat.pipedream.com)** and explore the code [here](https://github.com/PipedreamHQ/mcp).
## Security
Like the rest of Pipedream Connect, MCP servers follow strict security best practices:
* **Credential isolation**: Each user’s credentials are stored securely and isolated from other users
* **No credential exposure**: Credentials are never exposed to AI models or your client-side code
* **Revocable access**: Users can revoke access to their connected accounts at any time
For more information on security, see our [security documentation](/docs/privacy-and-security/).
## Use cases
Pipedream MCP enables AI assistants to perform a wide range of tasks:
* **Productivity automation**: Schedule meetings, send emails, create documents
* **Data analysis**: Query databases, analyze spreadsheets, generate reports
* **Content creation**: Post social media updates, create marketing materials
* **Customer support**: Respond to inquiries, create tickets, update CRM records
* **Developer workflows**: Create issues, review code, deploy applications
## Supported tools
* Each MCP server provides tools specific to that app. Tools are automatically created based on Pipedream’s [registry of pre-built actions](https://github.com/PipedreamHQ/pipedream/tree/master/components)
* You can find the supported tools for a given app on its MCP server page or search for specific actions here: [pipedream.com/explore](https://pipedream.com/explore#popular-actions)
## Pricing
* Anyone can use Pipedream’s hosted MCP servers for their own use **for free**
* To deploy Pipedream’s MCP servers to your own app or agent, you can get started for free in development mode
* [Visit the pricing page](https://pipedream.com/pricing?plan=Connect) when you’re ready to ship to production
## Additional resources
* [Pipedream hosted MCP servers](https://mcp.pipedream.com)
* [MCP official spec](https://modelcontextprotocol.io/)
* [Pipedream MCP reference implementation](https://github.com/PipedreamHQ/pipedream/tree/master/modelcontextprotocol)
* [MCP inspector tool](https://modelcontextprotocol.io/docs/tools/inspector/)
# App Discovery
Source: https://pipedream.com/docs/connect/mcp/app-discovery
export const PUBLIC_APPS = '2,700';
Discover and configure available MCP servers programmatically, with automatic app extraction from prompts.
## Overview
Pipedream provides {PUBLIC_APPS}+ APIs as MCP servers. Each server corresponds to an app integration (like Notion, Gmail, or Slack) and has its own specific set of tools. This page covers how to discover available apps and enable automatic app discovery from prompts.
## Discovering available MCP servers
### Search for available apps
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";
// Initialize the Pipedream SDK client
const client = new PipedreamClient({
projectEnvironment: PIPEDREAM_ENVIRONMENT,
clientId: PIPEDREAM_CLIENT_ID,
clientSecret: PIPEDREAM_CLIENT_SECRET,
projectId: PIPEDREAM_PROJECT_ID
});
// Search for Google Sheets apps, sorted by featured weight
const apps = await client.apps.list({
q: "google sheets",
sort_key: "featured_weight",
sort_direction: "desc"
});
// Each app has these key properties:
// - name_slug: Used in the MCP server URL (e.g., "google_sheets")
// - name: Display name (e.g., "Google Sheets")
```
```python Python
from pipedream import Pipedream
# Initialize the Pipedream SDK client
pd = Pipedream(
project_id=PIPEDREAM_PROJECT_ID,
project_environment=PIPEDREAM_ENVIRONMENT,
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
# Search for Google Sheets apps, sorted by featured weight
apps = pd.apps.list(
q="google sheets",
sort_key="featured_weight",
sort_direction="desc"
)
# Each app has these key properties:
# - name_slug: Used in the MCP server URL (e.g., "google_sheets")
# - name: Display name (e.g., "Google Sheets")
```
## Automatic app discovery
Enable Pipedream to automatically extract relevant apps from a given prompt using the `appDiscovery` parameter.
Try this out for yourself at [chat.pipedream.com](https://chat.pipedream.com)
App discovery currently requires [full-config mode](/docs/connect/mcp/developers#full-config) to be enabled.
### Enabling app discovery
Add the `appDiscovery=true` parameter to your MCP server requests:
| Header | Query Param | Value | Required? |
| -------------------- | -------------- | ------ | --------- |
| `x-pd-app-discovery` | `appDiscovery` | `true` | No |
### How it works
When app discovery is enabled:
1. Pipedream analyzes the incoming prompt to identify which apps are most relevant
2. The initial tool call responds with an array of relevant apps
3. When the client reloads its available tools, it will have tools for the relevant apps
### Examples
#### Basic setup with app discovery
```typescript TypeScript
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { PipedreamClient } from "@pipedream/sdk";
// Get access token
const client = new PipedreamClient({
projectEnvironment: PIPEDREAM_ENVIRONMENT,
clientId: PIPEDREAM_CLIENT_ID,
clientSecret: PIPEDREAM_CLIENT_SECRET,
projectId: PIPEDREAM_PROJECT_ID
});
const accessToken = await client.rawAccessToken;
// Configure MCP transport with app discovery enabled
const transport = new StreamableHTTPClientTransport(new URL(serverUrl), {
requestInit: {
headers: {
"Authorization": `Bearer ${accessToken}`,
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": EXTERNAL_USER_ID,
"x-pd-app-discovery": "true",
"x-pd-tool-mode": "full-config"
}
}
});
```
```python Python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from pipedream import Pipedream
# Get access token
pd = Pipedream(
project_id=PIPEDREAM_PROJECT_ID,
project_environment=PIPEDREAM_ENVIRONMENT,
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
response = pd.oauth_tokens.create(
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
access_token = response.access_token
# Configure MCP client with app discovery enabled
headers = {
"Authorization": f"Bearer {access_token}",
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": EXTERNAL_USER_ID,
"x-pd-app-discovery": "true",
"x-pd-tool-mode": "full-config"
}
# Create MCP client connection with app discovery
async with streamablehttp_client(server_url, headers=headers) as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
```
#### How app discovery works with different prompts
App discovery automatically detects which apps are referenced in user prompts:
| User Input | Apps Detected |
| ---------------------------------------------------------- | -------------------------------------------------------------------------- |
| "Send a message to the #general channel in Slack" | slack |
| "Create a task in Notion and send a notification to Slack" | notion, slack |
| "Add this email to my spreadsheet" | google\_sheets, microsoft\_excel, airtable\_oauth, zoho\_sheet, smartsheet |
| "Schedule a meeting and update my CRM" | google\_calendar, zoho\_crm, hubspot |
### Limitations
* App discovery currently requires `full-config` mode to be enabled
* Detection accuracy depends heavily on the clarity of the prompt
# Develop with Pipedream MCP
Source: https://pipedream.com/docs/connect/mcp/developers
Add Pipedream MCP to your app or agent to make tool calls on behalf of your users to {PUBLIC_APPS}+ APIs and 10,000+ tools.
export const PUBLIC_APPS = '2,700';
Pipedream Connect includes built-in user authentication for [every MCP server](https://mcp.pipedream.com), which means you don’t need to build any authorization flows or deal with token storage and refresh in order to make authenticated requests on behalf of your users. [Learn more here](/docs/connect/mcp/developers/#user-account-connections).
## Overview
Pipedream’s MCP server code is [publicly available on GitHub](https://github.com/PipedreamHQ/pipedream/blob/master/modelcontextprotocol/README.md), and you have two options for using Pipedream’s MCP server in your app:
1. [Use Pipedream’s remote MCP server](/docs/connect/mcp/developers#use-pipedream’s-remote-mcp-server) (most common)
2. [Self-host Pipedream’s MCP server](/docs/connect/mcp/developers#self-host-pipedream’s-mcp-server)
**Try out Pipedream MCP in our chat app at [chat.pipedream.com](https://chat.pipedream.com)** and explore the code [here](https://github.com/PipedreamHQ/mcp).
### Key Pipedream concepts to understand
**`external_user_id`**
* This is your user’s ID, in your system: whatever you use to uniquely identify them
* Requests made for that user ID are coupled to that end user and their connected accounts ([learn more](/docs/connect/managed-auth/users))
**`app`**
* The app’s “name slug” (the unique identifier for the app)
* Check out the [app discovery](/docs/connect/mcp/app-discovery) docs to learn how to discover and use available apps
## Tool modes
Pipedream MCP supports a few different methods for interacting with tools:
1. [Sub-agent](/docs/connect/mcp/developers#sub-agent) (default)
2. [Full config](/docs/connect/mcp/developers#full-config)
3. [Tools only](/docs/connect/mcp/developers#tools-only)
### **Sub-agent**
When using Pipedream MCP in sub-agent mode, all tools you expose to your LLM take a single input: **`instruction`**.
The Pipedream MCP server passes the **`instruction`** to an LLM, which uses a set of sub-agents, each with narrowly scoped instructions and additional tools, to configure and execute the top-level user prompt.
* The benefit with this approach is that sub-agent mode abstracts a lot of the complexity with handling things like [remote options](/docs/connect/components/#configure-the-component) and [dynamic props](/docs/connect/components/#configure-dynamic-props), especially for MCP clients that don’t automatically [reload tools](https://modelcontextprotocol.io/docs/concepts/tools#tool-discovery-and-updates).
* However, one downside is that you hand over some of the control and observability to Pipedream in this model.
While in Beta, Pipedream covers the cost of the LLM tokens used in sub-agent mode. We’ll likely pass these costs on to developers in the future.
```json
{
"name": "GOOGLE_SHEETS-ADD-SINGLE-ROW",
"description": "Add a single row of data to Google Sheets. [See the documentation](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append)",
"inputSchema": {
"type": "object",
"properties": {
"instruction": {
"type": "string"
}
},
"required": [
"instruction"
],
"additionalProperties": false,
"$schema": "http://json-schema.org/draft-07/schema#"
}
}
```
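In sub-agent mode, constructing a tool call is simple on the client side: every tool accepts a single `instruction` string, matching the schema above. Here's a minimal sketch (the helper name is hypothetical, not part of any Pipedream SDK):

```python
def build_sub_agent_args(instruction):
    """Build the arguments payload for a sub-agent mode tool call.

    In sub-agent mode every tool accepts exactly one input, `instruction`,
    and the schema sets additionalProperties to false, so nothing else
    may be included.
    """
    if not isinstance(instruction, str) or not instruction.strip():
        raise ValueError("instruction must be a non-empty string")
    return {"instruction": instruction}

args = build_sub_agent_args(
    "Add a row with name=Ada and email=ada@example.com to my Leads sheet"
)
```

You would then pass `args` to your MCP client's tool-call method, e.g. `session.call_tool("GOOGLE_SHEETS-ADD-SINGLE-ROW", args)`.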
### **Full-config**
Full-config mode enables support for loading and configuring dynamic props. This mode provides the most flexibility for tool configuration and is required for certain features like app discovery.
Use `full-config` mode when you need:
* Complete control over tool configuration
* Support for [dynamic props](/docs/connect/api-reference/reload-component-props)
* [App discovery](/docs/connect/mcp/app-discovery) functionality
#### Configuring dynamic props
* Tools that use [dynamic props](/docs/connect/api-reference/reload-component-props) can’t be configured in one shot, as the full prop definition isn’t known until certain inputs are defined.
* For example, the full set of props for `google_sheets-add-single-row` aren’t known until you configure the `hasHeaders` prop. Once we know if there’s a header row, we can retrieve the column names from the header row and make them available as props that can be configured.
* As you call each tool, you should reload the available tools for the server, and we’ll expose meta tools for configuration, such as `begin_configuration_google_sheets-add-single-row`, which removes the rest of the tools and exposes only those relevant to the configuration.
Your MCP client must be able to reload the list of available tools on each turn. See [here](https://github.com/PipedreamHQ/mcp) for example implementations.
Set the `toolMode` parameter to `full-config`:
```typescript TypeScript
const transport = new StreamableHTTPClientTransport(new URL(serverUrl), {
requestInit: {
headers: {
"Authorization": `Bearer ${accessToken}`,
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": EXTERNAL_USER_ID,
"x-pd-app-slug": APP_SLUG,
"x-pd-tool-mode": "full-config" // Enable full-config mode
}
}
});
```
```python Python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
# Configure headers for full-config mode
headers = {
"Authorization": f"Bearer {access_token}",
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": EXTERNAL_USER_ID,
"x-pd-app-slug": APP_SLUG,
"x-pd-tool-mode": "full-config" # Enable full-config mode
}
# Create MCP client connection with full-config mode
async with streamablehttp_client(server_url, headers=headers) as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
```
```json
{
"name": "google_sheets-add-single-row",
"description": "Add a single row of data to Google Sheets. [See the documentation](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append)",
"inputSchema": {
"type": "object",
"properties": {
"drive": {
"anyOf": [
{
"anyOf": [
{
"not": {}
},
{
"type": "string"
}
]
},
{
"type": "null"
}
],
"description": "Defaults to `My Drive`. To select a [Shared Drive](https://support.google.com/a/users/answer/9310351) instead, select it from this list.\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: drive"
},
"sheetId": {
"type": "string",
"description": "Select a spreadsheet or provide a spreadsheet ID\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: sheetId"
},
"worksheetId": {
"type": "string",
"description": "Select a worksheet or enter a custom expression. When referencing a spreadsheet dynamically, you must provide a custom expression for the worksheet.\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: worksheetId"
},
"hasHeaders": {
"type": "boolean",
"description": "If the first row of your document has headers, we'll retrieve them to make it easy to enter the value for each column. Note: When using a dynamic reference for the worksheet ID (e.g. `{{steps.foo.$return_value}}`), this setting is ignored."
}
},
"required": [
"sheetId",
"worksheetId",
"hasHeaders"
],
"additionalProperties": false,
"$schema": "http://json-schema.org/draft-07/schema#"
}
}
```
### **Tools-only**
To handle all tool configuration and tool calls yourself, use `tools-only` mode.
Some tools can be fully configured and executed in a single shot, but not all tools work in tools-only mode.
```json
{
"name": "google_sheets-add-single-row",
"description": "Add a single row of data to Google Sheets. [See the documentation](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append)",
"inputSchema": {
"type": "object",
"properties": {
"drive": {
"anyOf": [
{
"anyOf": [
{
"not": {}
},
{
"type": "string"
}
]
},
{
"type": "null"
}
],
"description": "Defaults to `My Drive`. To select a [Shared Drive](https://support.google.com/a/users/answer/9310351) instead, select it from this list.\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: drive"
},
"sheetId": {
"type": "string",
"description": "Select a spreadsheet or provide a spreadsheet ID\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: sheetId"
},
"worksheetId": {
"type": "string",
"description": "Select a worksheet or enter a custom expression. When referencing a spreadsheet dynamically, you must provide a custom expression for the worksheet.\n\nYou can use the \"CONFIGURE_COMPONENT\" tool using these parameters to get the values. key: google_sheets-add-single-row, propName: worksheetId"
},
"hasHeaders": {
"type": "boolean",
"description": "If the first row of your document has headers, we'll retrieve them to make it easy to enter the value for each column. Note: When using a dynamic reference for the worksheet ID (e.g. `{{steps.foo.$return_value}}`), this setting is ignored."
}
},
"required": [
"sheetId",
"worksheetId",
"hasHeaders"
],
"additionalProperties": false,
"$schema": "http://json-schema.org/draft-07/schema#"
}
}
```
## Getting started
### Prerequisites
To use either the remote or self-hosted MCP server, you’ll need:
1. A [Pipedream account](https://pipedream.com/auth/signup)
2. A [Pipedream project](/docs/projects/#creating-projects). Accounts connected via MCP will be stored here.
3. [Pipedream OAuth credentials](/docs/connect/api-reference/authentication)
#### Set up your environment
Set the following environment variables:
```env
PIPEDREAM_CLIENT_ID=your_client_id
PIPEDREAM_CLIENT_SECRET=your_client_secret
PIPEDREAM_PROJECT_ID=your_project_id # proj_xxxxxxx
PIPEDREAM_ENVIRONMENT=development # development | production
```
Learn more about [environments in Pipedream Connect](/docs/connect/managed-auth/environments/).
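Before making requests, it can help to fail fast on a misconfigured environment. Here's a small validation sketch based on the variables above (the helper is illustrative, not part of the SDK):

```python
REQUIRED_VARS = [
    "PIPEDREAM_CLIENT_ID",
    "PIPEDREAM_CLIENT_SECRET",
    "PIPEDREAM_PROJECT_ID",
    "PIPEDREAM_ENVIRONMENT",
]

def check_pipedream_env(env):
    """Return a list of problems with the Pipedream Connect config."""
    problems = [f"missing {name}" for name in REQUIRED_VARS if not env.get(name)]
    project_id = env.get("PIPEDREAM_PROJECT_ID", "")
    if project_id and not project_id.startswith("proj_"):
        problems.append("PIPEDREAM_PROJECT_ID should start with proj_")
    environment = env.get("PIPEDREAM_ENVIRONMENT", "")
    if environment and environment not in ("development", "production"):
        problems.append("PIPEDREAM_ENVIRONMENT must be development or production")
    return problems

# At startup: problems = check_pipedream_env(os.environ)
```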
### Authentication
#### Developer authentication
Your application authenticates with Pipedream using client credential OAuth. [See below](/docs/connect/mcp/developers/#api-authentication) for details.
#### User account connections
One of the core features of Pipedream Connect and the MCP server is the ability for your users to easily connect their accounts without having to build any of the authorization flow or handle token storage.
You can handle account connections in one of two ways in your app:
##### Add a button in your UI
* Use Pipedream’s [frontend SDK](/docs/connect/managed-auth/quickstart/#use-the-pipedream-sdk-in-your-frontend) to let users connect their account directly in your UI
* You can see an example of this when you connect any account in [mcp.pipedream.com](https://mcp.pipedream.com)
##### Return a link
* Use [Connect Link](/docs/connect/managed-auth/quickstart/#or-use-connect-link) to let your users connect their account in a new browser tab
* This is handled automatically by Pipedream's MCP server and **there’s no additional implementation required**
* If a user doesn’t have a connected account that’s required for a given tool call, the server will return a URL in the tool call response:
```sh
https://pipedream.com/_static/connect.html?token=ctok_xxxxxxx&connectLink=true&app={appSlug}
```
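Your app can watch tool call responses for that URL and surface it to the user. A rough sketch (the regex and helper are assumptions about the response shape, not a documented format):

```python
import re

# Matches the Connect Link URL shape shown above
CONNECT_LINK_RE = re.compile(
    r"https://pipedream\.com/_static/connect\.html\?[^\s\"']+"
)

def extract_connect_link(tool_response_text):
    """Return the Connect Link URL if the tool response contains one, else None."""
    match = CONNECT_LINK_RE.search(tool_response_text)
    return match.group(0) if match else None
```

If a link is found, open it for the user (or render it as a button), then retry the tool call once they've connected their account.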
### Discover available MCP servers
Pipedream provides [{PUBLIC_APPS}+ APIs as MCP servers](https://mcp.pipedream.com). Each server corresponds to an app integration (like Notion, Gmail, or Slack) and has its own specific set of tools.
For detailed information on discovering apps and enabling automatic app discovery, check out the [app discovery](/docs/connect/mcp/app-discovery) section.
### Use Pipedream’s remote MCP server
The remote MCP server is in beta, and we’re looking for feedback. During the beta, the API is subject to change.
#### Supported transport types
The Pipedream MCP server supports both SSE and streamable HTTP transport types dynamically, with no configuration required by the developer or MCP client.
#### Base URL
```sh
https://remote.mcp.pipedream.net
```
#### API Authentication
To authenticate requests to Pipedream’s MCP server, you need to include an access token with every HTTP request. Here’s how to get it:
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";
// Initialize the Pipedream SDK client
const client = new PipedreamClient({
projectEnvironment: PIPEDREAM_ENVIRONMENT,
clientId: PIPEDREAM_CLIENT_ID,
clientSecret: PIPEDREAM_CLIENT_SECRET,
projectId: PIPEDREAM_PROJECT_ID
});
// Get access token for MCP server auth
const response = await client.oauthTokens.create({
client_id: PIPEDREAM_CLIENT_ID,
client_secret: PIPEDREAM_CLIENT_SECRET,
});
const accessToken = response.access_token;
console.log(accessToken);
```
```python Python
from pipedream import Pipedream
# Initialize the Pipedream SDK client
pd = Pipedream(
project_id=PIPEDREAM_PROJECT_ID,
project_environment=PIPEDREAM_ENVIRONMENT,
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
# Get access token for MCP server auth
response = pd.oauth_tokens.create(
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
access_token = response.access_token
print(access_token)
```
```sh cURL
curl -s -X POST https://api.pipedream.com/v1/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "'$PIPEDREAM_CLIENT_ID'",
"client_secret": "'$PIPEDREAM_CLIENT_SECRET'"
}'
```
#### Params
* Below are params that you should send with every HTTP request to Pipedream’s MCP server.
* To enable broad support for various MCP clients, you can pass these params via HTTP headers **or** as query params on the URL.
| Header | Query Param | Value | Required? |
| ----------------------- | ---------------- | ---------------------------------------- | -------------------------- |
| `x-pd-project-id` | `projectId` | `proj_xxxxxxx` | Yes |
| `x-pd-environment` | `environment` | `development`, `production` | Yes |
| `x-pd-external-user-id` | `externalUserId` | `` | Yes |
| `x-pd-app-slug` | `app` | `linear`, `notion`, etc | Yes\* |
| `x-pd-tool-mode`        | `toolMode`       | `sub-agent`, `tools-only`, `full-config` | No (defaults to `sub-agent`) |
| `x-pd-app-discovery` | `appDiscovery` | `true` | No |
\*Required unless using `appDiscovery=true`
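For clients that can't set custom headers, the same params can be appended to the server URL as query params. Here's a sketch of building that URL from the table above (the helper itself is illustrative; you still send the `Authorization` header or token separately):

```python
from urllib.parse import urlencode

BASE_URL = "https://remote.mcp.pipedream.net"

def build_mcp_url(project_id, environment, external_user_id,
                  app=None, tool_mode=None, app_discovery=False):
    """Build the MCP server URL with params passed as query params
    instead of headers (both are supported)."""
    if app is None and not app_discovery:
        raise ValueError("app is required unless app discovery is enabled")
    params = {
        "projectId": project_id,
        "environment": environment,
        "externalUserId": external_user_id,
    }
    if app:
        params["app"] = app
    if tool_mode:
        params["toolMode"] = tool_mode
    if app_discovery:
        params["appDiscovery"] = "true"
    return f"{BASE_URL}?{urlencode(params)}"
```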
#### Example request
```typescript TypeScript
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { PipedreamClient } from "@pipedream/sdk";
// Initialize the Pipedream SDK client
const client = new PipedreamClient({
projectEnvironment: PIPEDREAM_ENVIRONMENT,
clientId: PIPEDREAM_CLIENT_ID,
clientSecret: PIPEDREAM_CLIENT_SECRET,
projectId: PIPEDREAM_PROJECT_ID
});
// Retrieve your developer access token via the Pipedream SDK
const accessToken = await client.rawAccessToken;
const serverUrl = MCP_SERVER_URL || `https://remote.mcp.pipedream.net`;
const transport = new StreamableHTTPClientTransport(new URL(serverUrl), {
requestInit: {
headers: {
"Authorization": `Bearer ${accessToken}`,
"x-pd-project-id": PIPEDREAM_PROJECT_ID, // proj_xxxxxxx
"x-pd-environment": PIPEDREAM_ENVIRONMENT, // development | production
"x-pd-external-user-id": EXTERNAL_USER_ID, // the user's ID from your system
"x-pd-app-slug": APP_SLUG, // notion, linear, gmail, etc
}
}
});
```
```python Python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from pipedream import Pipedream
# Initialize the Pipedream SDK client
pd = Pipedream(
project_id=PIPEDREAM_PROJECT_ID,
project_environment=PIPEDREAM_ENVIRONMENT,
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
# Retrieve your developer access token via the Pipedream SDK
response = pd.oauth_tokens.create(
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
access_token = response.access_token
server_url = MCP_SERVER_URL or "https://remote.mcp.pipedream.net"
# Configure MCP client with authentication headers
headers = {
"Authorization": f"Bearer {access_token}",
"x-pd-project-id": PIPEDREAM_PROJECT_ID, # proj_xxxxxxx
"x-pd-environment": PIPEDREAM_ENVIRONMENT, # development | production
"x-pd-external-user-id": EXTERNAL_USER_ID, # the user's ID from your system
"x-pd-app-slug": APP_SLUG, # notion, linear, gmail, etc
}
# Create MCP client connection
async with streamablehttp_client(server_url, headers=headers) as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
# Now you can use the session to call tools
tools = await session.list_tools()
```
### Self-host Pipedream’s MCP server
Hosting the MCP server locally or in your app will expose these routes:
* `GET /:external_user_id/:app`: app-specific connection endpoint
* `POST /:external_user_id/:app/messages`: app-specific message handler
#### Using the `Dockerfile`
You can build and run the container from the [reference implementation](https://github.com/PipedreamHQ/pipedream/blob/master/modelcontextprotocol/Dockerfile):
```sh
docker build -t pipedream-connect .
docker run -d --name pd-mcp -p 3010:3010 --env-file .env pipedream-connect:latest
```
#### Running the server using npx
```sh
npx @pipedream/mcp sse
```
The current npx package only supports the `sse` transport type; `http` support is coming soon.
#### Running the server locally
You can also run the server locally and even customize the MCP server for your specific requirements:
```bash
# Clone the repo
git clone https://github.com/PipedreamHQ/pipedream
cd pipedream/modelcontextprotocol
# Install dependencies
pnpm install
# Start the server
pnpm dev:http
```
See the [MCP server README](https://github.com/PipedreamHQ/pipedream/blob/master/modelcontextprotocol/README.md) for detailed instructions on customization options.
#### Debugging
You can use the optional env var `PD_SDK_DEBUG` to print out all the requests and responses going to the Connect API:
```sh
PD_SDK_DEBUG=true pnpm dev:http
```
### Using the MCP inspector
The [MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector) can be helpful when debugging tool calls.
```sh
npx @modelcontextprotocol/inspector
```
Enter the server URL:
If using Pipedream’s remote server:
```sh
https://remote.mcp.pipedream.net/{external_user_id}/{app_slug}
```
If running locally:
```sh
http://localhost:3010/{external_user_id}/{app_slug}
```
## Using custom tools
Publish [custom tools](/docs/connect/components/custom-tools/) to your workspace to use them in the Pipedream MCP server for the relevant app. This lets you add custom and unique functionality that may not be available in the public registry.
# Using Pipedream MCP With OpenAI
Source: https://pipedream.com/docs/connect/mcp/openai
export const PUBLIC_APPS = '2,700';
Access {PUBLIC_APPS}+ APIs and 10,000+ tools in OpenAI using Pipedream Connect. MCP makes it easy to extend the capabilities of any LLM or agent, and Pipedream offers drop-in support for [calling tools in OpenAI](https://platform.openai.com/docs/guides/tools-remote-mcp).
Pipedream Connect includes built-in user authentication for [every MCP server](https://mcp.pipedream.com), which means you don’t need to build any authorization flows or deal with token storage and refresh in order to make authenticated requests on behalf of your users. [Learn more here](/docs/connect/mcp/developers/#user-account-connections).
## Testing in OpenAI’s API Playground
OpenAI provides an API playground for developers to test prompts and tool calling, which provides an easy way to test Pipedream MCP. Get started below.
Navigate to [OpenAI’s playground](https://platform.openai.com/playground/prompts?models=gpt-4.1) and sign in with your OpenAI account.
Click the **Create** button in the **Tools** section, then select **Pipedream**.
Enter a prompt and start chatting!
Refer to the instructions below when you’re ready to use Pipedream MCP in your app.
## Using Pipedream MCP in your app
To use Pipedream MCP with your own users, you need the following:
1. A [Pipedream account](https://pipedream.com/auth/signup)
2. A [Pipedream project](/docs/projects/#creating-projects) (accounts connected via MCP will be stored here)
3. [Pipedream OAuth credentials](/docs/connect/api-reference/authentication)
These are requirements for you, the developer. Your users do **not** need to sign up for Pipedream in order to connect their accounts in your app or agent.
Now set the following environment variables (learn more about environments in Pipedream Connect [here](/docs/connect/managed-auth/environments/)):
```env
OPENAI_API_KEY=your_openai_api_key
PIPEDREAM_CLIENT_ID=your_client_id
PIPEDREAM_CLIENT_SECRET=your_client_secret
PIPEDREAM_PROJECT_ID=your_project_id # proj_xxxxxxx
PIPEDREAM_ENVIRONMENT=development # development | production
```
[See here](/docs/connect/mcp/developers/#discover-available-mcp-servers) for guidance on discovering the apps Pipedream has available as MCP servers.
Below is an end-to-end example showing how to:
1. Initialize the Pipedream SDK
2. Find the relevant MCP server
3. Send a prompt to OpenAI with the MCP server as a tool call
```typescript TypeScript
import OpenAI from 'openai';
import { PipedreamClient } from "@pipedream/sdk";
// Initialize the Pipedream SDK client
const client = new PipedreamClient({
projectEnvironment: PIPEDREAM_ENVIRONMENT,
clientId: PIPEDREAM_CLIENT_ID,
clientSecret: PIPEDREAM_CLIENT_SECRET,
projectId: PIPEDREAM_PROJECT_ID
});
// Find the app to use for the MCP server
// For this example, we'll use Notion
const apps = await client.apps.list({ q: "notion" });
const appSlug = apps.data[0].name_slug; // e.g., "notion"
// Get access token for MCP server auth
const accessToken = await client.rawAccessToken;
// Send the unique ID that you use to identify this user in your system
const externalUserId = 'abc-123'; // Used in MCP URL to identify the user
// Initialize OpenAI client
const openai = new OpenAI();
// Make the OpenAI request with the MCP server
const response = await openai.responses.create({
model: 'gpt-4.1',
tools: [
{
type: 'mcp',
server_label: appSlug,
server_url: `https://remote.mcp.pipedream.net`,
headers: {
Authorization: `Bearer ${accessToken}`,
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": externalUserId,
"x-pd-app-slug": appSlug,
},
require_approval: 'never'
}
],
input: 'Summarize my most recently created Notion doc for me and help draft an email to our customers'
});
console.log(response);
```
```python Python
import openai
from pipedream import Pipedream
# Initialize the Pipedream SDK client
pd = Pipedream(
project_id=PIPEDREAM_PROJECT_ID,
project_environment=PIPEDREAM_ENVIRONMENT,
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
# Find the app to use for the MCP server
# For this example, we'll use Notion
apps = pd.apps.list(q="notion")
app_slug = apps.data[0].name_slug # e.g., "notion"
# Get access token for MCP server auth
token_response = pd.oauth_tokens.create(
client_id=PIPEDREAM_CLIENT_ID,
client_secret=PIPEDREAM_CLIENT_SECRET,
)
access_token = token_response.access_token
# Send the unique ID that you use to identify this user in your system
external_user_id = 'abc-123' # Used in MCP URL to identify the user
# Initialize OpenAI client
client = openai.OpenAI()
# Make the OpenAI request with the MCP server
response = client.responses.create(
model='gpt-4.1',
tools=[
{
"type": "mcp",
"server_label": app_slug,
"server_url": "https://remote.mcp.pipedream.net",
"headers": {
"Authorization": f"Bearer {access_token}",
"x-pd-project-id": PIPEDREAM_PROJECT_ID,
"x-pd-environment": PIPEDREAM_ENVIRONMENT,
"x-pd-external-user-id": external_user_id,
"x-pd-app-slug": app_slug,
},
"require_approval": "never"
}
],
input='Summarize my most recently created Notion doc for me and help draft an email to our customers'
)
print(response)
```
```sh cURL
# Step 1: Get access token from Pipedream
ACCESS_TOKEN=$(curl -s -X POST https://api.pipedream.com/v1/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "'$PIPEDREAM_CLIENT_ID'",
"client_secret": "'$PIPEDREAM_CLIENT_SECRET'"
}' | jq -r .access_token)
# Step 2: Find the app to use for MCP server
# Search for the Notion app
APP_SLUG=$(curl -s -X GET "https://api.pipedream.com/v1/apps?q=notion" \
-H "Authorization: Bearer $ACCESS_TOKEN" | jq -r '.data[0].name_slug')
# Step 3: Make request to OpenAI with MCP tool
curl -X POST https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input": "Summarize my most recently created Notion doc for me and help draft an email to our customers",
"tools": [
{
"type": "mcp",
"server_label": "Notion",
"server_url": "https://remote.mcp.pipedream.net",
"headers": {
"Authorization": "Bearer '"$ACCESS_TOKEN"'",
"x-pd-project-id": "'"$PIPEDREAM_PROJECT_ID"'",
"x-pd-environment": "'"$PIPEDREAM_ENVIRONMENT"'",
"x-pd-external-user-id": "abc-123",
"x-pd-app-slug": "'"$APP_SLUG"'"
},
"require_approval": "never"
}
]
}'
```
# Using Pipedream MCP as an end user
Source: https://pipedream.com/docs/connect/mcp/users
Set up MCP servers to use with any compatible MCP client, like Claude Desktop, Windsurf, Cursor, and VS Code.
* Navigate to [mcp.pipedream.com](https://mcp.pipedream.com) and sign in or create an account (this is a separate account from pipedream.com)
* Browse available MCP servers
* Follow the configuration instructions on the server page to add it to your preferred MCP client
* Connect your account (you can do this in the UI or the AI will prompt you when you first use a tool)
Ask the LLM or agent to perform tasks using your connected services. For example:
* “Send a message to my team in Slack”
* “Create a new issue in GitHub”
* “Add data to my Google Sheet”
The AI will refer to the configured tools available in its MCP servers to complete the task.
With MCP-enabled AI, you can:
* Send messages and manage communication
* Create and update documents
* Query and analyze data
* Automate workflows across your favorite tools
All using your own connected accounts with full control and security.
# Connect Quickstart
Source: https://pipedream.com/docs/connect/quickstart
Get started with Pipedream Connect in minutes using the CLI
Pipedream Connect provides a developer toolkit that lets you add 2,700+ integrations to your app or AI agent. The fastest way to get started is with the Pipedream CLI.
In this quickstart, you'll:
* Install the Pipedream CLI
* Create a Connect project
* Set up a Pipedream OAuth client to authenticate API requests
* Run the [SDK playground](https://pipedream.com/connect/demo) locally to explore the Pipedream SDK and pre-built tools
## Set up Connect with the CLI
**Using Homebrew:**
```bash
brew tap pipedreamhq/pd-cli
brew install pipedreamhq/pd-cli/pipedream
```
**From source:**
```bash
curl https://cli.pipedream.com/install | sh
```
Download the [Windows build](https://cli.pipedream.com/windows/amd64/latest/pd.zip), unzip it, and add `pd.exe` to your PATH.
Verify the installation:
```bash
pd --version
```
Authenticate with your Pipedream account:
```bash
pd login
```
This opens your browser to complete authentication. Once logged in, you can close the browser tab.
Run the Connect initialization command:
```bash
pd init connect
```
This interactive command walks you through:
* Creating a new Connect project (or selecting an existing one)
* Setting up a Pipedream OAuth client to authenticate requests to the Connect API
* Configuring your local development environment
The CLI automatically creates a `.env` file with your OAuth client credentials and project ID. Keep this file secure and don't commit it to version control.
The CLI sets up a local SDK playground where you can test Connect integrations:
```bash
cd your-project-name
npm run dev
```
Open [http://localhost:3000](http://localhost:3000) to see the playground running.
The SDK playground demonstrates:
* **Component browser** - Explore 10,000+ pre-built API operations (triggers and actions)
* **Managed auth** - Connect your users' accounts to 2,700+ apps
* **Component configuration** - Configure and test components with real API data
* **Live execution** - Run actions and see the results in real-time
In the SDK playground:
1. Search for an app (e.g., "Slack" or "Google Sheets")
2. Browse the available actions and triggers for that app
3. Select an action to configure (e.g., "Send Message to Channel")
4. Connect your account when prompted
5. Configure the action's inputs
6. Click **Run** to execute it and see the results
The playground shows example code for each operation and component.
## What's next?
Now that you have Connect running locally, explore these resources:
* [Pre-built tools](/docs/connect/components): add triggers and actions to your app
* [API proxy](/docs/connect/api-proxy): make custom API requests on behalf of users
* [Pipedream MCP](/docs/connect/mcp): add Connect to AI agents using Model Context Protocol
## Example: Quick SDK preview
The SDK playground demonstrates how to use the Connect SDK to execute actions and deploy triggers. Here's a simple example of running an action:
```typescript TypeScript
import { PipedreamClient } from '@pipedream/sdk';
const client = new PipedreamClient({
projectEnvironment: "development",
clientId: process.env.PIPEDREAM_CLIENT_ID,
clientSecret: process.env.PIPEDREAM_CLIENT_SECRET,
projectId: process.env.PIPEDREAM_PROJECT_ID,
});
// Run a pre-built Slack action
const result = await client.actions.run({
id: "slack-send-message-to-channel",
external_user_id: "user-123",
configured_props: {
slack: { authProvisionId: "apn_abc123" },
channel: "#general",
text: "Hello from Connect!",
},
});
```
The CLI's `pd init connect` command generates a complete example app with these patterns already implemented.
# Pipedream Connect use cases
Source: https://pipedream.com/docs/connect/use-cases
Developers use Pipedream Connect to build customer-facing API integrations into their products. It lets you build [in-app messaging](/docs/connect/use-cases/#in-app-messaging), [CRM syncs](/docs/connect/use-cases/#crm-syncs), [AI-driven products](/docs/connect/use-cases/#ai-products), and much more, all in a few minutes.
## Core value to app developers
In 20 years of building software, we’ve seen a common theme. No matter the product, your customers end up needing to connect your app to third-party APIs.
You might build real-time notifications with messaging apps, export customer data to databases or spreadsheets, ingest data from CRMs, or connect to any of the thousands of APIs and SaaS services your customers are using. These features are often tied to large contracts and Enterprise customers.
But it’s hard to justify the engineering effort required for these integrations. They’re a distraction from the core product. Once built, they’re hard to maintain. You have to securely manage auth, learn the nuances of each API, and improve the integration as your customers ask for new features. Managing these integrations is a huge context switch for any engineer. Most teams have trouble scaling this.
At Pipedream, our customers tell us a variant of this story every day. Pipedream Connect helps you build these features **in minutes**, for any app.
Once you add the core integration UI to your app, non-technical employees can also help to manage [the workflows](/docs/workflows/building-workflows/) that drive the backend logic. For example, if you’re building [in-app messaging](/docs/connect/use-cases/#in-app-messaging), once you add the UI to let users connect Slack, Discord, and other tools, anyone on your team can build workflows that format and deliver messages to your customers. This is a huge plus for many orgs: you still get to build a bespoke UI, directly in your app, suited to your customer need. But anyone in the company can collaborate on the workflows that power it.
## Value to your customers
Shipping new customer-facing integrations can happen in minutes.
## How customers are using Connect
### In-app messaging
Most apps build email notifications, since it’s easy. But most teams work in Slack, Discord, Microsoft Teams, or a variety of other messaging apps. Sometimes you want to send messages via SMS or push notifications. It’s hard to maintain integrations for all the apps your customers are using. Pipedream makes this simple.
### CRM syncs
Sync data between your app and Salesforce, HubSpot, or any CRM. Pipedream lets your customers connect their accounts directly from your UI, define the sync logic, and run it on Pipedream’s infrastructure. Pull data from your customers’ CRMs in real-time, or push data from your app.
### AI products
Talk to any AI API or LLM. Build chat apps or interact in real-time with your users. Or run asynchronous tasks in the background, like image classification, article summarization, or other tasks you want to offload to an AI agent. You can use built-in functions like [`$.flow.suspend`](/docs/workflows/building-workflows/code/nodejs/rerun/#flowsuspend) to send a message to your team, or directly to the user, to approve specific actions.
### Spreadsheet integrations
Sync data between your app and Google Sheets, Airtable, or any spreadsheet. Pipedream Connect lets your users auth with any app, select the sheet, and define custom sync logic.
### And much more
Building an app with Pipedream and want to be profiled here (anonymously or otherwise)? Email `connect@pipedream.com` to let us know!
# Running Workflows For Your End Users
Source: https://pipedream.com/docs/connect/workflows
export const PUBLIC_APPS = '2,700';
Just like you can build and run internal [workflows](/docs/workflows/building-workflows/) for your team, **you can run workflows for [your end users](/docs/connect/api-reference/introduction), too**.
Whether you’re building well-defined integrations or autonomous AI agents, workflows provide a powerful set of tools for running [code](/docs/workflows/building-workflows/code/) or [pre-defined actions](/docs/workflows/building-workflows/actions/) on behalf of your users. Pipedream’s UI makes it easy to build, test, and [debug](/docs/workflows/building-workflows/inspect/) workflows.
## What are workflows?
Workflows are sequences of [steps](/docs/workflows/#steps) [triggered by an event](/docs/workflows/building-workflows/triggers/), like an HTTP request, or new rows in a Google sheet.
You can use [pre-built actions](/docs/workflows/building-workflows/actions/) or custom [Node.js](/docs/workflows/building-workflows/code/nodejs/), [Python](/docs/workflows/building-workflows/code/python/), [Golang](/docs/workflows/building-workflows/code/go/), or [Bash](/docs/workflows/building-workflows/code/bash/) code in workflows and connect to any of our {PUBLIC_APPS} integrated apps.
Workflows also have built-in:
* [Flow control](/docs/workflows/building-workflows/control-flow/)
* [Concurrency and throttling](/docs/workflows/building-workflows/settings/concurrency-and-throttling/)
* [Key-value stores](/docs/workflows/data-management/data-stores/)
* [Error handling](/docs/workflows/building-workflows/errors/)
* [VPCs](/docs/workflows/vpc/)
* [And more](https://pipedream.com/pricing)
Read [the quickstart](/docs/workflows/quickstart/) to learn more.
## Getting started
[Create a new workflow](/docs/workflows/building-workflows/) or open an existing one.
To get started building workflows for your end users:
1. Add an [HTTP trigger](/docs/workflows/building-workflows/triggers/#http) to your workflow
2. Generate a test event with the required headers:
* `x-pd-environment: development`
* `x-pd-external-user-id: {your_external_user_id}`
See the [Triggering your workflow](/docs/connect/workflows/#triggering-your-workflow) section below for details on securing your workflow with OAuth and deploying triggers on behalf of your end users.
When you configure [pre-built actions](/docs/workflows/building-workflows/actions/) or [custom code that connects to third-party APIs](/docs/workflows/building-workflows/code/nodejs/auth/), you can link accounts in one of two ways:
1. **Use your own account**: If you’re connecting to an API that uses your own API key or developer account — for example, a workflow that connects to the OpenAI API or a PostgreSQL database — click the **Connect account** button to link your own, static account.
2. **Use your end users’ auth**: If you’re building a workflow that connects to your end users’ accounts — for example, a workflow that sends a message with your user’s Slack account — you can select the option to **Use end user’s auth via Connect**:
When you trigger the workflow, Pipedream will look up the corresponding account for the end user whose user ID you provide [when invoking the workflow](/docs/connect/workflows/#invoke-the-workflow).
To run an end-to-end test as an end user, you need to have users and connected accounts in your project. If you already have a **development** account linked, you can skip this step.
If you don’t, the fastest way to do this is [on the **Users** tab](/docs/connect/managed-auth/users/) in your Pipedream project:
* You’ll see there’s a button to **Connect account**
* Go through the flow and make sure to create the account in **development** mode
* Note the **external user ID** of the account you just connected, you’ll need it in the next step
Test events are critical for developing workflows effectively. Without a test event, you can’t test your workflow end to end in the builder or see the shape of the event data that triggers the workflow, and the lookup to use your end user’s auth won’t work.
To generate a test event, click **Send Test Event** in the trigger, and fill in the event data. This will trigger the workflow and allow you to test the workflow end to end in the builder.
Make sure to include these headers in your test request:
* `x-pd-environment: development`
* `x-pd-external-user-id: {your_external_user_id}`
When you’re done with the workflow, click **Deploy** at the top right.
If you’re using TypeScript or a JavaScript runtime, [install the Pipedream SDK](/docs/connect/api-reference/introduction). Pipedream also provides an HTTP API for invoking workflows (see example below).
```sh
npm i @pipedream/sdk
```
To invoke workflows, you’ll need:
1. The OAuth client ID and secret from your OAuth client in **step 2 above** (if configured)
2. Your [Project ID](/docs/projects/#finding-your-projects-id)
3. Your workflow’s HTTP endpoint URL
4. The [external user ID](/docs/connect/api-reference/introduction) of the user you’d like to run the workflow for
5. The [Connect environment](/docs/connect/managed-auth/environments/) tied to the user’s account
Then invoke the workflow like so:
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";
// These secrets should be saved securely and passed to your environment
const client = new PipedreamClient({
  projectEnvironment: "development", // change to production if running for a test production account, or in production
  clientId: "{oauth_client_id}",
  clientSecret: "{oauth_client_secret}",
  projectId: "{your_project_id}"
});

await client.workflows.invokeForExternalUser({
  urlOrEndpoint: "{your_endpoint_url}", // pass the endpoint ID or full URL here
  externalUserId: "{your_external_user_id}", // the end user's ID in your system
  method: "POST", // "GET", "POST", "PUT", "DELETE", "PATCH"
  body: {
    message: "Hello World"
  },
});
```
```sh HTTP (cURL)
# First, obtain an OAuth access token
curl -X POST https://api.pipedream.com/v1/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "client_credentials",
    "client_id": "{oauth_client_id}",
    "client_secret": "{oauth_client_secret}"
  }'

# The response will include an access_token. Use it in the Authorization header below.
# X-PD-Environment is 'development' or 'production'.
curl -X POST https://{your-endpoint-url} \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access_token}" \
  -H 'X-PD-External-User-ID: {your_external_user_id}' \
  -H 'X-PD-Environment: development' \
  -d '{
    "message": "Hello, world"
  }'
```
## Configuring workflow steps
When configuring a workflow that’s using your end user’s auth instead of your own, you’ll need to define most configuration fields manually in each step.
For example, normally when you connect your own Google Sheets account directly in the builder, you can dynamically list all of the available sheets from a dropdown.
However, when running workflows on behalf of your end users, that UI configuration doesn’t work, since the Google Sheets account to use is determined at the time of workflow execution. So instead, you’ll need to configure these fields manually.
* Either make sure to pass all required configuration data when invoking the workflow, or add a step to your workflow that retrieves it from your database, etc. For example:
```sh
# X-PD-Environment is 'development' or 'production'.
curl -X POST https://{your-endpoint-url} \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access_token}" \
  -H 'X-PD-External-User-ID: {your_external_user_id}' \
  -H 'X-PD-Environment: development' \
  -d '{
    "slackChannel": "#general",
    "messageText": "Hello, world!",
    "gitRepo": "AcmeOrg/acme-repo",
    "issueTitle": "Test Issue"
  }'
* Then in the Slack and GitHub steps, you’d reference those fields directly:
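A sketch of what that reference looks like inside a custom Node.js code step (the field names match the example request above; `defineComponent` is normally provided by the Pipedream runtime, so a one-line stub is included here to make the snippet self-contained):

```javascript
// Stub for the Pipedream runtime's global `defineComponent`, so this runs standalone.
// In a real workflow step you'd write `export default defineComponent({ ... })`.
const defineComponent = (component) => component;

const step = defineComponent({
  async run({ steps, $ }) {
    // Fields passed in the request body when invoking the workflow
    const { slackChannel, messageText, gitRepo, issueTitle } =
      steps.trigger.event.body;
    // Reference these values in the Slack and GitHub action props downstream
    return { slackChannel, messageText, gitRepo, issueTitle };
  },
});
```

Pre-built actions can reference the same values with expressions like `{{steps.trigger.event.body.slackChannel}}` in their configuration fields.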
We plan to improve this interface in the future, and potentially allow developers to store end user metadata and configuration data alongside the connected account for your end users, so you won’t need to pass the data at runtime. [Let us know](https://pipedream.com/support) if that’s a feature you’d like to see.
## Testing
To test a step using the connected account of one of your end users in the builder, you’ll need a few things to be configured so that your workflow knows which account to use.
**Make sure you have an external user with the relevant connected account(s) saved to your project:**
* Go to the **[Users tab](/docs/connect/managed-auth/users/)** in the **Connect** section of your project to confirm
* If not, either connect one from your application or [directly in the UI](/docs/connect/workflows/#connect-a-test-account)
**Pass the environment and external user ID:**
1. Once you’ve added an HTTP trigger to the workflow, click **Generate test event**
2. Click on the **Headers** tab
3. Make sure `x-pd-environment` is set (you’ll likely want `development`)
4. Make sure to also pass `x-pd-external-user-id` with the external user ID of the user you’d like to test with
## Triggering your workflow
You have two options for triggering workflows that run on behalf of your end users:
1. [Invoke via HTTP webhook](/docs/connect/workflows/#http-webhook)
2. [Deploy an event source](/docs/connect/workflows/#deploy-an-event-source) (Slack, Gmail, etc.)
### HTTP Webhook
The most common way to trigger workflows is via HTTP webhook. We strongly recommend [creating a Pipedream OAuth client](/docs/connect/api-reference/authentication#creating-an-oauth-client) and authenticating inbound requests to your workflows.
This section refers to authenticating requests to the Pipedream API. For info on how managed auth works for your end users, refer to the [managed auth quickstart](/docs/connect/managed-auth/quickstart/).
To get started, you’ll need:
* [OAuth client ID and secret](/docs/connect/api-reference/authentication#creating-an-oauth-client) for authenticating with the Pipedream API
* Your [project ID](/docs/projects/#finding-your-projects-id)
* Your workflow’s HTTP endpoint URL
* The [external user ID](/docs/connect/api-reference/introduction) of your end user
* The [Connect environment](/docs/connect/managed-auth/environments/)
```typescript TypeScript
import { PipedreamClient } from "@pipedream/sdk";
// These secrets should be saved securely and passed to your environment
const client = new PipedreamClient({
  projectEnvironment: "development", // change to production if running for a test production account, or in production
  clientId: "{oauth_client_id}",
  clientSecret: "{oauth_client_secret}",
  projectId: "{your_project_id}"
});

await client.workflows.invokeForExternalUser({
  urlOrEndpoint: "{your_endpoint_url}", // pass the endpoint ID or full URL here
  externalUserId: "{your_external_user_id}", // the end user's ID in your system
  method: "POST", // "GET", "POST", "PUT", "DELETE", "PATCH"
  body: {
    message: "Hello World"
  },
});
```
```sh HTTP (cURL)
# First, obtain an OAuth access token
curl -X POST https://api.pipedream.com/v1/oauth/token \
  -H "Content-Type: application/json" \
  -d '{
    "grant_type": "client_credentials",
    "client_id": "{oauth_client_id}",
    "client_secret": "{oauth_client_secret}"
  }'

# The response will include an access_token. Use it in the Authorization header below.
# X-PD-Environment is 'development' or 'production'.
curl -X POST https://{your-endpoint-url} \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {access_token}" \
  -H 'X-PD-External-User-ID: {your_external_user_id}' \
  -H 'X-PD-Environment: development' \
  -d '{
    "message": "Hello, world"
  }'
```
### Deploy an event source
You can [programmatically deploy triggers via the API](/docs/connect/api-reference/deploy-trigger) to have events from integrated apps (like [new Slack messages](https://pipedream.com/apps/slack/triggers/new-message-in-channels) or [new emails in Gmail](https://pipedream.com/apps/gmail/triggers/new-email-received)) trigger your workflow. This allows you to:
* Deploy triggers for specific users from your application
* Configure trigger parameters per-user
* Manage deployed triggers via the API
See the [API documentation](/docs/connect/api-reference/deploy-trigger) for detailed examples of deploying and managing triggers.
## OAuth client requirements
When using OAuth apps (like Google Drive, Slack, Notion, etc.) with your end users, you **must use your own custom OAuth clients**.
1. Register your own OAuth application with each third-party service (Google Drive, Slack, etc.)
2. [Add your OAuth client credentials to Pipedream](/docs/apps/oauth-clients/#configuring-custom-oauth-clients)
3. Make sure to include your `oauthAppId` when connecting accounts for your end users
For detailed instructions, see the [OAuth Clients documentation](/docs/connect/managed-auth/oauth-clients/#using-a-custom-oauth-client).
## Troubleshooting
For help debugging issues with your workflow, you can return verbose error messages to the caller by configuring the HTTP trigger to **Return a custom response from your workflow**.
With that setting enabled on the trigger, below is an example of [this](/docs/connect/workflows/#required-account-not-found-for-external-user-id) error:
```sh
curl -X POST https://{your-endpoint-url} \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access_token}' \
  -H "x-pd-environment: development" \
  -H "x-pd-external-user-id: abc-123" \
  -d '{
    "slackChannel": "#general",
    "messageText": "Hello, world! (sent via curl)",
    "hubSpotList": "prospects",
    "contactEmail": "foo@example.com"
  }'

# Response:
# Pipedream Connect Error: Required account for hubspot not found for external user ID abc-123 in development
```
### Common errors
#### No external user ID passed, but one or more steps require it
* One or more steps in the workflow are configured to **Use end user’s auth via Connect**, but no external user ID was passed when invoking the workflow.
* [Refer to the docs](/docs/connect/workflows/#invoke-the-workflow) to make sure you’re passing external user ID correctly when invoking the workflow.
#### No matching external user ID
* There was an external user ID passed, but it didn’t match any users in the project.
* Double-check that the external user ID that you passed when invoking the workflow matches one either [in the UI](/docs/connect/managed-auth/users/) or [via the API](/docs/connect/api-reference/list-accounts).
#### Required account not found for external user ID
* The external user ID was passed when invoking the workflow, but the user doesn’t have a connected account for one or more of the apps that are configured to use it in this workflow execution.
* You can check which connected accounts are available for that user [in the UI](/docs/connect/managed-auth/users/) or [via the API](/docs/connect/api-reference/list-accounts).
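On your app’s side, you might detect this error in the workflow response and prompt the user to connect the missing account before retrying. A hypothetical helper (the message format follows the example response above):

```javascript
// Hypothetical helper: parse a Pipedream Connect "required account not found"
// error message so your app can prompt the user to connect the missing app.
function parseMissingAccountError(responseText) {
  const match = responseText.match(
    /Required account for (\S+) not found for external user ID (\S+) in (\S+)/
  );
  if (!match) return null;
  const [, app, externalUserId, environment] = match;
  return { app, externalUserId, environment };
}
```

If this returns an app name, you could kick off your Connect account-linking flow for that app, then retry the invocation.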
#### Running workflows for your users in production requires a higher tier plan
* Anyone is able to run workflows for your end users in `development`.
* Visit the [pricing page](https://pipedream.com/pricing?plan=Connect) for the latest info on using Connect in production.
## Known limitations
#### Workflows can only use a single external user’s auth per execution
* You can only run a workflow for a single external user ID per execution; you can’t invoke a workflow that loops through many external user IDs within a single run (for now).
#### The external user ID to use during execution must be passed in the triggering event
* You can’t, for example, run a workflow on a timer and look up the external user ID to use at runtime.
* The external user ID must be passed in the triggering event, typically via [HTTP trigger](/docs/connect/workflows/#invoke-the-workflow).
#### Cannot use multiple accounts for the same app during a single execution
* If a user has multiple accounts for the same app (tied to a single external user), **Pipedream will use the most recently created account**.
* Learn about [managing connected accounts](/docs/connect/managed-auth/users/) for your end users.
# Migrate from v1
Source: https://pipedream.com/docs/deprecated/migrate-from-v1
Never used Pipedream v1? You can skip this migration guide and read on about [Steps](/docs/workflows/#steps).
We are excited to announce that we have launched a new version (v2) of Pipedream to all new and existing users!
We have re-imagined the UX from the ground up, made the product much easier to use and have improved performance. In addition, we are introducing powerful new features including:
* **Edit & test** your workflows in separate editing mode without impacting live workflows
* **Support for multiple languages** including [Node.js](/docs/workflows/building-workflows/code/nodejs/), [Python](/docs/workflows/building-workflows/code/python/), [Bash](/docs/workflows/building-workflows/code/bash/) and [Go](/docs/workflows/building-workflows/code/go/)
* **Granular testing** including the ability to test individual steps and more
* **Multiple triggers** are now supported per workflow
* **Improved** forms for easier configuration and streamlined building
*Get Started*
* Read our [quickstart](/docs/workflows/quickstart/), [docs](/), and/or [FAQ](#faq)
* Have questions? Ask on [Discourse](https://pipedream.com/community)
* As a reminder, all integration components are source-available and [hosted on GitHub](https://github.com/PipedreamHQ/pipedream). You can [contribute your own components](/docs/components/contributing/) or improve existing ones.
Watch a demo:
And this is just the beginning — we have an exciting roadmap planned for 2022 including workflow serialization and GitHub integration.
## New Builder Overview
Fundamentally, the new version of the workflow builder gives you the same abilities to build, test and deploy your workflows. However, you'll notice some differences in how to build workflows.
### Building vs Inspecting
In v1, building your workflow and inspecting past events were visible in the same view. The new v2 builder has improved this by separating the workflow **Builder** from the workflow events **Inspector**.
Switch between these contexts using the menu in the top right of the workflow builder.

When you first open a deployed workflow, you're presented with the **Inspector** version of the workflow. In this view you can see logs of past events, and select them to see the results of each step in the workflow.

To edit the workflow, click the **Edit** button in the top right hand corner. This will close the inspector and allow you to edit your workflow without the distraction of logs from the production flow.

### Testing Changes
In the v1 workflow builder, you had to deploy the whole workflow to test changes to any step. To make changes to a deployed workflow, you had to edit the live version.
We’ve improved this flow. Now you can test your changes with the new **Test** button without affecting the live version of the workflow.
In addition to testing single steps, you can now selectively test portions of your workflow (e.g. all steps above or below the selected step):

#### Testing individual events
Not only can you test portions of your workflow in isolation, but you can also select a specific event to run against your workflow.
In the **Test Trigger** portion of your trigger, you can select a past event seen by the workflow and build your steps against it - without having to re-trigger it manually:

### Deploying Changes
After you're happy with your changes, **deploy** them to your production workflow. Just click the **Deploy** button in the top right hand corner of the screen.
After deploying your changes, your workflow is now live, and any changes you made will run against incoming events.
## Node.js Code Step Changes
There are a few changes to the Node.js code steps that you should know about. Some functions have been renamed for more clarity, and we've aligned the Node.js code steps closer to the [Component API](/docs/components/contributing/).
### Code Scaffolding Format
In v1, the Node.js steps would automatically scaffold new Node.js steps in this format:
```javascript
async (event, steps) => {
  // your code could be entered in here
}
```
In v2, the new scaffolding is wrapped with a new `defineComponent` function:
```javascript
defineComponent({
  async run({ steps, $ }) {
    // your code can be entered here
  },
});
```
1. The `event` from the trigger step is still available, but exposed in `steps.trigger.event` instead.
2. The `$` variable has been passed into the `run` function where your code is executed.
You can think of `$` as the entry point to built-in Pipedream functions. In v1, these special functions included `$end`, `$respond`, etc. In v2, they have been remapped to `$.flow.exit` and `$.respond`, respectively.
These changes unify workflow development to the [Component API](/docs/components/contributing/api/) used by pre-built actions and also allows the [defining of props](#params-vs-props) from within your code steps.
### Using 3rd party packages
In v1, you had to define your imports of 3rd party packages within the scaffolded function:
```javascript
async (event, steps) => {
  const axios = require('axios');
  // your code could be entered in here
}
```
Now, in v2 workflows, you can `import` your packages at the top of the step, just like in a normal Node.js module:
```javascript
import axios from "axios";

defineComponent({
  async run({ steps, $ }) {
    // your code can be entered here
  },
});
```
Allowing all of the scaffolding to be edited opens up the ability to [pass props](/docs/workflows/building-workflows/code/nodejs/#passing-props-to-code-steps) into your Node.js code steps, which we'll cover later.
### Step Exports
In v1, you could assign arbitrary properties to `this` within a Node.js step and the properties would be available as step exports:
```javascript
// this step's name is get_customer_data
async (event, steps) => {
  this.name = 'Dylan';
  // downstream steps could use steps.get_customer_data.name to retrieve 'Dylan'
}
```
In v2, use `$.export` to export data instead:
```javascript
// this step's name is get_customer_data
defineComponent({
  async run({ steps, $ }) {
    $.export("name", "Dylan");
    // downstream steps can use steps.get_customer_data.name to retrieve 'Dylan'
  },
});
```
Using `return` to export data is the same from v1 to v2. You can still `return` data, and it will be available to other steps as `steps.[stepName].$return_value`.
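For example (with a one-line stub for the runtime’s `defineComponent` so the snippet runs standalone):

```javascript
// Stub for the Pipedream runtime's global `defineComponent`
const defineComponent = (component) => component;

// this step's name is get_customer_data
const step = defineComponent({
  async run({ steps, $ }) {
    return { name: "Dylan" };
  },
});
// downstream steps read steps.get_customer_data.$return_value.name
```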
### Exiting a workflow early
In v1, the `$end` function can be called to exit a flow early:
```javascript
async (event, steps) => {
  $end('Exiting the whole workflow early');
  console.log('I will never run');
}
```
In v2, this same function is available, but under `$.flow.exit`:
```javascript
defineComponent({
  async run({ steps, $ }) {
    return $.flow.exit("Exiting the workflow early");
    console.log("I will never run");
  },
});
```
### Params vs Props
In the v1 builder, you could pass input to steps using `params`. In the v2 builder, you pass input using [props](/docs/components/contributing/api/#component-api).
You can still enter free text and select data from other steps in pre-built actions. You can also add your own custom props that accept input like strings, numbers, and more, just like in v1.
#### Defining params
In the v1 workflow builder, params could be structured or unstructured. The params schema builder allowed you to add your own custom params to steps.
In v2, you can add your own custom props without leaving the code editor.
```javascript
export default defineComponent({
  props: {
    firstName: {
      type: "string",
      label: "Your first name",
    },
  },
  async run({ steps, $ }) {
    console.log(this.firstName);
  },
});
```
In the example above, you added a `firstName` string prop. The value for this prop is assigned in the workflow builder.
Additionally, Pipedream renders a visual component in the step **Configuration** to accept this input:

### Connecting apps
In the v2 builder, you can connect apps with your code using [props](/docs/components/contributing/api/#props).
Above the `run` function, define an app prop that your Node.js step integrates with:
```javascript
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    slack: {
      type: "app",
      app: "slack",
    },
  },
  async run({ steps, $ }) {
    return await axios($, {
      url: `https://slack.com/api/users.profile.get`,
      headers: {
        Authorization: `Bearer ${this.slack.$auth.oauth_access_token}`,
      },
    });
  },
});
```
After testing the step, you'll see the Slack app will appear in the **Configuration** section on the left hand side. In this section you can choose which Slack account you'd like to use in the step.

### HTTP Response
You can still return an HTTP response from an HTTP-triggered workflow.
Use [`$.respond`](/docs/workflows/building-workflows/triggers/#http) to send a JSON or string response from the HTTP call that triggered the workflow.
```javascript
export default defineComponent({
  async run({ steps, $ }) {
    $.respond({
      status: 200,
      headers: {},
      body: {
        message: "hello world!",
      },
    });
  },
});
```
Please note, you'll also need to configure the HTTP trigger step to also allow custom responses. Use the dropdown in the **HTTP Response** section of the HTTP trigger to select the **Return a custom response from your workflow** option:

## Known Gaps & Limitations
Some features from the original builder are not currently available in v2. The Pipedream team is working to quickly address these items, but if you have feedback that isn’t listed here, please [reach out](https://pipedream.com/support).
### Sharing workflows
At this time, sharing is not yet implemented in v2 of the workflow builder. As a workaround, create your workflows in an organization, which makes them available to your team members.
If you need assistance transferring workflows across accounts, [please contact us](https://pipedream.com/support).
### `$checkpoint`
The `$checkpoint` functionality to save data between workflow runs is not supported in v2, and has been replaced by [Data Stores](/docs/workflows/building-workflows/code/nodejs/using-data-stores/).
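A sketch of the replacement pattern, assuming the Data Store prop API (`type: "data_store"` with async `get`/`set` methods); a one-line stub for `defineComponent` keeps the snippet self-contained:

```javascript
// Stub for the Pipedream runtime's global `defineComponent`
const defineComponent = (component) => component;

const step = defineComponent({
  props: {
    // Injected by Pipedream as a key-value store that persists across runs
    data: { type: "data_store" },
  },
  async run({ steps, $ }) {
    // v1: this.$checkpoint = { count };  v2: persist via the data store
    const count = ((await this.data.get("count")) ?? 0) + 1;
    await this.data.set("count", count);
    return count;
  },
});
```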
### Public workflows
At this time, all v2 workflows are private, and there is no workaround. We’ll announce when this limitation is lifted.
If you're working with Pipedream support to troubleshoot your workflow, you can share it with the support team under your workflow's **Settings**.
### Rolling back a specific version
In v2, you can test and save your progress on a workflow *without* deploying it.
However, after deploying, it’s not possible to roll back to a prior version of a deployed workflow.
You can still edit a deployed workflow, just like in v1, but automatic version rollbacks are not currently possible.
### Replaying production events
In the v2 builder, you can still view individual events that trigger your v2 workflows in the **Inspector** events log. You can delete specific events or all of them in one click as well.
To replay past events against your deployed v2 workflows, open the event’s menu and click **Replay Event**. This will rerun your workflow with the same event.
## FAQ
### What are the benefits of the new (v2) workflow builder?
* **Edit & test** your workflows in separate editing mode without impacting live workflows
* **Support for multiple languages** including Node, Python, Golang & bash
* **Granular testing** including the ability to test individual steps and more
* **Multiple triggers** are now supported per workflow
* **Improved** forms for easier configuration and streamlined building
### What are the limitations of the new (v2) workflow builder?
* `$checkpoint` has been removed from v2 workflows, but [Data Stores](/docs/workflows/building-workflows/code/nodejs/using-data-stores/) provides a similar API.
* Sharing workflows is not supported
* Making workflows public is not supported
### Are v2 workflows backwards compatible?
No, v2 workflows are not currently compatible with the v1 builder.
However, pre-built component actions are still compatible across both versions. If you do encounter a gap from v1 actions in the v2 builder, [reach out to us](https://pipedream.com/support).
### Is the Component API changing as well? Will I need to rewrite Components?
No. Any components in the public registry or any private components you have published in your account are compatible with v2.
The v2 workflow builder utilizes the same Component API allowing you to create components from within your workflows, which was not possible in v1.
### Will I still be able to open and edit v1 workflows?
Yes, you will still be able to view and edit v1 workflows. There is no need to migrate your workflows from v1 to v2 immediately.
### How do I migrate v1 workflows to v2 workflows?
At this time we do not have an automated process to migrate workflows from v1 to v2. To create a v2 equivalent, recompose your v1 workflow in the v2 builder.
However, if it uses custom Node.js code steps, be sure to [follow the changes we describe in the guide above](/docs/deprecated/migrate-from-v1#nodejs-code-step-changes).
### When will the new (v2) workflow builder be the default builder for all customers?
Existing users will still default to the v1 builder. You can create new v2 workflows from the dropdown menu next to the **New workflow** button.
If you'd like to default to the v2 builder when creating new workflows, you can change the **Builder Version** in [your account settings](https://pipedream.com/settings/account).
### When will I no longer be able to create v1 workflows?
There is currently no deprecation date for v1 workflows. We will continue to support v1 workflows until we reach feature parity with v2.
When a date becomes clear, we will provide assistance to automatically migrate your v1 workflows to v2.
# Pipedream Glossary
Source: https://pipedream.com/docs/glossary
Below you’ll find a glossary of Pipedream-specific terms. We use these in the product, docs, and other content, so if you’re seeing a term for the first time, you’ll probably find it below.
All terms that aren’t in this doc hold their standard technical meaning. If you see a term missing, please [reach out](https://pipedream.com/support).
[0-9](/docs/glossary/#0---9) | [A](/docs/glossary/#a) | [B](/docs/glossary/#b) | [C](/docs/glossary/#c) | [D](/docs/glossary/#d) | [E](/docs/glossary/#e) | [F](/docs/glossary/#f) | [G](/docs/glossary/#g) | [H](/docs/glossary/#h) | [I](/docs/glossary/#i) | [J](/docs/glossary/#j) | [K](/docs/glossary/#k) | [L](/docs/glossary/#l) | [M](/docs/glossary/#m) | [N](/docs/glossary/#n) | [O](/docs/glossary/#o) | [P](/docs/glossary/#p) | [Q](/docs/glossary/#q) | [R](/docs/glossary/#r) | [S](/docs/glossary/#s) | [T](/docs/glossary/#t) | [U](/docs/glossary/#u) | [V](/docs/glossary/#v) | [W-Z](/docs/glossary/#w-z)
## 0 - 9
### 2FA
Short for [two-factor authentication](/docs/glossary/#two-factor-authentication-2fa).
## A
### Account
Synonym for [connected account](/docs/glossary/#connected-account).
### Action
Actions are reusable code steps, written as [Pipedream components](/docs/glossary/#component).
### Advanced plan
Pipedream’s plan for individuals and teams running production workflows. [See the pricing page](https://pipedream.com/pricing) for more details.
### Auto-retry
[A workflow setting](/docs/workflows/building-workflows/settings/#auto-retry-errors) that lets you automatically retry an execution from the failed step when it encounters an error.
## B
### Bash runtime
Pipedream’s internal code in the [execution environment](/docs/glossary/#execution-environment) responsible for running Bash code.
### Basic plan
Pipedream’s plan for individuals who need higher limits and the option to scale usage. [See the pricing page](https://pipedream.com/pricing) for more details.
### Bi-directional GitHub sync
When you configure [GitHub Sync](/docs/glossary/#github-sync), you can make changes in Pipedream and push them to GitHub, or make changes locally, push to GitHub, and deploy to Pipedream. Since changes can be made in each system and communicated to the other, the sync is bi-directional.
### Branch
Short for [Git branch](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell). When using [Pipedream GitHub Sync](/docs/glossary/#github-sync), you can sync a GitHub repository to a Pipedream project and manage changes to code in a branch.
### Builder
The Pipedream UI where you build, edit, and test workflows.
### Business plan
Pipedream’s plan for teams with security, compliance, and support needs. [See the pricing page](https://pipedream.com/pricing) for more details.
## C
### Changelog
Synonym for [project changelog](/docs/glossary/#project-changelog).
### Code step
[Steps](/docs/glossary/#step) that let users run [custom code](/docs/workflows/building-workflows/code/) in a workflow.
### Cold start
A cold start refers to the delay between the invocation of a workflow and the execution of the workflow code. Cold starts happen when Pipedream spins up a new [execution environment](/docs/glossary/#execution-environment) to handle incoming events.
### Commit
Short for [Git commit](https://git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository). When using [Pipedream GitHub Sync](/docs/glossary/#github-sync), you commit changes to a branch before deploying the workflow to production.
### Component
Components are Node.js modules that run on Pipedream’s serverless infrastructure. [Sources](/docs/glossary/#source) and [actions](/docs/glossary/#action) are two types of components. See [the component API](/docs/components/contributing/api/) for more details.
### Component API
The programming interface for creating [components](/docs/glossary/#component) in Pipedream.
### Component guidelines
Guidelines applied to components submitted to [the Pipedream component registry](/docs/glossary/#component-registry).
### Component registry
The public registry of [components](/docs/glossary/#component) available to Pipedream users, [available on GitHub](https://github.com/PipedreamHQ/pipedream).
### Concurrency
[A workflow setting](/docs/workflows/building-workflows/settings/concurrency-and-throttling/#concurrency) that lets users configure the number of concurrent [workers](/docs/glossary/#worker) available to process events.
### Connected account
A specific account or credentials used to connect to a Pipedream [integration](/docs/glossary/#integrations). If both you and your team member have an account with OpenAI, for example, you would connect each account as a distinct connected account. [See the docs](/docs/apps/connected-accounts/) for more details.
### Connected account access control
You can restrict access to connected accounts to specific individuals or share with the entire workspace. [See the docs](/docs/apps/connected-accounts/#access-control) for more details.
### Credit
Pipedream charges one credit per 30 seconds of compute time at 256MB of memory (the default) per workflow execution. Credits are also charged for [dedicated workers](/docs/glossary/#dedicated-workers). [See the docs](/docs/pricing/#credits-and-billing) for more details.
### Custom domain
By default, [HTTP endpoints](/docs/glossary/#http-endpoint) are served from the `*.m.pipedream.net` domain. You can configure a [custom domain](/docs/workflows/domains/) if you want to host that endpoint on your own domain.
### Custom source
An [event source](/docs/glossary/#event-source) that you create using custom code, or by modifying a [registry source](/docs/glossary/#registry-source).
## D
### Data retention
A workflow setting that allows you to configure how long Pipedream stores event data and logs associated with [executions](/docs/glossary/#execution). [See the docs](/docs/workflows/building-workflows/settings/#data-retention-controls) for more details.
### Dedicated workers
[Workers](/docs/glossary/#worker) that remain available to process events, even when the workflow is not running. This can help reduce [cold starts](/docs/glossary/#cold-start) and improve performance for workflows that require low latency. [See the docs](/docs/workflows/building-workflows/settings/#eliminate-cold-starts) for more details.
### Deduper
[Event sources](/docs/glossary/#event-source) can receive duplicate requests tied to the same event. Pipedream’s infrastructure supports [deduplication](/docs/components/contributing/api/#dedupe-strategies) to ensure that only unique events are emitted by a source.
### Delay
[A built-in service](/docs/workflows/building-workflows/control-flow/delay/) that lets you pause a workflow for a specified amount of time. You can delay workflows using pre-built actions, or delay in code.
### Destination
[Destinations](/docs/workflows/data-management/destinations/) are built-in services that abstract the delivery and connection logic required to send events to services like Amazon S3, or targets like HTTP and email.
### Domain
Synonym for [custom domain](/docs/glossary/#custom-domain).
### Data store
[Data stores](/docs/workflows/data-management/data-stores/) are Pipedream’s built-in key-value store.
### Deploy key
When you configure [GitHub Sync](/docs/glossary/#github-sync), you can use a deploy key to authenticate Pipedream with your GitHub repository. [See the docs](/docs/workflows/git/#create-a-new-project-and-enable-github-sync) for more details.
## E
### Editor
The built-in code editor in the [builder](/docs/glossary/#builder).
### Email trigger
A [workflow trigger](/docs/glossary/#trigger) that listens for incoming email. This trigger exposes a workflow-specific email address that you can use to send email to the workflow.
### Emit
[Event sources](/docs/glossary/#event-source), [workflow triggers](/docs/glossary/#trigger), and even workflows themselves can emit [events](/docs/glossary/#event) that trigger other [listeners](/docs/glossary/#listener). Since sources have a built-in [deduper](/docs/glossary/#deduper), not all requests are emitted as events.
### Emitter
A resource that [emits](/docs/glossary/#emit) [events](/docs/glossary/#event). Emitters can be [event sources](/docs/glossary/#event-source), [workflow triggers](/docs/glossary/#trigger), or even workflows themselves.
### Error notification
When a workflow execution encounters an error, Pipedream sends an [error notification](/docs/workflows/building-workflows/errors/) to the configured error [listeners](/docs/glossary/#listener).
### Environment variable
Pipedream supports two types of environment variables:
* [Project variables](/docs/glossary/#project-variable), available within a specific project
* [Workspace variables](/docs/glossary/#workspace-variable), available across all projects in a workspace
### Event
Events are emitted by [sources](/docs/glossary/#event-source) and consumed by workflows. Events can be triggered by a variety of sources, including HTTP requests, cron schedules, and third-party APIs. Events can be passed to actions, which can process the event data and perform a variety of operations, including making HTTP requests, sending emails, and interacting with third-party APIs.
### Event context
Metadata about a workflow execution, including the timestamp of the event, the event ID, and more. Exposed in [`steps.trigger.context`](/docs/workflows/building-workflows/triggers/#stepstriggercontext).
### Event data
The content of the event, exposed in [`steps.trigger.event`](/docs/workflows/building-workflows/triggers/).
### Event history
A log of all workflow events and executions, available in the [event inspector](/docs/glossary/#inspector) or the global [event history UI](/docs/workflows/event-history/).
### Event queue
When using built-in [concurrency](/docs/glossary/#concurrency) or [throttling](/docs/glossary/#throttling) controls, events are queued in a workflow-specific queue and processed by available [workers](/docs/glossary/#worker).
### Event source
[Components](/docs/glossary/#component) that watch for events from a third-party data source, emitting those events to [listeners](/docs/glossary/#listener).
### Execution
When a workflow is triggered by an event, the running instance of the workflow on that event is called an execution.
### Execution environment
[The virtual machine](/docs/privacy-and-security/#execution-environment) and internal Pipedream platform code that runs a workflow execution. An instance of an execution environment is called a [worker](/docs/glossary/#worker).
### Execution rate controls
The workflow setting that allows users to configure the number of executions a workflow can process per unit time. Also known as throttling. [See the docs](/docs/workflows/building-workflows/settings/concurrency-and-throttling/#throttling) for more details.
### Export
Depending on the context, **export** can function as a noun or verb:
* **Noun**: A synonym for [step export](/docs/glossary/#step-export)
* **Verb**: The act of exporting data from a step using Pipedream primitives like [`$.export`](/docs/workflows/building-workflows/code/nodejs/#using-export) or `return`.
### Expression
In programming, expressions are code that resolve to a value. In Pipedream, [you can use expressions within props forms](/docs/workflows/building-workflows/using-props/#entering-expressions) to reference prior steps or compute custom values at runtime.
### External credentials
[Connected accounts](/docs/glossary/#connected-account) are accounts that users link directly in Pipedream. External credentials are credentials that users store in their own database or service, and reference in Pipedream at runtime. [See the docs](/docs/apps/external-auth/) for more details.
## F
### File store
[File stores](/docs/workflows/data-management/file-stores/) are filesystems scoped to projects. Any files stored in the file store are available to all workflows in the project.
### Filter
[Built-in actions](https://pipedream.com/apps/filter) that let you continue or stop a workflow based on a condition.
### Folder
Within projects, you can organize workflows into folders.
### Free plan
Pipedream’s free plan. [See the limits docs](/docs/workflows/limits/) for more details.
## G
### Global search
Press `Ctrl + K` or `Cmd + K` to open the global search bar in the Pipedream UI.
### GitHub Sync
When enabled on a [project](/docs/glossary/#project), GitHub Sync syncs the project’s workflow code with a GitHub repository. [See the docs](/docs/workflows/git/) for more details.
### Golang runtime
Pipedream’s internal code in the [execution environment](/docs/glossary/#execution-environment) responsible for running Go code.
## H
### Helper functions
[Built-in actions](https://pipedream.com/apps/helper-functions) that convert data types, format dates, and more.
### Hooks
[Hooks](/docs/components/contributing/api/#hooks) are functions executed as a part of the [event source](/docs/glossary/#event-source) lifecycle. They can be used to perform setup tasks before the source is deployed, or teardown tasks after the source is destroyed.
### HTTP endpoint
The URL tied to a [workflow HTTP trigger](/docs/glossary/#http-trigger) or HTTP-triggered [event source](/docs/glossary/#event-source).
### HTTP trigger
A [workflow trigger](/docs/glossary/#trigger) that listens for incoming HTTP requests. This trigger exposes a unique URL that you can use to send HTTP requests to the workflow.
## I
### Inspector
The Pipedream UI that displays a specific workflow’s event history. [See the docs](/docs/workflows/building-workflows/inspect/) for more details.
### Integrations
When Pipedream adds a new third-party service to our marketplace of apps, we often have to handle details of the OAuth process and authentication, and build [sources](/docs/glossary/#event-source) and [actions](/docs/glossary/#action) for the API. These details are abstracted from the user, and the app configuration is referred to as an **integration**.
## J
## K
### Key-based account
A [connected account](/docs/glossary/#connected-account) that uses static credentials, like API keys.
## L
### Listener
A resource that listens for events emitted by [emitters](/docs/glossary/#emitter). Listeners can be [workflows](/docs/glossary/#workflow), [event sources](/docs/glossary/#event-source), webhook URLs, and more.
### Logs
Standard output and error logs generated by steps during a workflow execution. Logs are available as a part of the step execution details in the [event inspector](/docs/glossary/#inspector) or the global [event history UI](/docs/workflows/event-history/).
## M
### Merge
When you configure [GitHub Sync](/docs/glossary/#github-sync), you can merge changes from a branch into the production branch of your GitHub repository, deploying those changes to Pipedream.
## N
### Node.js runtime
Pipedream’s internal code in the [execution environment](/docs/glossary/#execution-environment) responsible for running Node.js code.
## O
### Organization
Synonym for [workspaces](/docs/glossary/#workspace).
### OAuth account
A [connected account](/docs/glossary/#connected-account) that uses OAuth to authenticate with a third-party service.
## P
### Premium apps
Pipedream’s built-in [integrations](/docs/glossary/#integrations) that require a paid plan to use. [See the pricing page](https://pipedream.com/pricing) for more details and the [full list of premium apps](/docs/apps/#premium-apps).
### Project
A container for workflows, secrets, and other resources in Pipedream. Projects can be synced with a GitHub repository using [GitHub Sync](/docs/glossary/#github-sync). [See the docs](/docs/projects/) for more details.
### Project-based access control
You can restrict access to projects to specific individuals or share with the entire workspace. [See the docs](/docs/projects/access-controls/) for more details.
### Project changelog
When using [Pipedream GitHub Sync](/docs/glossary/#github-sync), the changelog shows the history of changes made to a project.
### Project file
A file stored in a [file store](/docs/glossary/#file-store).
### Project secret
Users can add both standard project variables and secrets to a project. The values of secrets are encrypted and cannot be read from the UI once added.
### Project settings
Configure GitHub Sync and other project-specific configuration in a project’s settings.
### Project variable
Project-specific environment variables, available to all workflows in a project.
### Props
[Props](/docs/workflows/building-workflows/using-props/) allow you to pass input to [components](/docs/glossary/#component).
### Python runtime
Pipedream’s internal code in the [execution environment](/docs/glossary/#execution-environment) responsible for running Python code.
### Object explorer
The [builder](/docs/glossary/#builder) UI that allows you to search objects [exported](/docs/glossary/#export) from prior steps. [See the docs](/docs/workflows/building-workflows/using-props/#use-the-object-explorer) for more details.
## Q
## R
### Registry
Synonym for [component registry](/docs/glossary/#component-registry).
### Registry source
An [event source](/docs/glossary/#event-source) available in the [component registry](/docs/glossary/#component-registry). Registry sources are reviewed and approved by Pipedream.
## S
### Schedule trigger
A [workflow trigger](/docs/glossary/#trigger) that runs on a schedule. This trigger exposes a cron-like syntax that you can use to schedule the workflow.
### Single sign-on (SSO)
Users can [configure SSO](/docs/workspaces/sso/) to authenticate with Pipedream using their identity provider.
### Source
Synonym for [event source](/docs/glossary/#event-source).
### Step
[Steps](/docs/workflows/#steps) are the building blocks used to create workflows. Steps can be [triggers](/docs/glossary/#trigger), [actions](/docs/glossary/#action), or [code steps](/docs/glossary/#code-step).
### Step export
JSON-serializable data returned from steps, available in future steps of a workflow. [See the docs](/docs/workflows/#step-exports) for more details.
### Step notes
[Step notes](/docs/workflows/#step-notes) are Markdown notes you can add to a step to document its purpose.
### Subscription
A connection between a [listener](/docs/glossary/#listener) and an [emitter](/docs/glossary/#emitter) that allows the listener to receive events from the emitter.
### Suspend
Workflow [executions](/docs/glossary/#execution) are suspended when you [delay](/docs/glossary/#delay) or use functions like [`$.flow.suspend`](/docs/workflows/building-workflows/code/nodejs/rerun/#flowsuspend) to pause the workflow.
## T
### Throttling
Synonym for [execution rate controls](/docs/glossary/#execution-rate-controls).
### Timeout
All workflows have [a default timeout](/docs/workflows/limits/#time-per-execution). You can configure a custom timeout in the [workflow settings](/docs/workflows/building-workflows/settings/#execution-timeout-limit).
### `/tmp` directory
A directory available to the workflow’s [execution environment](/docs/glossary/#execution-environment) for storing files. Files stored in `/tmp` are only guaranteed to be available for the duration of the workflow execution, and are not accessible across [workers](/docs/glossary/#worker).
### Trigger
Triggers process data from third-party APIs and [emit](/docs/glossary/#emit) [events](/docs/glossary/#event) that run workflows. Triggers can be [HTTP triggers](/docs/glossary/#http-trigger), [schedule triggers](/docs/glossary/#schedule-trigger), [email triggers](/docs/glossary/#email-trigger), [event sources](/docs/glossary/#event-source), and more.
### Two-factor authentication (2FA)
Two-factor authentication. [Configure 2FA](/docs/account/user-settings/#two-factor-authentication) to add an extra layer of security to your Pipedream account.
## U
## V
### VPC (Virtual Private Cloud)
VPCs are customer-specific private networks where workflows can run. [See the docs](/docs/workflows/vpc/) for more details.
## W-Z
### Worker
An instance of a workflow [execution environment](/docs/glossary/#execution-environment) available to process [events](/docs/glossary/#event).
### Workspace
You create a workspace when you sign up for Pipedream. Workspaces contain projects, workflows, and other resources. [See the docs](/docs/workspaces/) for more details.
### Workspace admin
A workspace can have multiple [admins](/docs/workspaces/#promoting-a-member-to-admin), who can administer the workspace, manage billing, and more.
### Workspace member
A user invited to a workspace. Members can create projects, workflows, and other resources in the workspace, but cannot manage billing or administer the workspace.
### Workspace owner
The user who created the workspace.
### Workflow serialization
When you use [GitHub Sync](/docs/glossary/#github-sync), Pipedream serializes the workflow configuration to a YAML file. Optionally, if your workflow contains custom code, Pipedream serializes the code to a separate file.
### Workspace settings
[Workspace settings](/docs/glossary/#workspace-settings) let [workspace admins](/docs/glossary/#workspace-admin) configure settings like membership, [SSO](/docs/glossary/#single-sign-on-sso), and more.
### Workflow template
When you [share a workflow](/docs/workflows/building-workflows/sharing/), you create a template that anyone can copy and run.
### Workspace variable
An environment variable available across all projects in a workspace.
### Workflow
Workflows are the primary resource in Pipedream. They process events from [triggers](/docs/glossary/#trigger) and run [steps](/docs/glossary/#step) to perform actions like making HTTP requests, sending emails, and more.
[Troubleshooting](/docs/troubleshooting/ "Troubleshooting")
# Plans And Pricing
Source: https://pipedream.com/docs/pricing
export const PUBLIC_APPS = '2,700';
We believe anyone should be able to run simple, low-volume workflows at no cost, and whatever you’re building, we want you to be able to prototype and get to value before having to pay Pipedream any money. We also hope that you share your [sources](/docs/components/contributing/#sources), [workflows](/docs/workflows/building-workflows/), [actions](/docs/components/contributing/#actions), and other integration components so that other Pipedream users benefit from your work.
To support these goals, Pipedream offers a generous [free plan](/docs/pricing/#free-plan), free access to Pipedream Connect [while in `development`](/docs/connect/managed-auth/environments/), and you can **[request a free trial of our Advanced plan](https://pipedream.com/pricing)**. If you exceed the limits of the free plan, you can upgrade to one of our [paid plans](/docs/pricing/#paid-plans).
[Read more about our plans and pricing here](https://pipedream.com/pricing).
## Free plan
Workspaces on the Free plan have access to many of the Pipedream features, including [Workflows](/docs/workflows/) and [Connect](/docs/connect/). However, the free plan includes several limitations:
**General:**
* **Daily credit limit**: Free workspaces have a [daily limit of free credits](/docs/workflows/limits/#daily-credits-limit) that cannot be exceeded
* **Support**: Community support is available via our [community forum](https://pipedream.com/community) and [Slack](https://join.slack.com/t/pipedream-users/shared_invite/zt-36p4ige2d-9CejV713NlwvVFeyMJnQPwL)
**Connect:**
* **Development environment**: Access all the features of Connect while in [development mode](/docs/connect/managed-auth/environments/)
**Workflows:**
* **Active workflows**: Limited number of active workflows
* **Connected accounts**: Limited number of connected accounts (like Slack, Google Sheets, GitHub, etc.)
To lift the daily credit limit, access additional features, or use Connect in production, [upgrade to a paid plan](https://pipedream.com/pricing).
## Paid plans
[Visit our pricing page](https://pipedream.com/pricing) to learn more about the details of our paid plans.
## Pipedream Connect
Pipedream Connect provides SDKs and APIs to let you easily add {PUBLIC_APPS}+ integrations to your app or AI agent. Connect pricing is based on two inputs:
1. **[API usage](/docs/pricing/#how-credits-work-in-connect)**: Credits consumed by action executions, tool calls, and other operations
2. **[End users](/docs/pricing/#end-users)**: Referred to as “external users” throughout the docs and API, this is the number of unique users in your application who connect accounts
### How credits work in Connect
API operations that consume credits (1 credit per 30 seconds of compute time):
* **Action executions**
* **Tool calls via MCP**
* **Source execution for deployed triggers**
* **Requests to the Connect proxy**
API operations that do **not** consume credits:
* **Listing apps, actions, and triggers**
* **Listing accounts**
* **Configuring actions and triggers**
* **Other management operations**
### End users
End (external) users are a core billing component for Connect pricing, separate from credit usage:
* **End user definition**: A unique end user in your application who connects one or more accounts through Connect
* **User to account relationship**: Each end user can have multiple connected accounts (e.g., one user might connect their Slack, Gmail, and GitHub accounts)
* **Billing impact**: For standard plans, you’re billed based on the number of unique external users, not the number of connected accounts
## Pipedream Workflows
Pipedream Workflows uses a credit-based pricing model where you pay for compute time used during workflow execution.
### Credits and billing
Pipedream uses a number of terms to describe platform metrics and details of our plans. See the definitions of key terms in the [glossary](/docs/glossary/).
#### How credits work for Workflows
Pipedream charges one credit per 30 seconds of compute time at 256MB of memory (the default) per [workflow segment](/docs/workflows/building-workflows/control-flow/#workflow-segments). Credits are also charged for [dedicated workers](/docs/workflows/building-workflows/settings/#eliminate-cold-starts).
Unlike some other platforms, Pipedream does not charge for usage based on the number of steps. Credits are not charged for workflows during development or testing.
Adding additional memory capacity to workflows will increase credit usage in intervals of 256 megabytes. For example, doubling the memory of a workflow from 256MB to 512MB will double the cost of credits in the same execution time.
**Scenarios**
Developing a workflow with test events in the Pipedream workflow builder is free. No credit usage is incurred.
If an active workflow isn’t executed in a billing period no credit usage is incurred. Pipedream only charges credits for workflow executions.
**Workflow segments configured to use 256MB memory (default)**
| Scenario | Credits Used |
| ------------------------------------------------------------------------------------------------------------------------------------------ | ------------ |
| Simple linear workflow - 1 second of compute | 1 credit |
| Simple linear workflow - 15 seconds of compute | 1 credit |
| Simple linear workflow - 35 seconds of compute | 2 credits |
| Linear workflow with a delay - 15 seconds before the delay, 15 seconds after execution resumes | 2 credits |
| Workflow with a branch - 3 seconds before the branch, 15 seconds within the executed branch | 2 credits |
| Workflow with a branch - 3 seconds before the branch, 15 seconds within the executed branch, 3 seconds after the branch in the parent flow | 3 credits |
**Workflow segments configured to use 1GB memory**
| Scenario | Credits Used |
| ------------------------------------------------------------------------------------------------------------------------------------------ | ------------ |
| Simple linear workflow - 1 second of compute | 4 credits |
| Simple linear workflow - 15 seconds of compute | 4 credits |
| Simple linear workflow - 35 seconds of compute | 8 credits |
| Linear workflow with a delay - 15 seconds before the delay, 15 seconds after execution resumes | 8 credits |
| Workflow with a branch - 3 seconds before the branch, 15 seconds within the executed branch | 8 credits |
| Workflow with a branch - 3 seconds before the branch, 15 seconds within the executed branch, 3 seconds after the branch in the parent flow | 12 credits |
#### Source credit usage
When an [event source](/docs/workflows/building-workflows/triggers/) triggers a workflow, **the source execution is included for free.** This includes workspaces on the [Free plan](/docs/pricing/#free-plan).
When a source is configured as a workflow trigger, the core value is in the workflow, so you won't be charged two credits (one to run the source and one to run the workflow); only the workflow execution is charged.
This free credit per execution **only** applies to sources from the [Pipedream public registry](/docs/workflows/building-workflows/triggers/). If you deploy a private custom source to your account, then all computation time for that private source, including the initial 30 seconds, counts toward credits.
**A polling source finishing under 30 seconds per execution**
For example, a source that polls an API for new events like [Airtable - New Row Added](https://pipedream.com/apps/airtable/triggers/new-records) only takes \~5 seconds to poll and emit events to subscribing workflows.
This would result in **0 credits** per run because the **Airtable - New Row Added** source is a [publicly available component](https://pipedream.com/apps/airtable/triggers/new-records).
**A polling source finishing over 30 seconds per execution**
Consider a source (like **RSS - New Item in Feed** for instance) that takes 60 seconds total to finish polling, per execution.
Each execution of this source would result in **0 credits** because the **RSS - New Item in Feed** source is a [publicly available component](https://pipedream.com/apps/rss/triggers/new-item-in-feed).
**A custom source finishing under 30 seconds per execution**
This would result in **1 credit** per execution because the initial free credit only applies to Pipedream Public Registry sources attached to at least one workflow.
#### Included credits
When you sign up for a paid plan, you pay a platform fee at the start of each [billing period](/docs/pricing/#billing-period). This minimum monthly charge grants you a base of included credits that you can use for the rest of your billing period (see your [Billing and Usage Settings](https://pipedream.com/settings/billing) for your exact quota). If you have been granted any additional credit increases by Pipedream, that is added to the included credits.
#### Additional credits
Any credits you use beyond your [included credits](/docs/workflows/limits/#daily-credits-limit) are called **additional credits**. This usage is added to the invoice for your next [billing period](/docs/pricing/#billing-period), according to the [invoicing cycle described here](/docs/pricing/faq/#when-am-i-billed-for-paid-plans).
#### Data store keys
A Data Store key represents a single record in a Data Store.
For example, a Data Store containing two records has two keys total.
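Conceptually, a Data Store behaves like a key-value map. The sketch below uses a plain Python dict (not the Pipedream SDK) to illustrate how record count maps to key count; the record names are placeholders:

```python
# A Data Store is conceptually a key-value map.
# This plain dict stands in for a store with two records:
data_store = {
    "customer_123": {"name": "Ada", "plan": "basic"},
    "customer_456": {"name": "Grace", "plan": "advanced"},
}

# Two records means two keys count toward your plan's key quota.
key_count = len(data_store)
```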
## Managing your plan
To cancel, upgrade or downgrade your plan, open the [pricing page](https://pipedream.com/pricing).
To update your billing details, such as your VAT number or email address, use the **Manage Billing Information** button in your [workspace billing settings](https://pipedream.com/settings/billing). Within this portal you can also cancel, upgrade or downgrade your plan at any time.
### Billing period
Many of the usage statistics for paid users are tied to a **billing period**. Your billing period starts when you sign up for a paid plan, and recurs roughly once a month for the duration of your subscription.
For example, if you sign up on Jan 1st, your first billing period will last one month, ending around Feb 1st, at which point you’ll start a new billing period.
Your invoices are tied to your billing period. [Read more about invoicing / billing here](/docs/pricing/faq/#when-am-i-billed-for-paid-plans).
### Upgrading
Upgrading your subscription instantly activates the features available to your workspace. For example, if you upgrade your workspace from Free to Basic, that workspace will immediately be able to activate more workflows and connected accounts.
### Downgrading
Downgrades will apply at the end of your billing cycle, and any workflows or integrations that use features outside the new billing plan will be automatically disabled.
For example, if your workspace downgrades from Advanced to Basic and a workflow uses an Advanced feature such as [auto-retries](/docs/workflows/building-workflows/settings/#auto-retry-errors), then this workflow will be disabled because the workspace plan no longer qualifies for that feature.
Additionally, resource limits such as the number of active workflows and connected accounts will also be enforced at this same time.
### Cancelling your plan
To cancel your plan, open the [pricing page](https://pipedream.com/pricing) and click **Cancel** beneath your current plan.
Cancelling your subscription will apply at the end of your current billing period. Workflows, connected accounts and sources will be deactivated from newest to oldest until your usage falls within the Free tier limits.
## Detailed pricing information
Refer to our [pricing page](https://pipedream.com/pricing) for detailed pricing information.
# FAQ
Source: https://pipedream.com/docs/pricing/faq
export const MEMORY_LIMIT = '256MB';
## How does workflow memory affect credits?
Pipedream charges credits proportional to the memory configuration. If you run your workflow at the default memory of {MEMORY_LIMIT}, you are charged one credit each time your workflow executes for 30 seconds. But if you configure your workflow with `1024MB` of memory, for example, you’re charged **four** credits, since you’re using `4x` the default memory.
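The rule above can be sketched as a small calculation (illustrative only, not official billing code; real metering is done by Pipedream, but the documented rule is one credit per 30 seconds of compute at the 256MB default, scaled by the memory multiple):

```python
import math

DEFAULT_MEMORY_MB = 256       # default workflow memory
CREDIT_INTERVAL_SECONDS = 30  # one credit per 30s of compute at default memory

def estimate_credits(execution_seconds: float, memory_mb: int = DEFAULT_MEMORY_MB) -> int:
    """Estimate credits for a single workflow execution.

    Illustrative sketch of the documented rule: credits scale with
    30-second compute blocks and with memory relative to 256MB.
    """
    time_blocks = math.ceil(execution_seconds / CREDIT_INTERVAL_SECONDS)
    memory_multiplier = memory_mb // DEFAULT_MEMORY_MB
    return time_blocks * memory_multiplier

# A 10-second run at the default 256MB costs 1 credit;
# the same run at 1024MB costs 4 credits.
```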
## Are there any limits on paid tiers?
**There is no limit on the number of credits you can use or on total compute time** on any paid tier. [Other platform limits](/docs/workflows/limits/) apply.
## When am I billed for paid plans?
When you upgrade to a paid tier, Stripe will immediately charge your payment method on file for the platform fee tied to your plan (see [https://pipedream.com/pricing](https://pipedream.com/pricing)).
If you accrue any [additional credits](/docs/pricing/#additional-credits), that usage is reported to Stripe throughout the [billing period](/docs/pricing/#billing-period). That overage, as well as the next platform fee, is charged at the start of the *next* billing period.
## Do any plans support payment by invoice, instead of credit / debit card?
Yes, Pipedream can issue invoices on the Business Plan. [Please reach out to support](https://pipedream.com/support).
## How does Pipedream secure my credit card data?
Pipedream stores no information on your payment method and uses Stripe as our payment processor. [See our security docs](/docs/privacy-and-security/#payment-processor) for more information.
## Are unused credits rolled over from one period to the next?
**No**. On the Free tier, unused included daily credits under the daily limit are **not** rolled over to the next day.
On paid tiers, unused included credits are also **not** rolled over to the next month.
## How do I change my billing payment method?
Please visit your [Stripe customer portal](https://pipedream.com/settings/billing?rtsbp=1) to change your payment method.
## How can I view my past invoices?
Invoices are emailed to your billing email address. You can also visit your [Stripe customer portal](https://pipedream.com/settings/billing?rtsbp=1) to view past invoices.
## Can I retrieve my billing information via API?
Yes. You can retrieve your usage and billing metadata from the [/users/me](/docs/rest-api/#get-current-user-info) endpoint in the Pipedream REST API.
## How do I cancel my paid plan?
You can cancel your plan in your [Billing and Usage Settings](https://pipedream.com/settings/billing). You will have access to your paid plan through the end of your current billing period. Pipedream does not prorate plans cancelled within a billing period.
If you’d like to process your cancellation immediately, and downgrade to the free tier, please [reach out](https://pipedream.com/support).
## How do I change the billing email, VAT, or other company details tied to my invoice?
You can update your billing information in your [Stripe customer portal](https://pipedream.com/settings/billing?rtsbp=1).
## How do I contact the Pipedream team with other questions?
You can start a support ticket [on our support page](https://pipedream.com/support). Select the **Billing Issues** category to start a billing related ticket.
# Privacy And Security At Pipedream
Source: https://pipedream.com/docs/privacy-and-security
export const PUBLIC_APPS = '2,700';
Pipedream is committed to the privacy and security of your data. Below, we outline how we handle specific data and what we do to secure it. This is not an exhaustive list of practices, but an overview of key policies and procedures.
It is also your responsibility as a customer to ensure you’re securing your workflows’ code and data. See our [security best practices](/docs/privacy-and-security/best-practices/) for more information.
Pipedream has demonstrated SOC 2 compliance and can provide a SOC 2 Type 2 report upon request (please reach out to [support@pipedream.com](mailto:support@pipedream.com)).
If you have any questions related to data privacy, please email [privacy@pipedream.com](mailto:privacy@pipedream.com). If you have any security-related questions, or if you’d like to report a suspected vulnerability, please email [security@pipedream.com](mailto:security@pipedream.com).
## Reporting a Vulnerability
If you’d like to report a suspected vulnerability, please contact [security@pipedream.com](mailto:security@pipedream.com).
If you need to encrypt sensitive data as part of your report, you can use our security team’s [PGP key](/docs/privacy-and-security/pgp-key/).
## Reporting abuse
If you suspect Pipedream resources are being used for illegal purposes, or otherwise violate [the Pipedream Terms](https://pipedream.com/terms), [report abuse here](/docs/abuse/).
## Compliance
### SOC 2
Pipedream undergoes annual third-party audits. We have demonstrated SOC 2 compliance and can provide a SOC 2 Type 2 report upon request. Please reach out to [support@pipedream.com](mailto:support@pipedream.com) to request the latest report.
We use [Drata](https://drata.com) to continuously monitor our infrastructure’s compliance with standards like SOC 2, and you can visit our [Security Report](https://app.drata.com/security-report/b45c2f79-1968-496b-8a10-321115b55845/27f61ebf-57e1-4917-9536-780faed1f236) to see a list of policies and processes we implement and track within Drata.
### Annual penetration test
Pipedream performs annual pen tests with a third-party security firm. Please reach out to [support@pipedream.com](mailto:support@pipedream.com) to request the latest report.
### GDPR
#### Data Protection Addendum
Pipedream is considered both a Controller and a Processor as defined by the GDPR. As a Processor, Pipedream implements policies and practices that secure the personal data you send to the platform, and includes a [Data Protection Addendum](https://pipedream.com/dpa) as part of our standard [Terms of Service](https://pipedream.com/terms).
The Pipedream Data Protection Addendum includes the [Standard Contractual Clauses (SCCs)](https://ec.europa.eu/info/law/law-topic/data-protection/international-dimension-data-protection/standard-contractual-clauses-scc_en). These clarify how Pipedream handles your data, and they update our GDPR policies to cover the latest standards set by the European Commission.
You can find a list of Pipedream subprocessors [here](/docs/subprocessors/).
#### Submitting a GDPR deletion request
When you [delete your account](/docs/account/user-settings/#delete-account), Pipedream deletes all personal data we hold on you in our system and our vendors.
If you need to delete data on behalf of one of your users, you can delete the event data yourself in your workflow or event source (for example, by deleting the events, or by removing the data from data stores). Your customer event data is automatically deleted from Pipedream subprocessors.
### HIPAA
Pipedream can sign Business Associate Addendum (BAAs) for customers intending to pass PHI to Pipedream. We can also provide a third-party SOC 2 report detailing our HIPAA-related controls. See our [dedicated HIPAA docs](/docs/privacy-and-security/hipaa/) for more details.
## Hosting Details
Pipedream is hosted on the [Amazon Web Services](https://aws.amazon.com/) (AWS) platform in the `us-east-1` region. The physical hardware powering Pipedream, and the data stored by our platform, is hosted in data centers controlled and secured by AWS. You can read more about AWS’s security practices and compliance certifications [here](https://aws.amazon.com/security/).
Pipedream further secures access to AWS resources through a series of controls, including but not limited to: using multi-factor authentication to access AWS, hosting services within a private network inaccessible to the public internet, and more.
## Intrusion Detection and Prevention
Pipedream uses AWS WAF, GuardDuty, CloudTrail, CloudWatch, Datadog, and other custom alerts to monitor and block suspected attacks against Pipedream infrastructure, including DDoS attacks.
Pipedream reacts to potential threats quickly based on [our incident response policy](/docs/privacy-and-security/#incident-response).
## User Accounts, Authentication and Authorization
When you sign up for a Pipedream account, you can choose to link your Pipedream login to either an existing [Google](https://google.com) or [Github](https://github.com) account, or create an account directly with Pipedream. Pipedream also supports [single-sign on](/docs/workspaces/#configuring-single-sign-on-sso).
When you link your Pipedream login to an existing identity provider, Pipedream does not store any passwords tied to your user account; that information is secured with the identity provider. We recommend you configure two-factor authentication in the provider to further protect access to your Pipedream account.
When you create an account on Pipedream directly, with a username and password, Pipedream implements account security best practices (for example: Pipedream hashes your password, and the hashed password is encrypted in our database, which resides in a private network accessible only to select Pipedream employees).
## Third party OAuth grants, API keys, and environment variables
When you link an account from a third party application, you may be asked to either authorize a Pipedream OAuth application access to your account, or provide an API key or other credentials. This section describes how we handle these grants and keys.
When a third party application supports an [OAuth integration](https://oauth.net/2/), Pipedream prefers that interface. The OAuth protocol allows Pipedream to request scoped access to specific resources in your third party account without you having to provide long-term credentials directly. Pipedream must request short-term access tokens at regular intervals, and most applications provide a way to revoke Pipedream’s access to your account at any time.
Some third party applications do not provide an OAuth interface. To access these services, you must provide the required authorization mechanism (often an API key). As a best practice, if your application provides such functionality, Pipedream recommends you limit that API key’s access to only the resources you need access to within Pipedream.
Pipedream encrypts all OAuth grants, key-based credentials, and environment variables at rest in our production database. That database resides in a private network. Backups of that database are encrypted. The key used to encrypt this database is managed by [AWS KMS](https://aws.amazon.com/kms/) and controlled by Pipedream. KMS keys are 256 bit in length and use the Advanced Encryption Standard (AES) in Galois/Counter Mode (GCM). Access to administer these keys is limited to specific members of our team. Keys are automatically rotated once a year. KMS has achieved SOC 1, 2, 3, and ISO 9001, 27001, 27017, 27018 compliance. Copies of these certifications are available from Amazon on request.
When you link credentials to a specific source or workflow, the credentials are loaded into that program’s [execution environment](/docs/privacy-and-security/#execution-environment), which runs in its own virtual machine, with access to RAM and disk isolated from other users’ code.
No credentials are logged in your source or workflow by default. If you log their values or [export data from a step](/docs/workflows/#step-exports), you can always delete the data for that execution from your source or workflow. These logs will also be deleted automatically based on the [event retention](/docs/workflows/limits/#event-history) for your account.
You can delete your OAuth grants or key-based credentials at any time by visiting [https://pipedream.com/accounts](https://pipedream.com/accounts). Deleting an OAuth grant within Pipedream **does not** revoke Pipedream’s access to your account. You must revoke that access wherever you manage OAuth grants in your third party application.
## Pipedream REST API security, OAuth clients
The Pipedream API supports two methods of authentication: [OAuth](/docs/rest-api/auth/#oauth) and [User API keys](/docs/rest-api/auth/#user-api-keys). **We recommend using OAuth clients** for a few reasons:
✅ OAuth clients are tied to the workspace, administered by workspace admins\
✅ Tokens are short-lived\
✅ OAuth clients support scopes, limiting access to specific operations
When testing the API or using the CLI, you can use your user API key. This key is tied to your user account and provides full access to any resources your user has access to, across workspaces.
### OAuth clients
Pipedream supports client credentials OAuth clients, which exchange a client ID and client secret for a short-lived access token. These clients are not tied to individual end users, and are meant to be used server-side. You must store these credentials securely on your server, never allowing them to be exposed in client-side code.
Client secrets are salted and hashed before being saved to the database. The hashed secret is encrypted at rest. Pipedream does not store the client secret in plaintext.
You can revoke a specific client secret at any time by visiting [https://pipedream.com/settings/api](https://pipedream.com/settings/api).
### OAuth tokens
Since Pipedream uses client credentials grants, access tokens must not be shared with end users or stored anywhere outside of your server environment.
Access tokens are issued as JWTs, signed with an ED25519 private key. The public key used to verify these tokens is available at [https://api.pipedream.com/.well-known/jwks.json](https://api.pipedream.com/.well-known/jwks.json). See [this workflow template](https://pipedream.com/new?h=tch_rBf76M) for example code you can use to validate these tokens.
Access tokens are hashed before being saved in the Pipedream database, and are encrypted at rest.
Access tokens expire after 1 hour. Tokens can be revoked at any time.
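A minimal sketch of inspecting one of these JWTs with the Python standard library (the token below is hypothetical; parsing is not verification — validating the signature additionally requires the ED25519 public key published in the JWKS above):

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Base64url-decode one JWT segment (header or payload) into a dict."""
    padded = segment + "=" * (-len(segment) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def inspect_token(token: str) -> dict:
    """Return the header and payload of a JWT *without* verifying it.

    This only parses the token. Verifying it requires checking the
    ED25519 signature against the public key from the JWKS endpoint.
    """
    header_b64, payload_b64, _signature = token.split(".")
    return {
        "header": decode_jwt_segment(header_b64),
        "payload": decode_jwt_segment(payload_b64),
    }
```

You would check the `exp` claim in the payload against the current time, and reject the token entirely if signature verification fails.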
## Pipedream Connect
[Pipedream Connect](/docs/connect/) is the easiest way for your users to connect to [{PUBLIC_APPS}+ APIs](https://pipedream.com/apps), **right in your product**.
### Client-side SDK
Pipedream provides a [client-side SDK](/docs/connect/api-reference/introduction) to initiate authorization or accept API keys on behalf of your users in environments that can run JavaScript. You can see the code for that SDK [here](https://github.com/PipedreamHQ/pipedream/tree/master/packages/sdk).
When you initiate authorization, you must:
1. [Create a server-side token for a specific end user](/docs/connect/api-reference/create-connect-token)
2. Initiate auth with that token, connecting an account for a specific user
These tokens can only initiate the auth connection flow. They have no permissions to access credentials or perform other operations against the REST API. They are meant to be scoped to a specific user, for use in clients that need to initiate auth flows.
Tokens expire after 4 hours, at which point you must create a new token for that specific user.
### Connect Link
You can also use [Connect Link](/docs/connect/managed-auth/connect-link/) to generate a URL that initiates the authorization flow for a specific user. This is useful when you want to initiate the auth flow from a client-side environment that can’t run JavaScript, or include the link in an email, chat message, etc.
Like tokens, Connect Links are coupled to specific users, and expire after 4 hours.
### REST API
The Pipedream Connect API is a subset of the [Pipedream REST API](/docs/rest-api/). See the [REST API Security](/docs/privacy-and-security/#pipedream-rest-api-security-oauth-clients) section for more information on how we secure the API.
## Execution environment
The **execution environment** refers to the environment in which your sources, workflows, and other Pipedream code is executed.
Each version of a source or workflow is deployed to its own virtual machine in AWS. This means your execution environment has its own RAM and disk, isolated from other users’ environments. You can read more about the details of the virtualization and isolation mechanisms used to secure your execution environment [here](https://firecracker-microvm.github.io/).
Instances of running VMs are called **workers**. If Pipedream spins up three VMs to handle multiple, concurrent requests for a single workflow, we’re running three **workers**. Each worker runs the same Pipedream execution environment. Workers are ephemeral (AWS will shut them down within \~5 minutes of inactivity), but you can configure [dedicated workers](/docs/workflows/building-workflows/settings/#eliminate-cold-starts) to ensure workers are always available to handle incoming requests.
## Controlling egress traffic from Pipedream
By default, outbound traffic shares the same network as other AWS services deployed in the `us-east-1` region. That means network requests from your workflows (e.g. an HTTP request or a connection to a database) originate from the standard range of AWS IP addresses.
[Pipedream VPCs](/docs/workflows/vpc/) enable you to run workflows in dedicated and isolated networks with static outbound egress IP addresses that are unique to your workspace (unlike other platforms that provide static IPs common to all customers on the platform). Outbound network requests from workflows that run in a VPC will originate from these IP addresses, and only workflows in your workspace will run there.
## Encryption of data in transit, TLS (SSL) Certificates
When you use the Pipedream web application at [https://pipedream.com](https://pipedream.com), traffic between your client and Pipedream services is encrypted in transit. When you create an HTTP interface in Pipedream, the Pipedream UI defaults to displaying the HTTPS endpoint, which we recommend you use when sending HTTP traffic to Pipedream so that your data is encrypted in transit.
All Pipedream-managed certificates, including those we create for [custom domains](/docs/workflows/domains/), are created using [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/). This eliminates the need for our employees to manage certificate private keys: these keys are managed and secured by Amazon. Certificate renewal is also handled by Amazon.
## Encryption of data at rest
Pipedream encrypts customer data at rest in our databases and data stores. We use [AWS KMS](https://aws.amazon.com/kms/) to manage encryption keys, and all keys are controlled by Pipedream. KMS keys are 256 bit in length and use the Advanced Encryption Standard (AES) in Galois/Counter Mode (GCM). Access to administer these keys is limited to specific members of our team. Keys are automatically rotated once a year. KMS has achieved SOC 1, 2, 3, and ISO 9001, 27001, 27017, 27018 compliance. Copies of these certifications are available from Amazon on request.
## Email Security
Pipedream delivers emails to users for the purpose of email verification, error notifications, and more. Pipedream implements [SPF](https://en.wikipedia.org/wiki/Sender_Policy_Framework) and [DMARC](https://en.wikipedia.org/wiki/DMARC) DNS records to guard against email spoofing / forgery. You can review these records by using a DNS lookup tool like `dig`:
```
# SPF
dig pipedream.com TXT +short
# DMARC
dig _dmarc.pipedream.com TXT +short
```
## Incident Response
Pipedream implements incident response best practices for identifying, documenting, resolving and communicating incidents. Pipedream publishes incident notifications to a status page at [status.pipedream.com](https://status.pipedream.com/) and to the [@PipedreamStatus Twitter account](https://twitter.com/pipedreamstatus).
Pipedream notifies customers of any data breaches according to our [Data Protection Addendum](https://pipedream.com/dpa).
## Software Development
Pipedream uses GitHub to store and version all production code. Employee access to Pipedream’s GitHub organization is protected by multi-factor authentication.
Only authorized employees are able to deploy code to production. Deploys are tested and monitored before and after release.
## Vulnerability Management
Pipedream monitors our code, infrastructure and core application for known vulnerabilities and addresses critical vulnerabilities in a timely manner.
## Corporate Security
### Background Checks
Pipedream performs background checks on all new hires.
### Workstation Security
Pipedream provides hardware to all new hires. These machines run a local agent that sets configuration of the operating system to hardened standards, including:
* Automatic OS updates
* Hard disk encryption
* Anti-malware software
* Screen lock
and more.
### System Access
Employee access to systems is granted on a least-privilege basis. This means that employees only have access to the data they need to perform their job. System access is reviewed quarterly, on any change in role, or upon termination.
### Security Training
Pipedream provides annual security training to all employees. Developers go through a separate, annual training on secure software development practices.
## Data Retention
Pipedream retains data only for as long as necessary to provide the core service. Pipedream stores your workflow code, data in data stores, and other data indefinitely, until you choose to delete it.
Event data and the logs associated with workflow executions are stored according to [the retention rules on your account](/docs/workflows/limits/#event-history).
Pipedream deletes most internal application logs and logs tied to subprocessors within 30 days. We retain a subset of logs for longer periods where required for security investigations.
## Data Deletion
If you choose to delete your Pipedream account, Pipedream deletes all customer data and event data associated with your account. We also make a request to all subprocessors to delete any data those vendors store on our behalf.
Pipedream deletes customer data in backups within 30 days.
## Payment Processor
Pipedream uses [Stripe](https://stripe.com) as our payment processor. When you sign up for a paid plan, the details of your payment method are transmitted to and stored by Stripe [according to their security policy](https://stripe.com/docs/security/stripe). Pipedream stores no information about your payment method.
# Security Best Practices
Source: https://pipedream.com/docs/privacy-and-security/best-practices
Pipedream implements a range of [privacy and security measures](/docs/privacy-and-security/) meant to protect your data from unauthorized access. Since Pipedream [workflows](/docs/workflows/building-workflows/), [event sources](/docs/workflows/building-workflows/triggers/), and other resources can run any code and process any event data, you also have a responsibility to ensure you handle that code and data securely. We’ve outlined a handful of best practices for that below.
## Store secrets as Pipedream connected accounts or environment variables
Never store secrets like API keys directly in code. These secrets should be stored in one of two ways:
* [If Pipedream integrates with the app](https://pipedream.com/apps), use [connected accounts](/docs/apps/connected-accounts/) to link your apps / APIs.
* If you need to store credentials for an app Pipedream doesn’t support, or you need to store arbitrary configuration data, use [environment variables](/docs/workflows/environment-variables/).
Read more about how Pipedream secures connected accounts / environment variables [here](/docs/privacy-and-security/#third-party-oauth-grants-api-keys-and-environment-variables).
## Deliver data to Pipedream securely
Always send data over HTTPS to Pipedream endpoints.
## Send data out of Pipedream securely
When you connect to APIs in a workflow, or deliver data to third-party destinations, encrypt that data in transit. For example, use HTTPS endpoints when sending HTTP traffic to third parties.
## Require authorization for HTTP triggers
HTTP triggers are public by default, and require no authorization or token to invoke.
For many workflows, you should [configure authorization](/docs/workflows/building-workflows/triggers/#authorizing-http-requests) to ensure that only authorized parties can invoke your HTTP trigger.
For third-party services like webhooks, that authorize requests using their own mechanism, use the [Validate Webhook Auth action](https://pipedream.com/apps/http/actions/validate-webhook-auth). This supports common auth options, and you don’t have to write any code to configure it.
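For example, if you configure a Bearer token on an HTTP trigger, callers must include it in the `Authorization` header. A sketch using Python's standard library (the endpoint URL and token are placeholders, not real values):

```python
import urllib.request

# Placeholder values: substitute your own endpoint URL and secret token.
ENDPOINT = "https://example.m.pipedream.net"
TOKEN = "my-secret-token"

request = urllib.request.Request(
    ENDPOINT,
    data=b'{"hello": "world"}',
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",  # required once auth is configured
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the request; without the
# Authorization header, the trigger would reject the invocation.
```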
## Validate signatures for incoming events, where available
Many apps pass a **signature** with event data delivered via webhooks (or other push delivery systems). The signature is an opaque value computed from the incoming event data and a secret that only you and the app know. When you receive the event, you can validate the signature by computing it yourself and comparing it to the signature sent by the app. If the two values match, it verifies that the app sent the data, and not some third party.
Signatures are specific to the app sending the data, and the app should provide instructions for signature validation. **Not all apps compute signatures, but when they do, you should always verify them**.
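A generic sketch of this pattern using HMAC-SHA256, one common scheme (the algorithm, digest encoding, and payload format vary by app, so treat these details as placeholders and follow the app's own documentation):

```python
import hashlib
import hmac

def verify_signature(payload: bytes, received_signature: str, secret: str) -> bool:
    """Recompute the signature over the payload and compare in constant time.

    HMAC-SHA256 with a hex digest is a common choice, but each app
    documents its own algorithm and encoding.
    """
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids the timing attacks a plain == comparison allows
    return hmac.compare_digest(expected, received_signature)
```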
When you use a Pipedream [event source](/docs/workflows/building-workflows/triggers/) as your workflow trigger, Pipedream should verify the signature for you. You can always [audit the code behind the event source](/docs/privacy-and-security/best-practices/#audit-code-or-packages-you-use-within-a-workflow) to confirm this, and suggest further security improvements that you find.
See [Stripe’s signature docs](https://stripe.com/docs/webhooks/signatures) for a real-world example. Pipedream’s Stripe event source [verifies this signature for you](https://github.com/PipedreamHQ/pipedream/blob/bb1ebedf8cbcc6f1f755a8878c759522b8cc145b/components/stripe/sources/custom-webhook-events/custom-webhook-events.js#L49).
## Audit code or packages you use within a workflow
Pipedream workflows are just code. Pipedream provides prebuilt triggers and actions that facilitate common use cases, but these are written and run as code within your workflow. You can examine and modify this code in any way you’d like.
This also means that you can audit the code for any triggers or actions you use in your workflow. We encourage this as a best practice. Even code authored by Pipedream can be improved, and if you notice a vulnerability or other issue, you can submit a patch or raise an issue [in our GitHub repo](https://github.com/PipedreamHQ/pipedream/tree/master/components).
The same follows for [npm](https://www.npmjs.com/) packages. Before you use a new npm package in your workflow, review its page on npm and its repo, if available. Good packages should have recent updates. The package should have a healthy number of downloads and related activity (like GitHub stars), and the package author should be responsive to issues raised by the community. If you don’t observe these signals, be wary of using the package in your workflow.
## Limit what you log and return from steps
[Pipedream retains a limited history of event data](/docs/workflows/limits/#event-history) and associated logs for event sources and workflows. But if you cannot log specific data in Pipedream for privacy / security reasons, or if you want to limit risk, remember that **Pipedream only stores data returned from or logged in steps**. Specifically, Pipedream will only store:
* The event data emitted from event sources, and any `console` logs / errors
* The event data that triggers your workflow, any `console` logs / errors, [step exports](/docs/workflows/#step-exports), and any data included in error stack traces.
Variables stored in memory that aren’t logged or returned from steps are not included in Pipedream logs. Since you can modify any code in your Pipedream workflow, if you want to limit what gets logged from a Pipedream action or other step, you can adjust the code accordingly, removing any logs or step exports.
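For instance, before returning data from a step, you can strip sensitive fields so they never become step exports or appear in logs. A generic sketch (the field names are placeholders, not a Pipedream API):

```python
SENSITIVE_KEYS = {"api_key", "password", "ssn"}  # placeholder field names

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive fields removed,
    so they are never logged or exported from the step."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_KEYS}
```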
# HIPAA Compliance
Source: https://pipedream.com/docs/privacy-and-security/hipaa
Pipedream can [sign Business Associate Addendums (BAAs)](/docs/privacy-and-security/hipaa/#signing-a-business-associate-addendum) for Business customers intending to pass PHI to Pipedream. We can also provide a third-party SOC 2 report detailing our HIPAA-related controls.
## HIPAA-eligible services
* [Workflows](/docs/workflows/building-workflows/)
* [Event sources](/docs/workflows/building-workflows/triggers/)
* [Data stores](/docs/workflows/data-management/data-stores/)
* [Destinations](/docs/workflows/data-management/destinations/)
* [Pipedream Connect](/docs/connect/)
### Ineligible services
Any service not listed in the [HIPAA-eligible services](/docs/privacy-and-security/hipaa/#hipaa-eligible-services) section is not eligible for use with PHI under HIPAA. Please reach out to [Pipedream support](https://pipedream.com/support) if you have questions about a specific service.
The following services are explicitly not eligible for use with PHI under HIPAA.
* [v1 workflows](/docs/deprecated/migrate-from-v1/)
* [File stores](/docs/workflows/data-management/file-stores/)
## Your obligations as a customer
If you are a covered entity or business associate under HIPAA, you must ensure that [you have a BAA in place with Pipedream](/docs/privacy-and-security/hipaa/#signing-a-business-associate-addendum) before passing PHI to Pipedream.
You must also ensure that you are using Pipedream in a manner that complies with HIPAA. This includes:
* You may only use [HIPAA-eligible services](/docs/privacy-and-security/hipaa/#hipaa-eligible-services) to process or store PHI
* You may not include PHI in Pipedream resource names, like the names of projects or workflows
## Signing a Business Associate Addendum
Pipedream is considered a Business Associate under HIPAA regulations. If you are a Covered Entity or Business Associate under HIPAA, you must have a Business Associate Agreement (BAA) in place with Pipedream before passing PHI to Pipedream. This agreement is an addendum to our standard terms, and outlines your obligations as a customer and Pipedream’s obligations as a Business Associate under HIPAA.
Please request a BAA by visiting [https://pipedream.com/support](https://pipedream.com/support).
## Requesting information on HIPAA controls
Please request compliance reports from [https://pipedream.com/support](https://pipedream.com/support). Pipedream can provide a SOC 2 Type II report covering Security controls, and a SOC 2 Type I report for Confidentiality and Availability. In 2025, Pipedream plans to include Confidentiality and Availability controls in our standard Type II audit.
# PGP Key
Source: https://pipedream.com/docs/privacy-and-security/pgp-key
If you’d like to encrypt sensitive information in communications to Pipedream, use this PGP key.
* Key ID: `3C85BC49602873EB`
* Fingerprint: `E0AD ABAC 0597 5F51 8BF5 3ECC 3C85 BC49 6028 73EB`
* User ID: `Pipedream Security <security@pipedream.com>`
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF9n4TABEACxKdiysQswLHSg7u1uUtY3evSNeuqU4DGNxLwVmPLUt4CRd170
EgeCnGpLXQtbQI6HccZapD2emAF0PHXXvx/Q6VB+8iuIZorYGfafIvXsaZhIakrp
xAkSY/eZ/YVWNnnqCpwHxjccFjpIfWph/MVoJ853Eg42IwEGLF7fPMPdcZ1W6S/q
kZzRLMKW60sLvyeTdyemUd/sza72ouv0FioP5zNOmZx8mawVvVeVrM2TD14cJ454
zmYUkxlyvLnHxIF+kZUoLk628qGjTzvEnWlIrLiTQRvTyefzpBBcVz+px35zOFz9
1DdQXz6EFgP46zeLVa8m8dDBkadJPoGFWgMOKlRhYxfdlZlVHSXJcwTl7jN4EW6G
+Oagp36oQvVALtpakP7TUyZ7iKi1gU/i2nAgb885hdMO3bRX8vtg8RYan/3wzKOm
Ky/8rOAq9tFporqCU/EQWbalbBWT6yIn0zxmO33199B8cvIZPg9ZMsrMtSqKu83L
9vqGTK6pVSKEBCqSqmOhveKjgV+gsniKj+0ZiudxMQ8YzSa40dbBPTj0d5GB8Ceo
HVzzZZVcsOFbWU3mEqyhus1q81B58DjAouQlH5RoQ4U/MgsZnbghPTxkir3wttPB
kbbN3DnEHwOPI9ErvBU5UEcA39VqJaMSWImsX450GGfX/sSUNOw2wSXV7QARAQAB
tCtQaXBlZHJlYW0gU2VjdXJpdHkgPHNlY3VyaXR5QHBpcGVkcmVhbS5jb20+iQJO
BBMBCAA4FiEE4K2rrAWXX1GL9T7MPIW8SWAoc+sFAl9n4TACGwMFCwkIBwIGFQoJ
CAsCBBYCAwECHgECF4AACgkQPIW8SWAoc+vIjA//ZMkS6qnWhEygSBoKV2ZRfCF+
vsKCaimMD369w+pGSldJ9cNA7EKbGzs7cp+jkOaq9yruevy+OuHupSUJUEsaOhzk
fnSHdo0EO/47AJ6yWNrziS1IsHU/aZYA++wfbrn90RCmlCbfErgarkDSsKkFhMra
bWRNj/OXcIOJsBEHwALTpgMLjDXngGU6iwM6hFgvqIVeuBFQjbYwTLbBwfuXrxWF
o3olmqgRL4KE6wKzu3nQDBlxHrumF8+34V2P7DSgDCDZzbZOcYZ+erroQrhq8hkL
D1gf2TopHzdjiVXR6VgV0a4qOKniQU5weiiV7ESeuOnNKdGjq5JWHnI6XmEPeAvv
f87PFArbqde75Mv258BF+TTMa6Kt3Uv8wIwvVo78LRsOENja89Q+v3bUn064lWCP
6XLROQD5zaapnMOzFLVW0KDq2z0edud+0W6lNRgB9tgXZ59OgbMvEVah1rf+BnTv
OZ0In/KBsg5Xl3+OapWrspTk9WCVnZ5KOdMYM6s1pN1P5ocDTnvBkDjs2gBUCUZ8
bSjnoAaoji/DqXGWEWD5soZRbGoVjLqMCPQyMjTCrt9IH9EAAgrRkmkjeACGVXtk
FFkY+nZ/7z1Jd6wBd482H65gPjQf/wJmaBdgGX9yfG7bQJKxEYZUnCv/9yt551Jw
aE9s4B/LAHI3S7BeQDO5Ag0EX2fhMAEQAN7Rn6TfP79WOMcweATB2cLlfqAF77xf
Dh2nVdpWG6IDF2Bke2IU4hhz/Egkx+mR6Av8beufrUeaMZXevBnuWfT94Qk/nBvx
RWJbSkvYK1q7hMW5QqJgPcp+kJX6WLVMRNkCJjjyd24kdIneZ0X1oudREXO5HOBw
A68bzIwsJqzXfINt0GFJjGQ71COoStMI+/GqGlKsee8ajwzI03yBkI7nWDIx9UkJ
QFR/34jnf0QSpfE65cl3dwI5f0a9oQoBsg4XqcIAjjqJzismEfScVCyj/ru97e+g
jdAJWdEhZDyv5IDHX6+Jyc7JUl+6+chufqzeKwns7OFEBebmyKB2vVzQnah6xtJu
w+VsVk4EepIRXIC2vY/+ZRiLaO5R0U4WefJZEJS1YrKaJ7nGkEICfPnVu+d6f3jb
aed/k9S7zfIugcqX42mmac4+hKm4rJ/9/W0dqbqQnQfdRilGax6Poco9LSEzY6zQ
7Bgnkz7/gTBBC4/YyTxZNnxil+mRg6uePlUBA4/p224FMx65K0WVgKCnalELY68f
a5z7fVUmfLCKLzeMwSRvhWSjVgMJ6XB8ngkci6OtsI0eBJutGxFhcBD+GqafBOWC
KP7Me8DJRx8F6bPbpJqj+T1+hROerLTFQRfm6IuL5LjzlNGteD+ZwcIMql++22/q
ta7Pfac/+rOfABEBAAGJAjYEGAEIACAWIQTgrausBZdfUYv1Psw8hbxJYChz6wUC
X2fhMAIbDAAKCRA8hbxJYChz6+tGEACGVCcVmXpKFveN6lhSvuJG3J9eddEQgx0g
DKirFTzgXDbVc91+hRHcNK6Dk6udL7iZM79pAy26oYqP7VAkA6VwU91xG/0Sdk4a
f0/i3BkbmE2kaKiJj4pn8F3ZihEKbSgbn7VsXkvE+9k1zkLKg6c7BBn2U6s4rW5g
mBWf92bhxnp3mstP5Ci7duJoEM8xMf/BCiCJGksjIStLOmxCn+I6N0diAa1CPcA9
U34Cj1sPv8R9sB1sLjdanWLRL5aViE7Zo4bavX70oQXZWATuMoVgDcSCHD7K9iwg
ZZIVlvRF5Otq9JpE70toH4VVnkru2R0JpNwkGVm2+gp5sieNTHTB2J36/fxr0+O7
hPiE/bfOjZb/GpS8ppvKyCBB/K0pfhzl5+QsNhMFLrMF1YVHb2WImnQDz3G9mtBi
O/wgCUrBQG8lf6G50Tq1hklGPnh4VoAeJBrjojKh/Cuep8ZE6z5XYo2sYX7qKCi+
fzJDhiKbFco16Njg0kpxAJl2Qb8zn3HOwrc1Np7K79LA92Gopve/poqQLPQn46Dh
p0o5ixKvUW8yCPQDajGAFGdzcr3q75fySNkX0pBdY8IwqpFptH3BQgjZFg8M2E8x
TOmmirEWCFx1F+4Aj+iU+ustTialkqQd8D3tnR3I/uFgNmaNjqpf4K12HGwIIU1j
XLQIwNRYZw==
=SOPy
-----END PGP PUBLIC KEY BLOCK-----
```
# Projects
Source: https://pipedream.com/docs/projects
A workspace can contain one or more *projects*. Projects are a way to organize your workflows into specific groupings or categories.
## Getting started with projects
### Creating projects
To create a new project, first [open the Projects section in the dashboard](https://pipedream.com/projects).
Then click **Create project** to start a new project.
Enter in your desired name for the project in the prompt, then click **Create**.
That’s it: you now have a dedicated new project within your workspace. You can create workflows within this project, move existing workflows into it, or create folders for further organization.
### Creating folders and workflows in projects
Within a given project, you can create folders for your workflows.
Open your project, and then click the **New** button for a dropdown to create a workflow in your current project.
Helpful hotkeys to speed up your development:
* `C then F` creates a new folder.
* `C then W` creates a new workflow.
Folders can also contain sub-folders, which allows you to create a filing system to organize your workflows.
### Moving workflows into folders
To move workflows into folders, simply drag and drop the workflow into the folder.
You can move workflows or folders up a level by dragging and dropping the workflow to the folder icon at the top of the list.
### Importing workflows into projects
This only applies to Pipedream accounts that created workflows before the projects feature was released.
To import a workflow from the general **Workflows** area of your dashboard into a project:
1. Open the Workflows area in the dashboard
2. Select one or more workflows you’d like to import into a project
3. Click *Move* in the top right and select a project to move them to
### Moving workflows between projects
To move a workflow from one project to another project, first check the workflow and then click **Move** to open a dropdown of projects. Select the project to move this workflow to, and click **Move** once more to complete the move.
GitHub Sync limitation
At this time it’s not possible to move workflows out of projects with GitHub Sync enabled.
## Finding your project’s ID
Visit your project’s **Settings** and copy the project ID.
# Access Controls
Source: https://pipedream.com/docs/projects/access-controls
The [projects list view](https://pipedream.com/projects) contains **Owner** and **Access** columns.
**Owner** indicates who within the workspace owns each project. This is typically the person who created the project.
Projects created before February 2024 don’t automatically have owners, which has no functional impact.
**Access** indicates which workspace members have access to each project, and this can be displayed as “me”, “Workspace”, or “N members”.
## Permissions
Workspace owners and admins are able to perform all actions in projects, whereas workspace members are restricted from performing certain actions in projects.
| Operation | Project creator | Workspace members |
| ------------------------------------------------------------ | --------------- | ----------------- |
| View in [projects listing](https://pipedream.com/projects) | ✅ | ✅ |
| View in [Event History](https://pipedream.com/event-history) | ✅ | ✅ |
| View in global search | ✅ | ✅ |
| Manage project workflows | ✅ | ✅ |
| Manage project files | ✅ | ✅ |
| Manage project variables | ✅ | ✅ |
| Manage member access | ✅ | ❌ |
| Manage GitHub Sync settings | ✅ | ❌ |
| Delete project | ✅ | ❌ |
**Workspace admins and owners have the same permissions as project creators for all projects in the workspace.**
## Managing access
By default, all projects are accessible to all workspace members. Workspaces on the [Business plan](https://pipedream.com/pricing) can restrict access for individual projects to specific workspace members.
You can easily modify the access rules for a project directly from the [project list view](https://pipedream.com/projects), either by clicking the access badge in the project row (fig 1) or clicking the 3 dots to open the action menu, then selecting **Manage Access** (fig 2).
Via the access badge (fig 1):
Via the action menu (fig 2):
From here, a slideout drawer reveals the access management configuration:
Toggle the **Restrict access to this project** switch to manage access:
Select specific members of the workspace to grant access:
You can always see who has access and remove access if necessary:
# Project Variables and Secrets
Source: https://pipedream.com/docs/projects/secrets
Environment variables defined at the global workspace level are accessible to all workspace members and workflows within the workspace. To restrict access to sensitive variables or secrets, define them at the project-level and [configure access controls for the project](/docs/projects/access-controls/#managing-access).
[See here](/docs/workflows/environment-variables/) for info on creating, managing, and using environment variables and secrets.
**Project variables override workspace variables**. When the same variable is defined at both the workspace and project levels (for example, `process.env.BASE_DOMAIN`), the **project** variable takes precedence.
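Conceptually, resolution behaves like a dictionary merge in which project-level values win. This sketch is illustrative only (Pipedream performs the merge for you and exposes the result as ordinary environment variables):

```python
def resolve_env(workspace_vars, project_vars):
    # Project-level values override workspace-level values for the same key
    merged = dict(workspace_vars)
    merged.update(project_vars)
    return merged

env = resolve_env(
    {"BASE_DOMAIN": "example.com", "LOG_LEVEL": "info"},
    {"BASE_DOMAIN": "staging.example.com"},
)
# Workflows in the project see BASE_DOMAIN=staging.example.com,
# while LOG_LEVEL falls through from the workspace
```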
# Get an App
Source: https://pipedream.com/docs/rest-api/api-reference/apps/get-an-app
GET /apps/{app_id}
Retrieve metadata for a specific app.
#### Endpoint
```
GET /apps/{app_id}
```
#### Path Parameters
The ID or name slug of the app you’d like to retrieve. For example, Slack’s unique app ID is `app_OkrhR1`, and its name slug is `slack`.
You can find the app’s ID in the response from the [List apps](/docs/rest-api/#list-apps) endpoint, and the name slug under the **Authentication** section of any [app page](https://pipedream.com/apps).
```bash
curl https://api.pipedream.com/v1/apps/app_OkrhR1 \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json"
```
```json
{
"data": {
"id": "app_OkrhR1",
"name_slug": "slack",
"name": "Slack",
"auth_type": "oauth",
"description": "Slack is a channel-based messaging platform. With Slack, people can work together more effectively, connect all their software tools and services, and find the information they need to do their best work — all within a secure, enterprise-grade environment.",
"img_src": "https://assets.pipedream.net/s.v0/app_OkrhR1/logo/orig",
"custom_fields_json": "[]",
"categories": [
"Communication"
],
"featured_weight": 1000000001,
"connect": {
"proxy_enabled": true,
"allowed_domains": ["slack.com"],
"base_proxy_target_url": "https://slack.com"
}
}
}
```
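Outside of `curl`, the same request is easy to assemble in code. This sketch only builds the request pieces (the `PIPEDREAM_API_KEY` environment variable is an assumption, not something Pipedream sets for you); pass them to the HTTP client of your choice:

```python
import os

BASE_URL = "https://api.pipedream.com/v1"

def get_app_request(app_id_or_slug, api_key):
    # Accepts either the app ID ("app_OkrhR1") or the name slug ("slack")
    return {
        "method": "GET",
        "url": f"{BASE_URL}/apps/{app_id_or_slug}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    }

req = get_app_request("slack", os.environ.get("PIPEDREAM_API_KEY", ""))
```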
# List Apps
Source: https://pipedream.com/docs/rest-api/api-reference/apps/list-apps
GET /apps
Retrieve a list of all apps available on Pipedream.
#### Endpoint
```
GET /apps
```
#### Parameters
A query string to filter the list of apps. For example, to search for apps that **contain** the string “Slack”, pass `q=Slack`.
***
Pass `1` to filter the list of apps to only those with public triggers or actions.
***
Pass `1` to filter the list of apps to only those with public actions.
***
Pass `1` to filter the list of apps to only those with public triggers.
```bash
curl https://api.pipedream.com/v1/apps \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json"
```
```json
{
"page_info": {
"total_count": 2,
"count": 2,
"start_cursor": "c2xhY2s",
"end_cursor": "c2xhY2tfYm90"
},
"data": [
{
"id": "app_OkrhR1",
"name_slug": "slack",
"name": "Slack",
"auth_type": "oauth",
"description": "Slack is a channel-based messaging platform. With Slack, people can work together more effectively, connect all their software tools and services, and find the information they need to do their best work — all within a secure, enterprise-grade environment.",
"img_src": "https://assets.pipedream.net/s.v0/app_OkrhR1/logo/orig",
"custom_fields_json": "[]",
"categories": [
"Communication"
],
"featured_weight": 1000000001,
"connect": {
"proxy_enabled": true,
"allowed_domains": ["slack.com"],
"base_proxy_target_url": "https://slack.com"
}
},
{
"id": "app_mWnheL",
"name_slug": "slack_bot",
"name": "Slack Bot",
"auth_type": "keys",
"description": "Interact with Slack with your own bot user",
"img_src": "https://assets.pipedream.net/s.v0/app_mWnheL/logo/orig",
"custom_fields_json": "[{\"name\":\"bot_token\",\"label\":\"Bot Token\",\"description\":null,\"default\":null,\"optional\":null,\"type\":\"password\"}]",
"categories": [
"Communication"
],
"featured_weight": 4100,
"connect": {
"proxy_enabled": true,
"allowed_domains": ["slack.com"],
"base_proxy_target_url": "https://slack.com"
}
}
]
}
```
The `apps` API returns a `featured_weight` for integrated apps, which powers the sort order on [pipedream.com/apps](https://pipedream.com/apps). Note that this is roughly based on popularity from Pipedream users, but is manually defined by Pipedream and is subject to change.
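Since `featured_weight` drives the ordering, you can reproduce that sort locally over the `data` array from the response above (a quick sketch):

```python
apps = [
    {"name_slug": "slack_bot", "featured_weight": 4100},
    {"name_slug": "slack", "featured_weight": 1000000001},
]
# Higher featured_weight ranks first, mirroring pipedream.com/apps
ranked = sorted(apps, key=lambda a: a["featured_weight"], reverse=True)
```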
# Components
Source: https://pipedream.com/docs/rest-api/api-reference/components
These docs discuss the management of Pipedream components. To run components on behalf of your end users in your application, refer to the [Connect API docs](/docs/connect/api-reference/list-components).
Components are objects that represent the code for an [event source](/docs/rest-api/#sources).
# Create a component
Source: https://pipedream.com/docs/rest-api/api-reference/components/create-a-component
POST /components
`/components` endpoints are only available when using [user API keys](/docs/rest-api/auth/#user-api-keys), not yet for workspace [OAuth tokens](/docs/rest-api/auth/#oauth).
Before you can create a source using the REST API, you must first create a **component**: the code for the source.
This route returns the component’s `id`, `code`, `configurable_props`, and other metadata you’ll need to [deploy a source](/docs/rest-api/#create-a-source) from this component.
#### Endpoint
```
POST /components
```
#### Parameters
The full code for a [Pipedream component](/docs/components/contributing/api/).
***
A reference to the URL where the component is hosted.
For example, to create an RSS component, pass `https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts`.
***
One of `component_code` *or* `component_url` is required. If both are present, `component_code` is preferred and `component_url` will be used only as metadata to identify the location of the code.
```bash
curl https://api.pipedream.com/v1/components \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json" \
-d '{"component_url": "https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts"}'
```
```json
{
"data": {
"id": "sc_JDi8EB",
"code": "component code here",
"code_hash": "685c7a680d055eaf505b08d5d814feef9fabd516d5960837d2e0838d3e1c9ed1",
"name": "rss",
"version": "0.0.1",
"configurable_props": [
{
"name": "url",
"type": "string",
"label": "Feed URL",
"description": "Enter the URL for any public RSS feed."
},
{
"name": "timer",
"type": "$.interface.timer",
"default": {
"intervalSeconds": 900
}
}
],
"created_at": 1588866900,
"updated_at": 1588866900
}
}
```
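The `component_code` / `component_url` precedence described above can be captured when you build the request body; a minimal sketch:

```python
def build_component_payload(component_code=None, component_url=None):
    # One of component_code or component_url is required
    if not component_code and not component_url:
        raise ValueError("one of component_code or component_url is required")
    payload = {}
    if component_code:
        # Code takes precedence; a URL passed alongside it is metadata only
        payload["component_code"] = component_code
    if component_url:
        payload["component_url"] = component_url
    return payload
```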
# Get a component
Source: https://pipedream.com/docs/rest-api/api-reference/components/get-a-component
GET /components/{key|id}
Retrieve a component saved or published in your account using its saved component ID **or** key.
This endpoint returns the component’s metadata and configurable props.
#### Endpoint
```
GET /components/{key|id}
```
#### Parameters
The component key (identified by the `key` property within the component’s source code) you’d like to fetch metadata for (example: `my-component`)
**or**
The saved component ID you’d like to fetch metadata for (example: `sc_JDi8EB`)
```bash
curl https://api.pipedream.com/v1/components/my-component \
-H "Authorization: Bearer <access_token>"
```
```json
{
"data": {
"id": "sc_JDi8EB",
"code": "component code here",
"code_hash": "685c7a680d055eaf505b08d5d814feef9fabd516d5960837d2e0838d3e1c9ed1",
"name": "rss",
"version": "0.0.1",
"configurable_props": [
{
"name": "url",
"type": "string",
"label": "Feed URL",
"description": "Enter the URL for any public RSS feed."
},
{
"name": "timer",
"type": "$.interface.timer",
"default": {
"intervalSeconds": 900
}
}
],
"created_at": 1588866900,
"updated_at": 1588866900
}
}
```
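Because the endpoint accepts either a key or a saved component ID, a small helper can tell the two apart. This assumes saved component IDs always carry the `sc_` prefix, as in the examples above:

```python
def component_identifier_kind(identifier):
    # Saved component IDs look like "sc_JDi8EB"; anything else is
    # treated as a component key like "my-component"
    return "id" if identifier.startswith("sc_") else "key"
```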
# Get a component from the global registry
Source: https://pipedream.com/docs/rest-api/api-reference/components/get-a-component-from-the-global-registry
GET /components/registry/{key}
Pipedream operates a global registry of all public components (for example, for apps like GitHub, Google Calendar, and more). This endpoint returns the same data as the endpoint for [retrieving metadata on a component you own](/docs/rest-api/#get-a-component), but allows you to fetch data for any globally-published component.
#### Endpoint
```
GET /components/registry/{key}
```
#### Parameters
The component key (identified by the `key` property within the component’s source code) you’d like to fetch metadata for (example: `my-component`)
```bash
curl https://api.pipedream.com/v1/components/registry/github-new-repository \
-H "Authorization: Bearer <access_token>"
```
```json
{
"data": {
"id": "sc_JDi8EB",
"code": "component code here",
"code_hash": "685c7a680d055eaf505b08d5d814feef9fabd516d5960837d2e0838d3e1c9ed1",
"name": "rss",
"version": "0.0.1",
"configurable_props": [
{
"name": "url",
"type": "string",
"label": "Feed URL",
"description": "Enter the URL for any public RSS feed."
},
{
"name": "timer",
"type": "$.interface.timer",
"default": {
"intervalSeconds": 900
}
}
],
"created_at": 1588866900,
"updated_at": 1588866900
}
}
```
# Search for registry components
Source: https://pipedream.com/docs/rest-api/api-reference/components/search-for-registry-components
GET /components/search
Search for components in the global registry with natural language. Pipedream will use AI to match your query to the most relevant components.
#### Endpoint
```
GET /components/search
```
#### Parameters
The query string to search for components in the global registry, e.g. “Send a message to Slack on new Hubspot contacts”
***
The name slug of the app you’d like to filter results by. For example, Slack’s name slug is `slack`. Returned sources and actions are filtered to only those tied to the specified app.
You can find the name slug under the **Authentication** section of any [app page](https://pipedream.com/apps).
***
The minimum similarity score required for a component to be returned. The similarity score is a number between 0 and 1, where 1 is a perfect match. Similarity here is computed as the cosine distance between the embedding of the user query and the embedding of the component’s metadata.
***
Pass `debug=true` to return additional data in the response, useful for inspecting the results.
```bash
curl "https://api.pipedream.com/v1/components/search?query=When%20a%20new%20Hubspot%20contact%20is%20added%2C%20send%20me%20an%20SMS&limit=1" \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json"
```
```json
{
"sources": [
"hubspot-new-contact"
],
"actions": [
"twilio-send-sms"
]
}
```
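Since search queries contain spaces and punctuation, it’s safer to URL-encode them in code than to escape them by hand. A sketch (the `app` parameter name here is an assumption for the app-slug filter described above):

```python
from urllib.parse import urlencode

SEARCH_URL = "https://api.pipedream.com/v1/components/search"

def search_components_url(query, app=None, limit=None):
    params = {"query": query}
    if app:
        params["app"] = app  # assumed name of the app-slug filter param
    if limit:
        params["limit"] = limit
    # urlencode handles spaces, commas, and other reserved characters
    return SEARCH_URL + "?" + urlencode(params)

url = search_components_url(
    "When a new Hubspot contact is added, send me an SMS", limit=1
)
```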
# Delete source events
Source: https://pipedream.com/docs/rest-api/api-reference/events/delete-source-events
DELETE /sources/{id}/events
### Delete source events
Deletes all events, or a specific set of events, tied to a source.
By default, making a `DELETE` request to this endpoint deletes **all** events associated with a source. To delete a specific event, or a range of events, you can use the `start_id` and `end_id` parameters.
These IDs can be retrieved by using the [`GET /sources/{id}/event_summaries` endpoint](/docs/rest-api/#get-source-events), and are tied to the timestamp at which the event was emitted — e.g. `1589486981597-0`. They are therefore naturally ordered by time.
#### Endpoint
```
DELETE /sources/{id}/events
```
#### Parameters
The event ID from which you’d like to start deleting events.
If `start_id` is passed without `end_id`, the request will delete all events starting with and including this event ID. For example, if your source has 3 events:
* `1589486981597-0`
* `1589486981598-0`
* `1589486981599-0`
and you issue a `DELETE` request like so:
```bash
curl -X DELETE \
-H "Authorization: Bearer <access_token>" \
"https://api.pipedream.com/v1/sources/dc_abc123/events?start_id=1589486981598-0"
```
The request will delete the **last two events**.
***
The event ID from which you’d like to end the range of deletion.
If `end_id` is passed without `start_id`, the request will delete all events up to and including this event ID. For example, if your source has 3 events:
* `1589486981597-0`
* `1589486981598-0`
* `1589486981599-0`
and you issue a `DELETE` request like so:
```bash
curl -X DELETE \
-H "Authorization: Bearer <access_token>" \
"https://api.pipedream.com/v1/sources/dc_abc123/events?end_id=1589486981598-0"
```
The request will delete the **first two events**.
```bash
# You can delete a single event by passing its event ID in both the value of the `start_id` and `end_id` params:
curl -X DELETE \
-H "Authorization: Bearer <access_token>" \
"https://api.pipedream.com/v1/sources/dc_abc123/events?start_id=1589486981598-0&end_id=1589486981598-0"
```
Deletion happens asynchronously, so you’ll receive a `202 Accepted` HTTP status code in response to any deletion requests.
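The range semantics above can be modeled as an inclusive filter over time-ordered event IDs (illustrative only; the API performs this selection server-side):

```python
def events_in_range(event_ids, start_id=None, end_id=None):
    # IDs like "1589486981597-0" begin with a millisecond timestamp, so
    # same-width IDs compare chronologically as plain strings
    selected = []
    for eid in sorted(event_ids):
        if start_id is not None and eid < start_id:
            continue
        if end_id is not None and eid > end_id:
            continue
        selected.append(eid)
    return selected

ids = ["1589486981597-0", "1589486981598-0", "1589486981599-0"]
```

Passing the same ID as both `start_id` and `end_id` selects exactly one event, matching the single-event deletion example above.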
# Get Source Events
Source: https://pipedream.com/docs/rest-api/api-reference/events/get-source-events
GET /sources/{id}/event_summaries
Retrieve up to the last 100 events emitted by a source.
#### Endpoint
```
GET /sources/{id}/event_summaries
```
#### Notes and Examples
The event data for events larger than `1KB` may get truncated in the response. If you’re processing larger events, and need to see the full event data, pass `?expand=event`:
```
GET /sources/{id}/event_summaries?expand=event
```
Pass `?limit=N` to retrieve the last **N** events:
```
GET /sources/{id}/event_summaries?limit=10
```
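A small helper makes it easy to combine these query parameters (a sketch; the parameter names are exactly those shown above):

```python
def event_summaries_url(source_id, expand_event=False, limit=None):
    url = f"https://api.pipedream.com/v1/sources/{source_id}/event_summaries"
    params = []
    if expand_event:
        params.append("expand=event")  # return full event data, not summaries
    if limit is not None:
        params.append(f"limit={limit}")  # last N events, up to 100
    return url + ("?" + "&".join(params) if params else "")
```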
# Get a new access token
Source: https://pipedream.com/docs/rest-api/api-reference/oauth/get-a-new-access-token
POST /oauth/token
Exchanges a client ID and client secret for a new access token.
#### Endpoint
```
POST /oauth/token
```
#### Parameters
The OAuth grant type. For Pipedream, this is always `client_credentials`.
***
The client ID of the OAuth app.
***
The client secret of the OAuth app.
```bash
curl https://api.pipedream.com/v1/oauth/token \
-H 'Content-Type: application/json' \
-d '{ "grant_type": "client_credentials", "client_id": "<client_id>", "client_secret": "<client_secret>" }'
```
```json
{
"access_token": "<access_token>",
"token_type": "Bearer",
"expires_in": 3600,
"created_at": 1645142400
}
```
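Since the response reports `created_at` (Unix seconds) and `expires_in` (seconds), you can compute when to request a fresh token. A sketch that refreshes a bit early:

```python
def token_refresh_deadline(token_response, safety_margin=60):
    # Refresh `safety_margin` seconds before the token actually expires
    return (
        token_response["created_at"]
        + token_response["expires_in"]
        - safety_margin
    )

deadline = token_refresh_deadline({
    "token_type": "Bearer",
    "expires_in": 3600,
    "created_at": 1645142400,
})
```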
# Revoke an access token
Source: https://pipedream.com/docs/rest-api/api-reference/oauth/revoke-an-access-token
POST /oauth/revoke
Revokes an access token, rendering it invalid for future requests.
#### Endpoint
```
POST /oauth/revoke
```
#### Parameters
The access token to revoke.
***
The client ID of the OAuth app.
***
The client secret of the OAuth app.
***
```bash
curl https://api.pipedream.com/v1/oauth/revoke \
-H 'Content-Type: application/json' \
-d '{ "token": "<access_token>", "client_id": "<client_id>", "client_secret": "<client_secret>" }'
```
This endpoint returns a `200 OK` response with an empty body if the token was successfully revoked:
```json
{}
```
# Sources
Source: https://pipedream.com/docs/rest-api/api-reference/sources
Event sources run code to collect events from an API, or receive events via webhooks, emitting those events for use on Pipedream. Event sources can function as workflow triggers. [Read more here](/docs/workflows/building-workflows/triggers/).
# Create a Source
Source: https://pipedream.com/docs/rest-api/api-reference/sources/create-a-source
POST /sources
This endpoint is only available when using [user API keys](/docs/rest-api/auth/#user-api-keys), not yet for workspace [OAuth tokens](/docs/rest-api/auth/#oauth).
#### Endpoint
```
POST /sources
```
#### Parameters
The ID of a component previously created in your account. [See the component endpoints](/docs/rest-api/#components) for information on how to retrieve this ID.
***
The full code for a [Pipedream component](/docs/components/contributing/api/).
***
A reference to the URL where the component is hosted.
For example, to create an RSS component, pass `https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts`.
***
One of `component_id`, `component_code`, or `component_url` is required. If all are present, `component_id` is preferred and `component_url` will be used only as metadata to identify the location of the code.
***
The name of the source.
If absent, this defaults to using the [name slug](/docs/components/contributing/api/#component-structure) of the component used to create the source.
```bash
curl https://api.pipedream.com/v1/sources \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json" \
-d '{"component_url": "https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts", "name": "your-name-here", "configured_props": { "url": "https://rss.m.pipedream.net", "timer": { "intervalSeconds": 60 }}}'
```
```json
// Example response from creating an RSS source that runs once a minute:
{
"data": {
"id": "dc_abc123",
"user_id": "u_abc123",
"component_id": "sc_abc123",
"configured_props": {
"url": "https://rss.m.pipedream.net",
"timer": {
"cron": null,
"interval_seconds": 60
}
},
"active": true,
"created_at": 1589486978,
"updated_at": 1589486978,
"name": "your-name-here",
"name_slug": "your-name-here"
}
}
```
# Delete a source
Source: https://pipedream.com/docs/rest-api/api-reference/sources/delete-a-source
DELETE /sources/{id}
#### Endpoint
```
DELETE /sources/{id}
```
# Update a source
Source: https://pipedream.com/docs/rest-api/api-reference/sources/update-a-source
PUT /sources/{id}
#### Endpoint
```
PUT /sources/{id}
```
#### Parameters
The ID of a component previously created in your account. [See the component endpoints](/docs/rest-api/#components) for information on how to retrieve this ID.
***
The full code for a [Pipedream component](/docs/components/contributing/api/).
***
A reference to the URL where the component is hosted.
For example, to create an RSS component, pass `https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts`.
***
One of `component_id`, `component_code`, or `component_url` is required. If all are present, `component_id` is preferred and `component_url` will be used only as metadata to identify the location of the code.
***
The name of the source.
If absent, this defaults to using the [name slug](/docs/components/contributing/api/#component-structure) of the component used to create the source.
***
The active state of a component. To disable a component, set to `false`. To enable a component, set to `true`.
Default: `true`.
# Subscriptions
Source: https://pipedream.com/docs/rest-api/api-reference/subscription
The Subscriptions API is currently incompatible with projects that have [GitHub Sync](/docs/workflows/git/) enabled. To [trigger another workflow](/docs/workflows/building-workflows/code/nodejs/#invoke-another-workflow), use `$.flow.trigger` instead.
# Automatically subscribe a listener to events from new workflows / sources
Source: https://pipedream.com/docs/rest-api/api-reference/subscription/automatically-subscribe-a-listener-to-events-from-new-workflows-sources
POST /auto_subscriptions
You can use this endpoint to automatically receive events, like workflow errors, in another listening workflow or event source. Once you set up the auto-subscription, any new workflows or event sources you create will automatically deliver the specified events to the listener.
Note: this configures subscriptions only for workflows and sources created *after* you set up the auto-subscription. To deliver events to your listener from *existing* workflows or sources, use the [`POST /subscriptions` endpoint](/docs/rest-api/#listen-for-events-from-another-source-or-workflow).
**Currently, this feature is enabled only on the API. The Pipedream UI will not display the sources configured as listeners using this API**.
#### Endpoint
```
POST /auto_subscriptions?event_name={event_name}&listener_id={receiving_source_id}
```
#### Parameters
The name of the event stream whose events you’d like to receive:
* `$errors`: Any errors thrown by workflows or sources are emitted to this stream
* `$logs`: Any logs produced by **event sources** are emitted to this stream
***
The ID of the component or workflow that should receive the events.
[See the component endpoints](/docs/rest-api/#components) for information on how to retrieve the ID of existing components. You can retrieve the ID of your workflow in your workflow’s URL - it’s the string `p_2gCPml` in `https://pipedream.com/@dylan/example-rss-sql-workflow-p_2gCPml/edit`.
```bash
# You can configure workflow `p_abc123` to listen to events from the source `dc_def456` using the following command:
curl 'https://api.pipedream.com/v1/auto_subscriptions?event_name=$errors&listener_id=p_abc123' \
-X POST \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json"
```
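Note that `$errors` starts with `$`, which most shells expand inside double quotes; building the URL in code avoids quoting pitfalls entirely. A sketch:

```python
from urllib.parse import urlencode

def auto_subscription_url(event_name, listener_id):
    # urlencode percent-encodes "$", so "$errors" survives intact
    params = {"event_name": event_name, "listener_id": listener_id}
    return "https://api.pipedream.com/v1/auto_subscriptions?" + urlencode(params)

url = auto_subscription_url("$errors", "p_abc123")
```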
# Delete a subscription
Source: https://pipedream.com/docs/rest-api/api-reference/subscription/delete-a-subscription
DELETE /subscriptions
Use this endpoint to delete an existing subscription. This endpoint accepts the same parameters as the [`POST /subscriptions` endpoint](/docs/rest-api/#listen-for-events-from-another-source-or-workflow) for creating subscriptions.
#### Endpoint
```
DELETE /subscriptions?emitter_id={emitting_component_id}&listener_id={receiving_source_id}&event_name={event_name}
```
#### Parameters
The ID of the workflow or component emitting events. Events from this component trigger the receiving component / workflow.
`emitter_id` also accepts glob patterns that allow you to subscribe to *all* workflows or components:
* `p_*`: Listen to events from all workflows
* `dc_*`: Listen to events from all event sources
[See the component endpoints](/docs/rest-api/#components) for information on how to retrieve the ID of existing components. You can retrieve the ID of your workflow in your workflow’s URL - it’s the string `p_2gCPml` in `https://pipedream.com/@dylan/example-rss-sql-workflow-p_2gCPml/edit`.
***
The ID of the component or workflow that should receive the events.
[See the component endpoints](/docs/rest-api/#components) for information on how to retrieve the ID of existing components. You can retrieve the ID of your workflow in your workflow’s URL - it’s the string `p_2gCPml` in `https://pipedream.com/@dylan/example-rss-sql-workflow-p_2gCPml/edit`.
***
The name of the event stream tied to your subscription. **If you didn’t specify an `event_name` when creating your subscription, pass `event_name=`**.
You’ll find the `event_name` that’s tied to your subscription when [listing your subscriptions](/docs/rest-api/#get-current-users-subscriptions):
```json
{
"id": "sub_abc123",
"emitter_id": "dc_abc123",
"listener_id": "dc_def456",
"event_name": "test"
},
{
"id": "sub_def456",
"emitter_id": "dc_abc123",
"listener_id": "wh_abc123",
"event_name": ""
}
```
```bash
# You can delete a subscription you configured for workflow `p_abc123` to listen to events from the source `dc_def456` using the following command:
curl "https://api.pipedream.com/v1/subscriptions?emitter_id=dc_def456&listener_id=p_abc123" \
-X DELETE \
-H "Authorization: Bearer <access_token>" \
-H "Content-Type: application/json"
```
# Listen for events from another source or workflow
Source: https://pipedream.com/docs/rest-api/api-reference/subscription/listen-for-events-from-another-source-or-workflow
POST /subscriptions
You can configure a source or workflow to receive events from any number of other workflows or sources. For example, if you want a single workflow to run on 10 different RSS sources, you can configure the workflow to *listen* for events from those 10 sources.
#### Endpoint
```
POST /subscriptions?emitter_id={emitting_component_id}&event_name={event_name}&listener_id={receiving_source_id}
```
#### Parameters
**emitter_id**

The ID of the workflow or component emitting events. Events from this component trigger the receiving component / workflow.
`emitter_id` also accepts glob patterns that allow you to subscribe to *all* workflows or components:
* `p_*`: Listen to events from all workflows
* `dc_*`: Listen to events from all event sources
[See the component endpoints](/docs/rest-api/#components) for information on how to retrieve the ID of existing components. You can retrieve the ID of your workflow in your workflow’s URL - it’s the string `p_2gCPml` in `https://pipedream.com/@dylan/example-rss-sql-workflow-p_2gCPml/edit`.
***
**event_name**

**Only pass `event_name` when you’re listening for events on a custom channel, with the name of the custom channel**:
```
event_name=
```
See [the `this.$emit` docs](/docs/components/contributing/api/#emit) for more information on how to emit events on custom channels.
Pipedream also exposes channels for logs and errors:
* `$errors`: Any errors thrown by workflows or sources are emitted to this stream
* `$logs`: Any logs produced by **event sources** are emitted to this stream
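For example, to deliver every error thrown by any workflow to a single listener, you can combine the `p_*` glob with the `$errors` channel. A sketch of assembling that request URL (the helper function is hypothetical; the query parameters come from this endpoint's spec):

```javascript
// Build the POST /subscriptions URL that subscribes a listener to the
// $errors channel of all workflows (glob emitter p_*).
function errorStreamSubscriptionUrl(listenerId) {
  const base = "https://api.pipedream.com/v1/subscriptions";
  return `${base}?emitter_id=p_*&event_name=$errors&listener_id=${encodeURIComponent(listenerId)}`;
}
```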
***
**listener_id**

The ID of the component or workflow that should receive the events.
[See the component endpoints](/docs/rest-api/#components) for information on how to retrieve the ID of existing components. You can retrieve the ID of your workflow in your workflow’s URL - it’s the string `p_2gCPml` in `https://pipedream.com/@dylan/example-rss-sql-workflow-p_2gCPml/edit`.
#### Example Request
You can configure workflow `p_abc123` to listen to events from the source `dc_def456` using the following command:
```bash
curl "https://api.pipedream.com/v1/subscriptions?emitter_id=dc_def456&listener_id=p_abc123" \
-X POST \
-H "Authorization: Bearer " \
-H "Content-Type: application/json"
```
# Users
Source: https://pipedream.com/docs/rest-api/api-reference/users
These endpoints only work when using [user API keys](/docs/rest-api/auth/#user-api-keys), and will not work with workspace-level OAuth clients.
# Get Current User Info
Source: https://pipedream.com/docs/rest-api/api-reference/users/get-current-user-info
GET /users/me
Retrieve information on the authenticated user.
#### Endpoint
```
GET /users/me
```
#### Parameters
*No parameters*
```bash
curl 'https://api.pipedream.com/v1/users/me' \
-H 'Authorization: Bearer '
```
```json Free user
{
"data": {
"id": "u_abc123",
"username": "dylburger",
"email": "dylan@pipedream.com",
"daily_compute_time_quota": 95400000,
"daily_compute_time_used": 8420300,
"daily_invocations_quota": 27344,
"daily_invocations_used": 24903,
"orgs": [
{
"name": "MyWorkspace",
"id": "o_abc123",
"orgname": "myworkspace",
"email": "workspace@pipedream.com",
"daily_credits_quota": 100,
"daily_credits_used": 0
},
{
"name": "MyTeam",
"id": "o_edf456",
"orgname": "myteam",
"email": "team@pipedream.com",
"daily_credits_quota": 100,
"daily_credits_used": 0,
"daily_compute_time_quota": 1800000,
"daily_compute_time_used": 0,
"daily_invocations_quota": 100,
"daily_invocations_used": 0
}
]
}
}
```
```json Paid user
{
"data": {
"id": "u_abc123",
"username": "user-35b7389db9e5222d42df6b3f0cfa8143",
"email": "dylan@pipedream.com",
"billing_period_start_ts": 1610154978,
"billing_period_end_ts": 1612833378,
"billing_period_credits": 12345
}
}
```
# Webhooks
Source: https://pipedream.com/docs/rest-api/api-reference/webhooks
Pipedream supports webhooks as a way to deliver events to an endpoint you own. Webhooks are managed at an account-level, and you send data to these webhooks using [subscriptions](/docs/rest-api/#subscriptions).
For example, you can run a Twitter [event source](/docs/workflows/building-workflows/triggers/) that listens for new tweets. If you [subscribe](/docs/rest-api/#subscriptions) the webhook to this source, Pipedream will deliver those tweets directly to your webhook’s URL without running a workflow.
[**See these tutorials**](/docs/rest-api/examples/webhooks/) for examples.
# Create a webhook
Source: https://pipedream.com/docs/rest-api/api-reference/webhooks/create-a-webhook
POST /webhooks
Creates a webhook pointing to a URL. Configure a [subscription](/docs/rest-api/#subscriptions) to deliver events to this webhook.
#### Endpoint
```
POST /webhooks?url={your_endpoint_url}&name={name}&description={description}
```
#### Parameters
**url**

The endpoint URL where you’d like to deliver events. Any events sent to this webhook object will be delivered to this endpoint URL.
This URL **must** contain, at a minimum, a protocol — one of `http` or `https` — and a hostname, and can optionally include a path or port. For example, these URLs work:
```
https://example.com
http://example.com
https://example.com:12345/endpoint
```
but these do not:
```
# No protocol - needs http(s)://
example.com
# mysql protocol not supported. Must be an HTTP(S) endpoint
mysql://user:pass@host:port
```
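A quick client-side check that mirrors these rules (a parseable URL with an `http` or `https` protocol) might look like the sketch below; Pipedream performs its own validation server-side:

```javascript
// Mirror the webhook URL rules: must parse as a URL and use http(s).
// Illustrative only; the API enforces these rules server-side.
function isValidWebhookUrl(raw) {
  try {
    const { protocol } = new URL(raw);
    return protocol === "http:" || protocol === "https:";
  } catch {
    return false; // no protocol, or not a parseable URL
  }
}
```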
***
**name**

The name you’d like to assign to this webhook, which will appear when [listing your webhooks](/docs/rest-api/#get-current-users-webhooks).
***
**description**

The description you’d like to assign to this webhook.
For example, you can create a webhook that delivers events to `https://endpoint.m.pipedream.net` using the following command:

```bash
curl "https://api.pipedream.com/v1/webhooks?url=https://endpoint.m.pipedream.net&name=name&description=description" \
-X POST \
-H "Authorization: Bearer " \
-H "Content-Type: application/json"
```
Successful API responses contain a webhook ID for the webhook that was created in `data.id` — the string that starts with `wh_` — which you can reference when creating [subscriptions](/docs/rest-api/#subscriptions):

```json
{
"data": {
"id": "wh_abc123",
"user_id": "u_abc123",
"name": null,
"description": null,
"url": "https://endpoint.m.pipedream.net",
"active": true,
"created_at": 1611964025,
"updated_at": 1611964025
}
}
```
# Delete a webhook
Source: https://pipedream.com/docs/rest-api/api-reference/webhooks/delete-a-webhook
DELETE /webhooks/{id}
Use this endpoint to delete a webhook in your account.
#### Endpoint
```
DELETE /webhooks/{id}
```
#### Path Parameters
**id**

The ID of a webhook in your account.
***
```bash
curl "https://api.pipedream.com/v1/webhooks/wh_abc123" \
-X DELETE \
-H "Authorization: Bearer " \
-H "Content-Type: application/json"
```
# Create a Workflow
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/create-a-workflow
POST /workflows
This endpoint is only available when using [user API keys](/docs/rest-api/auth/#user-api-keys), not yet for workspace [OAuth tokens](/docs/rest-api/auth/#oauth).
Creates a new workflow within an organization’s project. This endpoint allows defining workflow steps, triggers, and settings based on a supplied template.
#### Endpoint
```
POST /workflows
```
#### Request Body
**org_id**

[Switch to your workspace’s context](/docs/workspaces/#switching-between-workspaces) and [find your org’s ID](/docs/workspaces/#finding-your-workspaces-id).
***
**project_id**

The ID of the project where the new workflow will be created. To find your project ID, switch to your desired workspace, and click on Projects in the top left of the Pipedream dashboard.
Click on the project where you’d like to create the new workflow, and the project ID can be found in the URL, starting with `proj_`.
If the URL is [https://pipedream.com/@pd-testing/projects/proj\_GzsRY5N/tree](https://pipedream.com/@pd-testing/projects/proj_GzsRY5N/tree), your `project_id` is `proj_GzsRY5N`.
***
**template_id**

The ID of the workflow template to base the workflow on. To find a workflow’s `template_id`, navigate to the workflow you’d like to use as a template and click “Create share link”. If the URL created is [https://pipedream.com/new?h=tch\_Vdfl0l](https://pipedream.com/new?h=tch_Vdfl0l), your `template_id` is `tch_Vdfl0l`.
***
**steps**

Definitions of the steps to include in the workflow. Each item in the array represents a step, with its namespace and `props`.
***
**triggers**

Definitions of the triggers that will start the workflow. Each item in the array represents a trigger, with its type and `props`.
***
**settings**

Additional settings for the workflow, such as `name` and `auto_deploy`.
```json
{
"project_id": "proj_wx9sgy",
"org_id": "o_BYDI5y",
"template_id": "tch_3BXfWO",
"steps": [
{
"namespace": "code",
"props": {
"stringProp": "asdf"
}
},
{
"namespace": "keyauth_hello_world",
"props": {
"keyauth": {
"authProvisionId": "apn_Nb6h9v"
}
}
}
],
"triggers": [
{
"props": {
"oauth": {
"authProvisionId": "apn_qZWh4A"
},
"string": "jkl"
}
}
],
"settings": {
"name": "example workflow name",
"auto_deploy": true
}
}
```
```json
{
"data": {
"id": "p_48rCxZ",
"name": "example workflow name",
"active": true,
"steps": [
{
"id": "c_bDf10L",
"type": "CodeCell",
"namespace": "code",
"disabled": false,
"code_raw": null,
"codeRaw": null,
"codeConfigJson": null,
"lang": "nodejs20.x",
"text_raw": null,
"appConnections": [],
"flat_params_visibility_json": null,
"params_json": "{}",
"component": true,
"savedComponent": {
"id": "sc_PRYiAZ",
"code": "export default defineComponent({\n props: {\n stringProp: {\n type: \"string\"\n },\n intProp: {\n type: \"integer\",\n }\n },\n async run({ steps, $ }) {\n console.log(this.stringProp)\n return steps.trigger.event\n },\n})",
"codeHash": "1908045950f3c1a861e538b20d70732adb701a81174dc59f809398e43f85d132",
"configurableProps": [
{
"name": "stringProp",
"type": "string"
},
{
"name": "intProp",
"type": "integer"
}
],
"key": null,
"description": null,
"entryPath": null,
"version": "",
"apps": []
},
"component_key": null,
"component_owner_id": null,
"configured_props_json": "{\"intProp\":5,\"stringProp\":\"asdf\"}",
"authProvisionIdMap": {},
"authProvisionIds": []
},
{
"id": "c_W3f0YV",
"type": "CodeCell",
"namespace": "python",
"disabled": false,
"code_raw": null,
"codeRaw": null,
"codeConfigJson": null,
"lang": "python3.12",
"text_raw": null,
"appConnections": [],
"flat_params_visibility_json": null,
"params_json": "{}",
"component": true,
"savedComponent": {
"id": "sc_mweiWO",
"code": "def handler(pd: \"pipedream\"):\n # Reference data from previous steps\n print(pd.steps[\"trigger\"][\"context\"][\"id\"])\n # Return data for use in future steps\n return {\"foo\": {\"test\": True}}\n",
"codeHash": "63b32f00f1bc0b594e7a109cced4bda5011ab4420e358f743058dc46de8c5270",
"configurableProps": [],
"key": null,
"description": null,
"entryPath": null,
"version": "",
"apps": []
},
"component_key": null,
"component_owner_id": null,
"configured_props_json": null,
"authProvisionIdMap": {},
"authProvisionIds": []
},
{
"id": "c_D7feVN",
"type": "CodeCell",
"namespace": "keyauth_hello_world",
"disabled": false,
"code_raw": null,
"codeRaw": null,
"codeConfigJson": null,
"lang": "nodejs20.x",
"text_raw": null,
"appConnections": [],
"flat_params_visibility_json": null,
"params_json": "{}",
"component": true,
"savedComponent": {
"id": "sc_71Li4l",
"code": "const keyauth = {\n type: \"app\",\n app: \"keyauth\",\n propDefinitions: {},\n}\n\nexport default {\n name: \"Key auth hello world\",\n version: \"0.0.1\",\n key: \"keyauth-hello-world\",\n type: \"action\",\n description: \"simple hello world with dev keyauth app.\",\n props: {\n keyauth,\n },\n async run() {\n console.log(\"hello world\")\n return \"hello world\"\n },\n}\n",
"codeHash": "b7d5c6540f60e63174a96d5e5ba4aa89bf45b7b9d9fdc01db0ee64c905962415",
"configurableProps": [
{
"name": "keyauth",
"type": "app",
"app": "keyauth"
}
],
"key": "keyauth-hello-world",
"description": "simple hello world with dev keyauth app.",
"entryPath": null,
"version": "0.0.1",
"apps": [
{
"appId": "app_1xohQx",
"nameSlug": "keyauth",
"authType": "keys"
}
]
},
"component_key": "keyauth-hello-world",
"component_owner_id": null,
"configured_props_json": "{\"keyauth\":{\"authProvisionId\":\"apn_Nb6h9v\"}}",
"authProvisionIdMap": {},
"authProvisionIds": []
}
],
"triggers": [
{
"id": "hi_0R3HKG",
"key": "eohq5aaq8yr4sye",
"endpoint_url": "http://eojq5abv8yr4sye.m.d.pipedream.net",
"custom_response": false,
"created_at": 1707418403,
"updated_at": 1707418403
},
{
"id": "dc_rmXuv3",
"owner_id": "o_BYDI5y",
"component_id": "sc_PgliBJ",
"configured_props": {},
"active": true,
"created_at": 1707241571,
"updated_at": 1707241571,
"name": "Emit hello world",
"name_slug": "emit-hello-world-6"
},
{
"id": "ti_aPxTPY",
"interval_seconds": 3600,
"cron": null,
"timezone": "America/New_York",
"schedule_changed_at": 1707418408,
"created_at": 1707418404,
"updated_at": 1707418404
},
{
"id": "dc_5nvuPv",
"owner_id": "o_BYDI5y",
"component_id": "sc_XGBiLw",
"configured_props": {
"oauth": {
"authProvisionId": "apn_qZWh4A"
},
"string": "jkl"
},
"active": true,
"created_at": 1707418404,
"updated_at": 1707418404,
"name": "oauth-test-source",
"name_slug": "oauth-test-source-3"
},
{
"id": "ei_QbGT3D",
"email_address": "em5tdwgfgbw9piv@upload.pipedream.net",
"created_at": 1707418407,
"updated_at": 1707418407
}
]
}
}
```
# Get a Workflow’s Details
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/get-a-workflows-details
GET /workflows/{workflow_id}
Retrieves the details of a specific workflow within an organization’s project.
#### Endpoint
```
GET /workflows/{workflow_id}
```
#### Path Parameters
**workflow_id**

The ID of the workflow to retrieve.
```bash
curl 'https://api.pipedream.com/v1/workflows/p_abc123?org_id=o_abc123' \
-H 'Authorization: Bearer '
```
```json
{
"triggers": [
{
"id": "hi_ABpHKz",
"key": "eabcdefghiklmnop",
"endpoint_url": "http://eabcdefghiklmnop.m.d.pipedream.net",
"custom_response": false
}
],
"steps": [
{
"id": "c_abc123",
"namespace": "code",
"disabled": false,
"lang": "nodejs20.x",
"appConnections": [],
"component": true,
"savedComponent": {
"id": "sc_abc123",
"codeHash": "long-hash-here",
"configurableProps": [
{
"name": "channelId",
"type": "string"
},
{
"name": "message",
"type": "string"
},
{
"name": "slack",
"type": "app",
"app": "slack"
}
],
"version": ""
},
"component_key": null,
"component_owner_id": "o_abc123",
"configured_props_json": "{}"
}
]
}
```
# Get Workflow Emits
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/get-workflows-emits
GET /workflows/{workflow_id}/event_summaries
Retrieve up to the last 100 events emitted from a workflow using [`$send.emit()`](/docs/workflows/data-management/destinations/emit/#emit-events).
#### Endpoint
```
GET /workflows/{workflow_id}/event_summaries
```
#### Notes and Examples
The event data for events larger than `1KB` may get truncated in the response. If you’re retrieving larger events, and need to see the full event data, pass `?expand=event`:
```
GET /workflows/{workflow_id}/event_summaries?expand=event
```
Pass `?limit=N` to retrieve the last **N** events:
```
GET /v1/workflows/{workflow_id}/event_summaries?expand=event&limit=1
```
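The `expand` and `limit` parameters can be combined. A small helper (hypothetical, not part of any SDK) to assemble the request URL:

```javascript
// Build the event_summaries URL with optional expand and limit params.
function eventSummariesUrl(workflowId, { expand, limit } = {}) {
  const params = new URLSearchParams();
  if (expand) params.set("expand", expand);
  if (limit) params.set("limit", String(limit));
  const qs = params.toString();
  return `https://api.pipedream.com/v1/workflows/${workflowId}/event_summaries${qs ? `?${qs}` : ""}`;
}
```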
```bash
curl 'https://api.pipedream.com/v1/workflows/p_abc123/event_summaries?expand=event&limit=1' \
-H 'Authorization: Bearer '
```
```json
{
"page_info": {
"total_count": 1,
"start_cursor": "1606511826306-0",
"end_cursor": "1606511826306-0",
"count": 1
},
"data": [
{
"id": "1606511826306-0",
"indexed_at_ms": 1606511826306,
"event": {
"raw_event": {
"name": "Luke",
"title": "Jedi"
}
},
"metadata": {
"emit_id": "1ktF96gAMsLqdYSRWYL9KFS5QqW",
"name": "",
"emitter_id": "p_abc123"
}
}
]
}
```
# Get Workflow Errors
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/get-workflows-errors
GET /workflows/{workflow_id}/$errors/event_summaries
Retrieve up to the last 100 events for a workflow that threw an error. The details of the error, along with the original event data, will be included.
#### Endpoint
```
GET /workflows/{workflow_id}/$errors/event_summaries
```
#### Notes and Examples
The event data for events larger than `1KB` may get truncated in the response. If you’re processing larger events, and need to see the full event data, pass `?expand=event`:
```
GET /workflows/{workflow_id}/$errors/event_summaries?expand=event
```
Pass `?limit=N` to retrieve the last **N** events:
```
GET /v1/workflows/{workflow_id}/$errors/event_summaries?expand=event&limit=1
```
```bash
curl 'https://api.pipedream.com/v1/workflows/p_abc123/$errors/event_summaries?expand=event&limit=1' \
-H 'Authorization: Bearer '
```
```json
{
"page_info": {
"total_count": 100,
"start_cursor": "1606370816223-0",
"end_cursor": "1606370816223-0",
"count": 1
},
"data": [
{
"id": "1606370816223-0",
"indexed_at_ms": 1606370816223,
"event": {
"original_event": {
"name": "Luke",
"title": "Jedi"
},
"original_context": {
"id": "1kodJIW7jVnKfvB2yp1OoPrtbFk",
"ts": "2020-11-26T06:06:44.652Z",
"workflow_id": "p_abc123",
"deployment_id": "d_abc123",
"source_type": "SDK",
"verified": false,
"owner_id": "u_abc123",
"platform_version": "3.1.20"
},
"error": {
"code": "InternalFailure",
"cellId": "c_abc123",
"ts": "2020-11-26T06:06:56.077Z",
"stack": " at Request.extractError ..."
},
"metadata": {
"emitter_id": "p_abc123",
"emit_id": "1kodKnAdWGeJyhqYbqyW6lEXVAo",
"name": "$errors"
}
}
}
]
}
```
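When triaging, it can be useful to reduce each error event to the failing step and error code. A sketch, with field paths taken from the response shape above (the helper itself is ours):

```javascript
// Summarize each $errors event as "<cellId>: <code>" for quick triage.
function summarizeErrors(response) {
  return response.data.map(({ event }) => {
    const { cellId, code } = event.error;
    return `${cellId}: ${code}`;
  });
}
```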
# Invoke workflow
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/invoke-workflow
POST /workflows/{workflow_id}/invoke
You can invoke workflows by making an HTTP request to a workflow’s HTTP trigger. [See the docs on authorizing requests and invoking workflows](/docs/workflows/building-workflows/triggers/#authorizing-http-requests) for more detail.
# Update a Workflow
Source: https://pipedream.com/docs/rest-api/api-reference/workflows/update-a-workflow
PUT /workflows/{id}
This endpoint is only available when using [user API keys](/docs/rest-api/auth/#user-api-keys), not yet for workspace [OAuth tokens](/docs/rest-api/auth/#oauth).
Updates the workflow’s activation status. If you need to modify the workflow’s steps, triggers, or connected accounts [consider making a new workflow](/docs/rest-api/#create-a-workflow).
#### Endpoint
```
PUT /workflows/{id}
```
#### Path Parameters
**id**

The ID of the workflow to update.
To find your workflow ID, navigate to your workflow.
If the URL is [https://pipedream.com/@michael-testing/api-p\_13CDnxK/inspect](https://pipedream.com/@michael-testing/api-p_13CDnxK/inspect), your `workflow_id` begins with `p_` and would be `p_13CDnxK`.
#### Request Body
**active**

The activation status of a workflow. Set to `true` to activate the workflow, or `false` to deactivate it.
**org_id**

[Find your org’s ID](/docs/workspaces/#finding-your-workspaces-id).
```bash
curl -X PUT 'https://api.pipedream.com/v1/workflows/p_abc123' \
-H 'Authorization: Bearer ' \
-H 'Content-Type: application/json' \
-d '{"active": false, "org_id": "o_BYDI5y"}'
```
# Workspaces
Source: https://pipedream.com/docs/rest-api/api-reference/workspaces
[Workspaces](/docs/workspaces/) give your team a shared space to manage resources. Any resources created in the workspace are owned by the workspace and accessible to its members.
# Get a Workspace
Source: https://pipedream.com/docs/rest-api/api-reference/workspaces/get-a-workspace
GET /workspaces/{org_id}
Programmatically view your workspace’s current credit usage for the billing period in real time.
#### Endpoint
```
GET /workspaces/{org_id}
```
#### Path Parameters
**org_id**

[Switch to your workspace’s context](/docs/workspaces/#switching-between-workspaces) and [find your org’s ID](/docs/workspaces/#finding-your-workspaces-id).
```json
{
"data": {
"id": "o_Qa8I1Z",
"orgname": "asdf",
"name": "asdf",
"email": "dev@pipedream.com",
"daily_credits_quota": 100,
"daily_credits_used": 0
}
}
```
# Get a Workspace’s Sources
Source: https://pipedream.com/docs/rest-api/api-reference/workspaces/get-workspaces-sources
GET /orgs/{org_id}/sources
Retrieve all the [event sources](/docs/rest-api/#sources) configured for a specific workspace.
#### Endpoint
```
GET /orgs/{org_id}/sources
```
#### Path Parameters
**org_id**

[Switch to your workspace’s context](/docs/workspaces/#switching-between-workspaces) and [find your org’s ID](/docs/workspaces/#finding-your-workspaces-id).
```bash
curl 'https://api.pipedream.com/v1/orgs/o_abc123/sources' \
-H 'Authorization: Bearer '
```
```json
{
"page_info": {
"total_count": 19,
"count": 10,
"start_cursor": "ZGNfSzB1QWVl",
"end_cursor": "ZGNfeUx1alJx"
},
"data": [
{
"id": "dc_abc123",
"component_id": "sc_def456",
"configured_props": {
"http": {
"endpoint_url": "https://myendpoint.m.pipedream.net"
}
},
"active": true,
"created_at": 1587679599,
"updated_at": 1587764467,
"name": "test",
"name_slug": "test"
}
]
}
```
# Get a Workspace’s Subscriptions
Source: https://pipedream.com/docs/rest-api/api-reference/workspaces/get-workspaces-subscriptions
GET /workspaces/{org_id}/subscriptions
Retrieve all the [subscriptions](/docs/rest-api/#subscriptions) configured for a specific workspace.
#### Endpoint
```
GET /workspaces/{org_id}/subscriptions
```
#### Path Parameters
**org_id**

[Switch to your workspace’s context](/docs/workspaces/#switching-between-workspaces) and [find your org’s ID](/docs/workspaces/#finding-your-workspaces-id).
```bash
curl 'https://api.pipedream.com/v1/workspaces/o_abc123/subscriptions' \
-H 'Authorization: Bearer '
```
```json
{
"data": [
{
"id": "sub_abc123",
"emitter_id": "dc_abc123",
"listener_id": "p_abc123",
"event_name": ""
},
{
"id": "sub_def456",
"emitter_id": "dc_def456",
"listener_id": "p_def456",
"event_name": ""
}
]
}
```
# Authentication
Source: https://pipedream.com/docs/rest-api/auth
The Pipedream API supports two methods of authentication: [OAuth](/docs/rest-api/auth/#oauth) and [User API keys](/docs/rest-api/auth/#user-api-keys).
**We use OAuth** for the majority of the API, for a few reasons:
✅ OAuth clients are tied to the workspace, administered by workspace admins\
✅ Tokens are short-lived\
✅ OAuth clients support scopes, limiting access to specific operations (coming soon!)\
✅ Limit access to specific Pipedream projects (coming soon!)
When testing the API or using the CLI, you can use your user API key. This key is tied to your user account and provides full access to any resources your user has access to, across workspaces.
## OAuth
Workspace administrators can create OAuth clients in your workspace’s [API settings](https://pipedream.com/settings/api).
Since API requests are meant to be made server-side, and since grants are not tied to individual end users, all OAuth clients are [**Client Credentials** applications](https://www.oauth.com/oauth2-servers/access-tokens/client-credentials/).
### Creating an OAuth client
1. Visit the [API settings](https://pipedream.com/settings/api) for your workspace.
2. Click the **New OAuth Client** button.
3. Name your client and click **Create**.
4. Copy the client secret. **It will not be accessible again**. Click **Close**.
5. Copy the client ID from the list.
### How to get an access token
In the client credentials model, you exchange your OAuth client ID and secret for an access token. Then you use the access token to make API requests.
If you’re running a server that executes JavaScript, we recommend using [the Pipedream SDK](/docs/connect/api-reference/introduction), which automatically refreshes tokens for you.
```javascript
import { PipedreamClient } from "@pipedream/sdk";
// These secrets should be saved securely and passed to your environment
const client = new PipedreamClient({
clientId: "YOUR_CLIENT_ID",
clientSecret: "YOUR_CLIENT_SECRET",
projectId: "YOUR_PROJECT_ID", // This is typically required for most Connect API endpoints
projectEnvironment: "development" // or "production"
});
// Use the SDK's helper methods to make requests
const accounts = await client.accounts.list({ include_credentials: 1 });
// Or make any Pipedream API request with the fresh token
const response = await client.makeAuthorizedRequest("/accounts", {
method: "GET",
params: {
include_credentials: 1,
}
});
```
You can also manage this token refresh process yourself, using the `/oauth/token` API endpoint:
```bash
curl https://api.pipedream.com/v1/oauth/token \
-H 'Content-Type: application/json' \
-d '{ "grant_type": "client_credentials", "client_id": "", "client_secret": "" }'
```
Access tokens expire after 1 hour. Store access tokens securely, server-side.
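If you manage tokens yourself, cache each token and refresh shortly before the hour is up rather than requesting a new one per API call. A sketch of that pattern (shown synchronously for brevity; `fetchToken` stands in for your request to `POST /v1/oauth/token`, which a real implementation would await):

```javascript
// Cache an access token and refresh it only when it's close to expiry.
// fetchToken is a stand-in for your POST /v1/oauth/token request; it
// should return { access_token, expires_in } (expires_in in seconds).
function makeTokenCache(fetchToken, skewMs = 60_000) {
  let token = null;
  let expiresAt = 0;
  return function getToken(now = Date.now()) {
    if (!token || now >= expiresAt - skewMs) {
      const { access_token, expires_in } = fetchToken();
      token = access_token;
      expiresAt = now + expires_in * 1000;
    }
    return token;
  };
}
```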
### Revoking a client secret
1. Visit your workspace’s [API settings](https://pipedream.com/settings/api).
2. Click the **…** button to the right of the OAuth client whose secret you want to revoke, then click **Rotate client secret**.
3. Copy the new client secret. **It will not be accessible again**.
### OAuth security
See [the OAuth section of the security docs](/docs/privacy-and-security/#pipedream-rest-api-security-oauth-clients) for more information on how Pipedream secures OAuth credentials.
## User API keys
When you sign up for Pipedream, an API key is automatically generated for your user account. You can use this key to authorize requests to the API.
You’ll find this API key in your [User Settings](https://pipedream.com/user) (**My Account** -> **API Key**).
This key is tied to your user account and provides full access to any resources your user has access to, across workspaces.
### Revoking your API key
You can revoke your API key in your [Account Settings](https://pipedream.com/settings/account) (**Settings** -> **Account**). Click on the **REVOKE** button directly to the right of your API key.
This will revoke your original API key, generating a new one. Any API requests made with the original token will yield a `401 Unauthorized` error.
## Authorizing API requests
Whether you use OAuth access tokens or user API keys, Pipedream uses [Bearer Authentication](https://oauth.net/2/bearer-tokens/) to authorize your access to the API or SSE event streams. When you make API requests, pass an `Authorization` header of the following format:
```
# OAuth access token
Authorization: Bearer
# User API key
Authorization: Bearer
```
For example, here’s how you can use `cURL` to fetch profile information for the authenticated user:
```bash
curl 'https://api.pipedream.com/v1/users/me' \
-H 'Authorization: Bearer '
```
## Using the Pipedream CLI
You can [link the CLI to your Pipedream account](/docs/cli/login/), which will automatically pass your API key in the `Authorization` header with every API request.
# REST API Example: Create An RSS Source
Source: https://pipedream.com/docs/rest-api/examples/rss
Here, we’ll walk through an example of how to create an RSS [event source](/docs/workflows/building-workflows/triggers/) and retrieve events from that source using the [REST API](/docs/rest-api/).
Before you begin, you’ll need your [Pipedream API Key](/docs/rest-api/auth/#user-api-keys).
## Find the details of the source you’d like to create
To create an event source using Pipedream’s REST API, you’ll need two things:
* The `key` that identifies the component by name
* The `props` - input data - required to create the source
You can find the `key` by reviewing the code for the source, [in Pipedream’s Github repo](https://github.com/PipedreamHQ/pipedream/tree/master/components).
In the `components/` directory, you’ll see a list of apps. Navigate to the app-specific directory for your source, then visit the `sources/` directory within it to find your source. For example, to create an RSS source, visit the [`components/rss/sources/new-item-in-feed/new-item-in-feed.ts` source](https://github.com/PipedreamHQ/pipedream/blob/master/components/rss/sources/new-item-in-feed/new-item-in-feed.ts).
The `key` is a globally unique identifier for the source. You’ll see the `key` for this source near the top of the file:
```javascript
key: "rss-new-item-in-feed",
```
Given this key, make an API request to the `/components/registry/{key}` endpoint of Pipedream’s REST API:
```bash
curl https://api.pipedream.com/v1/components/registry/rss-new-item-in-feed \
-H "Authorization: Bearer XXX" -vvv \
-H "Content-Type: application/json"
```
This returns information about the component, including a `configurable_props` section that lists the input you’ll need to provide to create the source:
```json
"configurable_props": [
{
"name": "rss",
"type": "app",
"app": "rss"
},
{
"name": "url",
"type": "string",
"label": "Feed URL",
"description": "Enter the URL for any public RSS feed."
},
{
"name": "timer",
"type": "$.interface.timer",
"default": {
"intervalSeconds": 900
}
}
],
```
In this specific case, you can ignore the `rss` “app” prop. The other two props — `url` and `timer` — are inputs that you can control:
* `url`: the URL to the RSS feed
* `timer` (optional): the frequency at which you’d like to poll the RSS feed for new items. By default, this source will poll for new items every 15 minutes.
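Putting the two props together, the request body for `POST /v1/sources` can be assembled like this sketch (the `key` and prop names come from the component above; the helper function itself is ours):

```javascript
// Assemble the POST /v1/sources body for the RSS source. The rss app
// prop can be omitted; url and timer are the configurable inputs.
function rssSourceBody(url, intervalSeconds = 900) {
  return {
    key: "rss-new-item-in-feed",
    configured_props: {
      url,
      timer: { intervalSeconds },
    },
  };
}
```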
## Creating the source
To create an RSS event source, make an HTTP POST request to the `/v1/sources` endpoint of Pipedream’s REST API, passing the `url` you’d like to poll and the frequency at which you’d like to run the source in the `timer` object. In this example, we’ll run the source once every 60 seconds.
```bash
curl https://api.pipedream.com/v1/sources \
-H "Authorization: Bearer XXX" -vvv \
-H "Content-Type: application/json" \
-d '{"key": "rss-new-item-in-feed", "name": "test-rss", "configured_props": { "url": "https://rss.m.pipedream.net", "timer": { "intervalSeconds": 60 }}}'
```
If successful, you should get back a `200 OK` response from the API with the following payload:
```json
{
"data": {
"id": "dc_abc123",
"user_id": "u_abc123",
"component_id": "sc_abc123",
"configured_props": {
"url": "https://rss.m.pipedream.net",
"timer": {
"cron": null,
"interval_seconds": 60
}
},
"active": true,
"created_at": 1589486978,
"updated_at": 1589486978,
"name": "your-name-here",
"name_slug": "your-name-here"
}
}
```
Visit [https://pipedream.com/sources](https://pipedream.com/sources) to see your running source. You should see the source listed on the left with the name you specified in the API request.
## Fetching new events
The RSS source polls your feed URL for items at the specified frequency. It emits new items as **events** of the following shape:
```json
{
"permalink": "https://example.com/8161",
"guid": "https://example.com/8161",
"title": "Example post",
"link": "https://example.com/8161"
}
```
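Because the `guid` uniquely identifies an item, a common post-processing step when consuming these events in batch is to deduplicate on it. A minimal sketch:

```javascript
// Drop duplicate feed items, keeping the first occurrence of each guid.
function dedupeByGuid(items) {
  const seen = new Set();
  return items.filter((item) => {
    if (seen.has(item.guid)) return false;
    seen.add(item.guid);
    return true;
  });
}
```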
### SSE
You can subscribe to new events in real time by listening to the SSE stream tied to this source. Take the `id` from the API response above (`dc_abc123` in our example) and make a request to this endpoint:
```bash
curl -H "Authorization: Bearer " \
"https://api.pipedream.com/sources/dc_abc123/sse"
```
[See the SSE docs for more detail on this interface](/docs/workflows/data-management/destinations/sse/).
### REST API
You can also fetch items in batch using the REST API. If you don’t need to act on items in real time, and just need to fetch new items from the feed on a regular interval, you can fetch events like so:
```bash
curl -H "Authorization: Bearer " \
"https://api.pipedream.com/v1/sources/dc_BVuN2Q/event_summaries"
```
[See the docs on the `/event_summaries` endpoint](/docs/rest-api/#get-source-events) for more details on the parameters it accepts. For example, you can pass a `limit` param to return only `N` results per page, and paginate over results using the `before` and `after` cursors described in the [pagination docs](/docs/rest-api/#pagination).
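For example, to walk backwards through a source's history, you can keep requesting pages with `before` set to the previous page's `start_cursor` until an empty page comes back. A sketch of the cursor bookkeeping (the pagination direction here is an assumption; check the pagination docs for your use case):

```javascript
// Given one page of /event_summaries results, return the params for the
// next request, or null when there's nothing left to fetch.
// Assumes older events are fetched with before=<start_cursor>.
function nextPageParams(page, limit) {
  if (page.data.length === 0) return null;
  return { limit, before: page.page_info.start_cursor };
}
```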
# REST API Example: Webhooks
Source: https://pipedream.com/docs/rest-api/examples/webhooks
Pipedream supports webhooks as a way to deliver events to an endpoint you own. Webhooks are managed at an account-level, and you send data to these webhooks using [subscriptions](/docs/rest-api/#subscriptions).
For example, you can run a Twitter [event source](/docs/workflows/building-workflows/triggers/) that listens for new tweets. If you [subscribe](/docs/rest-api/#subscriptions) the webhook to this source, Pipedream will deliver those tweets directly to your webhook’s URL without running a workflow.
## Send events from an existing event source to a webhook
[Event sources](/docs/workflows/building-workflows/triggers/) source data from a service / API, emitting events that can trigger Pipedream workflows. For example, you can run a GitHub event source that emits an event anytime someone stars your repo, triggering a workflow on each new star.
**You can also send the events emitted by an event source to a webhook**.
### Step 1 - retrieve the source’s ID
First, you’ll need the ID of your source. You can visit [https://pipedream.com/sources](https://pipedream.com/sources), select a source, and copy its ID from the URL. It’s the string that starts with `dc_`.
You can also find the ID by running `pd list sources` using [the CLI](/docs/cli/reference/#pd-list).
### Step 2 - Create a webhook
You can create a webhook using the [`POST /webhooks` endpoint](/docs/rest-api/#create-a-webhook). The endpoint accepts 3 params:
* `url`: the endpoint to which you’d like to deliver events
* `name`: a name to assign to the webhook, for your own reference
* `description`: a longer description
You can make a request to this endpoint using `cURL`:
```bash
curl "https://api.pipedream.com/v1/webhooks?url=https://endpoint.m.pipedream.net&name=name&description=description" \
-X POST \
  -H "Authorization: Bearer <api_key>" \
-H "Content-Type: application/json"
```
Successful API responses contain a webhook ID in `data.id` — the string that starts with `wh_` — which you’ll use in **Step 3**:
```json
{
"data": {
"id": "wh_abc123"
...
}
}
```
### Step 3 - Create a subscription
[Subscriptions](/docs/rest-api/#subscriptions) allow you to deliver events from one Pipedream resource to another. In the language of subscriptions, the webhook will **listen** for events **emitted** by the event source.
You can make a request to the [`POST /subscriptions` endpoint](/docs/rest-api/#listen-for-events-from-another-source-or-workflow) to create this subscription. This endpoint requires two params:
* `emitter_id`: the source ID from **Step 1**
* `listener_id`: the webhook ID from **Step 2**
You can make a request to this endpoint using `cURL`:
```bash
curl "https://api.pipedream.com/v1/subscriptions?emitter_id=dc_abc123&listener_id=wh_abc123" \
-X POST \
  -H "Authorization: Bearer <api_key>" \
-H "Content-Type: application/json"
```
If successful, this endpoint should return a `200 OK` with metadata on the subscription.
### Step 4 - Trigger an event
Trigger an event in your source (for example, send a tweet, star a GitHub repo, etc). You should see the event emitted by the source delivered to the webhook URL.
## Extending these ideas
You can configure *any* events to be delivered to a webhook: events emitted by an event source, or those [emitted by a workflow](/docs/workflows/data-management/destinations/emit/).
You can also configure an event to be delivered to *multiple* webhooks by creating multiple webhooks / subscriptions.
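To script Steps 2 and 3 end to end, you can build the two `POST` requests in code. Here’s a Node.js sketch; the API key, endpoint URL, and IDs are placeholders, and each returned description is meant to be passed to `fetch`:

```javascript
// Sketch of Steps 2-3 as request descriptions.
// The API key, endpoint URL, and IDs are placeholders.
const API = "https://api.pipedream.com/v1";

function authHeaders(apiKey) {
  return { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" };
}

// Step 2: POST /webhooks — build the request to create a webhook
function createWebhookRequest(apiKey, { url, name, description }) {
  const qs = new URLSearchParams({ url, name, description });
  return { url: `${API}/webhooks?${qs}`, options: { method: "POST", headers: authHeaders(apiKey) } };
}

// Step 3: POST /subscriptions — subscribe the webhook (listener) to the source (emitter)
function createSubscriptionRequest(apiKey, emitterId, listenerId) {
  const qs = new URLSearchParams({ emitter_id: emitterId, listener_id: listenerId });
  return { url: `${API}/subscriptions?${qs}`, options: { method: "POST", headers: authHeaders(apiKey) } };
}
```

Usage, assuming your key is in a `PD_API_KEY` environment variable: `const { url, options } = createSubscriptionRequest(process.env.PD_API_KEY, "dc_abc123", "wh_abc123"); await fetch(url, options);`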
# Example: Create A Workflow
Source: https://pipedream.com/docs/rest-api/examples/workflows
Here, we’ll walk through an example of how to create a [workflow](/docs/workflows/building-workflows/) programmatically using the [create workflow endpoint](/docs/rest-api/#create-a-workflow) from a [workflow share link](/docs/workflows/building-workflows/sharing/), and pass your own connected accounts, step and trigger props as configuration.
Before you begin, you’ll need your [Pipedream API Key](/docs/rest-api/auth/#user-api-keys).
## Creating a new workflow from a template
Workflows can be shared as templates using a [Workflow Share Link](/docs/workflows/building-workflows/sharing/). When you share a workflow, a unique key is created that represents that workflow’s triggers, steps and settings.
However, opening a workflow share link in a browser will not include your private resources, such as connected accounts, sources, and data stores. Connections to your private resources have to be populated by hand.
The [create workflow endpoint](/docs/rest-api/#create-a-workflow) allows you to programmatically assign your own connected accounts, props within the workflow, and even deploy the workflow in a single API request.
First, you’ll need a workflow template. To create a new workflow template, follow this short guide.
A workflow share link has the following format:
```
https://pipedream.com/new?h=tch_abc123
```
The `tch_abc123` portion of the URL represents the unique workflow template ID. Copy this ID; you’ll need it in the following steps.
**You can create workflows from any workflow template**
You’re not limited to creating new workflows from your own templates; you can use this endpoint with any workflow share link.
This guide will work for any workflow share link, although we recommend copying the workflow to your account first so you can view its available configurable props.
You’ll need to view the original workflow’s configuration so you can identify the props you’ll need to provide for the new version of the workflow.
Use the **Get Workflow** endpoint to retrieve the details about the workflow you’ve created a template for.
In the Get Workflow API response, you’ll see two properties:
* `triggers` - represents the triggers for the workflow.
* `steps` - represents the series of steps within your workflow
`triggers` and `steps` contain [props](/docs/workflows/building-workflows/using-props/) that define the connected accounts as well as configuration.
The next step is to learn how we can pass our specific connected accounts to app-based props in the `steps` and/or `triggers` of the workflow template.
Within the `steps` and `triggers`, find the `configurable_props` for each component. This is where you can find the available slots that you can programmatically configure through the **Create Workflow** endpoint:
```json
// Example of a Get Workflow response
{
"triggers": [
{
"id": "dc_abc123",
"configurable_props": [
{
"name": "url",
"type": "string"
}
],
"configured_props": {},
"active": true,
"created_at": 1707170044,
"updated_at": 1707170044,
"name": "New Item in Feed",
"name_slug": "new-item-in-feed"
},
],
"steps": [
{
"namespace": "send_message",
"lang": "nodejs20.x",
"component": true,
"savedComponent": {
"id": "sc_abc123",
"configurableProps": [
{
"name": "slack",
"type": "app",
"app": "slack"
},
{
"name": "channelId",
"type": "string"
},
{
"name": "message",
"type": "string"
}
]
}
}
]
}
```
For the example workflow above, the RSS feed trigger has a `url` prop, and the Slack step has `slack`, `channelId`, and `message` props. We’ll use these names in the next steps as arguments for the **Create Workflow** endpoint.
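To automate this discovery step, you can walk the Get Workflow response and collect the prop names each component exposes. Here’s a sketch using the field names from the example response above (anything beyond those fields is an assumption):

```javascript
// Sketch: list configurable prop names from a Get Workflow response.
// Field names follow the example response shown above.
function listConfigurableProps(workflow) {
  const triggers = (workflow.triggers || []).map((t) => ({
    name: t.name,
    props: (t.configurable_props || []).map((p) => p.name),
  }));
  const steps = (workflow.steps || []).map((s) => ({
    namespace: s.namespace,
    props: ((s.savedComponent || {}).configurableProps || []).map((p) => p.name),
  }));
  return { triggers, steps };
}
```

For the example above, this returns `url` for the trigger and `slack`, `channelId`, and `message` for the `send_message` step.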
Now that we have the names of the configurable props for both the `triggers` and `steps` of the workflow, let’s design the payload for creating a new instance of the workflow.
First, populate the `project_id` and `org_id` under which you’d like this new workflow to be created. Please refer to the [**Create Workflow** parameters documentation](/docs/rest-api/#create-a-workflow) on how to find these values.
The `template_id` for your workflow can be found from the URL of the workflow share link you created in **Step 1** of this guide.
The trigger has a `url` prop, so let’s provide it with a specific URL (`https://hnrss.org/newest?q=Pipedream`) for this new workflow:
```json
// Example of a Create Workflow request payload
{
"project_id": "proj_abc123",
"org_id": "o_abc123",
"template_id": "tch_abc123",
"triggers": [
{
"props": {
"url": "https://hnrss.org/newest?q=Pipedream"
}
}
]
}
```
**Triggers are addressable by index**
You may have noticed that we didn’t include the `namespace` argument to the trigger in our payload. This is because triggers are ordered sequentially, whereas steps need a `namespace` argument for proper addressing.
If we were to send this payload to the **Create Workflow** endpoint now, it would populate the *RSS - New Item in Feed* trigger with the feed we provided.
You can also populate the `steps` props.
The **Slack - Send message in a Public Channel** step requires a `channelId`, `message` and the connected Slack account (`slack`). Let’s start with connecting the Slack account.
To connect your accounts to the workflow, you’ll need to find the specific IDs for each of the accounts you’d like to connect.
You can find your connected account IDs by using the [List Accounts endpoint](/docs/rest-api/#get-workspacess-connected-accounts).
You can filter your accounts by using the `query` query parameter. For example, if you want to find your connected Slack accounts to your workspace, then add `slack` to the query param:
```
GET /workflows/workspaces/<workspace_id>/accounts?query=slack
```
This request narrows the results down to your connected Slack accounts, making the right account easier to find.
You’ll need the ID of each connected account you’d like to configure this new workflow with. From the response, copy the `apn_******` value of each connected account you’d like to use in the steps.
```json
{
"page_info": {
"total_count": 1,
"count": 1,
"start_cursor": "YXBuXzJrVmhMUg",
"end_cursor": "YXBuXzJrVmhMUg"
},
"data": [
{
"id": "apn_abc123",
"name": "Slack Pipedream Workspace"
}
]
}
```
Now we can copy the ID for our Slack account from the response: `apn_abc123`.
Given we now have the connected account ID, we can design the rest of the payload:
```json
{
"project_id": "proj_abc123",
"org_id": "o_abc123",
"template_id": "tch_abc123",
"triggers": [
{
"props": {
"url": "https://hnrss.org/newest?q=Pipedream"
}
}
],
"steps": [
{
"namespace": "send_message",
"props": {
"slack": {
"authProvisionId": "apn_abc123"
},
"channelId": "U12356",
"message": "**New HackerNews Mention** \n \n {{steps.trigger.event.item.title}} \n {{steps.trigger.event.item.description}}"
}
}
]
}
```
Our payload now instructs Pipedream to set up the `send_message` step in our workflow with our connected Slack account and specific `channelId` and `message` parameters.
You can also define workflow settings, such as the workflow’s name, allocated memory, or whether it should be deployed immediately:
```json
{
"project_id": "proj_abc123",
"org_id": "o_abc123",
"template_id": "tch_abc123",
"triggers": [
{
"props": {
"url": "https://hnrss.org/newest?q=Pipedream"
}
}
],
"steps": [
{
"namespace": "send_message",
"props": {
"slack": {
"authProvisionId": "apn_abc123"
},
"channelId": "U12356",
"message": "**New HackerNews Mention** \n \n {{steps.trigger.event.item.title}} \n {{steps.trigger.event.item.description}}"
}
}
],
"settings": {
"name": "New HackerNews Mentions to Slack",
"auto_deploy": true
}
}
```
The `auto_deploy` option instructs Pipedream to deploy this workflow automatically, without requiring a manual deployment from the dashboard.
Finally, send the request to create this new workflow with the payload we’ve designed.
You should see the new workflow within your Pipedream dashboard under the workspace and project defined in the payload.
You can use this request to dynamically create new instances of the same workflow with different props, connected accounts and settings.
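Putting the walkthrough together, here’s a sketch that assembles the payload and sends the request. The payload field names mirror the examples above; the endpoint path (`POST /v1/workflows`) and all IDs are assumptions to verify against the [create workflow endpoint docs](/docs/rest-api/#create-a-workflow):

```javascript
// Sketch: assemble and send a Create Workflow request.
// Field names mirror the example payloads above; all IDs are placeholders.
function buildCreateWorkflowPayload({ projectId, orgId, templateId, triggerProps, steps, settings }) {
  const payload = {
    project_id: projectId,
    org_id: orgId,
    template_id: templateId,
    triggers: [{ props: triggerProps }],
  };
  if (steps) payload.steps = steps;
  if (settings) payload.settings = settings;
  return payload;
}

// Send with fetch (Node 18+); the endpoint path is an assumption to confirm in the docs
async function createWorkflow(apiKey, payload) {
  const res = await fetch("https://api.pipedream.com/v1/workflows", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

Reusing `buildCreateWorkflowPayload` with different props, connected accounts, and settings lets you stamp out many instances of the same template programmatically.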
# Overview
Source: https://pipedream.com/docs/rest-api/overview
Use the REST API to create workflows, manage event sources, handle subscriptions, and more.
## Base URL
All API requests should be made to:
```
https://api.pipedream.com/v1
```
## Authentication
All requests to the Pipedream API must be authenticated. Read more about [authentication here](/docs/rest-api/auth).
## Required Headers
All API requests must include:
* **Authorization**: Bearer token (required on all endpoints)
* **Content-Type**: `application/json` (required for POST and PUT requests with JSON payloads)
Example:
```bash
curl https://api.pipedream.com/v1/users/me \
  -H "Authorization: Bearer <api_key>" \
-H "Content-Type: application/json"
```
## Common Parameters
The following parameters are available on many endpoints:
* **`include`**: Specify fields to include in the response
* **`exclude`**: Specify fields to exclude from the response
* **`org_id`**: Workspace ID (required only when using User API keys to specify which workspace to operate in; not needed with OAuth tokens)
## Pagination
List endpoints return paginated results with a default page size of 10 items.
### Parameters
* **`limit`**: Number of items per page (1-100, default: 10)
* **`after`**: Cursor for next page
* **`before`**: Cursor for previous page
### Example Response
```json
{
"page_info": {
"total_count": 100,
"count": 10,
"start_cursor": "ZXhhbXBsZSBjdXJzb3I",
"end_cursor": "ZXhhbXBsZSBjdXJzb3I"
},
"data": [...]
}
```
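The cursors let you walk an entire collection. Here’s a sketch of a pagination loop with the HTTP call injected as a callback, so the loop itself is independent of any particular endpoint (`fetchPage` is a stand-in for your own client, which should pass the cursor as the `after` param):

```javascript
// Sketch: paginate over a list endpoint using the `after` cursor.
// `fetchPage(cursor)` is a stand-in for your HTTP client; it should
// return a parsed response shaped like the example above.
async function fetchAll(fetchPage) {
  const items = [];
  let cursor;
  while (true) {
    const page = await fetchPage(cursor);
    items.push(...page.data);
    // Stop when a page is empty or we've collected everything the API reports
    if (page.data.length === 0 || items.length >= page.page_info.total_count) break;
    cursor = page.page_info.end_cursor; // pass as `after` on the next request
  }
  return items;
}
```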
## Errors
The API uses standard HTTP response codes:
* **2xx**: Success
* **4xx**: Client error (bad request, unauthorized, not found, etc.)
* **5xx**: Server error
Error responses include a JSON body with details about what went wrong.
# Subprocessors
Source: https://pipedream.com/docs/subprocessors
Below, you’ll find a list of Pipedream’s Subprocessors in our capacity as a Processor (as defined by our Data Protection Addendum). Please check this list frequently for any updates.
*Last updated March 17, 2021*
| Company/Service | Scope of Subprocessing |
| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| [Amazon Web Services](https://aws.amazon.com/financial-services/security-compliance/compliance-center/?country-compliance-center-cards.sort-by=item.additionalFields.headline\&country-compliance-center-cards.sort-order=asc\&awsf.country-compliance-center-master-filter=*all) | Cloud hosting |
| [Datadog](https://www.datadoghq.com/security/) | Application monitoring |
| [Google Cloud Platform](https://cloud.google.com/security/compliance) | Cloud hosting |
| [Looker Data Sciences, Inc.](https://looker.com/trust-center/compliance) | Business analytics |
| [Redis Labs](https://redislabs.com/company/compliance-and-privacy/) | Cloud hosting |
| [Snowflake Computing](https://www.snowflake.com/snowflakes-security-compliance-reports/) | Business analytics |
| [Sentry](https://sentry.io/security/) | Application monitoring |
| [OpenAI](https://openai.com/) | AI services (chat support, code gen, and more) |
# Troubleshooting Common Issues
Source: https://pipedream.com/docs/troubleshooting
export const MAX_WORKFLOW_QUEUE_SIZE = '10,000';
export const FUNCTION_PAYLOAD_LIMIT = '6MB';
export const DAILY_TESTING_LIMIT = '30 minutes';
This doc describes some common solutions for fixing issues with [pipedream.com](https://pipedream.com) or with a specific workflow.
## A feature isn’t working on pipedream.com
If you’re seeing an issue with [pipedream.com](https://pipedream.com) (for example, the site won’t load, or you think you’ve found a bug), try each of the following steps, checking to see if they fix the problem:
1. [Hard refresh](https://fabricdigital.co.nz/blog/how-to-hard-refresh-your-browser-and-clear-cache) pipedream.com in your browser.
2. Log out of your pipedream.com account, and log back in.
3. [Disable your browser extensions](https://www.computerhope.com/issues/ch001411.htm) or use features like [Chrome Guest mode](https://support.google.com/chrome/answer/6130773?hl=en\&co=GENIE.Platform%3DAndroid) to browse pipedream.com without any existing extensions / cookies / cache.
If you’re still seeing the issue after trying these steps, please [report a bug](https://github.com/PipedreamHQ/pipedream/issues/new?assignees=\&labels=bug\&template=bug_report.md\&title=%5BBUG%5D+).
## How do I contact Pipedream Support?
Start by filling out the request form at [https://pipedream.com/support](https://pipedream.com/support), providing detailed information about your issue.
### How do I share my workflow with Support?
First, navigate to your **Project Settings** and share your project with Pipedream Support.
When filling out the request form at [https://pipedream.com/support](https://pipedream.com/support), please provide detailed information along with the URL from your browser’s address bar, which should look something like:
```
https://pipedream.com/@yourworkspace/projects/proj_abc123/test-workflow-p_abc123/inspect
```
## Workflows
### Where do I find my workflow’s ID?
Open [https://pipedream.com](https://pipedream.com) and visit your workflow. Copy the URL that appears in your browser’s address bar. For example:
```
https://pipedream.com/@yourworkspace/projects/proj_abc123/test-workflow-p_abc123/inspect
```
Your workflow’s ID is the value that starts with `p_`. In this example: `p_abc123`.
### My workflow isn’t working
If you’re encountering a specific issue in a workflow, try the following steps, checking to see if they fix the problem:
1. Make a trivial change to your workflow, and **Deploy** your workflow again.
2. Try searching [the community](https://pipedream.com/support) or [the `pipedream` GitHub repo](https://github.com/PipedreamHQ/pipedream/issues) to see if anyone else has solved the same issue.
If you’re still seeing the issue after trying these steps, please reach out in [the community](https://pipedream.com/support).
### Error in workflow
If you see a generic `Error in workflow` when invoking a webhook-triggered workflow, follow these steps to resolve the issue:
1. Check if your trigger is configured to return a custom HTTP response.
2. Confirm that your workflow is [returning an HTTP response](https://pipedream.com/docs/workflows/building-workflows/triggers/#customizing-the-http-response) in *every* situation, e.g., by using `$.respond()`.
### How do I invoke another workflow?
We provide a [Trigger Workflow](https://pipedream.com/apps/helper-functions/actions/trigger-workflow) action in the [Helper Functions](https://pipedream.com/apps/helper-functions) app. [See more here](/docs/workflows/building-workflows/code/nodejs/#invoke-another-workflow).
Another option is to make an HTTP request to a Pipedream HTTP webhook trigger.
## Triggers
### Why is my trigger not saving?
If your trigger continuously spins without saving, it might be processing too much data at once or taking longer than expected. This issue often occurs with polling database-related triggers (e.g., PostgreSQL, Snowflake, Notion Databases). To resolve it, try reducing the volume of data fetched, e.g., by limiting the number of rows returned in your query.
### Why is my trigger not emitting events?
First, look at your [trigger logs](/docs/troubleshooting/#where-can-i-find-the-trigger-logs) and check for any errors there. Verify that it has been running, whether on a new webhook event or at the configured polling interval.
If your trigger operates on a large amount of data at once, it may fail to log the execution, and you won’t see any new events or logs. Try polling more frequently or limiting the number of records fetched in the API request or database query.
#### Webhook-based instant sources
These sources are triggered immediately. But because events arrive in real time, most will **not** automatically fetch historical events upon creation. To surface test events in your workflow while building, you’ll need to generate an eligible event in the selected app.
For example, if you’ve configured the “[Message Updates (Instant)](https://pipedream.com/apps/telegram-bot-api/triggers/message-updates)” Telegram source, you’ll need to send a message in the Telegram account you’ve selected in order for an event to appear.
Sources for apps like [Telegram](https://pipedream.com/apps/telegram-bot-api/triggers/message-updates) and [Google Sheets](https://pipedream.com/apps/google-sheets/triggers/new-row-added) use webhooks and get triggered immediately.
#### Timer-based polling sources
These sources will fetch new events on a regular interval, based on a schedule you specify in the trigger configuration.
In most cases, Pipedream will automatically fetch recent historical events to help enable easier workflow development. Sources for apps like [Twitter](https://pipedream.com/apps/twitter/triggers/search-mentions) and [Spotify](https://pipedream.com/apps/spotify/triggers/new-playlist) require we poll their endpoints in order to fetch new events.
### Where do I find my event source’s ID?
Open [https://pipedream.com/sources](https://pipedream.com/sources) and click on your event source. Copy the URL that appears in your browser’s address bar. For example:
```
https://pipedream.com/sources/dc_abc123
```
Your source’s ID is the value that starts with `dc_`. In this example: `dc_abc123`.
### Where can I find the trigger logs?
Find your [source](/docs/troubleshooting/#where-do-i-find-my-event-sources-id), then click on the logs or visit this URL:
```
https://pipedream.com/sources/dc_abc123/logs
```
### Why is my trigger paused?
Pipedream automatically disables sources with a 100% error rate in the past 5 days for accounts on the Free plan.
To troubleshoot, you can look at the errors in the [source](/docs/workflows/building-workflows/triggers/) logs, and may need to reconnect your account and re-enable the source for it to run again. If the issue persists, please reach out in [the community](https://pipedream.com/support).
## Warnings
Pipedream displays warnings below steps in certain conditions. These warnings do not stop the execution of your workflow, but can signal an issue you should be aware of.
### Code was still running when the step ended
This error occurs when Promises or asynchronous code is not properly finished before the next step begins execution.
See the [Asynchronous section of the Node.js documentation](/docs/workflows/building-workflows/code/nodejs/async/#the-problem) for more details.
### Undeployed changes — You have made changes to this workflow. Deploy the latest version from the editor
On workflows that are not [synced with GitHub](/docs/workflows/git/), you may notice the following warning at the top of your workflow:
> **Undeployed changes** — You have made changes to this workflow. Deploy the latest version from the editor
This means that you’ve made some changes to your workflow that you haven’t yet deployed. To see a diff of what’s changed, we recommend [enabling GitHub sync](/docs/workflows/git/), where you’ll get a full commit history of changes made to your workflows, synced to your own GitHub repo.
## Errors
### Limit Exceeded Errors
Pipedream sets [limits](/docs/workflows/limits/) on runtime, memory, and other execution-related properties. If you exceed these limits, you’ll receive one of the errors below. [See the limits doc](/docs/workflows/limits/) for details on specific limits.
### Quota Exceeded
On the Free tier, Pipedream imposes a limit on the [daily credits](/docs/workflows/limits/#daily-credits-limit) across all workflows and sources. If you hit this limit, you’ll see a **Quota Exceeded** error.
Paid plans have no credit limit. [Upgrade here](https://pipedream.com/pricing).
### Runtime Quota Exceeded
You **do not** use credits testing workflows, but workspaces on the **Free** plan are limited to {DAILY_TESTING_LIMIT} of test runtime per day. If you exceed this limit when testing in the builder, you’ll see a **Runtime Quota Exceeded** error.
### Timeout
Event sources and workflows have a [default time limit on a given execution](/docs/workflows/limits/#time-per-execution). If your code exceeds that limit, you may encounter a **Timeout** error.
To address timeouts, you’ll either need to:
1. Figure out why your code is running for longer than expected. It’s important to note that **timeouts are not an issue with Pipedream — they are specific to your workflow**. Often, you’re making a request to a third party API that doesn’t respond in the time you expect, or you’re processing a large amount of data in your workflow, and it doesn’t complete before you hit the execution limit.
2. If it’s expected that your code is taking a long time to run, you can raise the execution limit of a workflow in your [workflow’s settings](/docs/workflows/building-workflows/settings/#execution-timeout-limit). If you need to change the execution limit for an event source, please [reach out to our team](https://pipedream.com/support/).
### Out of Memory
Pipedream [limits the default memory](/docs/workflows/limits/#memory) available to workflows and event sources. If you exceed this memory, you’ll see an **Out of Memory** error. **You can raise the memory of your workflow [in your workflow’s Settings](/docs/workflows/building-workflows/settings/#memory)**.
⚠️
Even though the event may appear to have stopped at the trigger, the workflow steps were executed. We currently are unable to pinpoint the exact step where the OOM error occurred.
This can happen for two main reasons:
1. Loading entire files or objects into memory (e.g., saving file contents in a variable). Even for small files, stream data to/from disk instead of buffering it, using a [technique like this](/docs/workflows/building-workflows/code/nodejs/http-requests/#download-a-file-to-the-tmp-directory).
2. Running a workflow with many steps. Pipedream runs a separate process for each step in your workflow, which incurs some memory overhead; this typically becomes an issue with more than 8-10 steps. When you see an OOM error on a workflow with many steps, try increasing the memory.
### Rate Limit Exceeded
Pipedream limits the number of events that can be processed by a given interface (e.g. HTTP endpoints) during a given interval. This limit is most commonly reached for HTTP interfaces - see the [QPS limits documentation](/docs/workflows/limits/#qps-queries-per-second) for more information on that limit.
**This limit can be raised for HTTP endpoints**. [Reach out to our team](https://pipedream.com/support/) to request an increase.
### Request Entity Too Large
By default, Pipedream limits the size of incoming HTTP payloads. If you exceed this limit, you’ll see a **Request Entity Too Large** error.
Pipedream supports two different ways to bypass this limit. Both of these interfaces support uploading data up to `5TB`, though you may encounter other [platform limits](/docs/workflows/limits/).
* You can send large HTTP payloads by passing the `pipedream_upload_body=1` query string or an `x-pd-upload-body: 1` HTTP header in your HTTP request. [Read more here](/docs/workflows/building-workflows/triggers/#sending-large-payloads).
* You can upload multiple large files, like images and videos, using the [large file upload interface](/docs/workflows/building-workflows/triggers/#large-file-support).
### Function Payload Limit Exceeded
The total size of `console.log()` statements, [step exports](/docs/workflows/#step-exports), and the original event data sent to workflows and sources cannot exceed a combined size of {FUNCTION_PAYLOAD_LIMIT}. If you produce logs or step exports larger than this - for example, passing around large API responses, CSVs, or other data - you may encounter a **Function Payload Limit Exceeded** in your workflow.
Often, this occurs when you pass large data between steps using [step exports](/docs/workflows/#step-exports). You can avoid this error by [writing that data to the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/#writing-a-file-to-tmp) in one step, and [reading the data into another step](/docs/workflows/building-workflows/code/nodejs/working-with-files/#reading-a-file-from-tmp), which avoids the use of step exports and should keep you under the payload limit.
Pipedream also compresses the function payload from your workflow, which effectively raises the limit by roughly 2x-3x (to somewhere between `12MB` and `18MB`), depending on the data.
### JSON Nested Property Limit Exceeded
Working with JavaScript objects that contain more than 256 nested objects will trigger a **JSON Nested Property Limit Exceeded** error.
Often, objects with this many nested objects result from a programming error that explodes the object in an unexpected way. Please confirm the code you’re using to convert data into an object is correctly parsing the object.
### Event Queue Full
Workflows have a maximum event queue size when using concurrency and throttling controls. If the number of unprocessed events exceeds the [maximum queue size](/docs/workflows/building-workflows/settings/concurrency-and-throttling/#increasing-the-queue-size-for-a-workflow), you may encounter an **Event Queue Full** error.
[Paid plans](https://pipedream.com/pricing) can [increase their queue size up to {MAX_WORKFLOW_QUEUE_SIZE}](/docs/workflows/building-workflows/settings/concurrency-and-throttling/#increasing-the-queue-size-for-a-workflow) for a given workflow.
### Credit Budget Exceeded
Credit Budgets are configurable limits on your credit usage at the account or workspace level.
If you’re receiving this warning on a source or workflow, this means your allocated Credit Budget has been reached for the defined period.
You can increase this limit at any time in the [billing area of your settings](https://pipedream.com/settings/billing).
### Pipedream Internal Error
A `Pipedream Internal Error` is thrown whenever there’s an exception during the building or execution of a workflow that’s outside the scope of the code for the individual components (steps or actions).
There are a few known causes, along with ways to solve them.
### Out of date actions or sources
Pipedream components are updated continuously. But when new versions of actions and sources are published to the Pipedream Component Registry, your workflows are not updated by default.
[An **Update** prompt](/docs/workflows/building-workflows/actions/#updating-actions-to-the-latest-version) is shown in the top right of the action if the component has a new version available.
Sources do not feature an update button at this time; to receive the latest version, you’ll need to create a new source, then attach it to your workflow.
### New package version issues
If an NPM or PyPI package throws an error during either the building of the workflow or during its execution, it may cause a `Pipedream Internal Error`.
By default, Pipedream automatically updates NPM and PyPI packages to the latest version available. This is designed to make sure your workflows receive the latest package updates automatically.
However, if a new package version includes bugs, or changes its export signature, it may cause a `Pipedream Internal Error`.
You can potentially fix this issue by downgrading packages by pinning in [your Node.js](/docs/workflows/building-workflows/code/nodejs/#pinning-package-versions) or [Python code steps](/docs/workflows/building-workflows/code/python/#pinning-package-versions) to the last known working version.
Alternatively, if the error is due to a major release that changes the import signature of a package, then modifying your code to match the signature may help.
⚠️
Some Pipedream components use NPM packages
Some Pipedream components like pre-built [actions and triggers for Slack use NPM packages](https://github.com/PipedreamHQ/pipedream/blob/9aea8653dc65d438d968971df72e95b17f52d51c/components/slack/slack.app.mjs#L1).
In order to downgrade these packages, you’ll need to fork the Pipedream GitHub repository and deploy your own changes to test them privately. Then you can [contribute the fix back into the main Pipedream repository](/docs/components/contributing/#contribution-process).
### Packages consuming all available storage
A `Pipedream Internal Error` could be the result of NPM or PyPI packages consuming the entirety of the workflow’s storage capacity.
For example, importing individual modules from `lodash` with this type of signature will install the entire package:
```javascript
// This style of import will cause the entire lodash package to be installed, not just the pick module
import { pick } from "lodash"
```
Instead, use the specific package that exports the `pick` module alone:
```javascript
// This style imports only the pick module, since the lodash.pick package only contains this module
import pick from "lodash.pick"
```
## Is there a way to replay workflow events programmatically?
Not via the API, but you can bulk select and replay failed events using the [Event History](/docs/workflows/event-history/).
## How do I store and retrieve data across workflow executions?
If you operate your own database or data store, you can connect to it directly in Pipedream.
Pipedream also operates a [built-in key-value store](/docs/workflows/data-management/data-stores/) that you can use to get and set data across workflow executions and different workflows.
## How do I delay the execution of a workflow?
Use Pipedream’s [built-in Delay actions](/docs/workflows/building-workflows/control-flow/delay/) to delay a workflow at any step.
## How can my workflow run faster?
Here are a few things that can help your workflow execute faster:
1. **Increase memory:** Increase your [workflow memory](/docs/workflows/building-workflows/settings/#memory) to at least 512 MB. Raising the memory limit will proportionally increase CPU resources, leading to improved performance and reduced latency.
2. **Return static HTTP responses:** If your workflow is triggered by an HTTP source, return a [static HTTP response](/docs/workflows/building-workflows/triggers/#http-responses) directly from the trigger configuration. This ensures the HTTP response is sent to the caller immediately, before the rest of the workflow steps are executed.
3. **Simplify your workflow:** Reduce the number of [steps](/docs/workflows/#code-actions) and [segments](/docs/workflows/building-workflows/control-flow/#workflow-segments) in your workflow, combining multiple steps into one, if possible. This lowers the overhead involved in managing step execution and exports.
4. **Activate warm workers:** Use [warm workers](/docs/workflows/building-workflows/settings/#eliminate-cold-starts) to reduce the startup time of workflows. Set [as many warm workers](/docs/workflows/building-workflows/settings/#how-many-workers-should-i-configure) as you want for high-volume traffic.
## How can I save common functions as steps?
You can create your own custom triggers and actions (“components”) on Pipedream using [the Component API](/docs/components/contributing/api/). These components are private to your account and can be used in any workflow.
You can also publish common functions in your own package on a public registry like [npm](https://www.npmjs.com/) or [PyPI](https://pypi.org/).
## Is Puppeteer supported in Pipedream?
Yes, see [our Puppeteer docs](/docs/workflows/building-workflows/code/nodejs/browser-automation/#puppeteer) for more detail.
## Is Playwright supported in Pipedream?
Yes, see [our Playwright docs](/docs/workflows/building-workflows/code/nodejs/browser-automation/#playwright) for more detail.
# What Are Workflows?
Source: https://pipedream.com/docs/workflows
export const PUBLIC_APPS = '2,700';
Workflows make it easy to integrate your apps, data, and APIs - all with no servers or infrastructure to manage. They’re sequences of [steps](/docs/workflows/#steps) [triggered by an event](/docs/workflows/building-workflows/triggers/), like an HTTP request, or new rows in a Google sheet.
You can use [pre-built actions](/docs/workflows/building-workflows/actions/) or custom [Node.js](/docs/workflows/building-workflows/code/nodejs/), [Python](/docs/workflows/building-workflows/code/python/), [Golang](/docs/workflows/building-workflows/code/go/), or [Bash](/docs/workflows/building-workflows/code/bash/) code in workflows and connect to any of our {PUBLIC_APPS} integrated apps.
Read [our quickstart](/docs/workflows/quickstart/) or watch our videos on [Pipedream University](https://pipedream.com/university) to learn more.
## Steps
Steps are the building blocks you use to create workflows.
* Use [triggers](/docs/workflows/building-workflows/triggers/), [code](/docs/workflows/building-workflows/code/), and [pre-built actions](/docs/components/contributing/#actions)
* Steps are run linearly, in the order they appear in your workflow
* You can pass data between steps using [the `steps` object](/docs/workflows/#step-exports)
* Observe the logs, errors, timing, and other execution details for every step
### Triggers
Every workflow begins with a [trigger](/docs/workflows/building-workflows/triggers/) step. Trigger steps initiate the execution of a workflow; i.e., workflows execute on each trigger event. For example, you can create an [HTTP trigger](/docs/workflows/building-workflows/triggers/#http) to accept HTTP requests. We give you a unique URL where you can send HTTP requests, and your workflow is executed on each request.
You can add [multiple triggers](/docs/workflows/building-workflows/triggers/#can-i-add-multiple-triggers-to-a-workflow) to a workflow, allowing you to run it on distinct events.
### Code, Actions
[Actions](/docs/components/contributing/#actions) and [code](/docs/workflows/building-workflows/code/) steps drive the logic of your workflow. Anytime your workflow runs, Pipedream executes each step of your workflow in order. Actions are prebuilt code steps that let you connect to hundreds of APIs without writing code. When you need more control than the default actions provide, code steps let you write any custom Node.js code.
Code and action steps cannot precede triggers, since they’ll have no data to operate on.
Once you save a workflow, we deploy it to our servers. Each event triggers the workflow code, whether you have the workflow open in your browser, or not.
## Step Names
Steps have names, which appear at the top of the step:
When you [share data between steps](/docs/workflows/#step-exports), you’ll use this name to reference that shared data. For example, `steps.trigger.event` contains the event that triggered your workflow. If you exported a property called `myData` from this code step, you’d reference that in other steps using `steps.code.myData`. See the docs on [step exports](/docs/workflows/#step-exports) to learn more.
You can rename a step by clicking on its name and typing a new one in its place:
After changing a step name, you’ll need to update any references to the old step. In this example, you’d now reference this step as `steps.get_data`.
Step names cannot contain spaces or dashes. Please use underscores or camel casing for your step names, like `getData` or `get_data`.
## Passing data to steps from the workflow builder
You can generate form-based inputs for steps using `props`. This allows a step to be reused across many workflows with different arguments, all without changing code.
Learn more about using `props` in our [Node.js code step documentation.](/docs/workflows/building-workflows/code/nodejs/#passing-props-to-code-steps)
Passing props from the workflow builder to workflow steps is only available in Node.js code steps.
We do not currently offer this feature for Python, Bash, or Go code steps.
## Step Exports
Step exports allow you to pass data between steps. Any data exported from a step must be JSON serializable; that is, it must be able to be stored as JSON so it can be read by downstream steps.
For examples of supported data types in your steps language, see the examples below.
* [Node.js (JavaScript)](/docs/workflows/building-workflows/code/nodejs/#sharing-data-between-steps)
* [Python](/docs/workflows/building-workflows/code/python/#sharing-data-between-steps)
* [Bash](/docs/workflows/building-workflows/code/bash/#sharing-data-between-steps)
* [Go](/docs/workflows/building-workflows/code/go/#sharing-data-between-steps)
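As a plain-Node illustration of the JSON-serializable rule (no Pipedream APIs involved): values must survive a JSON round trip, and anything that doesn’t, like a function, is silently dropped.

```javascript
// Plain objects, arrays, strings, and numbers round-trip through JSON cleanly
const ok = { id: 1, name: "Bulbasaur", types: ["plant"] };
console.log(JSON.stringify(ok)); // '{"id":1,"name":"Bulbasaur","types":["plant"]}'

// Functions are not JSON serializable: they are silently dropped
const bad = { id: 1, greet: () => "hello" };
console.log(JSON.stringify(bad)); // '{"id":1}' (greet is gone)
```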
## Step Notes
Pipedream lets you add notes to individual steps in your workflow so you can include helpful context to other workspace members or even yourself, and you can even write markdown!
### Adding or editing a note
1. Enter build mode on any workflow
2. Click into the overflow menu (3 dots) at the top right of any step
3. Select **Add note** (or **Edit note** if making changes to an existing note)
4. Add any text or markdown, then click **Update**
### Showing notes
Any step that has a note will have a **Note** section in the top panel in the editor pane.
### Current limitations
* Step notes are only accessible in Build mode, not in the Inspector.
# Actions
Source: https://pipedream.com/docs/workflows/building-workflows/actions
export const PUBLIC_APPS = '2,700';
Actions are reusable code steps that integrate your apps, data and APIs. For example, you can send HTTP requests to an external service using our HTTP actions, or use our Google Sheets actions to add new data. You can use thousands of actions across {PUBLIC_APPS}+ apps today.
Typically, integrating with these services requires custom code to manage connection logic, error handling, and more. Actions handle that for you. You only need to specify the parameters required for the action. For example, the HTTP `GET` Request action requires you to enter the URL whose data you want to retrieve.
You can also [create your own actions](/docs/workflows/building-workflows/actions/#creating-your-own-actions) that can be shared across your own workflows, or published to all Pipedream users.
## Using Existing Actions
Adding existing actions to your workflow is easy:
1. Click the **+** button below any step.
2. Search for the app you’re looking for and select it from the list.
3. Search for the action and select it to add it to your workflow.
For example, here’s how to add the HTTP `GET` Request action:
## Updating actions to the latest version
When you use existing actions or create your own, you’ll often want to update an action you added to a workflow to the newest version. For example, the community might publish a new feature or bug fix that you want to use.
In your code steps with out-of-date actions, you’ll see a button appear that will update your action to the latest version. Click this button to update your code step:
## Creating your own actions
You can author your own actions on Pipedream, too. Anytime you need to reuse the same code across steps, consider making that an action.
Start with our [action development quickstart](/docs/components/contributing/actions-quickstart/). You can read more about all the capabilities of actions in [our API docs](/docs/components/contributing/api/), and review [example actions here](/docs/components/contributing/api/#example-components).
You can also publish actions to [the Pipedream registry](/docs/components/contributing/), which makes them available for anyone on Pipedream to use.
## Reporting a bug / feature request
If you’d like to report a bug, request a new action, or submit a feature request for an existing action, [open an issue in our GitHub repo](https://github.com/PipedreamHQ/pipedream).
# Overview
Source: https://pipedream.com/docs/workflows/building-workflows/code
export const PIPEDREAM_NODE_VERSION = '20';
Pipedream comes with thousands of prebuilt [triggers](/docs/workflows/building-workflows/triggers/) and [actions](/docs/components/contributing/#actions) for [hundreds of apps](https://pipedream.com/apps). Often, these will be sufficient for building simple workflows.
But sometimes you need to run your own custom logic. You may need to make an API request to fetch additional metadata about the event, transform data into a custom format, or end the execution of a workflow early under some conditions. **Code steps let you do this and more**.
Code steps let you execute [Node.js v{PIPEDREAM_NODE_VERSION}](https://nodejs.org/) (JavaScript) code, Python, Go or even Bash right in a workflow.
Choose a language to get started:
If you’d like to see another, specific language supported, please [let us know](https://pipedream.com/community).
# Bash
Source: https://pipedream.com/docs/workflows/building-workflows/code/bash
Prefer to write quick scripts in Bash? We’ve got you covered. You can run any Bash in a Pipedream step within your workflows.
Within a Bash step, you can [share data between steps](/docs/workflows/building-workflows/code/bash/#sharing-data-between-steps) and [access environment variables](/docs/workflows/building-workflows/code/bash/#using-environment-variables). But you can’t connect accounts, return HTTP responses, or take advantage of other features available in the [Node.js](/docs/workflows/building-workflows/code/nodejs/) environment at this time.
## Adding a Bash code step
1. Click the + icon to add a new step
2. Click **Custom Code**
3. In the new step, select the `bash` runtime in the language dropdown
## Logging and debugging
When it comes to debugging Bash scripts, `echo` is your friend.
Your `echo` statements will print their output in the workflow step results:
```bash
MESSAGE='Hello world'
# The message will now be available in the "Result > Logs" area in the workflow step
echo $MESSAGE
```
## Available binaries
Bash steps come with many common and useful binaries preinstalled and available in `$PATH` for you to use out of the box. These binaries include but aren’t limited to:
* `curl` for making HTTP requests
* `jq` for manipulating and viewing JSON data
* `git` for interacting with remote repositories
If you need a package pre-installed in your Bash steps, [just ask us](https://pipedream.com/support). Otherwise, you can use the `/tmp` directory to download and install software from source.
## Sharing data between steps
A step can accept data from other steps in the same workflow, or pass data downstream to others.
### Using data from another step
In Bash steps, data from the initial workflow trigger and other steps are available in the `$PIPEDREAM_STEPS` environment variable.
In this example, we’ll pretend this data is coming into our HTTP trigger via a POST request.
```json
{
"id": 1,
"name": "Bulbasaur",
"type": "plant"
}
```
In our Bash script, we can access this data via the `$PIPEDREAM_STEPS` file. Specifically, this data from the POST request into our workflow is available in the `trigger` object.
```bash
cat $PIPEDREAM_STEPS | jq .trigger.event
# Results in { id: 1, name: "Bulbasaur", type: "plant" }
```
The period (`.`) in front of `trigger.event` in the example is not a typo. It defines the starting point for `jq` to traverse down the JSON.
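To pull out a single field, extend the `jq` path. The sketch below simulates the `$PIPEDREAM_STEPS` file locally so it’s runnable anywhere; in a real workflow, the variable is already set for you.

```bash
# Simulate the steps file locally (in a workflow, $PIPEDREAM_STEPS already exists)
PIPEDREAM_STEPS=/tmp/steps.json
echo '{"trigger":{"event":{"id":1,"name":"Bulbasaur","type":"plant"}}}' > "$PIPEDREAM_STEPS"

# The -r flag prints the raw string without surrounding quotes
NAME=$(jq -r '.trigger.event.name' "$PIPEDREAM_STEPS")
echo "$NAME"
```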
### Sending data downstream to other steps
To share data for future steps to use downstream, append it to the `$PIPEDREAM_EXPORTS` file.
```bash
# Retrieve the data from an API and store it in a variable
DATA=`curl --silent https://pokeapi.co/api/v2/pokemon/charizard`
# Write data to $PIPEDREAM_EXPORTS to share with steps downstream
EXPORT="key:json=${DATA}"
echo $EXPORT >> $PIPEDREAM_EXPORTS
```
Not all data types can be stored in the `$PIPEDREAM_EXPORTS` data shared between workflow steps.
You can only export JSON-serializable data from Bash steps.
## Using environment variables
You can leverage any [environment variables defined in your Pipedream account](/docs/workflows/environment-variables/) in a bash step. This is useful for keeping your secrets out of code as well as keeping them flexible to swap API keys without having to update each step individually.
To access them, just prepend `$` to the environment variable name.
```bash
echo $POKEDEX_API_KEY
```
Or, for an even more useful example, use a stored environment variable to make an authenticated API request.
```bash
curl --silent -X POST -H "Authorization: Bearer $TWITTER_API_KEY" https://api.twitter.com/2/users/@pipedream/mentions
```
## Making a `GET` request
You can use `curl` to perform `GET` requests.
```bash
# Get the current weather in San Francisco
WEATHER=`curl --silent https://wttr.in/San\ Francisco\?format=3`
echo $WEATHER
# Produces:
# San Francisco: 🌫 +48°F
```
Use the `--silent` flag with `curl` to suppress the extra diagnostic information that `curl` produces when making requests.
This lets you focus on the body of the response, which you can inspect with tools like `echo` or `jq`.
## Making a `POST` request
```bash
curl --silent -X POST https://postman-echo.com/post -d 'name=Bulbasaur&id=1'
# To store the API response in a variable, capture the command's output with command substitution
RESPONSE=`curl --silent -X POST https://postman-echo.com/post -d 'name=Bulbasaur&id=1'`
# Now the response is stored as a variable
echo $RESPONSE
```
## Using API key authentication
Some APIs require you to authenticate with a secret API key.
`curl` has an `-H` flag where you can pass your API key as a bearer token.
For example, here’s how to retrieve mentions from the Twitter API:
```bash
# Define the "Authorization" header to include your Twitter API key
curl --silent -X POST -H "Authorization: Bearer $TWITTER_API_KEY" https://api.twitter.com/2/users/@pipedream/mentions
```
## Raising exceptions
You may need to stop your step immediately. You can use the normal `exit` function available in Bash to quit the step prematurely.
```bash
echo "Exiting now!" 1>&2
exit 1
```
Using `exit` to quit a Bash step early *won’t* stop the execution of the rest of the workflow.
Exiting a Bash step only applies to that particular step in the workflow.
This will exit the step and output the error message to `stderr` which will appear in the results of the step in the workflow.
## File storage
If you need to download and store files, you can write them to the `/tmp` directory.
### Writing a file to /tmp
Download a file to `/tmp` using `curl`
```bash
# Download the current weather in Cleveland in PNG format
curl --silent https://wttr.in/Cleveland.png --output /tmp/weather.png
# Output the contents of /tmp to confirm the file is there
ls /tmp
```
The `/tmp` directory does not have unlimited storage. Please refer to the [disk limits](/docs/workflows/limits/#disk) for details.
# Go
Source: https://pipedream.com/docs/workflows/building-workflows/code/go
export const TMP_SIZE_LIMIT = '2GB';
export const GO_LANG_VERSION = '1.21.5';
Pipedream supports [Go v{GO_LANG_VERSION}](https://go.dev) in workflows. You can use any of the [Go packages available](https://pkg.go.dev/) with a simple `import` — no `go get` needed.
When you write Go code on Pipedream, you can [share data between steps](/docs/workflows/building-workflows/code/bash/#sharing-data-between-steps) and [access environment variables](/docs/workflows/building-workflows/code/bash/#using-environment-variables). However, you can’t connect accounts, return HTTP responses, or take advantage of other features available in the [Node.js](/docs/workflows/building-workflows/code/nodejs/) environment at this time.
If you have any feedback on the Go runtime, please let us know in [our community](https://pipedream.com/support).
## Adding a Go code step
1. Click the + icon to add a new step
2. Click “Custom Code”
3. In the new step, select the `golang` runtime in the language dropdown
## Logging and debugging
You can use `fmt.Println` at any time to log information as the script is running.
The output of `fmt.Println` will appear in the **Logs** of the **Results** section, just beneath the code editor.
Don’t forget to import the `fmt` package in order to run `fmt.Println`.
```go highlight={3}
package main
import "fmt"
func main() {
fmt.Println("Hello World!")
}
```
## Using third party packages
You can use any packages from [Go package registry](https://pkg.go.dev). This includes popular choices such as:
* [`net/http` for making HTTP requests](https://pkg.go.dev/net/http#pkg-overview/)
* [`encoding/json` for encoding and decoding JSON](https://pkg.go.dev/encoding/json)
* [`database/sql` for reading and writing to SQL databases](https://pkg.go.dev/database/sql@go1.17.6)
To use a Go package, just `import` it in your step’s code:
```go
import "net/http"
```
And that’s it.
## Sending files
You can send files stored in the `/tmp` directory in an HTTP request:
```go
package main
import (
"os"
"log"
"mime/multipart"
"bytes"
"io"
"net/http"
)
func main() {
// Instantiate a new HTTP client, body and form writer
client := http.Client{}
body := &bytes.Buffer{}
writer := multipart.NewWriter(body)
// Create the form file field
fw, _ := writer.CreateFormFile("file", "go-logo.svg")
// Retrieve a previously saved file from workflow storage
file, _ := os.Open("/tmp/go-logo.svg")
// Copy the file's contents into the form field
_, _ = io.Copy(fw, file)
// Close the multipart writer to finalize the form body
writer.Close()
// Send the POST request
req, _ := http.NewRequest("POST", "https://postman-echo.com/post", bytes.NewReader(body.Bytes()))
req.Header.Set("Content-Type", writer.FormDataContentType())
_, err := client.Do(req)
if err != nil {
log.Fatalln(err)
}
}
```
## Sharing data between steps
A step can accept data from other steps in the same workflow, or pass data downstream to others.
This makes your steps even more powerful: you can compose new workflows and reuse steps.
### Using data from another step
Data from the initial workflow trigger and other steps are available in the `pipedream-go` package.
In this example, we’ll pretend this data is coming into our HTTP trigger via POST request.
```json
{
"id": 1,
"name": "Bulbasaur",
"type": "plant"
}
```
You can access this data in the `Steps` variable from the `pd` package. Specifically, this data from the POST request into our workflow is available in the `trigger` map.
```go
package main
import (
"fmt"
"github.com/PipedreamHQ/pipedream-go"
)
func main() {
// Access previous step data using pd.Steps
fmt.Println(pd.Steps["trigger"])
}
```
### Sending data downstream to other steps
To share data for future steps to use, call the `Export` function from the `pd` package:
```go
package main
import (
"encoding/json"
"github.com/PipedreamHQ/pipedream-go"
"io/ioutil"
"net/http"
)
func main() {
// Use a GET request to look up the latest data on Charizard
resp, _ := http.Get("https://pokeapi.co/api/v2/pokemon/charizard")
body, _ := ioutil.ReadAll(resp.Body)
// Unmarshal the JSON into a struct
var data map[string]interface{}
json.Unmarshal(body, &data)
// Expose the pokemon data downstream to other steps in the "pokemon" key from this step
pd.Export("pokemon", data)
}
```
Now this `pokemon` data is accessible to downstream steps within `pd.Steps["code"]["pokemon"]`.
Not all data types can be stored in the `Steps` data shared between workflow steps.
For the best experience, we recommend only [exporting structs that can be marshalled into JSON](https://go.dev/blog/json).
## Using environment variables
You can leverage any [environment variables defined in your Pipedream account](/docs/workflows/environment-variables/) in a Go step. This is useful for keeping your secrets out of code as well as keeping them flexible to swap API keys without having to update each step individually.
To access them, use the `os` package.
```go
package main
import (
"log"
"os"
)
func main() {
log.Println(os.Getenv("TWITTER_API_KEY"))
}
```
### Using API key authentication
If a particular service requires you to use an API key, you can pass it via the HTTP request.
This proves your identity to the service so you can interact with it:
```go
package main
import (
"os"
"bytes"
"encoding/json"
"io/ioutil"
"log"
"net/http"
"fmt"
)
func main() {
// Access the Twitter API key from the environment
apiKey := os.Getenv("TWITTER_API_KEY")
// JSON encode our payload
payload, _ := json.Marshal(map[string]string{
"name": "Bulbasaur",
})
payloadBuf := bytes.NewBuffer(payload)
// Send the POST request
req, err := http.NewRequest("POST", "https://api.twitter.com/2/users/@pipedream/mentions", payloadBuf)
// Build the headers in order to authenticate properly
req.Header = http.Header{
"Content-Type": []string{"application/json"},
"Authorization": []string{fmt.Sprintf("Bearer %s", apiKey)},
}
client := http.Client{}
resp, err := client.Do(req)
if err != nil {
log.Fatalln(err)
}
// Don't forget to close the request after the function is finished
defer resp.Body.Close()
// Read the response body
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatalln(err)
}
// Convert the body into a string
sb := string(body)
// Log the body to our Workflow Results
log.Println(sb)
}
```
## Making a `GET` request
You’ll typically use `GET` requests to retrieve data from an API:
```go
package main
import (
"net/http" // HTTP client
"io/ioutil" // Reads the body of the response
"log" // Logger
)
func main() {
resp, err := http.Get("https://swapi.dev/api/people/1")
if err != nil {
log.Fatalln(err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatalln(err)
}
// The response status code is logged in your Pipedream step results:
log.Println(resp.Status)
// The response is logged in your Pipedream step results:
sb := string(body)
log.Println(sb)
}
```
## Making a `POST` request
```go
package main
import (
"bytes"
"encoding/json"
"io/ioutil"
"log"
"net/http"
)
func main() {
// JSON encode our payload
payload, _ := json.Marshal(map[string]string{
"name": "Bulbasaur",
})
payloadBuf := bytes.NewBuffer(payload)
// Send the POST request
resp, err := http.Post("https://postman-echo.com/post", "application/json", payloadBuf)
if err != nil {
log.Fatalln(err)
}
defer resp.Body.Close()
// Read the response body
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatalln(err)
}
// Convert the body into a string
sb := string(body)
// Log the body to our Workflow Results
log.Println(sb)
}
```
## Handling errors
You may need to exit a workflow early. Use `os.Exit` to exit the `main` function with a specific error code.
```go
package main
import (
"fmt"
"os"
)
func main() {
os.Exit(1)
fmt.Println("This message will not be logged.")
}
```
The step will quit at the time `os.Exit` is called. In this example, the exit code `1` will appear in the **Results** of the step.
## File storage
You can also store and read files with Go steps. This means you can upload photos, retrieve datasets, accept files from an HTTP request and more.
The `/tmp` directory is accessible from your workflow steps for saving and retrieving files.
You have full access to read and write files in `/tmp`.
### Writing a file to `/tmp`
```go
package main
import (
"io"
"net/http"
"os"
"fmt"
)
func main() {
// Define where the file is and where to save it
fileUrl := "https://golangcode.com/go-logo.svg"
filePath := "/tmp/go-logo.svg"
// Download the file
resp, err := http.Get(fileUrl)
if err != nil {
fmt.Println(err)
}
// Don't forget to close the HTTP connection at the end of the function
defer resp.Body.Close()
// Create the empty file
out, err := os.Create(filePath)
if err != nil {
fmt.Println(err)
}
// Don't forget to close the file
defer out.Close()
// Write the file data to file
_, err = io.Copy(out, resp.Body)
if err != nil {
fmt.Println(err)
}
}
```
Now `/tmp/go-logo.svg` holds the official Go logo.
### Reading a file from /tmp
You can also open files you have previously stored in the `/tmp` directory. Let’s open the `go-logo.svg` file.
```go
package main
import (
"os"
"log"
)
func main() {
// Open the file
data, err := os.ReadFile("/tmp/go-logo.svg")
if err != nil {
log.Fatalln(err)
}
// Print its contents to the logs
log.Println(string(data))
}
```
### `/tmp` limitations
The `/tmp` directory can store up to {TMP_SIZE_LIMIT} of data. The storage may be wiped and may not persist between workflow executions.
To avoid errors, assume that the `/tmp` directory is empty between workflow runs.
# Running Node.Js In Workflows
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs
export const PIPEDREAM_NODE_VERSION = '20';
Pipedream supports writing Node.js v{PIPEDREAM_NODE_VERSION} at any point of a workflow.
Anything you can do with Node.js, you can do in a workflow. This includes using most of [npm’s 400,000+ packages](/docs/workflows/building-workflows/code/nodejs/#using-npm-packages). JavaScript is one of the [most used](https://insights.stackoverflow.com/survey/2019#technology-_-programming-scripting-and-markup-languages) [languages](https://github.blog/2018-11-15-state-of-the-octoverse-top-programming-languages/) in the world, with a thriving community and extensive package ecosystem. If you work on websites and know JavaScript well, Pipedream makes you a full stack engineer. If you’ve never used JavaScript, see the [resources below](/docs/workflows/building-workflows/code/nodejs/#new-to-javascript).
It’s important to understand the core difference between Node.js and the JavaScript that runs in your web browser: **Node doesn’t have access to some of the things a browser expects, like the HTML on the page, or its URL**. If you haven’t used Node before, be aware of this limitation as you search for JavaScript examples on the web.
## Adding a code step
1. Click the **+** button below any step of your workflow.
2. Select the option to **Run custom code**.
Note that new code steps will default to Node.js v{PIPEDREAM_NODE_VERSION}. You can add any Node.js code in the editor that appears. For example, try:
```javascript
export default defineComponent({
async run({ steps, $ }) {
console.log("This is Node.js code");
$.export("test", "Some test data");
return "Test data";
},
});
```
Code steps use the same editor ([Monaco](https://microsoft.github.io/monaco-editor/)) used in Microsoft’s [VS Code](https://code.visualstudio.com/), which supports syntax highlighting, automatic indentation, and more.
## Sharing data between steps
A Node.js step can use data from other steps using [step exports](/docs/workflows/#step-exports); it can also export data for other steps to use.
### Using data from another step
In Node.js steps, data from the initial workflow trigger and other steps are available in the `steps` argument passed to the `run({ steps, $ })` function.
In this example, we’ll pretend this data is coming into our HTTP trigger via POST request.
```json
{
"id": 1,
"name": "Bulbasaur",
"type": "plant"
}
```
In our Node.js step, we can access this data in the `steps` variable. Specifically, this data from the POST request into our workflow is available in the `trigger` property.
```javascript
export default defineComponent({
async run({ steps, $ }) {
const pokemonName = steps.trigger.event.name;
const pokemonType = steps.trigger.event.type;
console.log(`${pokemonName} is a ${pokemonType} type Pokemon`);
},
});
```
### Sending data downstream to other steps
To share data created, retrieved, transformed, or manipulated by a step with other steps downstream, you can simply `return` it.
```javascript
// This step is named "code" in the workflow
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const response = await axios.get(
"https://pokeapi.co/api/v2/pokemon/charizard"
);
// Store the response's JSON contents into a variable called "pokemon"
const pokemon = response.data;
// Expose the pokemon data downstream to other steps in the $return_value from this step
return pokemon;
},
});
```
### Using \$.export
Alternatively, use the built-in `$.export` helper instead of returning data. `$.export` creates a *named* export with the given value.
```javascript
// This step is named "code" in the workflow
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const response = await axios.get(
"https://pokeapi.co/api/v2/pokemon/charizard"
);
// Store the response's JSON contents into a variable called "pokemon"
const pokemon = response.data;
// Expose the pokemon data downstream to other steps in the pokemon export from this step
$.export("pokemon", pokemon);
},
});
```
Now this `pokemon` data is accessible to downstream steps within `steps.code.pokemon`.
You can only export JSON-serializable data from steps. Things like:
* strings
* numbers
* objects
You cannot export functions or other complex objects that don’t serialize to JSON. [You can save that data to a file in the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/).
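For example, binary data like a `Buffer` doesn’t export cleanly as JSON, but you can write it to `/tmp` in one step and read it back in another. A plain-Node sketch (the `/tmp/payload.bin` filename is just an example):

```javascript
import { writeFileSync, readFileSync } from "fs";

// A Buffer is not JSON-friendly, so persist it as a file instead of exporting it
const payload = Buffer.from("some binary-ish payload");
writeFileSync("/tmp/payload.bin", payload);

// A later step (or later code) can read it back from /tmp
const roundTripped = readFileSync("/tmp/payload.bin");
console.log(roundTripped.toString()); // "some binary-ish payload"
```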
## Passing props to code steps
You can make code steps reusable by allowing them to accept props. Instead of hard-coding the values of variables within the code itself, you can pass them to the code step as arguments or parameters *entered in the workflow builder*.
For example, let’s define a `firstName` prop. This will allow us to freely enter text from the workflow builder.
```javascript
export default defineComponent({
props: {
firstName: {
type: "string",
label: "Your first name",
default: "Dylan",
},
},
async run({ steps, $ }) {
console.log(
`Hello ${this.firstName}, congrats on crafting your first prop!`
);
},
});
```
The workflow builder can now accept text input to populate `firstName` for this particular step only:
Accepting a single string is just one example; you can also build a step that accepts arrays of strings through a dropdown presented in the workflow builder.
[Read the props reference for the full list of options](/docs/components/contributing/api/#props).
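As a sketch of that pattern, a `string[]` prop with static `options` renders as a multi-select dropdown in the builder. The prop names below are illustrative, and the plain object stands in for the `defineComponent` wrapper:

```javascript
const component = {
  props: {
    tags: {
      type: "string[]", // renders a multi-select in the builder
      label: "Tags",
      options: ["bug", "feature", "docs"], // static dropdown options
    },
  },
  async run({ steps, $ }) {
    // Selected values arrive as an array on `this`
    return this.tags.map((tag) => tag.toUpperCase());
  },
};
```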
## How Pipedream Node.js components work
When you add a new Node.js code step or use the examples in this doc, you’ll notice a common structure to the code:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// this Node.js code will execute when the step runs
},
});
```
This defines [a Node.js component](/docs/components/contributing/api/). Components let you:
* Pass input to steps using [props](/docs/workflows/building-workflows/code/nodejs/#passing-props-to-code-steps)
* [Connect an account to a step](/docs/apps/connected-accounts/#from-a-code-step)
* [Issue HTTP responses](/docs/workflows/building-workflows/triggers/#http-responses)
* Perform workflow-level flow control, like [ending a workflow early](/docs/workflows/building-workflows/code/nodejs/#ending-a-workflow-early)
When the step runs, Pipedream executes the `run` method:
* Any asynchronous code within a code step [**must** complete before the step finishes](/docs/workflows/building-workflows/code/nodejs/async/). Use the `await` keyword, or a Promise chain with `.then()`, `.catch()`, and related methods.
* Pipedream passes the `steps` variable to the `run` method. `steps` is an object containing the [data exported from previous steps](/docs/workflows/#step-exports) in your workflow.
* You also have access to the `$` variable, which gives you access to methods like `$.respond`, `$.export`, [and more](/docs/components/contributing/api/#actions).
If you’re using [props](/docs/workflows/building-workflows/code/nodejs/#passing-props-to-code-steps) or [connecting an account to a step](/docs/apps/connected-accounts/#from-a-code-step), the component exposes them in the variable `this`, which refers to the current step:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// `this` refers to the running component. Props, connected accounts, etc. are exposed here
console.log(this);
},
});
```
When you [connect an account to a step](/docs/apps/connected-accounts/#from-a-code-step), Pipedream exposes the auth info in the variable [`this.appName.$auth`](/docs/workflows/building-workflows/code/nodejs/auth/#accessing-connected-account-data-with-thisappnameauth).
## Logs
You can call `console.log` or `console.error` to add logs to the execution of a code step.
These logs will appear just below the associated step. `console.log` messages appear in black, `console.error` in red.
### `console.dir`
If you need to print the contents of JavaScript objects, use `console.dir`:
```javascript
export default defineComponent({
async run({ steps, $ }) {
console.dir({
name: "Luke",
});
},
});
```
## Syntax errors
Pipedream will attempt to catch syntax errors when you’re writing code, highlighting the lines where the error occurred in red.
While you can save a workflow with syntax errors, it’s unlikely to run correctly on new events. Make sure to fix syntax errors before running your workflow.
## Using `npm` packages
[npm](https://www.npmjs.com/) hosts JavaScript packages: libraries of code someone else wrote and packaged for others to use. npm has over 400,000 packages and counting.
### Just `import` it
To use an npm package on Pipedream, simply `import` it:
```javascript
import axios from "axios";
```
By default, workflows don’t have any packages installed. Just import any package in this manner to make it available in the step.
If a package only supports the [CommonJS module format](https://nodejs.org/api/modules.html), you may have to `require` it:
```javascript
const axios = require("axios");
```
**Within a single step, you can only use `import` or `require` statements, not both**. See [this section](/docs/workflows/building-workflows/code/nodejs/#require-is-not-defined) for more details.
When Pipedream runs your workflow, we download the associated npm package for you before running your code steps.
If you’ve used Node before, you’ll notice there’s no `package.json` file to upload or edit. We want to make package management simple: just `import` or `require` the module like you would in your own code, and get to work.
### Third-party package limitations
Some packages require access to a web browser to run, and don’t work with Node.js. Sometimes this limitation is documented in the package’s `README`, but often it’s not. If you’re not sure whether a package will work and need to use it, we recommend just trying to `import` or `require` it.
Other packages require access to binaries or system libraries that aren’t installed in the Pipedream execution environment.
If you’re seeing any issues with a specific package, please [let us know](https://pipedream.com/support/) and we’ll try to help you make it work.
### Pinning package versions
Each time you deploy a workflow with Node.js code, Pipedream downloads the npm packages you `import` in your step. **By default, Pipedream deploys the latest version of the npm package each time you deploy a change**.
There are many cases where you may want to specify the version of the packages you’re using. If you’d like to use a *specific* version of a package in a workflow, you can add that version in the `import` string, for example:
```javascript
import axios from "axios@0.19.2";
```
You can also pass the version specifiers used by npm to support [semantic version](https://semver.org/) upgrades. For example, to allow for future patch version upgrades:
```javascript
import axios from "axios@~0.20.0";
```
To allow for patch and minor version upgrades, use:
```javascript
import got from "got@^11.0.0";
```
The behavior of the caret (`^`) operator is different for 0.x versions, for which it will only match patch versions, and not minor versions. For example, `^0.20.0` matches `0.20.1` but not `0.21.0`.
You can also specify different versions of the same package in different steps. Each step will use the associated version. Note that this also increases the size of your deployment, which can affect cold start times.
### CommonJS vs. ESM imports
In Node.js, you may be used to importing third-party packages using the `require` statement:
```javascript
const axios = require("axios");
```
In this example, we’re including the `axios` [CommonJS module](https://nodejs.org/api/modules.html) published to npm. You import CommonJS modules using the `require` statement.
But you may encounter this error in workflows:
`Error Must use import to load ES Module`
This means that the package you’re trying to `require` uses a different format to export their code, called [ECMAScript modules](https://nodejs.org/api/esm.html#esm_modules_ecmascript_modules) (**ESM**, or **ES modules**, for short). With ES modules, you instead need to `import` the package:
```javascript
import got from "got";
```
Most packages publish both CommonJS and ESM versions, so **if you always use `import`, you’re less likely to have problems**. In general, refer to your package’s documentation for instructions on importing it correctly.
### `require` is not defined
This error means that you cannot use CommonJS and ESM imports in the same step. For example, if you run code like this:
```javascript
import fetch from "node-fetch";
const axios = require("axios");
```
your workflow will throw a `require is not defined` error. There are two solutions:
1. Try converting your CommonJS `require` statement into an ESM `import` statement. For example, convert this:
```javascript
const axios = require("axios");
```
to this:
```javascript
import axios from "axios";
```
2. If the `import` syntax fails to work, separate your imports into different steps, using only CommonJS requires in one step, and only ESM imports in another.
## Variable scope
Any variables you create within a step are scoped to that step. That is, they cannot be referenced in any other step.
Within a step, the [normal rules of JavaScript variable scope](https://developer.mozilla.org/en-US/docs/Glossary/Scope) apply.
**When you need to share data across steps, use [step exports](/docs/workflows/#step-exports).**
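The distinction can be sketched with two simulated steps: a variable declared in the first step is out of scope in the second, so data has to travel through the `steps` exports object (the step names here are illustrative):

```javascript
const stepA = {
  async run() {
    const local = "only visible inside stepA";
    // Returning makes the value available as steps.stepA.$return_value
    return { shared: "passed via step exports" };
  },
};

const stepB = {
  async run({ steps }) {
    // `local` from stepA cannot be referenced here; only exports are visible
    return steps.stepA.$return_value.shared;
  },
};
```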
## Making HTTP requests from your workflow
There are two ways to make HTTP requests in code steps:
* Use any HTTP client that works with Node.js. [See this example guide for how to use `axios` to make HTTP requests](/docs/workflows/building-workflows/code/nodejs/http-requests/).
* [Use `$.send.http()`](/docs/workflows/data-management/destinations/http/#using-sendhttp-in-workflows), a Pipedream-provided method for making asynchronous HTTP requests.
In general, if you just need to make an HTTP request but don’t care about the response, [use `$.send.http()`](/docs/workflows/data-management/destinations/http/#using-sendhttp-in-workflows). If you need to operate on the data in the HTTP response in the rest of your workflow, [use `axios`](/docs/workflows/building-workflows/code/nodejs/http-requests/).
## Returning HTTP responses
You can return HTTP responses from [HTTP-triggered workflows](/docs/workflows/building-workflows/triggers/#http) using the [`$.respond()` function](/docs/workflows/building-workflows/triggers/#http-responses).
## Invoke another workflow
This is an alpha feature and is subject to change without prior notice.
You can invoke another workflow in your workspace with `$.flow.trigger`:
```javascript
await $.flow.trigger(
workflowId, // your Pipedream workflow ID, e.g. p_abc123
payload, // any JSON-serializable data
)
```
[Find your workflow’s ID here.](/docs/troubleshooting/#where-do-i-find-my-workflows-id)
This invokes the workflow directly — you don’t need to configure a trigger, and the request does not leave the platform.
We also provide a [Trigger Workflow](https://pipedream.com/apps/helper-functions/actions/trigger-workflow) action in the [Helper Functions](https://pipedream.com/apps/helper-functions) app, so you don’t need to write the code yourself!
## Ending a workflow early
Sometimes you want to end your workflow early, or otherwise stop or cancel the execution of a workflow under certain conditions. For example:
* You may want to end your workflow early if you don’t receive all the fields you expect in the event data.
* You only want to run your workflow for 5% of all events sent from your source.
* You only want to run your workflow for users in the United States. If you receive a request from outside the U.S., you don’t want the rest of the code in your workflow to run.
* You may use the `user_id` contained in the event to look up information in an external API. If you can’t find data in the API tied to that user, you don’t want to proceed.
**In any code step, calling `return $.flow.exit()` will end the execution of the workflow immediately.** No remaining code in that step, and no code or destination steps below, will run for the current event.
It’s a good practice to use `return $.flow.exit()` to immediately exit the workflow. In contrast, `$.flow.exit()` on its own will end the workflow only after executing all remaining code in the step.
```javascript
export default defineComponent({
async run({ steps, $ }) {
return $.flow.exit();
console.log(
"This code will not run, since $.flow.exit() was called above it"
);
},
});
```
You can pass any string as an argument to `$.flow.exit()`:
```javascript
export default defineComponent({
async run({ steps, $ }) {
return $.flow.exit("End message");
},
});
```
Or exit the workflow early within a conditional:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Flip a coin, running $.flow.exit() for 50% of events
if (Math.random() > 0.5) {
return $.flow.exit();
}
console.log("This code will only run 50% of the time");
},
});
```
## Errors
[Errors](https://nodejs.org/dist/latest-v10.x/docs/api/errors.html#errors_errors) raised in a code step will stop the execution of code or destinations that follow.
### Configuration Error
Throwing a `ConfigurationError` in a Node.js step will display the error message in a dedicated area.
This is useful for providing feedback during validation of `props`. In the example below, a required Header value is missing from the Google Sheets action:
Or you can use it for validating the format of a given `email` prop:
```javascript
import { ConfigurationError } from "@pipedream/platform";
export default defineComponent({
props: {
email: { type: "string" },
},
async run({ steps, $ }) {
// if the email address doesn't include a @, it's not valid
if (!this.email.includes("@")) {
throw new ConfigurationError("Provide a valid email address");
}
},
});
```
## Using secrets in code
Workflow code is private. Still, we recommend you don’t include secrets — API keys, tokens, or other sensitive values — directly in code steps.
Pipedream supports [environment variables](/docs/workflows/environment-variables/) for keeping secrets separate from code. Once you create an environment variable in Pipedream, you can reference it in any workflow using `process.env.VARIABLE_NAME`. The values of environment variables are private.
See the [Environment Variables](/docs/workflows/environment-variables/) docs for more information.
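For example, here’s a small helper that reads a variable and fails fast when it’s missing. The helper and the variable name `SENDGRID_API_KEY` are illustrative, not part of Pipedream’s API:

```javascript
// Sketch: read a secret from the environment at runtime
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Environment variable ${name} is not set`);
  }
  return value;
}

// In a code step you'd typically reference the variable directly:
// const apiKey = process.env.SENDGRID_API_KEY;
```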
## Limitations of code steps
Code steps operate within the [general constraints on workflows](/docs/workflows/limits/#workflows). As long as you stay within those limits and abide by our [acceptable use policy](/docs/workflows/limits/#acceptable-use), you can add any number of code steps in a workflow to do virtually anything you’d be able to do in Node.js.
If you’re trying to run code that doesn’t work or you have questions about any limits on code steps, [please reach out](https://pipedream.com/support/).
## Editor settings
We use the [Monaco Editor](https://microsoft.github.io/monaco-editor/), which powers VS Code and other web-based editors.
We also let you customize the editor. For example, you can enable Vim mode, and change the default tab size for indented code. Visit your [Settings](https://pipedream.com/settings) to modify these settings.
## Keyboard Shortcuts
Because the editor is built on [Monaco](https://microsoft.github.io/monaco-editor/), which powers VS Code, many of the VS Code [keyboard shortcuts](https://code.visualstudio.com/docs/getstarted/keybindings) should work in the context of the editor.
For example, you can use shortcuts to search for text, format code, and more.
## New to JavaScript?
We understand many of you might be new to JavaScript, and provide resources for you to learn the language below.
When you’re searching for how to do something in JavaScript, some of the code you try might not work in Pipedream. This could be because the code expects to run in a browser, not a Node.js environment. The same goes for [npm packages](/docs/workflows/building-workflows/code/nodejs/#using-npm-packages).
### I’m new to programming
Many of the most basic JavaScript tutorials are geared towards writing code for a web browser to run. This is great for learning — a webpage is one of the coolest things you can build with code. We recommend starting with these general JavaScript tutorials and trying the code you learn on Pipedream:
* [JavaScript For Cats](http://jsforcats.com/)
* [Mozilla - JavaScript First Steps](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/First_steps)
* [StackOverflow](https://stackoverflow.com/) is a programming Q\&A site that’s typically the first Google result when you search for something specific. It’s a great place to find answers to common questions.
### I know how to code, but don’t know JavaScript
* [A re-introduction to JavaScript (JS tutorial)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript)
* [MDN language overview](https://developer.mozilla.org/en-US/docs/Web/JavaScript)
* [Eloquent Javascript](https://eloquentjavascript.net/)
* [Node School](https://nodeschool.io/)
* [You Don’t Know JS](https://github.com/getify/You-Dont-Know-JS)
# Using AI To Generate Code
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/ai-code-generation
Tell Pipedream the code you want, and we’ll generate it for you.
Pipedream’s [built-in actions](/docs/workflows/building-workflows/actions/) are great for running common API operations without having to write code, but sometimes you need code-level control in a workflow. You can [write this code yourself](/docs/workflows/building-workflows/code/), or you can let Pipedream generate it for you with AI.
This feature is new, and [we welcome feedback](https://pipedream.com/support). Please let us know what we can improve or add to make this more useful for you.
## Getting Started
Access the feature either from within a Node.js code cell or from any app in the step selector.
A window should pop up and ask for your prompt. Write exactly what you want to do within that step. **Be verbose** and see our tips for [getting the best results](/docs/workflows/building-workflows/code/nodejs/ai-code-generation/#getting-the-best-results).
* **Bad**: “Send a Slack message”
* **Good**: “Send a Slack message in the following format: `Hello, ${name}`. Let me select the channel from a list of available options.”
Once you’re done, hit **Enter** or click **Generate**.
Code will immediately start streaming to the editor. You can modify the prompt and re-generate the code if it doesn’t look right, or click **Use this code** to add it to your code cell and test it.
Pipedream will automatically refresh the step to show connected accounts and any input fields (props) above the step.
Edit the code however you’d like. Once you’re done, test the code. You’ll see the option to provide a :+1: or :-1: on the code, which helps us learn what’s working and what’s not.
## Editing existing code
You can also edit existing code with AI. Click the **Edit with AI** button at the top-right of any Node.js code step. You’ll see the code gen window appear with the original code from your step. Enter a prompt to suggest an edit, and we’ll give you the modified code.
## Getting the best results
**Generating code works best with clear, precise, and detailed instructions of what you want to do in your step.** The code gen service understands the [Pipedream component API](/docs/components/contributing/api/) and references the API docs of [integrated apps](https://pipedream.com/apps). For example, you can tell it to include specific [props](/docs/components/contributing/api/#props) (input) or [async options](/docs/components/contributing/api/#async-options-example), and reference specific API endpoints you want to use for the selected app.
### Examples
#### Slack
Send a message to the `#general` channel that says, “This message was sent with AI-generated code!” Format it as a Slack block, with a header named, “Hello, world!”
**Output**:
```javascript
import { axios } from "@pipedream/platform";
export default defineComponent({
props: {
slack: {
type: "app",
app: "slack",
},
},
async run({ steps, $ }) {
return await axios($, {
method: "POST",
url: `https://slack.com/api/chat.postMessage`,
headers: {
Authorization: `Bearer ${this.slack.$auth.oauth_access_token}`,
},
data: {
channel: "#general",
blocks: [
{
type: "header",
text: {
type: "plain_text",
text: "Hello, world!",
},
},
{
type: "section",
text: {
type: "mrkdwn",
text: "This message was sent with AI-generated code!",
},
},
],
},
});
},
});
```
#### Stripe
Get the subscription information about a specific customer based on their email address
**Output:**
```javascript
import { axios } from "@pipedream/platform";
export default defineComponent({
props: {
stripe: {
type: "app",
app: "stripe",
},
email: {
type: "string",
label: "Email Address",
description:
"The email address of the customer to get subscription information for",
},
},
async run({ steps, $ }) {
const customerResponse = await axios($, {
method: "GET",
url: `https://api.stripe.com/v1/customers`,
headers: {
Authorization: `Bearer ${this.stripe.$auth.api_key}`,
},
params: {
email: this.email,
},
});
if (customerResponse.data.length === 0) {
throw new Error("Customer not found");
}
const customerId = customerResponse.data[0].id;
return await axios($, {
method: "GET",
url: `https://api.stripe.com/v1/subscriptions`,
headers: {
Authorization: `Bearer ${this.stripe.$auth.api_key}`,
},
params: {
customer: customerId,
},
});
},
});
```
## Current limitations, and what we’re working on next
* Currently supports Pipedream actions, not triggers
* Only supports Node.js output (Python coming soon)
* Supports single steps, not entire workflows (also coming soon)
# Running Asynchronous Code In Node.Js
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/async
If you’re not familiar with asynchronous programming concepts like [callback functions](https://developer.mozilla.org/en-US/docs/Glossary/Callback_function) or [Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises), see [this overview](https://eloquentjavascript.net/11_async.html).
## The problem
**Any asynchronous code within a Node.js code step must complete before the next step runs**. This ensures future steps have access to its data. If Pipedream detects that code is still running by the time the step completes, you’ll see the following warning below the code step:
> **This step was still trying to run code when the step ended. Make sure you await all Promises, or promisify callback functions.**
As the warning notes, this often arises from one of two issues:
* You forgot to `await` a Promise. [All Promises must be awaited](/docs/workflows/building-workflows/code/nodejs/async/#await-all-promises) within a step so they resolve before the step finishes.
* You tried to run a callback function. Since callback functions run asynchronously, they typically will not finish before the step ends. [You can wrap your function in a Promise](/docs/workflows/building-workflows/code/nodejs/async/#wrap-callback-functions-in-a-promise) to make sure it resolves before the step finishes.
## Solutions
### `await` all Promises
Most Node.js packages that run async code return Promises as the result of method calls. For example, [`axios`](/docs/workflows/building-workflows/code/nodejs/http-requests/#basic-axios-usage-notes) is an HTTP client. If you make an HTTP request like this in a Pipedream code step:
```javascript
const resp = axios({
method: "GET",
url: `https://swapi.dev/api/films/`,
});
```
It won’t send the HTTP request, since **`axios` returns a Promise**. Instead, add an `await` in front of the call to `axios`:
```javascript
const resp = await axios({
method: "GET",
url: `https://swapi.dev/api/films/`,
});
```
In short, always do this:
```javascript
const res = await runAsyncCode();
```
instead of this:
```javascript
// This code may not finish by the time the workflow finishes
runAsyncCode();
```
### Wrap callback functions in a Promise
Before support for Promises was widespread, [callback functions](https://developer.mozilla.org/en-US/docs/Glossary/Callback_function) were a popular way to run some code asynchronously, after some operation was completed. For example, [PDFKit](https://pdfkit.org/) lets you pass a callback function that runs when certain events fire, like when the PDF has been finalized:
```javascript
import PDFDocument from "pdfkit";
import fs from "fs";
let doc = new PDFDocument({ size: "A4", margin: 50 });
this.fileName = `/tmp/test.pdf`;
let file = fs.createWriteStream(this.fileName);
doc.pipe(file);
doc.text("Hello world!");
// Finalize PDF file
doc.end();
file.on("finish", () => {
console.log(fs.statSync(this.fileName));
});
```
This is the callback function:
```javascript
() => {
console.log(fs.statSync(this.fileName));
};
```
and **it will not run**. By running a callback function in this way, we’re saying that we want the function to be run asynchronously. But on Pipedream, this code must finish by the time the step ends. **You can wrap this callback function in a Promise to make sure that happens**. Instead of running:
```javascript
file.on("finish", () => {
console.log(fs.statSync(this.fileName));
});
```
run:
```javascript
// Wait for PDF to finalize
await new Promise((resolve) => file.on("finish", resolve));
// Once done, get stats
const stats = fs.statSync(this.fileName);
```
This is called “[promisification](https://javascript.info/promisify)”.
You can often promisify a function in one line using Node.js’ [`util.promisify` function](https://2ality.com/2017/05/util-promisify.html).
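For example, `fs.readFile`’s callback form can be promisified in one line (a sketch using only Node’s standard library):

```javascript
import { promisify } from "node:util";
import fs from "node:fs";

// fs.readFile(path, encoding, callback) becomes a Promise-returning function
const readFileAsync = promisify(fs.readFile);

// The result can now be awaited like any other Promise:
// const contents = await readFileAsync("/tmp/example.txt", "utf8");
```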
### Other solutions
If a specific library doesn’t support Promises, you can often find an equivalent library that does. For example, many older HTTP clients like `request` didn’t support Promises natively, but the community [published packages that wrapped it with a Promise-based interface](https://www.npmjs.com/package/request#promises--asyncawait) (note: `request` has been deprecated, this is just an example).
## False positives
This warning can also be a false positive. If you’re successfully awaiting all Promises, Pipedream could be throwing the warning in error. If you observe this, please [file a bug](https://github.com/PipedreamHQ/pipedream/issues/new?assignees=\&labels=bug\&template=bug_report.md\&title=%5BBUG%5D+).
Some packages that make HTTP requests or read data from disk (for example) can fail to resolve Promises at the right time, or at all. In those cases, Pipedream correctly detects that code is still running, but there’s no real issue: the library ran successfully and just failed to resolve its Promise. You can safely ignore the warning if all relevant operations are truly succeeding.
# Connecting Apps In Node.Js
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/auth
When you use [prebuilt actions](/docs/components/contributing/#actions) tied to apps, you don’t need to write the code to authorize API requests. Just [connect your account](/docs/apps/connected-accounts/#connecting-accounts) for that app and run your workflow.
But sometimes you’ll need to [write your own code](/docs/workflows/building-workflows/code/nodejs/). You can also connect apps to custom code steps, using the auth information to authorize requests to that app.
For example, you may want to send a Slack message from a step. We use Slack’s OAuth integration to authorize sending messages from your workflows.
To wire up a Slack account to a workflow, define it as a `prop` to the workflow.
```javascript
import { WebClient } from '@slack/web-api'
export default defineComponent({
props: {
// This creates a connection called "slack" that connects a Slack account.
slack: {
type: 'app',
app: 'slack'
}
},
async run({ steps, $ }) {
const web = new WebClient(this.slack.$auth.oauth_access_token)
return await web.chat.postMessage({
text: "Hello, world!",
channel: "#general",
})
}
});
```
Then click the **Refresh fields** button in the editor to render the Slack field based on the `slack` prop:
Now the step in the workflow builder will allow you to connect your Slack account:
## Accessing connected account data with `this.appName.$auth`
In our Slack example above, we created a Slack `WebClient` using the Slack OAuth access token:
```javascript
const web = new WebClient(this.slack.$auth.oauth_access_token);
```
Where did `this.slack` come from? Good question. It was generated by the definition we made in `props`:
```javascript highlight={4-9}
export default defineComponent({
props: {
    // the prop's key defines its name; in this case, "slack"
slack: {
// define that this prop is an app
type: 'app',
// define that this app connects to Slack
app: 'slack'
}
}
// ... rest of the Node.js step
```
The Slack access token is generated by Pipedream, and is available to this step in the `this.slack.$auth` object:
```javascript highlight={10}
export default defineComponent({
  props: {
    slack: {
      type: 'app',
      app: 'slack'
    }
  },
  async run({ steps, $ }) {
    // Authentication details for all of your apps are accessible under `this`:
    console.log(this.slack.$auth);
  },
});
```
`this.appName.$auth` contains named properties for each account you connect to the associated step. Here, we connected Slack, so `this.slack.$auth` contains the Slack auth info (the `oauth_access_token`).
The names of the properties for each connected account will differ with the account. Pipedream typically exposes OAuth access tokens as `oauth_access_token`, and API keys under the property `api_key`. But if there’s a service-specific name for the tokens (for example, if the service calls it `server_token`), we prefer that name, instead.
To list the `this.[app name].$auth` properties available to you for a given app, run `Object.keys` on the app:
```javascript
console.log(Object.keys(this.slack.$auth)) // Replace this.slack with your app's name
```
and run your workflow. You’ll see the property names in the logs below your step.
## Writing custom steps to use `this.appName.$auth`
You can write code that utilizes connected accounts in a few different ways:
### Using the code templates tied to apps
When you write custom code that connects to an app, you can start with a code snippet Pipedream provides for each app. This is called the **test request**.
When you search for an app in a step:
1. Click the **+** button below any step.
2. Search for the app you’re looking for and select it from the list.
3. Select the option to **Use any \[app name] API**.
This code operates as a template you can extend, and comes preconfigured with the connection to the target app and the code for authorizing requests to the API. You can modify this code however you’d like.
### Manually connecting apps to steps
See the Connected Accounts docs for [connecting an account to a code step](/docs/apps/connected-accounts/#from-a-code-step).
## Custom auth tokens / secrets
When you want to connect to a third-party service that isn’t supported by Pipedream, you can store those secrets in [Environment Variables](/docs/workflows/environment-variables/).
## Learn more about `props`
Not only can `props` be used to connect apps to workflow steps, they can also be used to [collect input from users](/docs/workflows/building-workflows/code/nodejs/#passing-props-to-code-steps) and [save data between workflow runs](/docs/workflows/building-workflows/code/nodejs/using-data-stores/).
# Browser Automation With Node.Js
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/browser-automation
You can leverage headless browser automation within Pipedream workflows for web scraping, generating screenshots, or programmatically interacting with websites, even those that make heavy use of frontend JavaScript.
Pipedream maintains a [specialized package](https://www.npmjs.com/package/@pipedream/browsers) that includes Puppeteer and Playwright, bundled with a Chromium build that’s compatible with Pipedream’s Node.js execution environment.
All that’s required is importing the [`@pipedream/browsers`](https://www.npmjs.com/package/@pipedream/browsers) package into your Node.js code step and launching a browser. Pipedream will start Chromium and launch a Puppeteer or Playwright Browser instance for you.
## Usage
The `@pipedream/browsers` package exports two modules: `puppeteer` & `playwright`. Both modules share the same interface:
* `browser(opts?)` - method to instantiate a new browser (returns a browser instance)
* `launch(opts?)` - an alias to `browser()`
* `newPage()` - creates a new page instance and returns both the page & browser
### Puppeteer
First, import the `puppeteer` module from `@pipedream/browsers` and use the `browser()` or `launch()` method to instantiate a browser.
Then, using this browser, you can open new [Pages](https://pptr.dev/api/puppeteer.page), each with its own controls for opening URLs:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({steps, $}) {
const browser = await puppeteer.browser();
// Interact with the web page programmatically
// See Puppeteer's Page documentation for available methods:
// https://pptr.dev/api/puppeteer.page
const page = await browser.newPage();
await page.goto('https://pipedream.com/');
const title = await page.title();
const content = await page.content();
$.export('title', title);
$.export('content', content);
// The browser needs to be closed, otherwise the step will hang
await browser.close();
},
})
```
#### Screenshot a webpage
Puppeteer can take a full screenshot of a webpage rendered with Chromium. For full options [see the Puppeteer Screenshot method documentation.](https://pptr.dev/api/puppeteer.page.screenshot)
Save a screenshot within the local `/tmp` directory:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
await page.screenshot({ path: '/tmp/screenshot.png' });
await browser.close();
},
});
```
Save the screenshot as a base64-encoded string:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com/');
const screenshot = await page.screenshot({ encoding: 'base64' });
await browser.close();
return screenshot;
},
});
```
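If a later step needs the binary image back, you can decode the exported base64 string with Node’s built-in `Buffer`. A minimal sketch; the placeholder bytes below stand in for a real screenshot export:

```javascript
import fs from "fs";

// Stand-in for the base64 string exported by the previous step
const screenshotBase64 = Buffer.from("placeholder image bytes").toString("base64");

// Decode back to binary and write it to the local /tmp directory
fs.writeFileSync("/tmp/decoded-screenshot.png", Buffer.from(screenshotBase64, "base64"));
```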
#### Generate a PDF of a webpage
Puppeteer can render a PDF of a webpage. For full options [see the Puppeteer PDF method documentation.](https://pptr.dev/api/puppeteer.page.pdf)
Save the PDF locally to `/tmp`:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
await page.pdf({ path: '/tmp/file.pdf' });
await browser.close();
},
});
```
Save the PDF as a base64-encoded string:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
const pdfBuffer = await page.pdf();
const pdfBase64 = pdfBuffer.toString('base64');
await browser.close();
return pdfBase64;
},
});
```
#### Scrape content from a webpage
Puppeteer can scrape individual elements or return all content of a webpage.
Extract individual elements with a CSS selector:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
const h1Content = await page.$eval('h1', el => el.textContent);
await browser.close();
return h1Content;
},
});
```
Extract all HTML content from a webpage:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
const content = await page.content();
await browser.close();
return content;
},
});
```
#### Submit a form
Puppeteer can also programmatically click and type on a webpage.
`Page.type` accepts a CSS selector and a string to type into the field.
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ steps, $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://example.com/contact');
await page.type('input[name=email]', 'pierce@pipedream.com');
await page.type('input[name=name]', 'Dylan Pierce');
await page.type('textarea[name=message]', "Hello, from a Pipedream workflow.");
await page.click('input[type=submit]');
await browser.close();
},
});
```
`Page.click` will click on the element to focus on it, then `Page.keyboard.type` emulates keyboard keystrokes.
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({ steps, $ }) {
const browser = await puppeteer.browser();
const page = await browser.newPage();
await page.goto('https://example.com/contact');
await page.click('input[name=email]')
await page.keyboard.type('pierce@pipedream.com');
await page.click('input[name=name]')
await page.keyboard.type('Dylan Pierce');
await page.click('textarea[name=message]')
await page.keyboard.type("Hello, from a Pipedream workflow.");
await page.click('input[type=submit]');
await browser.close();
},
});
```
### Playwright
First, import the `playwright` module from `@pipedream/browsers` and use the `browser()` or `launch()` method to instantiate a browser.
Then, using this browser, you can open new [Pages](https://playwright.dev/docs/api/class-page), each with its own controls for opening URLs, clicking elements, generating screenshots, typing, and more:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({steps, $}) {
const browser = await playwright.browser();
// Interact with the web page programmatically
// See Playwright's Page documentation for available methods:
// https://playwright.dev/docs/api/class-page
const page = await browser.newPage();
await page.goto('https://pipedream.com/');
const title = await page.title();
const content = await page.content();
$.export('title', title);
$.export('content', content);
// The browser context and browser need to be closed, otherwise the step will hang
await page.context().close();
await browser.close();
},
})
```
**Don’t forget to close the browser context**
Playwright differs slightly from Puppeteer: you must close the page’s browser context before closing the browser itself.
```javascript
// Close the context & browser before returning results
// Otherwise the step will hang
await page.context().close();
await browser.close();
```
#### Screenshot a webpage
Playwright can take a full screenshot of a webpage rendered with Chromium. For full options [see the Playwright Screenshot method documentation.](https://playwright.dev/docs/api/class-page#page-screenshot)
Save a screenshot within the local `/tmp` directory:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
await page.screenshot({ path: '/tmp/screenshot.png' });
await page.context().close();
await browser.close();
},
});
```
Save the screenshot as a base64-encoded string:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com/');
const screenshotBuffer = await page.screenshot();
const screenshotBase64 = screenshotBuffer.toString('base64');
await page.context().close();
await browser.close();
return screenshotBase64;
},
});
```
#### Generate a PDF of a webpage
Playwright can render a PDF of a webpage. For full options [see the Playwright PDF method documentation.](https://playwright.dev/docs/api/class-page#page-pdf)
Save a PDF locally to `/tmp`:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
await page.pdf({ path: '/tmp/file.pdf' });
await page.context().close();
await browser.close();
},
});
```
Save the PDF as a base64-encoded string:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.launch();
const page = await browser.newPage();
await page.goto('https://pipedream.com/');
const pdfBuffer = await page.pdf();
const pdfBase64 = pdfBuffer.toString('base64');
await page.context().close();
await browser.close();
return pdfBase64;
},
});
```
#### Scrape content from a webpage
Playwright can scrape individual elements or return all content of a webpage.
Extract individual HTML elements using a CSS selector:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
const h1Content = await page.$eval('h1', el => el.textContent);
await page.context().close();
await browser.close();
return h1Content;
},
});
```
Extract all HTML content from a webpage with `Page.content`:
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ $ }) {
const browser = await playwright.browser();
const page = await browser.newPage();
await page.goto('https://pipedream.com');
const content = await page.content();
await page.context().close();
await browser.close();
return content;
},
});
```
#### Submit a form
Playwright can also programmatically click and type on a webpage.
`Page.type` accepts a CSS selector and a string to type into the selected element.
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ steps, $ }) {
const browser = await playwright.browser();
const page = await browser.newPage();
await page.goto('https://example.com/contact');
await page.type('input[name=email]', 'pierce@pipedream.com');
await page.type('input[name=name]', 'Dylan Pierce');
await page.type('textarea[name=message]', "Hello, from a Pipedream workflow.");
await page.click('input[type=submit]');
await page.context().close();
await browser.close();
},
});
```
`Page.click` will click on the element to focus on it, then `Page.keyboard.type` emulates keyboard keystrokes.
```javascript
import { playwright } from '@pipedream/browsers';
export default defineComponent({
async run({ steps, $ }) {
const browser = await playwright.browser();
const page = await browser.newPage();
await page.goto('https://example.com/contact');
await page.click('input[name=email]')
await page.keyboard.type('pierce@pipedream.com');
await page.click('input[name=name]')
await page.keyboard.type('Dylan Pierce');
await page.click('textarea[name=message]')
await page.keyboard.type("Hello, from a Pipedream workflow.");
await page.click('input[type=submit]');
await page.context().close();
await browser.close();
},
});
```
## FAQ
### Can I use this package in sources or actions?
Yes, the same `@pipedream/browsers` package can be used in [actions](/docs/components/contributing/actions-quickstart/) as well as [sources](/docs/components/contributing/sources-quickstart/).
The steps are the same as usage in Node.js code. Open a browser, create a page, and close the browser at the end of the code step.
**Memory limits**
At this time it’s not possible to configure the memory allocated to a source. You may experience a higher rate of out-of-memory errors in sources that use Puppeteer or Playwright, due to Chromium’s high memory usage.
### Workflow exited before step finished execution
Remember to close the browser instance *before* the step finishes. Otherwise, the browser will keep the step “open” and not transfer control to the next step.
### Out of memory errors or slow starts
For best results, we recommend increasing the amount of memory available to your workflow to 2 gigabytes. You can adjust the available memory in the [workflow settings](/docs/workflows/building-workflows/settings/#memory).
### Which browser are these packages using?
The `@pipedream/browsers` package includes a specific version of Chromium that is compatible with Pipedream Node.js execution environments that run your code.
For details on the specific versions of Chromium, Puppeteer, and Playwright bundled in this package, visit the package’s [README](https://github.com/PipedreamHQ/pipedream/tree/master/packages/browsers).
### How to customize `puppeteer.launch()`?
To customize the browser instance with `puppeteer.launch()` arguments, pass them directly to `puppeteer.browser()`.
For example, you can alter the `protocolTimeout` length just by passing it as an argument:
```javascript
import { puppeteer } from '@pipedream/browsers';
export default defineComponent({
async run({steps, $}) {
// passing a `protocolTimeout` argument to increase the timeout length for a puppeteer instance
const browser = await puppeteer.browser({ protocolTimeout: 480000 });
// rest of code
},
})
```
Please see the [`@pipedream/browsers` source code](https://github.com/PipedreamHQ/pipedream/blob/17888e631857259a6535f9bd13c23a1e7ff95381/packages/browsers/index.mjs#L14) for the default arguments that Pipedream provides.
### How do I use `puppeteer.connect()`?
To use `puppeteer.connect()` to connect to a remote browser instance, you can use the [`puppeteer-core`](https://github.com/puppeteer/puppeteer/tree/main?tab=readme-ov-file#puppeteer-core) package:
```javascript
import puppeteer from "puppeteer-core";
```
`puppeteer-core` does not download Chrome when installed, which decreases the size of your deployment and can improve cold start times.
To connect to a remote browser instance using Playwright, you can use the [`playwright-core`](https://www.npmjs.com/package/playwright-core) package, which is the no-browser Playwright package:
```javascript
import playwright from "playwright-core";
```
# Delaying A Workflow
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/delay
export const DELAY_MIN_MAX_TIME = 'You can pause your workflow for as little as one millisecond, or as long as one year';
export const MAX_WORKFLOW_EXECUTION_LIMIT = '750';
Use `$.flow.delay` to [delay a step in a workflow](/docs/workflows/building-workflows/control-flow/delay/).
These docs show you how to write Node.js code to handle delays. If you don’t need to write code, see [our built-in delay actions](/docs/workflows/building-workflows/control-flow/delay/#delay-actions).
## Using `$.flow.delay`
`$.flow.delay` takes one argument: the number of **milliseconds** you’d like to pause your workflow until the next step executes. {DELAY_MIN_MAX_TIME}.
Note that [delays happen at the end of the step where they’re called](/docs/workflows/building-workflows/code/nodejs/delay/#when-delays-happen).
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Delay a workflow for 60 seconds (60,000 ms)
$.flow.delay(60 * 1000)
// Delay a workflow for 15 minutes
$.flow.delay(15 * 60 * 1000)
// Delay a workflow based on the value of incoming event data,
// or default to 60 seconds if that variable is undefined
$.flow.delay(steps.trigger.event?.body?.delayMs ?? 60 * 1000)
// Delay a workflow a random amount of time
$.flow.delay(Math.floor(Math.random() * 1000))
}
});
```
**Paused workflow state**
When `$.flow.delay` is executed in a Node.js step, the workflow itself will enter a **Paused** state.
While the workflow is paused, it will not incur any credits towards compute time. You can also [view all paused workflows in the Event History](/docs/workflows/event-history/#filtering-by-status).
### Credit usage
The length of time a workflow is delayed from `$.flow.delay` does *not* impact your credit usage. For example, delaying a 256 megabyte workflow for five minutes will **not** incur ten credits.
However, using `$.flow.delay` in a workflow will incur two credits.
One credit is used to initially start the workflow, then the second credit is used when the workflow resumes after its pause period has ended.
**Exact credit usage depends on duration and memory configuration**
If your workflow’s [execution timeout limit](/docs/workflows/building-workflows/settings/#execution-timeout-limit) is set to longer than the [default limit](/docs/workflows/limits/#time-per-execution), it may incur more than two [credits](/docs/pricing/#credits-and-billing) when using `$.flow.delay`.
## `cancel_url` and `resume_url`
Both the built-in **Delay** actions and `$.flow.delay` return a `cancel_url` and `resume_url` that let you cancel or resume paused executions.
These URLs are specific to a single execution of your workflow. While the workflow is paused, you can load these in your browser or send an HTTP request to either:
* Hitting the `cancel_url` will immediately cancel that execution
* Hitting the `resume_url` will immediately resume that execution early
[Since Pipedream pauses your workflow at the *end* of the step where you call `$.flow.delay`](/docs/workflows/building-workflows/code/nodejs/delay/#when-delays-happen), you can send these URLs to third-party systems, via email, or anywhere else you’d like to control the execution of your workflow.
```javascript
import axios from 'axios'
export default defineComponent({
async run({ steps, $ }) {
const { cancel_url, resume_url } = $.flow.delay(15 * 60 * 1000)
// Send the URLs to a system you own
await axios({
method: "POST",
url: `https://example.com`,
data: { cancel_url, resume_url },
});
// Email yourself the URLs. Click on the links to cancel / resume
$.send.email({
subject: `Workflow execution ${steps.trigger.context.id}`,
text: `Cancel your workflow here: ${cancel_url} . Resume early here: ${resume_url}`,
});
}
});
// Delay happens at the end of this step
```
## When delays happen
**Pipedream pauses your workflow at the *end* of the step where you call `$.flow.delay`**. This lets you [send the `cancel_url` and `resume_url` to third-party systems](/docs/workflows/building-workflows/code/nodejs/delay/#cancel_url-and-resume_url).
```javascript
export default defineComponent({
async run({ steps, $ }) {
const { cancel_url, resume_url } = $.flow.delay(15 * 60 * 1000)
// ... run any code you want here
}
});
// Delay happens at the end of this step
```
## Delays and HTTP responses
You cannot run `$.respond` after running `$.flow.delay`. Pipedream ends the original execution of the workflow when `$.flow.delay` is called and issues the following response to the client to indicate this state:
> \$.respond() not called for this invocation
If you need to set a delay on an HTTP request triggered workflow, consider using [`setTimeout`](/docs/workflows/building-workflows/code/nodejs/delay/#settimeout) instead.
## `setTimeout`
Alternatively, you can use `setTimeout` instead of `$.flow.delay` to delay individual workflow steps.
However, there are some drawbacks to using `setTimeout`. For example, time spent in `setTimeout` counts toward your workflow’s compute time:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// delay this step for 30 seconds
const delay = 30000;
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve('timer ended')
}, delay)
})
}
});
```
The Node.js step above holds the workflow’s execution for 30 seconds; however, those 30 seconds also *count toward* your credit usage. Also consider that workflows have a hard limit of {MAX_WORKFLOW_EXECUTION_LIMIT} seconds per execution.
# Make HTTP Requests With Node.Js
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/http-requests
HTTP requests are fundamental to working with APIs or other web services. You can make HTTP requests to retrieve data from APIs, fetch HTML from websites, or do pretty much anything your web browser can do.
**Below, we’ll review how to make HTTP requests using Node.js code on Pipedream.**
We’ll use the [`axios`](https://github.com/axios/axios) and [`got`](https://github.com/sindresorhus/got) HTTP clients in the examples below, but [you can use any npm package you’d like](/docs/workflows/building-workflows/code/nodejs/#using-npm-packages) on Pipedream, so feel free to experiment with other clients, too.
If you’re developing Pipedream components, you may find the [`@pipedream/platform` version of `axios`](/docs/workflows/building-workflows/http/#platform-axios) helpful for displaying error data clearly in the Pipedream UI.
If you’re new to HTTP, see our [glossary of HTTP terms](https://requestbin.com/blog/working-with-webhooks/#webhooks-glossary-common-terms) for a helpful introduction.
## Basic `axios` usage notes
To use `axios` on Pipedream, you’ll just need to import the `axios` npm package:
```javascript
import axios from "axios";
```
You make HTTP requests by passing a [JavaScript object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Working_with_Objects) to `axios` that defines the parameters of the request. For example, you’ll typically want to define the HTTP method and the URL you’re sending data to:
```javascript
{
method: "GET",
url: `https://swapi.dev/api/films/`
}
```
`axios` returns a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises), which is just a fancy way of saying that it makes the HTTP request in the background (asynchronously) while the rest of your code runs. On Pipedream, [all asynchronous code must be run synchronously](/docs/workflows/building-workflows/code/nodejs/async/), which means you’ll need to wait for the HTTP request to finish before moving on to the next step. You do this by adding an `await` in front of the call to `axios`.
**Putting all of this together, here’s how to make a basic HTTP request on Pipedream:**
```javascript
const resp = await axios({
method: "GET",
url: `https://swapi.dev/api/films/`,
});
```
The response object `resp` contains a lot of information about the response: its data, headers, and more. Typically, you just care about the data, which you can access in the `data` property of the response:
```javascript
const resp = await axios({
method: "GET",
url: `https://swapi.dev/api/films/`,
});
// HTTP response data is in the data property
const data = resp.data;
```
Alternatively, you can access the data using [object destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment#Object_destructuring), which is equivalent to the above and preferred in modern JavaScript:
```javascript
const { data } = resp;
```
## Send a `GET` request to fetch data
Make a request to retrieve Star Wars films from the Star Wars API:
```javascript axios
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
// Make an HTTP GET request using axios
const res = await axios({
method: "GET",
url: `https://swapi.dev/api/films/`,
});
// Retrieve just the data from the response
const { data } = res;
}
});
```
```javascript http-request prop
export default defineComponent({
props: {
httpRequest: {
type: "http_request",
label: "Star Wars API request",
default: {
method: "GET",
url: "https://swapi.dev/api/films/"
}
},
},
async run({ steps, $ }) {
// Make an HTTP GET request using the http-request
const res = await this.httpRequest.execute();
// Retrieve just the data from the response
const { data } = res;
},
})
```
**Produces**
## Send a `POST` request to submit data
POST sample JSON to [JSONPlaceholder](https://jsonplaceholder.typicode.com/), a free mock API service:
```javascript axios
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
// Make an HTTP POST request using axios
const resp = await axios({
method: "POST",
url: `https://jsonplaceholder.typicode.com/posts`,
data: {
name: "Luke",
},
});
// Retrieve just the data from the response
const { data } = resp;
}
});
```
```javascript http-request prop
export default defineComponent({
props: {
httpRequest: {
type: "http_request",
label: "JSON Placeholder API request",
default: {
method: "POST",
url: "https://jsonplaceholder.typicode.com/posts",
body: {
contentType: "application/json",
fields: [{ name: "Luke" }]
}
}
},
},
async run({ steps, $ }) {
// Make an HTTP POST request using the http-request prop
const res = await this.httpRequest.execute();
// Retrieve just the data from the response
const { data } = res;
},
})
```
When you make a `POST` request, you pass `POST` as the `method`, and include the data you’d like to send in the `data` object.
## Pass query string parameters to a `GET` request
Retrieve fake comment data on a specific post using [JSONPlaceholder](https://jsonplaceholder.typicode.com/), a free mock API service. Here, you fetch data from the `/comments` resource, retrieving data for a specific post by query string parameter: `/comments?postId=1`.
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
// Make an HTTP GET request using axios
const resp = await axios({
method: "GET",
url: `https://jsonplaceholder.typicode.com/comments`,
params: {
postId: 1,
},
});
// Retrieve just the data from the response
const { data } = resp;
}
});
```
```javascript
export default defineComponent({
props: {
httpRequest: {
type: "http_request",
label: "JSON Placeholder API request",
default: {
method: "GET",
url: "https://jsonplaceholder.typicode.com/comments",
params: {
fields: [{ postId: 1 }]
}
}
},
},
async run({ steps, $ }) {
// Make an HTTP GET request using the http-request
const res = await this.httpRequest.execute();
// Retrieve just the data from the response
const { data } = res;
},
})
```
You should pass query string parameters using the `params` object, as shown above. When you do, `axios` automatically [URL-encodes](https://www.w3schools.com/tags/ref_urlencode.ASP) the parameters for you, which you’d otherwise have to do manually.
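To see what that encoding looks like, Node’s built-in `URLSearchParams` applies comparable query-string encoding (the parameter values here are arbitrary examples):

```javascript
// Spaces, ampersands, and other special characters are escaped safely
const params = new URLSearchParams({ q: "star wars", author: "Lucas & co" });
const query = params.toString(); // "q=star+wars&author=Lucas+%26+co"
const url = `https://jsonplaceholder.typicode.com/comments?${query}`;
```

Note that `axios` uses its own serializer, so the exact escaping of a few characters can differ slightly, but the principle is the same: never concatenate raw values into a query string by hand.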
## Send a request with HTTP headers
You pass HTTP headers in the `headers` object of the `axios` request:
```js
import axios from "axios";
// Make an HTTP POST request using axios
const resp = await axios({
method: "POST",
url: `https://jsonplaceholder.typicode.com/posts`,
headers: {
"Content-Type": "application/json",
},
data: {
name: "Luke",
},
});
```
## Send a request with a secret or API key
Most APIs require you authenticate HTTP requests with an API key or other token. **Please review the docs for your service to understand how they accept this data.**
Here’s an example showing an API key passed in an HTTP header:
```js
import axios from "axios";
// Make an HTTP POST request using axios
const resp = await axios({
method: "POST",
url: `https://jsonplaceholder.typicode.com/posts`,
headers: {
"Content-Type": "application/json",
"X-API-Key": "123", // API KEY
},
data: {
name: "Luke",
},
});
```
[Copy this workflow to run this code on Pipedream](https://pipedream.com/@dylburger/send-an-http-request-with-headers-p_q6ClzO/edit).
## Send multiple HTTP requests in sequence
There are many ways to make multiple HTTP requests. This code shows you a simple example that sends the numbers `1`, `2`, and `3` in the body of an HTTP POST request:
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
// We'll store each response and return them in this array
const responses = [];
for (const num of [1, 2, 3]) {
const resp = await axios({
method: "POST",
url: "https://example.com",
data: {
num, // Will send the current value of num in the loop
},
});
responses.push(resp.data);
}
return responses;
},
});
```
This sends each HTTP request *in sequence*, one after another, and returns an array of response data returned from the URL to which you send the POST request. If you need to make requests *in parallel*, [see these docs](/docs/workflows/building-workflows/code/nodejs/http-requests/#send-multiple-http-requests-in-parallel).
[Copy this workflow](https://pipedream.com/@dylburger/iterate-over-a-pipedream-step-export-sending-multiple-http-requests-p_ljCAPN/edit) and fill in your destination URL to see how this works. **This workflow iterates over the value of a Pipedream [step export](/docs/workflows/#step-exports)** - data returned from a previous step. Since you often want to iterate over data returned from a Pipedream action or other code step, this is a common use case.
## Send multiple HTTP requests in parallel
Sometimes you’ll want to make multiple HTTP requests in parallel. If one request doesn’t depend on the results of another, this is a nice way to save processing time in a workflow. It can significantly cut down on the time you spend waiting for one request to finish, and for the next to begin.
To make requests in parallel, you can use two techniques. By default, we recommend using [`Promise.allSettled`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled), which makes all HTTP requests and returns data on each one’s success or failure. If one HTTP request fails, all other requests will still proceed.
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const arr = [
"https://www.example.com",
"https://www.cnn.com",
"https://www.espn.com",
];
const promises = arr.map((url) => axios.get(url));
return Promise.allSettled(promises);
},
});
```
First, we generate an array of `axios.get` requests (which are all [Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)), and then call `Promise.allSettled` to run them in parallel.
If you instead want to fail fast, rejecting as soon as *one* of the requests fails, use [`Promise.all`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all):
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const arr = [
"https://www.example.com",
"https://www.cnn.com",
"https://www.espn.com",
];
const promises = arr.map((url) => axios.get(url));
return Promise.all(promises);
},
});
```
The Mozilla docs expand on the difference between these methods, and when you may want to use one or the other:
> The `Promise.allSettled()` method returns a promise that resolves after all of the given promises have either fulfilled or rejected, with an array of objects that each describes the outcome of each promise.\
> It is typically used when you have multiple asynchronous tasks that are not dependent on one another to complete successfully, or you’d always like to know the result of each promise.\
> In comparison, the Promise returned by `Promise.all()` may be more appropriate if the tasks are dependent on each other / if you’d like to immediately reject upon any of them rejecting.
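You can see the behavioral difference without any HTTP at all; here’s a minimal sketch with plain promises:

```javascript
const tasks = [Promise.resolve("ok"), Promise.reject(new Error("boom"))];

// Promise.allSettled resolves with one { status, value | reason } object per input
const settled = await Promise.allSettled(tasks);
// settled[0] is { status: "fulfilled", value: "ok" }
// settled[1] is { status: "rejected", reason: Error("boom") }

// Promise.all rejects as soon as any input rejects
let failureMessage;
try {
  await Promise.all(tasks);
} catch (err) {
  failureMessage = err.message; // "boom"
}
```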
## Send a `multipart/form-data` request
```javascript axios
import axios from "axios";
import FormData from "form-data";
export default defineComponent({
async run({ steps, $ }) {
const formData = new FormData();
formData.append("name", "Luke Skywalker");
const headers = formData.getHeaders();
const config = {
method: "POST",
url: "https://example.com",
headers,
data: formData,
};
return await axios(config);
}
});
```
```js http-request prop
export default defineComponent({
props: {
httpRequest: {
type: "http_request",
label: "Example Multipart Form Request",
default: {
method: "POST",
url: "https://example.com",
headers: {
contentType: "multipart/form-data",
fields: [{ name: "Luke Skywalker" }]
}
}
},
},
async run({ steps, $ }) {
// Make an HTTP POST request using the http-request prop
const res = await this.httpRequest.execute();
// Retrieve just the data from the response
const { data } = res;
},
})
```
[Copy this workflow](https://pipedream.com/@dylburger/send-a-multipart-form-data-request-p_WxCQRyr/edit) to run this example.
## Download a file to the `/tmp` directory
This example shows you how to download a file to [the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/). This can be especially helpful for downloading large files: it streams the file to disk, minimizing the memory the workflow uses when downloading the file.
```javascript
import { pipeline } from "stream/promises";
import fs from "fs";
import got from "got";
export default defineComponent({
async run({ steps, $ }) {
// Download the webpage HTML file to /tmp
return await pipeline(
got.stream("https://example.com"),
fs.createWriteStream('/tmp/file.html')
);
}
})
```
[Copy this workflow](https://pipedream.com/new?h=tch_wqKfoW) to run this example.
## Upload a file from the `/tmp` directory
This example shows you how to make a `multipart/form-data` request with a file as a form part. You can store and read any files from [the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/#the-tmp-directory).
This can be especially helpful for uploading large files: it streams the file from disk, minimizing the memory the workflow uses when uploading the file.
```javascript
import axios from "axios";
import fs from "fs";
import FormData from "form-data";
export default defineComponent({
async run({ steps, $ }) {
const formData = new FormData();
formData.append("file", fs.createReadStream('/tmp/file.pdf'));
const headers = formData.getHeaders();
const config = {
method: "POST",
url: "https://example.com",
headers,
data: formData,
};
return await axios(config);
}
});
```
[Copy this workflow](https://pipedream.com/new?h=tch_Oknf4r) to run this example.
## IP addresses for HTTP requests made from Pipedream workflows
By default, [HTTP requests made from Pipedream can come from a large range of IP addresses](/docs/privacy-and-security/#hosting-details). **If you need to restrict the IP addresses HTTP requests come from, you have two options**:
* [Use a Pipedream VPC](/docs/workflows/vpc/) to route all outbound HTTP requests through a single IP address
* If you don’t need to access the HTTP response data, you can [use `$send.http()`](/docs/workflows/data-management/destinations/http/) to send requests from a [limited set of IP addresses](/docs/workflows/data-management/destinations/http/#ip-addresses-for-pipedream-http-requests).
## Use an HTTP proxy to proxy requests through another host
By default, HTTP requests made from Pipedream can come from a range of IP addresses. **If you need to make requests from a single IP address, you can route traffic through an HTTP proxy**:
```javascript
import axios from "axios";
import httpsProxyAgent from "https-proxy-agent";
export default defineComponent({
props: {
user: {
type: 'string',
label: 'Username',
description: 'The username for the HTTP proxy authentication',
},
pass: {
type: 'string',
label: 'Password',
secret: true,
description: 'The password for the HTTP proxy authentication',
},
host: {
type: 'string',
label: "HTTP Proxy Host",
description: "The URL for the HTTP proxy",
},
port: {
type: "string",
label: "Port",
description: "The port the HTTP proxy is accepting requests at",
},
target_host: {
type: 'string',
label: "Target Host",
description: "The URL for the end target to reach through the proxy",
},
method: {
type: 'string',
default: 'GET',
label: "HTTP method",
description: "The HTTP method to use to reach the end target host"
},
body: {
type: 'object',
label: "HTTP body",
description: "The HTTP body payload to send to the end target host"
}
},
async run({ steps, $ }) {
const { user, pass, host, port, target_host, method, body } = this;
const agent = new httpsProxyAgent(`http://${user}:${pass}@${host}:${port}`);
const config = {
method,
url: target_host,
data: body, // axios expects the request body in `data`
httpsAgent: agent,
};
return await axios.request(config);
}
});
```
[Copy this workflow to run this code on Pipedream](https://pipedream.com/new?h=tch_mypfby).
## Stream a downloaded file directly to another URL
Sometimes you need to upload a downloaded file directly to another service, without processing the downloaded file. You could [download the file](/docs/workflows/building-workflows/code/nodejs/http-requests/#download-a-file-to-the-tmp-directory) and then [upload it](/docs/workflows/building-workflows/code/nodejs/http-requests/#upload-a-file-from-the-tmp-directory) to the other URL, but these intermediate steps are unnecessary: you can just stream the download to the other service directly, without saving the file to disk.
This method is especially effective for large files that exceed the [limits of the `/tmp` directory](/docs/workflows/limits/#disk).
[Copy this workflow](https://pipedream.com/@dylburger/stream-download-to-upload-p_5VCLoa1/edit) or paste this code into a [new Node.js code step](/docs/workflows/building-workflows/code/nodejs/):
```javascript
import stream from "stream";
import { promisify } from "util";
import got from "got";
export default defineComponent({
async run({ steps, $ }) {
const pipeline = promisify(stream.pipeline);
await pipeline(
got.stream("https://example.com"),
got.stream.post("https://example2.com")
);
}
});
```
You’ll want to replace `https://example.com` with the URL you’d like to stream data from, and replace `https://example2.com` with the URL you’d like to send the data *to*. `got` streams the content directly, downloading the file using a `GET` request and uploading it as a `POST` request.
If you need to modify this behavior, [see the `got` Stream API](https://github.com/sindresorhus/got#gotstreamurl-options).
## Catch and process HTTP errors
By default, `axios` throws an error when the HTTP response code is 400 or above (a client or server error). If you’d like to process the error data instead of throwing an error, you can pass a custom function to the `validateStatus` property:
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const resp = await axios({
url: "https://httpstat.us/400",
validateStatus: () => true, // don't throw on any status code (by default, axios throws for 4xx/5xx responses)
});
if (resp.status >= 400) {
this.debug = resp;
throw new Error(JSON.stringify(resp.data)); // This can be modified to throw any error you'd like
}
return resp;
}
});
```
See [the `axios` docs](https://github.com/axios/axios#request-config) for more details.
## Paginating API requests
When you fetch data from an API, the API may return records in “pages”. For example, if you’re trying to fetch a list of 1,000 records, the API might return those in groups of 100 items.
Different APIs paginate data in different ways. You’ll need to consult the docs of your API provider to see how they suggest you paginate through records.
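As an illustration, many APIs return a cursor (or next-page token) with each response. The generic loop below, sketched against a mocked `fetchPage` function (the function and field names are ours, not any specific API's), keeps requesting pages until the cursor is exhausted:

```javascript
// Collect every item from a cursor-paginated API.
// fetchPage(cursor) must resolve to { items, nextCursor },
// where nextCursor is null on the last page.
async function fetchAll(fetchPage) {
  const items = [];
  let cursor = null;
  do {
    const page = await fetchPage(cursor);
    items.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== null);
  return items;
}

// Mock API: 5 records served in pages of 2
const records = [1, 2, 3, 4, 5];
async function mockFetchPage(cursor) {
  const start = cursor ?? 0;
  return {
    items: records.slice(start, start + 2),
    nextCursor: start + 2 < records.length ? start + 2 : null,
  };
}

fetchAll(mockFetchPage).then((all) => console.log(all)); // [ 1, 2, 3, 4, 5 ]
```

Swap `mockFetchPage` for a function that calls your API and maps its response to `{ items, nextCursor }`, whatever field names the provider actually uses.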
## Send GraphQL request
Make a GraphQL request using the `graphql-request` NPM package:
```javascript
import { graphql } from 'graphql'
import { request, gql } from 'graphql-request'
export default defineComponent({
async run({ steps, $ }) {
const document = gql`
query samplePokeAPIquery {
generations: pokemon_v2_generation {
name
pokemon_species: pokemon_v2_pokemonspecies_aggregate {
aggregate {
count
}
}
}
}
`
return await request('https://beta.pokeapi.co/graphql/v1beta', document)
},
})
```
**The `graphql` package is required**
The `graphql` package is required for popular GraphQL clients, like `graphql-request` and `urql`, to function.
Even though you won’t use the `graphql` package directly in your code step, you must import it for `graphql-request` to work.
### Send an authenticated GraphQL request
Authenticate your connected accounts in Pipedream with GraphQL requests using the `app` prop:
```javascript
import { graphql } from 'graphql'
import { GraphQLClient, gql } from 'graphql-request'
export default defineComponent({
props: {
github: {
type: 'app',
app: 'github'
}
},
async run({ steps, $ }) {
const me = gql`
query {
viewer {
login
}
}
`
const client = new GraphQLClient('https://api.github.com/graphql', {
headers: {
authorization: `Bearer ${this.github.$auth.oauth_access_token}`,
},
})
return await client.request(me)
},
})
```
Alternatively, you can use environment variables for simple API-key-based GraphQL APIs.
# Pause, Resume, And Rerun A Workflow
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/rerun
You can use `$.flow.suspend` and `$.flow.rerun` to pause a workflow and resume it later.
This is useful when you want to:
* Pause a workflow until someone manually approves it
* Poll an external API until some job completes, and proceed with the workflow when it’s done
* Trigger an external API to start a job, pause the workflow, and resume it when the external API sends an HTTP callback
We’ll cover all of these examples below.
## `$.flow.suspend`
Use `$.flow.suspend` when you want to pause a workflow and proceed with the remaining steps only when manually approved or cancelled.
For example, you can suspend a workflow and send yourself a link to manually resume or cancel the rest of the workflow:
```javascript
export default defineComponent({
async run({ $ }) {
const { resume_url, cancel_url } = $.flow.suspend();
$.send.email({
subject: "Please approve this important workflow",
text: `Click here to approve the workflow: ${resume_url}, and cancel here: ${cancel_url}`,
});
// Pipedream suspends your workflow at the end of the step
},
});
```
You’ll receive an email like this:
And can resume or cancel the rest of the workflow by clicking on the appropriate link.
### `resume_url` and `cancel_url`
In general, calling `$.flow.suspend` returns a `cancel_url` and `resume_url` that lets you cancel or resume paused executions. Since Pipedream pauses your workflow at the *end* of the step, you can pass these URLs to any external service before the workflow pauses. If that service accepts a callback URL, it can trigger the `resume_url` when its work is complete.
These URLs are specific to a single execution of your workflow. While the workflow is paused, you can load these in your browser or send any HTTP request to them:
* Sending an HTTP request to the `cancel_url` will cancel that execution
* Sending an HTTP request to the `resume_url` will resume that execution
If you resume a workflow, any data sent in the HTTP request is passed to the workflow and returned in the `$resume_data` [step export](/docs/workflows/#step-exports) of the suspended step. For example, if you call `$.flow.suspend` within a step named `code`, the `$resume_data` export should contain the data sent in the `resume_url` request:
Requests to the `resume_url` have [the same limits as any HTTP request to Pipedream](/docs/workflows/limits/#http-request-body-size), but you can send larger payloads using our [large payload](/docs/workflows/building-workflows/triggers/#sending-large-payloads) or [large file](/docs/workflows/building-workflows/triggers/#large-file-support) interfaces.
### Default timeout of 24 hours
By default, `$.flow.suspend` will automatically cancel the workflow after 24 hours. You can set your own timeout (in milliseconds) as the first argument:
```javascript
export default defineComponent({
async run({ $ }) {
// 7 days
const TIMEOUT = 1000 * 60 * 60 * 24 * 7;
$.flow.suspend(TIMEOUT);
},
});
```
## `$.flow.rerun`
Use `$.flow.rerun` when you want to run a specific step of a workflow multiple times. This is useful when you need to start a job in an external API and poll for its completion, or have the service call back to the step and let you process the HTTP request within the step.
### Retrying a Failed API Request
`$.flow.rerun` can be used to conditionally retry a failed API request due to a service outage or rate limit reached. Place the `$.flow.rerun` call within a `catch` block to only retry the API request if an error is thrown:
```javascript
import { axios } from "@pipedream/platform";
export default defineComponent({
props: {
openai: {
type: "app",
app: "openai",
},
},
async run({ steps, $ }) {
try {
return await axios($, {
url: `https://api.openai.com/v1/completions`,
method: "post",
headers: {
Authorization: `Bearer ${this.openai.$auth.api_key}`,
},
data: {
model: "text-davinci-003",
prompt: "Say this is a test",
max_tokens: 7,
temperature: 0,
},
});
} catch (error) {
const MAX_RETRIES = 3;
const DELAY = 1000 * 30;
// Retry the request every 30 seconds, for up to 3 times
$.flow.rerun(DELAY, null, MAX_RETRIES);
}
},
});
```
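The example above retries on a fixed 30-second delay. For rate-limited APIs you may prefer exponential backoff: compute the delay from `$.context.run.runs` before calling `$.flow.rerun`. A minimal sketch of the delay calculation (the helper name and the 10-minute cap are our own choices):

```javascript
// Exponential backoff: 30s, 60s, 120s, ... capped at 10 minutes
function backoffDelayMs(attempt, baseMs = 30 * 1000, capMs = 10 * 60 * 1000) {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}

// Inside the catch block, you could then call:
// $.flow.rerun(backoffDelayMs($.context.run.runs), null, MAX_RETRIES);
console.log(backoffDelayMs(1)); // 30000
console.log(backoffDelayMs(3)); // 120000
```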
### Polling for the status of an external job
Sometimes you need to poll for the status of an external job until it completes. `$.flow.rerun` lets you rerun a specific step multiple times:
```javascript
import axios from "axios";
export default defineComponent({
async run({ $ }) {
const MAX_RETRIES = 3;
// 10 seconds
const DELAY = 1000 * 10;
const { run } = $.context;
// $.context.run.runs starts at 1 and increments when the step is rerun
if (run.runs === 1) {
// $.flow.rerun(delay, context (discussed below), max retries)
$.flow.rerun(DELAY, null, MAX_RETRIES);
} else if (run.runs === MAX_RETRIES + 1) {
throw new Error("Max retries exceeded");
} else {
// Poll external API for status
const { data } = await axios({
method: "GET",
url: "https://example.com/status",
});
// If we're done, continue with the rest of the workflow
if (data.status === "DONE") return data;
// Else retry later
$.flow.rerun(DELAY, null, MAX_RETRIES);
}
},
});
```
`$.flow.rerun` accepts the following arguments:
```javascript
$.flow.rerun(
delay, // The number of milliseconds until the step will be rerun
context, // JSON-serializable data you need to pass between runs
maxRetries // The total number of times the step will rerun. Defaults to 10
);
```
### Accept an HTTP callback from an external service
When you trigger a job in an external service, and that service can send back data in an HTTP callback to Pipedream, you can process that data within the same step using `$.flow.rerun`:
```javascript
import axios from "axios";
export default defineComponent({
async run({ steps, $ }) {
const TIMEOUT = 86400 * 1000;
const { run } = $.context;
// $.context.run.runs starts at 1 and increments when the step is rerun
if (run.runs === 1) {
const { cancel_url, resume_url } = $.flow.rerun(TIMEOUT, null, 1);
// Send resume_url to external service
await axios({
method: "POST",
url: "your callback URL",
data: {
resume_url,
cancel_url,
},
});
}
// When the external service calls back into the resume_url, you have access to
// the callback data within $.context.run.callback_request
else if (run.callback_request) {
return run.callback_request;
}
},
});
```
### Passing `context` to `$.flow.rerun`
Within a Node.js code step, `$.context.run.context` contains the `context` passed from the prior call to `rerun`. This lets you pass data from one run to another. For example, if you call:
```javascript
$.flow.rerun(1000, { hello: "world" });
```
`$.context.run.context` will contain:
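```json
{ "hello": "world" }
```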
### `maxRetries`
By default, `maxRetries` is **10**.
When you exceed `maxRetries`, the workflow proceeds to the next step. If you need to handle this case with an exception, `throw` an error from the step:
```javascript
export default defineComponent({
async run({ $ }) {
const MAX_RETRIES = 3;
const { run } = $.context;
if (run.runs === 1) {
$.flow.rerun(1000, null, MAX_RETRIES);
} else if (run.runs === MAX_RETRIES + 1) {
throw new Error("Max retries exceeded");
}
},
});
```
## Behavior when testing
When you’re building a workflow and test a step with `$.flow.suspend` or `$.flow.rerun`, it will not suspend the workflow, and you’ll see a message like the following:
> Workflow execution canceled — this may be due to `$.flow.suspend()` usage (not supported in test)
These functions will only suspend and resume when run in production.
## Credits when using `suspend` / `rerun`
You are not charged for the time your workflow is suspended during a `$.flow.rerun` or `$.flow.suspend`. Only the compute time after the workflow resumes counts toward [credit usage](/docs/pricing/#credits-and-billing).
When a suspended workflow reawakens, the credit counter resets: each rerun or resumption from a suspension counts as a fresh credit.
# Sharing Code Across Workflows
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/sharing-code
[Actions](/docs/components/contributing/#actions) are reusable steps. When you author an action, you can add it to your workflow like you would other actions, by clicking the **+** button below any step.
Pipedream provides two ways to share code across workflows:
* **Publish an action from a Node.js code step**. [Publish any Node.js code step as a reusable action](/docs/workflows/building-workflows/code/nodejs/sharing-code/#publish-an-action-from-a-nodejs-code-step) from the Pipedream dashboard.
* **Create an action from code**. Develop your action code on your local filesystem and [publish to your Pipedream account using the Pipedream CLI](/docs/components/contributing/actions-quickstart/).
## Publish an action from a Node.js code step
You can publish any of your Node.js code steps into a reusable action. This enables you to write a Node.js code step once, and reuse it across many workflows without rewriting it.
To convert a Node.js code step into a publishable action, make sure to include the following properties in your Node.js step:
* `version`
* `name`
* `key`
* `type`
```js highlight={6-9}
// Adding properties to a regular Node.js code step make it publishable
import { parseISO, format } from 'date-fns';
// Returns a formatted datetime string
export default defineComponent({
name: 'Format Date',
version: '0.0.1',
key: 'format-date',
type: 'action',
props: {
date: {
type: "string",
label: "Date",
description: "Date to be formatted",
},
format: {
type: 'string',
label: "Format",
description: "Format to apply to the date. [See date-fns format](https://date-fns.org/v2.29.3/docs/format) as a reference."
}
},
async run({ $ }) {
const formatted = format(parseISO(this.date), this.format);
return formatted;
},
})
```
Click **Test** to verify the step is working as expected. Only actions that have been successfully tested can be published.
Then open the menu in the top righthand corner of the code step and select **Publish to My Actions**:
And now you’ve successfully saved a custom Node.js code step to your account. You’ll be able to use this code step in any of your workflows.
**Why can’t I use the `steps` variable in published Node.js code steps?**
The `steps` variable contains the *workflow’s* step exports.
When you publish a Node.js code step as an action, it becomes reusable across many workflows.
This means that the step exports available vary depending on the workflow it’s running on.
Defining props is a way to map inputs to actions and allow individual workflows to define which exports should be used.
## Using your published actions
To use your custom action, create a new step in your workflow and select **My Actions**.
From there you’ll be able to view and select any of your published actions and use them as steps.
## Updating published Node.js code step actions
If you need to make a change and update the underlying code to your published Node.js code step, you can do so by incrementing the `version` field on the Node.js code step.
Every instance of your published action from a code step is editable. In any workflow where you’ve reused a published action, open the menu on the right side of the action and click **Edit action** button.
This will open up the code editor for this action, even if this wasn’t the original code step.
Now increment the `version` field in the code:
```js highlight={6}
import { parseISO, format } from 'date-fns';
// Increment the version field each time you publish a change
export default defineComponent({
name: 'Format Date',
version: '0.0.2',
key: 'format-date',
type: 'action',
props: {
date: {
type: "string",
label: "Date",
description: "Date to be formatted",
},
format: {
type: 'string',
label: "Format",
description: "Format to apply to the date. [See date-fns format](https://date-fns.org/v2.29.3/docs/format) as a reference."
}
},
async run({ $ }) {
const formatted = format(parseISO(this.date), this.format);
return formatted;
},
})
```
Finally use the **Publish to My Actions** button in the right hand side menu to publish this new version.
**I’m not seeing an “Edit Action” button option in my step**
The **Edit Action** button is only available for actions that are published from Node.js code steps.
Actions submitted to the public component registry can contain multiple files. At this time it’s not possible to edit multi-file components directly in a code step.
**Will publishing a new version of an action automatically update all other steps using it?**
No, a new version of an action doesn’t automatically update all instances of the same action across your workflows. This gives you the control to gradually update.
After publishing a new version, all other steps using this same action will have the option to [update to the latest version](/docs/workflows/building-workflows/actions/#updating-actions-to-the-latest-version).
## Differences between publishing actions from workflow Node.js code steps and directly from code
Publishing reusable actions from Node.js code steps allows you to quickly scaffold and publish Node.js code steps without leaving the Pipedream dashboard. The result is the same as publishing actions from code using the Pipedream CLI.
However, there are some differences.
1. Node.js code step actions cannot make use of [app files to further reduce redundancy](/docs/components/contributing/guidelines/#promoting-reusability).
2. Node.js code step actions cannot be published to the [Pipedream Component Registry](/docs/components/contributing/).
3. Node.js code step actions have a slightly different structure than [action components](/docs/components/contributing/api/#component-api).
# Using Data Stores
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/using-data-stores
In Node.js (JavaScript) code steps, you can store and retrieve data without connecting to a third-party database.
Add data stores to steps as props. By adding the store as a prop, it’s available under `this`.
For example, you can define a data store as a `dataStore` prop and reference it at `this.dataStore`:
```javascript
export default defineComponent({
props: {
// Define that the "dataStore" variable in our component is a data store
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Now we can access the data store at "this.dataStore"
await this.dataStore.get("email");
},
});
```
**`props` injects variables into `this`**. See how we declared the `dataStore` prop in the `props` object, and then accessed it at `this.dataStore` in the `run` method.
All data store operations are asynchronous, so they must be `await`ed.
## Using the data store
Once you’ve defined a data store prop for your component, you’ll be able to create a new data store or use an existing one from your account.
## Saving data
Data Stores are key-value stores. Save data within a Data Store using the `this.dataStore.set` method. The first argument is the *key* where the data should be held, and the second argument is the *value* assigned to that key.
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Store a timestamp each time this step is executed in the workflow
await this.dataStore.set("lastRanAt", new Date());
},
});
```
### Setting expiration (TTL) for records
You can set an expiration time for a record by passing a TTL (Time-To-Live) option as the third argument to the `set` method. The TTL value is specified in seconds:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Store a temporary value that will expire after 1 hour (3600 seconds)
await this.dataStore.set("temporaryToken", "abc123", { ttl: 3600 });
// Store a value that will expire after 1 day
await this.dataStore.set("dailyMetric", 42, { ttl: 86400 });
},
});
```
When the TTL period elapses, the record will be automatically deleted from the data store.
### Updating TTL for existing records
You can update the TTL for an existing record using the `setTtl` method:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Update an existing record to expire after 30 minutes
await this.dataStore.setTtl("temporaryToken", 1800);
// Remove expiration from a record
await this.dataStore.setTtl("temporaryToken", null);
},
});
```
This is useful for extending the lifetime of temporary data or removing expiration from records that should now be permanent.
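For example, to extend a record’s lifetime until a specific point in time, you can compute the remaining TTL in seconds before calling `setTtl` (a sketch; `secondsUntil` is our own helper, not a data store API):

```javascript
// Whole seconds from `now` until a target Date (never negative)
function secondsUntil(expiresAt, now = Date.now()) {
  return Math.max(0, Math.ceil((expiresAt.getTime() - now) / 1000));
}

// In a code step you could then call:
// await this.dataStore.setTtl("temporaryToken", secondsUntil(expiryDate));
console.log(secondsUntil(new Date(Date.now() + 90 * 1000))); // 90
```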
## Retrieving keys
Fetch all the keys in a given Data Store using the `keys` method:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Return a list of all the keys in a given Data Store
return await this.dataStore.keys();
},
});
```
## Checking for the existence of specific keys
If you need to check whether a specific `key` exists in a Data Store, you can pass the `key` to the `has` method to get back a `true` or `false`:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Check if a specific key exists in your Data Store
return await this.dataStore.has("lastRanAt");
},
});
```
## Retrieving data
You can retrieve data with the Data Store using the `get` method. Pass the *key* to the `get` method to retrieve the content that was stored there with `set`.
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Retrieve the timestamp stored at the lastRanAt key
const lastRanAt = await this.dataStore.get("lastRanAt");
},
});
```
## Retrieving all records
Use an async iterator to efficiently retrieve all records or keys in your data store:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
const records = {};
for await (const [k,v] of this.dataStore) {
records[k] = v;
}
return records;
},
});
```
## Deleting or updating values within a record
To update the *value* of an individual record, call the `set` method with an existing `key` and the new value. To remove the value while retaining the key, pass `''` as the second argument.
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Update the value associated with the key, myKey
await this.dataStore.set("myKey", "newValue");
// Remove the value but retain the key
await this.dataStore.set("myKey", "");
},
});
```
## Deleting specific records
To delete individual records in a Data Store, use the `delete` method for a specific `key`:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Delete the lastRanAt record
await this.dataStore.delete("lastRanAt");
},
});
```
## Deleting all records from a specific Data Store
If you need to delete all records in a given Data Store, you can use the `clear` method. **Note that this is an irreversible change, even when testing code in the workflow builder.**
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// Delete all records from a specific Data Store
return await this.dataStore.clear();
},
});
```
## Viewing store data
You can view the contents of your data stores in your [Pipedream dashboard](https://pipedream.com/stores).
From here you can also manually edit your data store’s data, rename stores, delete stores or create new stores.
## Using multiple data stores in a single code step
It is possible to use multiple data stores in a single code step; just give each store a unique name in the `props` definition. Let’s define two separate `customers` and `orders` data stores and use them in a single code step:
```javascript
export default defineComponent({
props: {
customers: { type: "data_store" },
orders: { type: "data_store" },
},
async run({ steps, $ }) {
// Retrieve the customer from our customers data store
const customer = await this.customers.get(steps.trigger.event.customer_id);
// Retrieve the order from our orders data store
const order = await this.orders.get(steps.trigger.event.order_id);
},
});
```
## Workflow counter example
You can use a data store as a counter. For example, this code counts the number of times the workflow runs:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
// By default, all data store entries are undefined.
// It's wise to set a default value so our code has an initial value to work with
const counter = (await this.dataStore.get("counter")) ?? 0;
// On the first run "counter" will be 0 and we'll increment it to 1
// The next run will increment the counter to 2, and so forth
await this.dataStore.set("counter", counter + 1);
},
});
```
## Dedupe data example
Data Stores are also useful for storing data from prior runs to prevent acting on duplicate data, or data that’s been seen before.
For example, this workflow’s trigger contains an email address from a potential new customer. But we want to track all emails collected so we don’t send a welcome email twice:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
const email = steps.trigger.event.body.new_customer_email;
// Retrieve the past recorded emails from other runs
const emails = (await this.dataStore.get("emails")) ?? [];
// If the current email being passed from our webhook is already in our list, exit early
if (emails.includes(email)) {
return $.flow.exit("Already welcomed this user");
}
// Add the current email to the list of past emails so we can detect it in the future runs
await this.dataStore.set("emails", [...emails, email]);
},
});
```
## TTL use case: temporary caching and rate limiting
TTL functionality is particularly useful for implementing temporary caching and rate limiting. Here’s an example of a simple rate limiter that prevents a user from making more than 5 requests per hour:
```javascript
export default defineComponent({
props: {
dataStore: { type: "data_store" },
},
async run({ steps, $ }) {
const userId = steps.trigger.event.userId;
const rateKey = `ratelimit:${userId}`;
// Try to get current rate limit counter
let requests = await this.dataStore.get(rateKey);
if (requests === undefined) {
// First request from this user in the time window
await this.dataStore.set(rateKey, 1, { ttl: 3600 }); // Expire after 1 hour
return { allowed: true, remaining: 4 };
}
if (requests >= 5) {
// Rate limit exceeded
return { allowed: false, error: "Rate limit exceeded", retryAfter: "1 hour" };
}
// Increment the counter
await this.dataStore.set(rateKey, requests + 1);
return { allowed: true, remaining: 4 - requests };
},
});
```
This pattern can be extended for various temporary caching scenarios like:
* Session tokens with automatic expiration
* Short-lived feature flags
* Temporary access grants
* Time-based promotional codes
### Supported data types
Data stores can hold any JSON-serializable data within the storage limits. This includes:
* Strings
* Objects
* Arrays
* Dates
* Integers
* Floats
However, you cannot serialize functions, classes, or other complex objects.
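A quick way to check whether a value will survive storage is a JSON round trip, which mirrors the serialization a key-value store performs. Note how the `Date` comes back as an ISO string and the function disappears:

```javascript
const record = {
  name: "Luke",         // strings are preserved
  count: 42,            // numbers are preserved
  when: new Date(0),    // stored as an ISO-8601 string
  greet: () => "hi",    // functions are dropped by JSON.stringify
};

const roundTripped = JSON.parse(JSON.stringify(record));
console.log(roundTripped.when);        // 1970-01-01T00:00:00.000Z
console.log("greet" in roundTripped);  // false
```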
# Working With The Filesystem In Node.Js
Source: https://pipedream.com/docs/workflows/building-workflows/code/nodejs/working-with-files
export const TMP_SIZE_LIMIT = '2GB';
You’ll commonly need to work with files in a workflow, for example: downloading content from some service to upload to another. This doc explains how to work with files in Pipedream workflows and provides some sample code for common operations.
## The `/tmp` directory
Within a workflow, you have full read-write access to the `/tmp` directory. You have {TMP_SIZE_LIMIT} of available space in `/tmp` to save any file.
### Managing `/tmp` across workflow runs
The `/tmp` directory is stored on the virtual machine that runs your workflow. We call this the execution environment (“EE”). More than one EE may be created to handle high-volume workflows. And EEs can be destroyed at any time (for example, after about 10 minutes of receiving no events). This means that you should not expect to have access to files across executions. At the same time, files *may* remain, so you should clean them up to make sure that doesn’t affect your workflow. **Use [the `tmp-promise` package](https://github.com/benjamingr/tmp-promise) to cleanup files after use, or [delete the files manually](/docs/workflows/building-workflows/code/nodejs/working-with-files/#delete-a-file).**
### Reading a file from `/tmp`
This example uses [step exports](/docs/workflows/#step-exports) to return the contents of a test file saved in `/tmp` as a string:
```javascript
import fs from "fs";
export default defineComponent({
  async run({ steps, $ }) {
    return (await fs.promises.readFile('/tmp/your-file')).toString()
  }
});
```
### Writing a file to `/tmp`
Use the [`fs` module](https://nodejs.org/api/fs.html) to write data to `/tmp`:
```javascript
import fs from "fs"
import { file } from 'tmp-promise'
export default defineComponent({
  async run({ steps, $ }) {
    const { path, cleanup } = await file();
    await fs.promises.appendFile(path, Buffer.from("hello, world"))
    await cleanup();
  }
});
```
### Listing files in `/tmp`
Return a list of the files saved in `/tmp`:
```javascript
import fs from "fs";
export default defineComponent({
  async run({ steps, $ }) {
    return fs.readdirSync("/tmp");
  }
});
```
### Delete a file
```javascript
import fs from "fs";
export default defineComponent({
  async run({ steps, $ }) {
    return await fs.promises.unlink('/tmp/your-file');
  }
});
```
### Download a file to `/tmp`
[See this example](/docs/workflows/building-workflows/code/nodejs/http-requests/#download-a-file-to-the-tmp-directory) to learn how to download a file to `/tmp`.
### Upload a file from `/tmp`
[See this example](/docs/workflows/building-workflows/code/nodejs/http-requests/#upload-a-file-from-the-tmp-directory) to learn how to upload a file from `/tmp` in an HTTP request.
### Download a file, uploading it in another `multipart/form-data` request
[This workflow](https://pipedream.com/@dylburger/download-file-then-upload-file-via-multipart-form-data-request-p_QPCx7p/edit) provides an example of how to download a file at a specified **Download URL**, uploading that file to an **Upload URL** as form data.
### Download email attachments to `/tmp`, upload to Amazon S3
[This workflow](https://pipedream.com/@dylan/upload-email-attachments-to-s3-p_V9CGAQ/edit) is triggered by incoming emails. When copied, you’ll get a workflow-specific email address you can send any email to. This workflow takes any attachments included with inbound emails, saves them to `/tmp`, and uploads them to Amazon S3.
You should also be aware of the [inbound payload limits](/docs/workflows/limits/#email-triggers) associated with the email trigger.
### Downloading and uploading files from File Stores
Within Node.js code steps, you can download files from a File Store to the `/tmp` directory and vice versa.
The `$.files` helper includes methods to upload and download files from the Project’s File Store.
[Read the File Stores `$.files` helper documentation.](/docs/workflows/data-management/file-stores/#managing-file-stores-from-workflows)
# Python
Source: https://pipedream.com/docs/workflows/building-workflows/code/python
export const PYTHON_VERSION = '3.12';
Pipedream supports [Python v{PYTHON_VERSION}](https://www.python.org) in workflows. Run any Python code, use any [PyPI package](https://pypi.org/), connect to APIs, and more.
## Adding a Python code step
1. Click the + icon to add a new step
2. Click **Custom Code**
3. In the new step, select the `python` language runtime in the language dropdown
## Python Code Step Structure
A new Python Code step will have the following structure:
```python
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}
```
You can also perform more complex operations, including [leveraging your connected accounts to make authenticated API requests](/docs/workflows/building-workflows/code/python/auth/), [accessing Data Stores](/docs/workflows/building-workflows/code/python/using-data-stores/) and [installing PyPI packages](/docs/workflows/building-workflows/code/python/#using-third-party-packages).
* [Install PyPI Packages](/docs/workflows/building-workflows/code/python/#using-third-party-packages)
* [Import data exported from other steps](/docs/workflows/building-workflows/code/python/#using-data-from-another-step)
* [Export data to downstream steps](/docs/workflows/building-workflows/code/python/#sending-data-downstream-to-other-steps)
* [Retrieve data from a data store](/docs/workflows/building-workflows/code/python/using-data-stores/#retrieving-data)
* [Store data into a data store](/docs/workflows/building-workflows/code/python/using-data-stores/#saving-data)
* [Access API credentials from connected accounts](/docs/workflows/building-workflows/code/python/auth/)
## Logging and debugging
You can use `print` at any time in a Python code step to log information as the script is running.
The output for the `print` **logs** will appear in the `Results` section just beneath the code editor.
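For example, a step might log both a plain message and a structured payload. Pretty-printing the dictionary with the standard `json` module, as shown here, is an optional convenience:

```python
import json

def handler(pd: "pipedream"):
    data = {"user": "bulbasaur", "level": 12}
    print("Processing event...")       # plain strings appear in the step's logs
    print(json.dumps(data, indent=2))  # pretty-print dictionaries for readability
    return data
```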
## Using third party packages
You can use any packages from [PyPI](https://pypi.org) in your Pipedream workflows. This includes popular choices such as:
* [`requests` for making HTTP requests](https://pypi.org/project/requests/)
* [`sqlalchemy` for retrieving or inserting data in a SQL database](https://pypi.org/project/sqlalchemy/)
* [`pandas` for working with complex datasets](https://pypi.org/project/pandas/)
To use a PyPI package, just include it in your step’s code:
```python
import requests
```
And that’s it. No need to update a `requirements.txt` or specify elsewhere in your workflow which packages you need. Pipedream will automatically install the dependency for you.
### If your package’s `import` name differs from its PyPI package name
Pipedream’s package installation uses [the `pipreqs` package](https://github.com/bndr/pipreqs) to detect package imports and install the associated package for you. Some packages, like [`python-telegram-bot`](https://python-telegram-bot.org/), use an `import` name that differs from their PyPI name:
```sh
pip install python-telegram-bot
```
vs.
```python
import telegram
```
Use the built-in [magic comment system to resolve these mismatches](/docs/workflows/building-workflows/code/python/import-mappings/):
```python
# pipedream add-package python-telegram-bot
import telegram
```
### Pinning package versions
Each time you deploy a workflow with Python code, Pipedream downloads the PyPI packages you `import` in your step. **By default, Pipedream deploys the latest version of each PyPI package every time you deploy a change**.
There are many cases where you may want to specify the version of the packages you’re using. If you’d like to use a *specific* version of a package in a workflow, you can add that version in a [magic comment](/docs/workflows/building-workflows/code/python/import-mappings/), for example:
```python
# pipedream add-package pandas==2.0.0
import pandas
```
Currently, you cannot use different versions of the same package in different steps in a workflow.
## Making an HTTP request
We recommend using the popular `requests` HTTP client package available in Python to send HTTP requests.
No need to run `pip install`, just `import requests` at the top of your step’s code and it’s available for your code to use.
See the [Making HTTP Requests with Python](/docs/workflows/building-workflows/code/python/http-requests/) docs for more information.
## Returning HTTP responses
You can return HTTP responses from [HTTP-triggered workflows](/docs/workflows/building-workflows/triggers/#http) using the `pd.respond()` method:
```python
def handler(pd: "pipedream"):
    pd.respond({
        "status": 200,
        "body": {
            "message": "Everything is ok"
        }
    })
```
Always include at least the `status` and `body` keys in your `pd.respond` argument. The `body` must be a JSON-serializable object or dictionary.
Unlike the Node.js equivalent, the Python `pd.respond` helper does not yet support responding with Streams.
*Don’t forget* to [configure your workflow’s HTTP trigger to allow a custom response](/docs/workflows/building-workflows/triggers/#http-responses). Otherwise your workflow will return the default response.
## Sharing data between steps
A step can accept data from other steps in the same workflow, or pass data downstream to others.
### Using data from another step
In Python steps, data from the initial workflow trigger and other steps are available in the `pd.steps` object.
In this example, we’ll pretend this data is coming into our workflow’s HTTP trigger via POST request.
```json
// POST .m.pipedream.net
{
"id": 1,
"name": "Bulbasaur",
"type": "plant"
}
```
In our Python step, we can access this data in the `pd.steps` object passed into the `handler`. Specifically, this data from the POST request into our workflow is available in the `trigger` dictionary item.
```python
def handler(pd: "pipedream"):
    # retrieve the data from the HTTP request in the initial workflow trigger
    pokemon_name = pd.steps["trigger"]["event"]["name"]
    pokemon_type = pd.steps["trigger"]["event"]["type"]
    print(f"{pokemon_name} is a {pokemon_type} type Pokemon")
```
### Sending data downstream to other steps
To share data created, retrieved, transformed or manipulated by a step to others downstream, `return` the data in the `handler` function:
```python
# This step is named "code" in the workflow
import requests
def handler(pd: "pipedream"):
    r = requests.get("https://pokeapi.co/api/v2/pokemon/charizard")
    # Store the JSON contents into a variable called "pokemon"
    pokemon = r.json()
    # Expose the data to other steps in the "pokemon" key from this step
    return {
        "pokemon": pokemon
    }
```
Now this `pokemon` data is accessible to downstream steps within `pd.steps["code"]["pokemon"]`
You can only export JSON-serializable data from steps. Things like:
* strings
* numbers
* lists
* dictionaries
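For instance, a step could return a mix of these types in one dictionary; downstream steps then read them from `pd.steps` under this step's name:

```python
def handler(pd: "pipedream"):
    # Every value here is JSON-serializable, so it's safe to export
    return {
        "name": "charizard",          # string
        "weight": 90.5,               # number
        "types": ["fire", "flying"],  # list
        "stats": {"hp": 78},          # dictionary
    }
```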
## Using environment variables
You can leverage any [environment variables defined in your Pipedream account](/docs/workflows/environment-variables/) in a Python step. This is useful for keeping your secrets out of code as well as keeping them flexible to swap API keys without having to update each step individually.
To access them, use the `os` module.
```python
import os
def handler(pd: "pipedream"):
    token = os.environ["AIRTABLE_API_KEY"]
    print(token)
```
A more useful example: using a stored environment variable to make an authenticated API request.
### Using API key authentication
If a particular service requires you to use an API key, you can pass it via the headers of the request.
This proves your identity to the service so you can interact with it:
```python
import requests
import os
def handler(pd: "pipedream"):
    token = os.environ["AIRTABLE_API_KEY"]
    url = "https://api.airtable.com/v0/your-airtable-base/your-table"
    headers = { "Authorization": f"Bearer {token}"}
    r = requests.get(url, headers=headers)
    print(r.text)
```
There are two ways of using the `os` module to access your environment variables.
`os.environ["ENV_NAME_HERE"]` raises an error that stops your workflow if that key doesn’t exist in your Pipedream account.
`os.environ.get("ENV_NAME_HERE")` does *not* raise an error; it returns `None` if the key doesn’t exist (or a default you pass as the second argument).
If your code relies on the presence of an environment variable, consider using `os.environ["ENV_NAME_HERE"]`.
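The difference between the two is easy to demonstrate (the variable names here are placeholders):

```python
import os

os.environ["PRESENT_KEY"] = "abc123"  # stand-in for a variable defined in your account

print(os.environ["PRESENT_KEY"])          # the value, "abc123"
print(os.environ.get("MISSING_KEY"))      # None -- no error raised
print(os.environ.get("MISSING_KEY", ""))  # or supply an explicit default

try:
    os.environ["MISSING_KEY"]  # raises KeyError, halting the step
except KeyError:
    print("MISSING_KEY is not defined")
```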
## Handling errors
You may need to exit a workflow early. In a Python step, just `raise` an error to halt a step’s execution.
```python
raise NameError("Something happened that should not. Exiting early.")
```
All exceptions from your Python code will appear in the **logs** area of the results.
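For example, you might validate the trigger payload and raise if a required field is missing (the `email` field here is hypothetical):

```python
def handler(pd: "pipedream"):
    event = pd.steps["trigger"]["event"]
    # Halt this step with an error if the payload is incomplete
    if "email" not in event:
        raise ValueError("Missing required field: email")
    return {"email": event["email"]}
```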
## Ending a workflow early
Sometimes you want to end your workflow early, or otherwise stop or cancel the execution of a workflow under certain conditions. For example:
* You may want to end your workflow early if you don’t receive all the fields you expect in the event data.
* You only want to run your workflow for 5% of all events sent from your source.
* You only want to run your workflow for users in the United States. If you receive a request from outside the U.S., you don’t want the rest of the code in your workflow to run.
* You may use the `user_id` contained in the event to look up information in an external API. If you can’t find data in the API tied to that user, you don’t want to proceed.
**In any code step, calling `return pd.flow.exit()` will end the execution of the workflow immediately.** No remaining code in that step, and no code or destination steps below, will run for the current event.
It’s a good practice to use `return pd.flow.exit()` to immediately exit the workflow. In contrast, `pd.flow.exit()` on its own will end the workflow only after executing all remaining code in the step.
```python
def handler(pd: "pipedream"):
    return pd.flow.exit("reason")
    print("This code will not run, since pd.flow.exit() was called above it")
```
You can pass any string as an argument to `pd.flow.exit()`:
```python
def handler(pd: "pipedream"):
    return pd.flow.exit("Exiting early. Goodbye.")
    print("This code will not run, since pd.flow.exit() was called above it")
```
Or exit the workflow early within a conditional:
```python
import random
def handler(pd: "pipedream"):
    # Flip a coin, running pd.flow.exit() for 50% of events
    if random.randint(0, 100) <= 50:
        return pd.flow.exit("reason")
    print("This code will only run 50% of the time")
```
## File storage
You can also store and read files with Python steps. This means you can upload photos, retrieve datasets, accept files from an HTTP request and more.
The `/tmp` directory is accessible from your workflow steps for saving and retrieving files.
You have full read and write access to files in `/tmp`.
See the [Working with the filesystem in Python](/docs/workflows/building-workflows/code/python/working-with-files/) docs for more information.
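As a minimal sketch, writing a file to `/tmp` and reading it back uses ordinary Python file I/O (the filename is arbitrary):

```python
def handler(pd: "pipedream"):
    path = "/tmp/example.txt"
    # Write text to a file in /tmp
    with open(path, "w") as f:
        f.write("hello, world")
    # Read the contents back
    with open(path) as f:
        contents = f.read()
    return {"contents": contents}
```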
## FAQ
### What’s the difference between `def handler(pd)` and the `pipedream` package for Python code steps?
The `pd` object passed to the `handler` method lets you [exit the workflow early](/docs/workflows/building-workflows/code/python/#ending-a-workflow-early), [integrate a Data Store](/docs/workflows/building-workflows/code/python/using-data-stores/), and [use connected accounts](/docs/workflows/building-workflows/code/python/auth/) in your Python code steps.
However, there are currently issues with our Python interpreter that cause an `ECONNRESET` error.
If you need [to use data from other steps](/docs/workflows/building-workflows/code/python/#using-data-from-another-step) or [export data to other steps](/docs/workflows/building-workflows/code/python/#sending-data-downstream-to-other-steps) in your workflow, we recommend using the `pipedream` package module.
If you need to use a Data Store in your workflow, we recommend using a [pre-built action](/docs/workflows/data-management/data-stores/) to retrieve or store data or [Node.js’s Data Store](/docs/workflows/building-workflows/code/nodejs/using-data-stores/) capabilities.
### I’ve tried installing a Python package with a normal import and the magic comment system, but I still can’t. What can I do?
Some Python packages require binaries to be present in the environment in order to function properly, or include binaries that are not compatible with the Pipedream workflow environment.
Unfortunately we cannot support these types of packages at this time, but if you have an issue importing a PyPI package into a Python code step, [please open an issue](https://github.com/PipedreamHQ/pipedream/issues/new/choose).
### Can I publish my Python code as a reusable pre-built action or trigger like you can with Node.js?
Not at this time. Pipedream supports Python only as a code step language; the Components system currently supports Node.js only.
You can still duplicate Python code steps within the same workflow, but to reuse a code step, you’ll need to copy and paste the Python code to another workflow.
# Connecting Apps In Python
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/auth
When you use [prebuilt actions](/docs/components/contributing/#actions) tied to apps, you don’t need to write the code to authorize API requests. Just [connect your account](/docs/apps/connected-accounts/#connecting-accounts) for that app and run your workflow.
But sometimes you’ll need to [write your own code](/docs/workflows/building-workflows/code/python/). You can also connect apps to custom code steps, using the auth information to authorize requests to that app.
For example, you may want to send a Slack message from a step. We use Slack’s OAuth integration to authorize sending messages from your workflows.
Add Slack as an app on the Python step, then connect your Slack account.
Then within the Python code step, `pd.inputs["slack"]["$auth"]["oauth_access_token"]` will contain your Slack account OAuth token.
With that token, you can make authenticated API calls to Slack:
```python
from slack_sdk import WebClient
def handler(pd: "pipedream"):
    # Your Slack OAuth token is available under pd.inputs
    token = pd.inputs["slack"]["$auth"]["oauth_access_token"]
    # Instantiate a new Slack client with your token
    client = WebClient(token=token)
    # Use the client to send messages to Slack channels
    response = client.chat_postMessage(
        channel='#general',
        text='Hello from Pipedream!'
    )
    # Export the Slack response payload for use in future steps
    pd.export("response", response.data)
```
## Accessing connected account data with `pd.inputs[appName]["$auth"]`
In our Slack example above, we created a Slack `WebClient` using the Slack OAuth access token:
```python
# Instantiate a new Slack client with your token
client = WebClient(token=token)
```
Where did `pd.inputs["slack"]` come from? Good question. It was generated when we connected Slack to our Python step.
The Slack access token is generated by Pipedream, and is available to this step in the `pd.inputs[appName]["$auth"]` object:
```python
from slack_sdk import WebClient
def handler(pd: "pipedream"):
    token = pd.inputs["slack"]["$auth"]["oauth_access_token"]
    # Authentication details for all of your apps are accessible under the special pd.inputs["slack"] variable:
    print(pd.inputs["slack"]["$auth"])
```
`pd.inputs` contains named properties for each account you connect to the associated step. Here, we connected Slack, so `pd.inputs["slack"]["$auth"]` contains the Slack auth info (the `oauth_access_token`).
The names of the properties for each connected account will differ with the account. Pipedream typically exposes OAuth access tokens as `oauth_access_token`, and API keys under the property `api_key`. But if there’s a service-specific name for the tokens (for example, if the service calls it `server_token`), we prefer that name, instead.
To list the `$auth` properties available to you for a given app, just print the contents of the `$auth` property:
```python
print(pd.inputs["slack"]["$auth"]) # Replace "slack" with your app's name
```
and run your workflow. You’ll see the property names in the logs below your step.
### Using the code templates tied to apps
When you write custom code that connects to an app, you can start with a code snippet Pipedream provides for each app. This is called the **test request**.
When you search for an app in a step:
1. Click the **+** button below any step.
2. Search for the app you’re looking for and select it from the list.
3. Select the option to **Run Python with any \[app] API**.
This code operates as a template you can extend, and comes preconfigured with the connection to the target app and the code for authorizing requests to the API. You can modify this code however you’d like.
## Custom auth tokens / secrets
When you want to connect to a 3rd party service that isn’t supported by Pipedream, you can store those secrets in [Environment Variables](/docs/workflows/environment-variables/).
# Delaying a workflow
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/delay
export const DELAY_MIN_MAX_TIME = 'You can pause your workflow for as little as one millisecond, or as long as one year';
export const MAX_WORKFLOW_EXECUTION_LIMIT = '750';
Use `pd.flow.delay` to [delay a step in a workflow](/docs/workflows/building-workflows/control-flow/delay/).
These docs show you how to write Python code to handle delays. If you don’t need to write code, see [our built-in delay actions](/docs/workflows/building-workflows/control-flow/delay/#delay-actions).
## Using `pd.flow.delay`
`pd.flow.delay` takes one argument: the number of **milliseconds** you’d like to pause your workflow until the next step executes. {DELAY_MIN_MAX_TIME}.
Note that [delays happen at the end of the step where they’re called](/docs/workflows/building-workflows/code/python/delay/#when-delays-happen).
```python
import random
def handler(pd: 'pipedream'):
    # Delay a workflow for 60 seconds (60,000 ms)
    pd.flow.delay(60 * 1000)
    # Delay a workflow for 15 minutes
    pd.flow.delay(15 * 60 * 1000)
    # Delay a workflow based on the value of incoming event data,
    # or default to 60 seconds if that variable is undefined
    default = 60 * 1000
    delayMs = pd.steps["trigger"].get("event", {}).get("body", {}).get("delayMs", default)
    pd.flow.delay(delayMs)
    # Delay a workflow a random amount of time
    pd.flow.delay(random.randint(0, 999))
```
**Paused workflow state**
When `pd.flow.delay` is executed in a Python step, the workflow itself will enter a **Paused** state.
While the workflow is paused, it will not incur any credits towards compute time. You can also [view all paused workflows in the Event History](/docs/workflows/event-history/#filtering-by-status).
### Credit usage
The length of time a workflow is delayed from `pd.flow.delay` does *not* impact your credit usage. For example, delaying a 256 megabyte workflow for five minutes will **not** incur ten credits.
However, using `pd.flow.delay` in a workflow will incur two credits.
One credit is used to initially start the workflow, then the second credit is used when the workflow resumes after its pause period has ended.
**Exact credit usage depends on duration and memory configuration**
If your workflow’s [execution timeout limit](/docs/workflows/building-workflows/settings/#execution-timeout-limit) is set to longer than the [default limit](/docs/workflows/limits/#time-per-execution), it may incur more than two [credits](/docs/pricing/#credits-and-billing) when using `pd.flow.delay`.
## `cancel_url` and `resume_url`
Both the built-in **Delay** actions and `pd.flow.delay` return a `cancel_url` and `resume_url` that lets you cancel or resume paused executions.
These URLs are specific to a single execution of your workflow. While the workflow is paused, you can load these in your browser or send an HTTP request to either:
* Hitting the `cancel_url` will immediately cancel that execution
* Hitting the `resume_url` will immediately resume that execution early
[Since Pipedream pauses your workflow at the *end* of the step where you call `pd.flow.delay`](/docs/workflows/building-workflows/code/python/delay/#when-delays-happen), you can send these URLs to third party systems, via email, or anywhere else you’d like to control the execution of your workflow.
```python
import requests
def handler(pd: 'pipedream'):
    links = pd.flow.delay(15 * 60 * 1000)
    # links contains a dictionary with two entries: resume_url and cancel_url
    # Send the URLs to a system you own
    requests.post("https://example.com", json=links)
    # Email yourself the URLs. Click on the links to cancel / resume
    pd.send.email(
        subject=f"Workflow execution {pd.steps['trigger']['context']['id']}",
        text=f"Cancel your workflow here: {links['cancel_url']} . Resume early here: {links['resume_url']}",
        html=None
    )
    # Delay happens at the end of this step
```
In `pd.send.email`, the `html` argument defaults to `""`, so it overrides the email `text` unless explicitly set to `None`.
## When delays happen
**Pipedream pauses your workflow at the *end* of the step where you call `pd.flow.delay`**. This lets you send the `cancel_url` and `resume_url` to third-party systems.
```python
def handler(pd: 'pipedream'):
    urls = pd.flow.delay(15 * 60 * 1000)
    cancel_url, resume_url = urls["cancel_url"], urls["resume_url"]
    # ... run any code you want here
    # Delay happens at the end of this step
```
## Delays and HTTP responses
You cannot run `pd.respond` after running `pd.flow.delay`. Pipedream ends the original execution of the workflow when `pd.flow.delay` is called and issues the following response to the client to indicate this state:
> \$.respond() not called for this invocation
If you need to set a delay on an HTTP request triggered workflow, consider using [`time.sleep`](/docs/workflows/building-workflows/code/python/delay/#timesleep) instead.
## `time.sleep`
Alternatively, you can use `time.sleep` to delay individual workflow steps.
However, there are drawbacks to `time.sleep` compared to `pd.flow.delay`: the time spent sleeping counts toward your workflow’s compute time, for example:
```python
import time
def handler(pd: 'pipedream'):
    # delay this step for 30 seconds
    delay = 30
    time.sleep(delay)
```
The Python step above will hold the workflow’s execution for 30 seconds; however, those 30 seconds will also *count toward* your credit usage. Also consider that workflows have a hard limit of {MAX_WORKFLOW_EXECUTION_LIMIT} seconds.
# Making HTTP Requests With Python
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/http-requests
HTTP requests are fundamental to working with APIs or other web services. You can make HTTP requests to retrieve data from APIs, fetch HTML from websites, or do pretty much anything your web browser can do.
**Below, we’ll review how to make HTTP requests using Python code on Pipedream.**
We recommend using the popular `requests` HTTP client package available in Python to send HTTP requests, but [you can use any PyPI package you’d like on Pipedream](/docs/workflows/building-workflows/code/python/#using-third-party-packages).
## Basic `requests` usage notes
No need to run `pip install`; to use `requests` on Pipedream, just import the `requests` PyPI package at the top of your step’s code and it’s available for your code to use:
```py
import requests
```
You make HTTP requests by passing a URL and optional request parameters to one of [Requests’ 7 HTTP request methods](https://requests.readthedocs.io/en/latest/api/#main-interface).
**Here’s how to make a basic HTTP request on Pipedream:**
```py
r = requests.get('https://swapi.dev/api/films/')
```
The [Response](https://requests.readthedocs.io/en/latest/api/#requests.Response) object `r` contains a lot of information about the response: its content, headers, and more. Typically, you just care about the content, which you can access in the `text` property of the response:
```python
r = requests.get('https://swapi.dev/api/films/')
# HTTP response content is in the text property
r.text
```
Requests automatically decodes the content of the response based on its encoding, `r.encoding`, which is determined based on the HTTP headers.
If you’re dealing with JSON data, you can call `r.json()` to decode the content as JSON:
```python
r = requests.get('https://swapi.dev/api/films/')
# The json-encoded content of a response, if any
r.json()
```
If JSON decoding fails, `r.json()` raises an exception.
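If a response may not contain JSON, one defensive pattern is a small wrapper that falls back to a default. `JSONDecodeError` subclasses `ValueError`, so catching `ValueError` covers both older and newer versions of Requests:

```python
def safe_json(response, default=None):
    """Return the decoded JSON body, or `default` if decoding fails."""
    try:
        return response.json()
    except ValueError:
        return default
```

For example, `data = safe_json(r, default={})` yields an empty dict instead of raising on an HTML error page.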
## Making a `GET` request
GET requests are typically used to retrieve data from an API. Below is an example.
```python
import requests
def handler(pd: "pipedream"):
    url = "https://swapi.dev/api/people/1"
    r = requests.get(url)
    # The response is logged in your Pipedream step results:
    print(r.text)
    # The response status code is logged in your Pipedream step results:
    print(r.status_code)
```
## Making a `POST` request
```python
import requests
def handler(pd: "pipedream"):
    # A POST request to this URL will echo back whatever data we send to it
    url = "https://postman-echo.com/post"
    data = {"name": "Bulbasaur"}
    r = requests.post(url, data=data)
    # The response is logged in your Pipedream step results:
    print(r.text)
    # The response status code is logged in your Pipedream step results:
    print(r.status_code)
```
When you make a `POST` request, pass a dictionary with the data you’d like to send to the `data` argument. Requests automatically form-encodes the data when the request is made.
When you pass a dictionary via `data`, Requests form-encodes it and sets the `Content-Type` header to `application/x-www-form-urlencoded`, NOT `application/json`.
If you want the header set to `application/json` and don’t want to encode the `dict` yourself, you can pass it using the `json` parameter and it will be encoded automatically:
```python
url = "https://postman-echo.com/post"
data = {"name": "Bulbasaur"}
r = requests.post(url, json=data)
```
## Passing query string parameters to a `GET` request
Retrieve fake comment data on a specific post using [JSONPlaceholder](https://jsonplaceholder.typicode.com/), a free mock API service. Here, you fetch data from the `/comments` resource, retrieving data for a specific post by query string parameter: `/comments?postId=1`.
```python
import requests
def handler(pd: "pipedream"):
    url = "https://jsonplaceholder.typicode.com/comments"
    params = {"postId": 1}
    # Make an HTTP GET request using requests
    r = requests.get(url, params=params)
    # Retrieve the content of the response
    data = r.text
```
You should pass query string parameters as a dictionary using the `params` keyword argument, like above. When you do, `requests` automatically [URL-encodes](https://www.w3schools.com/tags/ref_urlencode.ASP) the parameters for you, which you’d otherwise have to do manually.
## Sending a request with HTTP headers
To add HTTP headers to a request, pass a dictionary to the `headers` parameter:
```python
import requests
import json
def handler(pd: "pipedream"):
    url = "https://jsonplaceholder.typicode.com/posts"
    headers = {"Content-Type": "application/json"}
    data = {"name": "Luke"}
    # Make an HTTP POST request using requests
    r = requests.post(url, headers=headers, data=json.dumps(data))
```
## Sending a request with a secret or API key
Most APIs require you authenticate HTTP requests with an API key or other token. **Please review the docs for your service to understand how they accept this data.**
Here’s an example showing an API key passed in an HTTP header:
```python
import requests
def handler(pd: "pipedream"):
    url = "https://jsonplaceholder.typicode.com/posts"
    headers = {"X-API-KEY": "123"} # API KEY
    data = {"name": "Luke"}
    # Make an HTTP POST request using requests
    r = requests.post(url, headers=headers, json=data)
```
## Sending files
An example of sending a file previously saved to the workflow’s `/tmp` directory:
```python
import requests
def handler(pd: "pipedream"):
    # Retrieving a previously saved file from workflow storage
    files = {"image": open("/tmp/python-logo.png", "rb")}
    r = requests.post(url="https://api.imgur.com/3/image", files=files)
```
## Downloading a file to the `/tmp` directory
This example shows you how to download a file to [the `/tmp` directory](/docs/workflows/building-workflows/code/python/working-with-files/). This can be especially helpful for downloading large files: it streams the file to disk, minimizing the memory the workflow uses when downloading the file.
```python
import requests
def handler(pd: "pipedream"):
    # Download the webpage HTML file to /tmp
    with requests.get("https://example.com", stream=True) as response:
        # Check if the request was successful
        response.raise_for_status()
        # Open the new file /tmp/file.html in binary write mode
        with open("/tmp/file.html", "wb") as file:
            for chunk in response.iter_content(chunk_size=8192):
                # Write the chunk to file
                file.write(chunk)
```
## Uploading a file from the `/tmp` directory
This example shows you how to make a `multipart/form-data` request with a file as a form part. You can store and read any files from [the `/tmp` directory](/docs/workflows/building-workflows/code/python/working-with-files/#the-tmp-directory).
This can be especially helpful for uploading large files: it streams the file from disk, minimizing the memory the workflow uses when uploading the file.
```python
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

def handler(pd: "pipedream"):
    m = MultipartEncoder(fields={
        "file": ("filename", open("/tmp/file.pdf", "rb"), "application/pdf")
    })
    r = requests.post("https://example.com", data=m,
                      headers={"Content-Type": m.content_type})
```
## IP addresses for HTTP requests made from Pipedream workflows
By default, [HTTP requests made from Pipedream can come from a large range of IP addresses](/docs/privacy-and-security/#hosting-details). **If you need to restrict the IP addresses HTTP requests come from, you can [Use a Pipedream VPC](/docs/workflows/vpc/) to route all outbound HTTP requests through a single IP address.**
## Using an HTTP proxy to proxy requests through another host
By default, HTTP requests made from Pipedream can come from a range of IP addresses. **If you need to make requests from a single IP address, you can route traffic through an HTTP proxy**:
```python
import requests

def handler(pd: "pipedream"):
    user = "USERNAME"  # Replace with your HTTP proxy username
    password = "PASSWORD"  # Replace with your HTTP proxy password
    host = "10.10.1.10"  # Replace with the host of the HTTP proxy
    port = 1080  # Replace with the port of the HTTP proxy
    proxies = {
        "https": f"http://{user}:{password}@{host}:{port}",
    }
    r = requests.request("GET", "https://example.com", proxies=proxies)
```
## Paginating API requests
When you fetch data from an API, the API may return records in “pages”. For example, if you’re trying to fetch a list of 1,000 records, the API might return those in groups of 100 items.
Different APIs paginate data in different ways. You’ll need to consult the docs of your API provider to see how they suggest you paginate through records.
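As one illustration, here's a sketch of cursor-based pagination. The endpoint, the `cursor` and `limit` parameters, and the `data`/`next_cursor` response fields are all hypothetical — substitute whatever your API actually uses:

```python
import requests

def fetch_all(url, fetch_page):
    """Collect all records from a cursor-paginated API.
    fetch_page(url, cursor) must return the parsed JSON page."""
    records, cursor = [], None
    while True:
        page = fetch_page(url, cursor)
        records.extend(page["data"])
        cursor = page.get("next_cursor")
        if not cursor:  # no more pages
            break
    return records

def handler(pd: "pipedream"):
    def fetch_page(url, cursor):
        # "cursor" and "limit" are placeholder parameter names
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        r = requests.get(url, params=params)
        r.raise_for_status()
        return r.json()
    # Placeholder endpoint — replace with your API's list endpoint
    return fetch_all("https://api.example.com/records", fetch_page)
```

Keeping the loop in a helper like `fetch_all` makes the pagination logic easy to test without making real HTTP requests.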
## Sending a GraphQL request
Construct a GraphQL query as a string, then use the requests library to send it to the GraphQL server:
```python
import requests

def handler(pd: "pipedream"):
    url = "https://beta.pokeapi.co/graphql/v1beta"
    query = """
    query samplePokeAPIquery {
      generations: pokemon_v2_generation {
        name
        pokemon_species: pokemon_v2_pokemonspecies_aggregate {
          aggregate {
            count
          }
        }
      }
    }
    """
    r = requests.post(url, json={"query": query})
    return r.json()
```
### Sending an authenticated GraphQL request
Authenticate GraphQL requests with your connected accounts in Pipedream using `pd.inputs[appName]["$auth"]`:
```python
import requests

def handler(pd: "pipedream"):
    url = "https://api.github.com/graphql"
    query = """
    query {
      viewer {
        login
      }
    }
    """
    token = pd.inputs["github"]["$auth"]["oauth_access_token"]
    headers = {"authorization": f"Bearer {token}"}
    r = requests.post(url, json={"query": query}, headers=headers)
    return r.json()
```
Alternatively, you can use environment variables for simple API-key-based GraphQL APIs.
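For instance, here's a minimal sketch reading a token from an environment variable instead of a connected account. The variable name `GITHUB_API_KEY` is hypothetical — define your own in your Pipedream environment variable settings:

```python
import os
import requests

def auth_header(token):
    # Build the Authorization header for a bearer token
    return {"authorization": f"Bearer {token}"}

def handler(pd: "pipedream"):
    url = "https://api.github.com/graphql"
    query = """
    query {
      viewer {
        login
      }
    }
    """
    # GITHUB_API_KEY is a hypothetical variable name — set your own
    # in Pipedream's environment variable settings
    r = requests.post(url, json={"query": query},
                      headers=auth_header(os.environ["GITHUB_API_KEY"]))
    return r.json()
```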
# Use PyPI packages with differing import names
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/import-mappings
When a Python package’s name matches its import name, you can use it in your Python code steps without any special configuration.
But some package names do not match their import names. Use [the `add-package` comment](/docs/workflows/building-workflows/code/python/import-mappings/#using-magic-comments) to work with these packages.
## Using Magic Comments
When a package’s name doesn’t match the import name, you can install the package with the `add-package` comment above your imports.
For example, the `google-cloud-bigquery` package is imported as `google.cloud.bigquery`, which doesn't match the package name, but you can still use it in your Python code steps in workflows:
```python
# pipedream add-package google-cloud-bigquery
from google.cloud import bigquery
```
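The magic comment is just a plain Python comment that Pipedream scans before running your step. As an illustration of the convention (not Pipedream's actual implementation), a sketch of how such comments could be extracted from a step's source:

```python
import re

def find_magic_packages(source):
    # Collect package names declared via "# pipedream add-package <name>"
    return re.findall(r"^#\s*pipedream add-package\s+(\S+)", source, re.MULTILINE)

step = """# pipedream add-package google-cloud-bigquery
from google.cloud import bigquery
"""
```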
## Search mappings
Search by package name, and you’ll find both the `add-package` comment and the `import` statement you’ll need to use for the package.
| PyPI Package Name | Import into Pipedream with |
| --------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------ |
| [# pipedream add-package ShopifyAPI](https://pypi.org/project/ShopifyAPI) | import shopify |
| [# pipedream add-package google-cloud-bigquery](https://pypi.org/project/google-cloud-bigquery) | import bigquery |
| [# pipedream add-package carbon3d-client](https://pypi.org/project/carbon3d-client) | import carbon3d |
| [# pipedream add-package python-telegram-bot](https://pypi.org/project/python-telegram-bot) | import telegram |
| [# pipedream add-package pyAFQ](https://pypi.org/project/pyAFQ) | import AFQ |
| [# pipedream add-package agpy](https://pypi.org/project/agpy) | import AG\_fft\_tools |
| [# pipedream add-package pexpect](https://pypi.org/project/pexpect) | import screen |
| [# pipedream add-package Adafruit\_Libraries](https://pypi.org/project/Adafruit_Libraries) | import Adafruit |
| [# pipedream add-package Zope2](https://pypi.org/project/Zope2) | import webdav |
| [# pipedream add-package py\_Asterisk](https://pypi.org/project/py_Asterisk) | import Asterisk |
| [# pipedream add-package bitbucket\_jekyll\_hook](https://pypi.org/project/bitbucket_jekyll_hook) | import BB\_jekyll\_hook |
| [# pipedream add-package Banzai\_NGS](https://pypi.org/project/Banzai_NGS) | import Banzai |
| [# pipedream add-package BeautifulSoup](https://pypi.org/project/BeautifulSoup) | import BeautifulSoupTests |
| [# pipedream add-package biopython](https://pypi.org/project/biopython) | import BioSQL |
| [# pipedream add-package BuildbotEightStatusShields](https://pypi.org/project/BuildbotEightStatusShields) | import BuildbotStatusShields |
| [# pipedream add-package ExtensionClass](https://pypi.org/project/ExtensionClass) | import MethodObject |
| [# pipedream add-package pycryptodome](https://pypi.org/project/pycryptodome) | import Crypto |
| [# pipedream add-package pycryptodomex](https://pypi.org/project/pycryptodomex) | import Cryptodome |
| [# pipedream add-package 51degrees\_mobile\_detector\_v3\_wrapper](https://pypi.org/project/51degrees_mobile_detector_v3_wrapper) | import FiftyOneDegrees |
| [# pipedream add-package pyfunctional](https://pypi.org/project/pyfunctional) | import functional |
| [# pipedream add-package GeoBasesDev](https://pypi.org/project/GeoBasesDev) | import GeoBases |
| [# pipedream add-package ipython](https://pypi.org/project/ipython) | import IPython |
| [# pipedream add-package astro\_kittens](https://pypi.org/project/astro_kittens) | import Kittens |
| [# pipedream add-package python\_Levenshtein](https://pypi.org/project/python_Levenshtein) | import Levenshtein |
| [# pipedream add-package MySQL-python](https://pypi.org/project/MySQL-python) | import MySQLdb |
| [# pipedream add-package PyOpenGL](https://pypi.org/project/PyOpenGL) | import OpenGL |
| [# pipedream add-package pyOpenSSL](https://pypi.org/project/pyOpenSSL) | import OpenSSL |
| [# pipedream add-package Pillow](https://pypi.org/project/Pillow) | import PIL |
| [# pipedream add-package astLib](https://pypi.org/project/astLib) | import PyWCSTools |
| [# pipedream add-package astro\_pyxis](https://pypi.org/project/astro_pyxis) | import Pyxides |
| [# pipedream add-package PySide](https://pypi.org/project/PySide) | import pysideuic |
| [# pipedream add-package s3cmd](https://pypi.org/project/s3cmd) | import S3 |
| [# pipedream add-package pystick](https://pypi.org/project/pystick) | import SCons |
| [# pipedream add-package PyStemmer](https://pypi.org/project/PyStemmer) | import Stemmer |
| [# pipedream add-package topzootools](https://pypi.org/project/topzootools) | import TopZooTools |
| [# pipedream add-package DocumentTemplate](https://pypi.org/project/DocumentTemplate) | import TreeDisplay |
| [# pipedream add-package aspose\_pdf\_java\_for\_python](https://pypi.org/project/aspose_pdf_java_for_python) | import WorkingWithDocumentConversion |
| [# pipedream add-package auto\_adjust\_display\_brightness](https://pypi.org/project/auto_adjust_display_brightness) | import aadb |
| [# pipedream add-package abakaffe\_cli](https://pypi.org/project/abakaffe_cli) | import abakaffe |
| [# pipedream add-package abiosgaming.py](https://pypi.org/project/abiosgaming.py) | import abiosgaming |
| [# pipedream add-package abiquo\_api](https://pypi.org/project/abiquo_api) | import abiquo |
| [# pipedream add-package abl.cssprocessor](https://pypi.org/project/abl.cssprocessor) | import abl |
| [# pipedream add-package abl.robot](https://pypi.org/project/abl.robot) | import abl |
| [# pipedream add-package abl.util](https://pypi.org/project/abl.util) | import abl |
| [# pipedream add-package abl.vpath](https://pypi.org/project/abl.vpath) | import abl |
| [# pipedream add-package abo\_generator](https://pypi.org/project/abo_generator) | import abo |
| [# pipedream add-package abris](https://pypi.org/project/abris) | import abris\_transform |
| [# pipedream add-package abstract.jwrotator](https://pypi.org/project/abstract.jwrotator) | import abstract |
| [# pipedream add-package abu.admin](https://pypi.org/project/abu.admin) | import abu |
| [# pipedream add-package AC\_Flask\_HipChat](https://pypi.org/project/AC_Flask_HipChat) | import ac\_flask |
| [# pipedream add-package anikom15](https://pypi.org/project/anikom15) | import acg |
| [# pipedream add-package acme.dchat](https://pypi.org/project/acme.dchat) | import acme |
| [# pipedream add-package acme.hello](https://pypi.org/project/acme.hello) | import acme |
| [# pipedream add-package acted.projects](https://pypi.org/project/acted.projects) | import acted |
| [# pipedream add-package ActionServer](https://pypi.org/project/ActionServer) | import action |
| [# pipedream add-package actionbar.panel](https://pypi.org/project/actionbar.panel) | import actionbar |
| [# pipedream add-package afn](https://pypi.org/project/afn) | import activehomed |
| [# pipedream add-package ActivePapers.Py](https://pypi.org/project/ActivePapers.Py) | import activepapers |
| [# pipedream add-package address\_book\_lansry](https://pypi.org/project/address_book_lansry) | import address\_book |
| [# pipedream add-package adi.commons](https://pypi.org/project/adi.commons) | import adi |
| [# pipedream add-package adi.devgen](https://pypi.org/project/adi.devgen) | import adi |
| [# pipedream add-package adi.fullscreen](https://pypi.org/project/adi.fullscreen) | import adi |
| [# pipedream add-package adi.init](https://pypi.org/project/adi.init) | import adi |
| [# pipedream add-package adi.playlist](https://pypi.org/project/adi.playlist) | import adi |
| [# pipedream add-package adi.samplecontent](https://pypi.org/project/adi.samplecontent) | import adi |
| [# pipedream add-package adi.slickstyle](https://pypi.org/project/adi.slickstyle) | import adi |
| [# pipedream add-package adi.suite](https://pypi.org/project/adi.suite) | import adi |
| [# pipedream add-package adi.trash](https://pypi.org/project/adi.trash) | import adi |
| [# pipedream add-package aDict2](https://pypi.org/project/aDict2) | import adict |
| [# pipedream add-package aditam.agent](https://pypi.org/project/aditam.agent) | import aditam |
| [# pipedream add-package aditam.core](https://pypi.org/project/aditam.core) | import aditam |
| [# pipedream add-package adium\_sh](https://pypi.org/project/adium_sh) | import adiumsh |
| [# pipedream add-package AdjectorClient](https://pypi.org/project/AdjectorClient) | import adjector |
| [# pipedream add-package AdjectorTracPlugin](https://pypi.org/project/AdjectorTracPlugin) | import adjector |
| [# pipedream add-package Banner\_Ad\_Toolkit](https://pypi.org/project/Banner_Ad_Toolkit) | import adkit |
| [# pipedream add-package django\_admin\_tools](https://pypi.org/project/django_admin_tools) | import admin\_tools |
| [# pipedream add-package adminish\_categories](https://pypi.org/project/adminish_categories) | import adminishcategories |
| [# pipedream add-package django\_admin\_sortable](https://pypi.org/project/django_admin_sortable) | import adminsortable |
| [# pipedream add-package adspygoogle.adwords](https://pypi.org/project/adspygoogle.adwords) | import adspygoogle |
| [# pipedream add-package agtl](https://pypi.org/project/agtl) | import advancedcaching |
| [# pipedream add-package Adytum\_PyMonitor](https://pypi.org/project/Adytum_PyMonitor) | import adytum |
| [# pipedream add-package affinitic.docpyflakes](https://pypi.org/project/affinitic.docpyflakes) | import affinitic |
| [# pipedream add-package affinitic.recipe.fakezope2eggs](https://pypi.org/project/affinitic.recipe.fakezope2eggs) | import affinitic |
| [# pipedream add-package affinitic.simplecookiecuttr](https://pypi.org/project/affinitic.simplecookiecuttr) | import affinitic |
| [# pipedream add-package affinitic.verifyinterface](https://pypi.org/project/affinitic.verifyinterface) | import affinitic |
| [# pipedream add-package affinitic.zamqp](https://pypi.org/project/affinitic.zamqp) | import affinitic |
| [# pipedream add-package afpy.xap](https://pypi.org/project/afpy.xap) | import afpy |
| [# pipedream add-package agate\_sql](https://pypi.org/project/agate_sql) | import agatesql |
| [# pipedream add-package ageliaco.recipe.csvconfig](https://pypi.org/project/ageliaco.recipe.csvconfig) | import ageliaco |
| [# pipedream add-package agent.http](https://pypi.org/project/agent.http) | import agent\_http |
| [# pipedream add-package Agora\_Client](https://pypi.org/project/Agora_Client) | import agora |
| [# pipedream add-package Agora\_Fountain](https://pypi.org/project/Agora_Fountain) | import agora |
| [# pipedream add-package Agora\_Fragment](https://pypi.org/project/Agora_Fragment) | import agora |
| [# pipedream add-package Agora\_Planner](https://pypi.org/project/Agora_Planner) | import agora |
| [# pipedream add-package Agora\_Service\_Provider](https://pypi.org/project/Agora_Service_Provider) | import agora |
| [# pipedream add-package agoraplex.themes.sphinx](https://pypi.org/project/agoraplex.themes.sphinx) | import agoraplex |
| [# pipedream add-package agsci.blognewsletter](https://pypi.org/project/agsci.blognewsletter) | import agsci |
| [# pipedream add-package agx.core](https://pypi.org/project/agx.core) | import agx |
| [# pipedream add-package agx.dev](https://pypi.org/project/agx.dev) | import agx |
| [# pipedream add-package agx.generator.buildout](https://pypi.org/project/agx.generator.buildout) | import agx |
| [# pipedream add-package agx.generator.dexterity](https://pypi.org/project/agx.generator.dexterity) | import agx |
| [# pipedream add-package agx.generator.generator](https://pypi.org/project/agx.generator.generator) | import agx |
| [# pipedream add-package agx.generator.plone](https://pypi.org/project/agx.generator.plone) | import agx |
| [# pipedream add-package agx.generator.pyegg](https://pypi.org/project/agx.generator.pyegg) | import agx |
| [# pipedream add-package agx.generator.sql](https://pypi.org/project/agx.generator.sql) | import agx |
| [# pipedream add-package agx.generator.uml](https://pypi.org/project/agx.generator.uml) | import agx |
| [# pipedream add-package agx.generator.zca](https://pypi.org/project/agx.generator.zca) | import agx |
| [# pipedream add-package agx.transform.uml2fs](https://pypi.org/project/agx.transform.uml2fs) | import agx |
| [# pipedream add-package agx.transform.xmi2uml](https://pypi.org/project/agx.transform.xmi2uml) | import agx |
| [# pipedream add-package aimes.bundle](https://pypi.org/project/aimes.bundle) | import aimes |
| [# pipedream add-package aimes.skeleton](https://pypi.org/project/aimes.skeleton) | import aimes |
| [# pipedream add-package aio.app](https://pypi.org/project/aio.app) | import aio |
| [# pipedream add-package aio.config](https://pypi.org/project/aio.config) | import aio |
| [# pipedream add-package aio.core](https://pypi.org/project/aio.core) | import aio |
| [# pipedream add-package aio.signals](https://pypi.org/project/aio.signals) | import aio |
| [# pipedream add-package aio\_hs2](https://pypi.org/project/aio_hs2) | import aiohs2 |
| [# pipedream add-package aio\_routes](https://pypi.org/project/aio_routes) | import aioroutes |
| [# pipedream add-package aio\_s3](https://pypi.org/project/aio_s3) | import aios3 |
| [# pipedream add-package airbrake\_flask](https://pypi.org/project/airbrake_flask) | import airbrake |
| [# pipedream add-package airship\_icloud](https://pypi.org/project/airship_icloud) | import airship |
| [# pipedream add-package airship\_steamcloud](https://pypi.org/project/airship_steamcloud) | import airship |
| [# pipedream add-package edgegrid\_python](https://pypi.org/project/edgegrid_python) | import akamai |
| [# pipedream add-package alation\_api](https://pypi.org/project/alation_api) | import alation |
| [# pipedream add-package alba\_client\_python](https://pypi.org/project/alba_client_python) | import alba\_client |
| [# pipedream add-package alburnum\_maas\_client](https://pypi.org/project/alburnum_maas_client) | import alburnum |
| [# pipedream add-package alchemist.audit](https://pypi.org/project/alchemist.audit) | import alchemist |
| [# pipedream add-package alchemist.security](https://pypi.org/project/alchemist.security) | import alchemist |
| [# pipedream add-package alchemist.traversal](https://pypi.org/project/alchemist.traversal) | import alchemist |
| [# pipedream add-package alchemist.ui](https://pypi.org/project/alchemist.ui) | import alchemist |
| [# pipedream add-package alchemyapi\_python](https://pypi.org/project/alchemyapi_python) | import alchemyapi |
| [# pipedream add-package alerta\_server](https://pypi.org/project/alerta_server) | import alerta |
| [# pipedream add-package Alexandria\_Upload\_Utils](https://pypi.org/project/Alexandria_Upload_Utils) | import alexandria\_upload |
| [# pipedream add-package alibaba\_python\_sdk](https://pypi.org/project/alibaba_python_sdk) | import alibaba |
| [# pipedream add-package aliyun\_python\_sdk](https://pypi.org/project/aliyun_python_sdk) | import aliyun |
| [# pipedream add-package alicloudcli](https://pypi.org/project/alicloudcli) | import aliyuncli |
| [# pipedream add-package aliyun\_python\_sdk\_acs](https://pypi.org/project/aliyun_python_sdk_acs) | import aliyunsdkacs |
| [# pipedream add-package aliyun\_python\_sdk\_batchcompute](https://pypi.org/project/aliyun_python_sdk_batchcompute) | import aliyunsdkbatchcompute |
| [# pipedream add-package aliyun\_python\_sdk\_bsn](https://pypi.org/project/aliyun_python_sdk_bsn) | import aliyunsdkbsn |
| [# pipedream add-package aliyun\_python\_sdk\_bss](https://pypi.org/project/aliyun_python_sdk_bss) | import aliyunsdkbss |
| [# pipedream add-package aliyun\_python\_sdk\_cdn](https://pypi.org/project/aliyun_python_sdk_cdn) | import aliyunsdkcdn |
| [# pipedream add-package aliyun\_python\_sdk\_cms](https://pypi.org/project/aliyun_python_sdk_cms) | import aliyunsdkcms |
| [# pipedream add-package aliyun\_python\_sdk\_core](https://pypi.org/project/aliyun_python_sdk_core) | import aliyunsdkcore |
| [# pipedream add-package aliyun\_python\_sdk\_crm](https://pypi.org/project/aliyun_python_sdk_crm) | import aliyunsdkcrm |
| [# pipedream add-package aliyun\_python\_sdk\_cs](https://pypi.org/project/aliyun_python_sdk_cs) | import aliyunsdkcs |
| [# pipedream add-package aliyun\_python\_sdk\_drds](https://pypi.org/project/aliyun_python_sdk_drds) | import aliyunsdkdrds |
| [# pipedream add-package aliyun\_python\_sdk\_ecs](https://pypi.org/project/aliyun_python_sdk_ecs) | import aliyunsdkecs |
| [# pipedream add-package aliyun\_python\_sdk\_ess](https://pypi.org/project/aliyun_python_sdk_ess) | import aliyunsdkess |
| [# pipedream add-package aliyun\_python\_sdk\_ft](https://pypi.org/project/aliyun_python_sdk_ft) | import aliyunsdkft |
| [# pipedream add-package aliyun\_python\_sdk\_mts](https://pypi.org/project/aliyun_python_sdk_mts) | import aliyunsdkmts |
| [# pipedream add-package aliyun\_python\_sdk\_ocs](https://pypi.org/project/aliyun_python_sdk_ocs) | import aliyunsdkocs |
| [# pipedream add-package aliyun\_python\_sdk\_oms](https://pypi.org/project/aliyun_python_sdk_oms) | import aliyunsdkoms |
| [# pipedream add-package aliyun\_python\_sdk\_ossadmin](https://pypi.org/project/aliyun_python_sdk_ossadmin) | import aliyunsdkossadmin |
| [# pipedream add-package aliyun\_python\_sdk\_r\_kvstore](https://pypi.org/project/aliyun_python_sdk_r_kvstore) | import aliyunsdkr-kvstore |
| [# pipedream add-package aliyun\_python\_sdk\_ram](https://pypi.org/project/aliyun_python_sdk_ram) | import aliyunsdkram |
| [# pipedream add-package aliyun\_python\_sdk\_rds](https://pypi.org/project/aliyun_python_sdk_rds) | import aliyunsdkrds |
| [# pipedream add-package aliyun\_python\_sdk\_risk](https://pypi.org/project/aliyun_python_sdk_risk) | import aliyunsdkrisk |
| [# pipedream add-package aliyun\_python\_sdk\_ros](https://pypi.org/project/aliyun_python_sdk_ros) | import aliyunsdkros |
| [# pipedream add-package aliyun\_python\_sdk\_slb](https://pypi.org/project/aliyun_python_sdk_slb) | import aliyunsdkslb |
| [# pipedream add-package aliyun\_python\_sdk\_sts](https://pypi.org/project/aliyun_python_sdk_sts) | import aliyunsdksts |
| [# pipedream add-package aliyun\_python\_sdk\_ubsms](https://pypi.org/project/aliyun_python_sdk_ubsms) | import aliyunsdkubsms |
| [# pipedream add-package aliyun\_python\_sdk\_yundun](https://pypi.org/project/aliyun_python_sdk_yundun) | import aliyunsdkyundun |
| [# pipedream add-package AllAttachmentsMacro](https://pypi.org/project/AllAttachmentsMacro) | import allattachments |
| [# pipedream add-package allocine\_wrapper](https://pypi.org/project/allocine_wrapper) | import allocine |
| [# pipedream add-package django\_allowedsites](https://pypi.org/project/django_allowedsites) | import allowedsites |
| [# pipedream add-package alm.solrindex](https://pypi.org/project/alm.solrindex) | import alm |
| [# pipedream add-package aloft.py](https://pypi.org/project/aloft.py) | import aloft |
| [# pipedream add-package alpaca](https://pypi.org/project/alpaca) | import alpacalib |
| [# pipedream add-package alphabetic\_simple](https://pypi.org/project/alphabetic_simple) | import alphabetic |
| [# pipedream add-package alphasms\_client](https://pypi.org/project/alphasms_client) | import alphasms |
| [# pipedream add-package altered.states](https://pypi.org/project/altered.states) | import altered |
| [# pipedream add-package alterootheme.busycity](https://pypi.org/project/alterootheme.busycity) | import alterootheme |
| [# pipedream add-package alterootheme.intensesimplicity](https://pypi.org/project/alterootheme.intensesimplicity) | import alterootheme |
| [# pipedream add-package alterootheme.lazydays](https://pypi.org/project/alterootheme.lazydays) | import alterootheme |
| [# pipedream add-package alurinium\_image\_processing](https://pypi.org/project/alurinium_image_processing) | import alurinium |
| [# pipedream add-package alx](https://pypi.org/project/alx) | import alxlib |
| [# pipedream add-package amara3\_iri](https://pypi.org/project/amara3_iri) | import amara3 |
| [# pipedream add-package amara3\_xml](https://pypi.org/project/amara3_xml) | import amara3 |
| [# pipedream add-package AmazonAPIWrapper](https://pypi.org/project/AmazonAPIWrapper) | import amazon |
| [# pipedream add-package python\_amazon\_simple\_product\_api](https://pypi.org/project/python_amazon_simple_product_api) | import amazon |
| [# pipedream add-package ambikesh1349\_1](https://pypi.org/project/ambikesh1349_1) | import ambikesh1349-1 |
| [# pipedream add-package AmbilightParty](https://pypi.org/project/AmbilightParty) | import ambilight |
| [# pipedream add-package amifs\_core](https://pypi.org/project/amifs_core) | import amifs |
| [# pipedream add-package ami\_organizer](https://pypi.org/project/ami_organizer) | import amiorganizer |
| [# pipedream add-package amitu.lipy](https://pypi.org/project/amitu.lipy) | import amitu |
| [# pipedream add-package amitu\_putils](https://pypi.org/project/amitu_putils) | import amitu |
| [# pipedream add-package amitu\_websocket\_client](https://pypi.org/project/amitu_websocket_client) | import amitu |
| [# pipedream add-package amitu\_zutils](https://pypi.org/project/amitu_zutils) | import amitu |
| [# pipedream add-package AMLT\_learn](https://pypi.org/project/AMLT_learn) | import amltlearn |
| [# pipedream add-package amocrm\_api](https://pypi.org/project/amocrm_api) | import amocrm |
| [# pipedream add-package amqp\_dispatcher](https://pypi.org/project/amqp_dispatcher) | import amqpdispatcher |
| [# pipedream add-package AMQP\_Storm](https://pypi.org/project/AMQP_Storm) | import amqpstorm |
| [# pipedream add-package analytics\_python](https://pypi.org/project/analytics_python) | import analytics |
| [# pipedream add-package AnalyzeDirectory](https://pypi.org/project/AnalyzeDirectory) | import analyzedir |
| [# pipedream add-package ancientsolutions\_crypttools](https://pypi.org/project/ancientsolutions_crypttools) | import ancientsolutions |
| [# pipedream add-package anderson.paginator](https://pypi.org/project/anderson.paginator) | import anderson\_paginator |
| [# pipedream add-package android\_resource\_remover](https://pypi.org/project/android_resource_remover) | import android\_clean\_app |
| [# pipedream add-package AnelPowerControl](https://pypi.org/project/AnelPowerControl) | import anel\_power\_control |
| [# pipedream add-package angus\_sdk\_python](https://pypi.org/project/angus_sdk_python) | import angus |
| [# pipedream add-package Annalist](https://pypi.org/project/Annalist) | import annalist\_root |
| [# pipedream add-package ANNOgesic](https://pypi.org/project/ANNOgesic) | import annogesiclib |
| [# pipedream add-package ansible\_role\_apply](https://pypi.org/project/ansible_role_apply) | import ansible-role-apply |
| [# pipedream add-package ansible\_playbook\_debugger](https://pypi.org/project/ansible_playbook_debugger) | import ansibledebugger |
| [# pipedream add-package ansible\_docgen](https://pypi.org/project/ansible_docgen) | import ansibledocgen |
| [# pipedream add-package ansible\_flow](https://pypi.org/project/ansible_flow) | import ansibleflow |
| [# pipedream add-package ansible\_inventory\_grapher](https://pypi.org/project/ansible_inventory_grapher) | import ansibleinventorygrapher |
| [# pipedream add-package ansible\_lint](https://pypi.org/project/ansible_lint) | import ansiblelint |
| [# pipedream add-package ansible\_roles\_graph](https://pypi.org/project/ansible_roles_graph) | import ansiblerolesgraph |
| [# pipedream add-package ansible\_tools](https://pypi.org/project/ansible_tools) | import ansibletools |
| [# pipedream add-package anthill.exampletheme](https://pypi.org/project/anthill.exampletheme) | import anthill |
| [# pipedream add-package anthill.skinner](https://pypi.org/project/anthill.skinner) | import anthill |
| [# pipedream add-package anthill.tal.macrorenderer](https://pypi.org/project/anthill.tal.macrorenderer) | import anthill |
| [# pipedream add-package AnthraxDojoFrontend](https://pypi.org/project/AnthraxDojoFrontend) | import anthrax |
| [# pipedream add-package AnthraxHTMLInput](https://pypi.org/project/AnthraxHTMLInput) | import anthrax |
| [# pipedream add-package AnthraxImage](https://pypi.org/project/AnthraxImage) | import anthrax |
| [# pipedream add-package antiweb](https://pypi.org/project/antiweb) | import antisphinx |
| [# pipedream add-package antispoofing.evaluation](https://pypi.org/project/antispoofing.evaluation) | import antispoofing |
| [# pipedream add-package antlr4\_python2\_runtime](https://pypi.org/project/antlr4_python2_runtime) | import antlr4 |
| [# pipedream add-package antlr4\_python3\_runtime](https://pypi.org/project/antlr4_python3_runtime) | import antlr4 |
| [# pipedream add-package antlr4\_python\_alt](https://pypi.org/project/antlr4_python_alt) | import antlr4 |
| [# pipedream add-package anybox.buildbot.openerp](https://pypi.org/project/anybox.buildbot.openerp) | import anybox |
| [# pipedream add-package anybox.nose.odoo](https://pypi.org/project/anybox.nose.odoo) | import anybox |
| [# pipedream add-package anybox.paster.odoo](https://pypi.org/project/anybox.paster.odoo) | import anybox |
| [# pipedream add-package anybox.paster.openerp](https://pypi.org/project/anybox.paster.openerp) | import anybox |
| [# pipedream add-package anybox.recipe.sysdeps](https://pypi.org/project/anybox.recipe.sysdeps) | import anybox |
| [# pipedream add-package anybox.scripts.odoo](https://pypi.org/project/anybox.scripts.odoo) | import anybox |
| [# pipedream add-package google\_api\_python\_client](https://pypi.org/project/google_api_python_client) | import googleapiclient |
| [# pipedream add-package google\_apitools](https://pypi.org/project/google_apitools) | import apitools |
| [# pipedream add-package arpm](https://pypi.org/project/arpm) | import apm |
| [# pipedream add-package django\_appdata](https://pypi.org/project/django_appdata) | import app\_data |
| [# pipedream add-package django\_appconf](https://pypi.org/project/django_appconf) | import appconf |
| [# pipedream add-package AppDynamicsDownloader](https://pypi.org/project/AppDynamicsDownloader) | import appd |
| [# pipedream add-package AppDynamicsREST](https://pypi.org/project/AppDynamicsREST) | import appd |
| [# pipedream add-package appdynamics\_bindeps\_linux\_x64](https://pypi.org/project/appdynamics_bindeps_linux_x64) | import appdynamics\_bindeps |
| [# pipedream add-package appdynamics\_bindeps\_linux\_x86](https://pypi.org/project/appdynamics_bindeps_linux_x86) | import appdynamics\_bindeps |
| [# pipedream add-package appdynamics\_bindeps\_osx\_x64](https://pypi.org/project/appdynamics_bindeps_osx_x64) | import appdynamics\_bindeps |
| [# pipedream add-package appdynamics\_proxysupport\_linux\_x64](https://pypi.org/project/appdynamics_proxysupport_linux_x64) | import appdynamics\_proxysupport |
| [# pipedream add-package appdynamics\_proxysupport\_linux\_x86](https://pypi.org/project/appdynamics_proxysupport_linux_x86) | import appdynamics\_proxysupport |
| [# pipedream add-package appdynamics\_proxysupport\_osx\_x64](https://pypi.org/project/appdynamics_proxysupport_osx_x64) | import appdynamics\_proxysupport |
| [# pipedream add-package Appium\_Python\_Client](https://pypi.org/project/Appium_Python_Client) | import appium |
| [# pipedream add-package applibase](https://pypi.org/project/applibase) | import appliapps |
| [# pipedream add-package broadwick](https://pypi.org/project/broadwick) | import appserver |
| [# pipedream add-package archetypes.kss](https://pypi.org/project/archetypes.kss) | import archetypes |
| [# pipedream add-package archetypes.multilingual](https://pypi.org/project/archetypes.multilingual) | import archetypes |
| [# pipedream add-package archetypes.schemaextender](https://pypi.org/project/archetypes.schemaextender) | import archetypes |
| [# pipedream add-package ansible\_role\_manager](https://pypi.org/project/ansible_role_manager) | import arm |
| [# pipedream add-package armor\_api](https://pypi.org/project/armor_api) | import armor |
| [# pipedream add-package armstrong.apps.related\_content](https://pypi.org/project/armstrong.apps.related_content) | import armstrong |
| [# pipedream add-package armstrong.apps.series](https://pypi.org/project/armstrong.apps.series) | import armstrong |
| [# pipedream add-package armstrong.cli](https://pypi.org/project/armstrong.cli) | import armstrong |
| [# pipedream add-package armstrong.core.arm\_access](https://pypi.org/project/armstrong.core.arm_access) | import armstrong |
| [# pipedream add-package armstrong.core.arm\_layout](https://pypi.org/project/armstrong.core.arm_layout) | import armstrong |
| [# pipedream add-package armstrong.core.arm\_sections](https://pypi.org/project/armstrong.core.arm_sections) | import armstrong |
| [# pipedream add-package armstrong.core.arm\_wells](https://pypi.org/project/armstrong.core.arm_wells) | import armstrong |
| [# pipedream add-package armstrong.dev](https://pypi.org/project/armstrong.dev) | import armstrong |
| [# pipedream add-package armstrong.esi](https://pypi.org/project/armstrong.esi) | import armstrong |
| [# pipedream add-package armstrong.hatband](https://pypi.org/project/armstrong.hatband) | import armstrong |
| [# pipedream add-package armstrong.templates.standard](https://pypi.org/project/armstrong.templates.standard) | import armstrong |
| [# pipedream add-package armstrong.utils.backends](https://pypi.org/project/armstrong.utils.backends) | import armstrong |
| [# pipedream add-package armstrong.utils.celery](https://pypi.org/project/armstrong.utils.celery) | import armstrong |
| [# pipedream add-package arstecnica.raccoon.autobahn](https://pypi.org/project/arstecnica.raccoon.autobahn) | import arstecnica |
| [# pipedream add-package arstecnica.sqlalchemy.async](https://pypi.org/project/arstecnica.sqlalchemy.async) | import arstecnica |
| [# pipedream add-package article\_downloader](https://pypi.org/project/article_downloader) | import article-downloader |
| [# pipedream add-package artifact\_cli](https://pypi.org/project/artifact_cli) | import artifactcli |
| [# pipedream add-package arvados\_python\_client](https://pypi.org/project/arvados_python_client) | import arvados |
| [# pipedream add-package arvados\_cwl\_runner](https://pypi.org/project/arvados_cwl_runner) | import arvados\_cwl |
| [# pipedream add-package arvados\_node\_manager](https://pypi.org/project/arvados_node_manager) | import arvnodeman |
| [# pipedream add-package AsanaToGithub](https://pypi.org/project/AsanaToGithub) | import asana\_to\_github |
| [# pipedream add-package AsciiBinaryConverter](https://pypi.org/project/AsciiBinaryConverter) | import asciibinary |
| [# pipedream add-package AdvancedSearchDiscovery](https://pypi.org/project/AdvancedSearchDiscovery) | import asd |
| [# pipedream add-package askbot\_tuan](https://pypi.org/project/askbot_tuan) | import askbot |
| [# pipedream add-package askbot\_tuanpa](https://pypi.org/project/askbot_tuanpa) | import askbot |
| [# pipedream add-package asnhistory\_redis](https://pypi.org/project/asnhistory_redis) | import asnhistory |
| [# pipedream add-package aspen\_jinja2](https://pypi.org/project/aspen_jinja2) | import aspen\_jinja2\_renderer |
| [# pipedream add-package aspen\_tornado](https://pypi.org/project/aspen_tornado) | import aspen\_tornado\_engine |
| [# pipedream add-package asprise\_ocr\_sdk\_python\_api](https://pypi.org/project/asprise_ocr_sdk_python_api) | import asprise\_ocr\_api |
| [# pipedream add-package aspy.refactor\_imports](https://pypi.org/project/aspy.refactor_imports) | import aspy |
| [# pipedream add-package aspy.yaml](https://pypi.org/project/aspy.yaml) | import aspy |
| [# pipedream add-package asterisk\_ami](https://pypi.org/project/asterisk_ami) | import asterisk |
| [# pipedream add-package add\_asts](https://pypi.org/project/add_asts) | import asts |
| [# pipedream add-package asymmetricbase.enum](https://pypi.org/project/asymmetricbase.enum) | import asymmetricbase |
| [# pipedream add-package asymmetricbase.fields](https://pypi.org/project/asymmetricbase.fields) | import asymmetricbase |
| [# pipedream add-package asymmetricbase.logging](https://pypi.org/project/asymmetricbase.logging) | import asymmetricbase |
| [# pipedream add-package asymmetricbase.utils](https://pypi.org/project/asymmetricbase.utils) | import asymmetricbase |
| [# pipedream add-package asyncio\_irc](https://pypi.org/project/asyncio_irc) | import asyncirc |
| [# pipedream add-package asyncmongoorm\_je](https://pypi.org/project/asyncmongoorm_je) | import asyncmongoorm |
| [# pipedream add-package asyncssh\_unofficial](https://pypi.org/project/asyncssh_unofficial) | import asyncssh |
| [# pipedream add-package athletelistyy](https://pypi.org/project/athletelistyy) | import athletelist |
| [# pipedream add-package automium](https://pypi.org/project/automium) | import atm |
| [# pipedream add-package atmosphere\_python\_client](https://pypi.org/project/atmosphere_python_client) | import atmosphere |
| [# pipedream add-package gdata](https://pypi.org/project/gdata) | import atom |
| [# pipedream add-package AtomicWrite](https://pypi.org/project/AtomicWrite) | import atomic |
| [# pipedream add-package atomisator.db](https://pypi.org/project/atomisator.db) | import atomisator |
| [# pipedream add-package atomisator.enhancers](https://pypi.org/project/atomisator.enhancers) | import atomisator |
| [# pipedream add-package atomisator.feed](https://pypi.org/project/atomisator.feed) | import atomisator |
| [# pipedream add-package atomisator.indexer](https://pypi.org/project/atomisator.indexer) | import atomisator |
| [# pipedream add-package atomisator.outputs](https://pypi.org/project/atomisator.outputs) | import atomisator |
| [# pipedream add-package atomisator.parser](https://pypi.org/project/atomisator.parser) | import atomisator |
| [# pipedream add-package atomisator.readers](https://pypi.org/project/atomisator.readers) | import atomisator |
| [# pipedream add-package atreal.cmfeditions.unlocker](https://pypi.org/project/atreal.cmfeditions.unlocker) | import atreal |
| [# pipedream add-package atreal.filestorage.common](https://pypi.org/project/atreal.filestorage.common) | import atreal |
| [# pipedream add-package atreal.layouts](https://pypi.org/project/atreal.layouts) | import atreal |
| [# pipedream add-package atreal.mailservices](https://pypi.org/project/atreal.mailservices) | import atreal |
| [# pipedream add-package atreal.massloader](https://pypi.org/project/atreal.massloader) | import atreal |
| [# pipedream add-package atreal.monkeyplone](https://pypi.org/project/atreal.monkeyplone) | import atreal |
| [# pipedream add-package atreal.override.albumview](https://pypi.org/project/atreal.override.albumview) | import atreal |
| [# pipedream add-package atreal.richfile.preview](https://pypi.org/project/atreal.richfile.preview) | import atreal |
| [# pipedream add-package atreal.richfile.qualifier](https://pypi.org/project/atreal.richfile.qualifier) | import atreal |
| [# pipedream add-package atreal.usersinout](https://pypi.org/project/atreal.usersinout) | import atreal |
| [# pipedream add-package atsim.potentials](https://pypi.org/project/atsim.potentials) | import atsim |
| [# pipedream add-package attract\_sdk](https://pypi.org/project/attract_sdk) | import attractsdk |
| [# pipedream add-package audio.bitstream](https://pypi.org/project/audio.bitstream) | import audio |
| [# pipedream add-package audio.coders](https://pypi.org/project/audio.coders) | import audio |
| [# pipedream add-package audio.filters](https://pypi.org/project/audio.filters) | import audio |
| [# pipedream add-package audio.fourier](https://pypi.org/project/audio.fourier) | import audio |
| [# pipedream add-package audio.frames](https://pypi.org/project/audio.frames) | import audio |
| [# pipedream add-package audio.lp](https://pypi.org/project/audio.lp) | import audio |
| [# pipedream add-package audio.psychoacoustics](https://pypi.org/project/audio.psychoacoustics) | import audio |
| [# pipedream add-package audio.quantizers](https://pypi.org/project/audio.quantizers) | import audio |
| [# pipedream add-package audio.shrink](https://pypi.org/project/audio.shrink) | import audio |
| [# pipedream add-package audio.wave](https://pypi.org/project/audio.wave) | import audio |
| [# pipedream add-package auf\_refer](https://pypi.org/project/auf_refer) | import aufrefer |
| [# pipedream add-package auslfe.formonline.content](https://pypi.org/project/auslfe.formonline.content) | import auslfe |
| [# pipedream add-package auspost\_apis](https://pypi.org/project/auspost_apis) | import auspost |
| [# pipedream add-package auth0\_python](https://pypi.org/project/auth0_python) | import auth0 |
| [# pipedream add-package AuthServerClient](https://pypi.org/project/AuthServerClient) | import auth\_server\_client |
| [# pipedream add-package AuthorizeSauce](https://pypi.org/project/AuthorizeSauce) | import authorize |
| [# pipedream add-package AuthzPolicyPlugin](https://pypi.org/project/AuthzPolicyPlugin) | import authzpolicy |
| [# pipedream add-package autobahn\_rce](https://pypi.org/project/autobahn_rce) | import autobahn |
| [# pipedream add-package geonode\_avatar](https://pypi.org/project/geonode_avatar) | import avatar |
| [# pipedream add-package android\_webview](https://pypi.org/project/android_webview) | import awebview |
| [# pipedream add-package azure\_common](https://pypi.org/project/azure_common) | import azure |
| [# pipedream add-package azure\_mgmt\_common](https://pypi.org/project/azure_mgmt_common) | import azure |
| [# pipedream add-package azure\_mgmt\_compute](https://pypi.org/project/azure_mgmt_compute) | import azure |
| [# pipedream add-package azure\_mgmt\_network](https://pypi.org/project/azure_mgmt_network) | import azure |
| [# pipedream add-package azure\_mgmt\_nspkg](https://pypi.org/project/azure_mgmt_nspkg) | import azure |
| [# pipedream add-package azure\_mgmt\_resource](https://pypi.org/project/azure_mgmt_resource) | import azure |
| [# pipedream add-package azure\_mgmt\_storage](https://pypi.org/project/azure_mgmt_storage) | import azure |
| [# pipedream add-package azure\_nspkg](https://pypi.org/project/azure_nspkg) | import azure |
| [# pipedream add-package azure\_servicebus](https://pypi.org/project/azure_servicebus) | import azure |
| [# pipedream add-package azure\_servicemanagement\_legacy](https://pypi.org/project/azure_servicemanagement_legacy) | import azure |
| [# pipedream add-package azure\_storage](https://pypi.org/project/azure_storage) | import azure |
| [# pipedream add-package b2g\_commands](https://pypi.org/project/b2g_commands) | import b2gcommands |
| [# pipedream add-package b2gperf\_v1.3](https://pypi.org/project/b2gperf_v1.3) | import b2gperf |
| [# pipedream add-package b2gperf\_v1.4](https://pypi.org/project/b2gperf_v1.4) | import b2gperf |
| [# pipedream add-package b2gperf\_v2.0](https://pypi.org/project/b2gperf_v2.0) | import b2gperf |
| [# pipedream add-package b2gperf\_v2.1](https://pypi.org/project/b2gperf_v2.1) | import b2gperf |
| [# pipedream add-package b2gperf\_v2.2](https://pypi.org/project/b2gperf_v2.2) | import b2gperf |
| [# pipedream add-package b2gpopulate\_v1.3](https://pypi.org/project/b2gpopulate_v1.3) | import b2gpopulate |
| [# pipedream add-package b2gpopulate\_v1.4](https://pypi.org/project/b2gpopulate_v1.4) | import b2gpopulate |
| [# pipedream add-package b2gpopulate\_v2.0](https://pypi.org/project/b2gpopulate_v2.0) | import b2gpopulate |
| [# pipedream add-package b2gpopulate\_v2.1](https://pypi.org/project/b2gpopulate_v2.1) | import b2gpopulate |
| [# pipedream add-package b2gpopulate\_v2.2](https://pypi.org/project/b2gpopulate_v2.2) | import b2gpopulate |
| [# pipedream add-package b3j0f.annotation](https://pypi.org/project/b3j0f.annotation) | import b3j0f |
| [# pipedream add-package b3j0f.aop](https://pypi.org/project/b3j0f.aop) | import b3j0f |
| [# pipedream add-package b3j0f.conf](https://pypi.org/project/b3j0f.conf) | import b3j0f |
| [# pipedream add-package b3j0f.sync](https://pypi.org/project/b3j0f.sync) | import b3j0f |
| [# pipedream add-package b3j0f.utils](https://pypi.org/project/b3j0f.utils) | import b3j0f |
| [# pipedream add-package Babel](https://pypi.org/project/Babel) | import babel |
| [# pipedream add-package BabelGladeExtractor](https://pypi.org/project/BabelGladeExtractor) | import babelglade |
| [# pipedream add-package backplane2\_pyclient](https://pypi.org/project/backplane2_pyclient) | import backplane |
| [# pipedream add-package backport\_collections](https://pypi.org/project/backport_collections) | import backport\_abcoll |
| [# pipedream add-package backports.functools\_lru\_cache](https://pypi.org/project/backports.functools_lru_cache) | import backports |
| [# pipedream add-package backports.inspect](https://pypi.org/project/backports.inspect) | import backports |
| [# pipedream add-package backports.pbkdf2](https://pypi.org/project/backports.pbkdf2) | import backports |
| [# pipedream add-package backports.shutil\_get\_terminal\_size](https://pypi.org/project/backports.shutil_get_terminal_size) | import backports |
| [# pipedream add-package backports.socketpair](https://pypi.org/project/backports.socketpair) | import backports |
| [# pipedream add-package backports.ssl](https://pypi.org/project/backports.ssl) | import backports |
| [# pipedream add-package backports.ssl\_match\_hostname](https://pypi.org/project/backports.ssl_match_hostname) | import backports |
| [# pipedream add-package backports.statistics](https://pypi.org/project/backports.statistics) | import backports |
| [# pipedream add-package badgekit\_api\_client](https://pypi.org/project/badgekit_api_client) | import badgekit |
| [# pipedream add-package BadLinksPlugin](https://pypi.org/project/BadLinksPlugin) | import badlinks |
| [# pipedream add-package bael.project](https://pypi.org/project/bael.project) | import bael |
| [# pipedream add-package baidupy](https://pypi.org/project/baidupy) | import baidu |
| [# pipedream add-package buildtools](https://pypi.org/project/buildtools) | import balrog |
| [# pipedream add-package baluhn\_redux](https://pypi.org/project/baluhn_redux) | import baluhn |
| [# pipedream add-package bamboo.pantrybell](https://pypi.org/project/bamboo.pantrybell) | import bamboo |
| [# pipedream add-package bamboo.scaffold](https://pypi.org/project/bamboo.scaffold) | import bamboo |
| [# pipedream add-package bamboo.setuptools\_version](https://pypi.org/project/bamboo.setuptools_version) | import bamboo |
| [# pipedream add-package bamboo\_data](https://pypi.org/project/bamboo_data) | import bamboo |
| [# pipedream add-package bamboo\_server](https://pypi.org/project/bamboo_server) | import bamboo |
| [# pipedream add-package bambu\_codemirror](https://pypi.org/project/bambu_codemirror) | import bambu |
| [# pipedream add-package bambu\_dataportability](https://pypi.org/project/bambu_dataportability) | import bambu |
| [# pipedream add-package bambu\_enqueue](https://pypi.org/project/bambu_enqueue) | import bambu |
| [# pipedream add-package bambu\_faq](https://pypi.org/project/bambu_faq) | import bambu |
| [# pipedream add-package bambu\_ffmpeg](https://pypi.org/project/bambu_ffmpeg) | import bambu |
| [# pipedream add-package bambu\_grids](https://pypi.org/project/bambu_grids) | import bambu |
| [# pipedream add-package bambu\_international](https://pypi.org/project/bambu_international) | import bambu |
| [# pipedream add-package bambu\_jwplayer](https://pypi.org/project/bambu_jwplayer) | import bambu |
| [# pipedream add-package bambu\_minidetect](https://pypi.org/project/bambu_minidetect) | import bambu |
| [# pipedream add-package bambu\_navigation](https://pypi.org/project/bambu_navigation) | import bambu |
| [# pipedream add-package bambu\_notifications](https://pypi.org/project/bambu_notifications) | import bambu |
| [# pipedream add-package bambu\_payments](https://pypi.org/project/bambu_payments) | import bambu |
| [# pipedream add-package bambu\_pusher](https://pypi.org/project/bambu_pusher) | import bambu |
| [# pipedream add-package bambu\_saas](https://pypi.org/project/bambu_saas) | import bambu |
| [# pipedream add-package bambu\_sites](https://pypi.org/project/bambu_sites) | import bambu |
| [# pipedream add-package Bananas](https://pypi.org/project/Bananas) | import banana |
| [# pipedream add-package banana.maya](https://pypi.org/project/banana.maya) | import banana |
| [# pipedream add-package bangtext](https://pypi.org/project/bangtext) | import bang |
| [# pipedream add-package barcode\_generator](https://pypi.org/project/barcode_generator) | import barcode |
| [# pipedream add-package bark\_ssg](https://pypi.org/project/bark_ssg) | import bark |
| [# pipedream add-package BarkingOwl](https://pypi.org/project/BarkingOwl) | import barking\_owl |
| [# pipedream add-package bart\_py](https://pypi.org/project/bart_py) | import bart |
| [# pipedream add-package basalt\_tasks](https://pypi.org/project/basalt_tasks) | import basalt |
| [# pipedream add-package base\_62](https://pypi.org/project/base_62) | import base62 |
| [# pipedream add-package basemap\_Jim](https://pypi.org/project/basemap_Jim) | import basemap |
| [# pipedream add-package bash\_toolbelt](https://pypi.org/project/bash_toolbelt) | import bash |
| [# pipedream add-package Python\_Bash\_Utils](https://pypi.org/project/Python_Bash_Utils) | import bashutils |
| [# pipedream add-package BasicHttp](https://pypi.org/project/BasicHttp) | import basic\_http |
| [# pipedream add-package basil\_daq](https://pypi.org/project/basil_daq) | import basil |
| [# pipedream add-package azure\_batch\_apps](https://pypi.org/project/azure_batch_apps) | import batchapps |
| [# pipedream add-package python\_bcrypt](https://pypi.org/project/python_bcrypt) | import bcrypt |
| [# pipedream add-package Beaker](https://pypi.org/project/Beaker) | import beaker |
| [# pipedream add-package beets](https://pypi.org/project/beets) | import beetsplug |
| [# pipedream add-package begins](https://pypi.org/project/begins) | import begin |
| [# pipedream add-package bench\_it](https://pypi.org/project/bench_it) | import benchit |
| [# pipedream add-package beproud.utils](https://pypi.org/project/beproud.utils) | import beproud |
| [# pipedream add-package burrito\_fillings](https://pypi.org/project/burrito_fillings) | import bfillings |
| [# pipedream add-package BigJob](https://pypi.org/project/BigJob) | import pilot |
| [# pipedream add-package billboard.py](https://pypi.org/project/billboard.py) | import billboard |
| [# pipedream add-package anaconda\_build](https://pypi.org/project/anaconda_build) | import binstar\_build\_client |
| [# pipedream add-package anaconda\_client](https://pypi.org/project/anaconda_client) | import binstar\_client |
| [# pipedream add-package biocommons.dev](https://pypi.org/project/biocommons.dev) | import biocommons |
| [# pipedream add-package birdhousebuilder.recipe.conda](https://pypi.org/project/birdhousebuilder.recipe.conda) | import birdhousebuilder |
| [# pipedream add-package birdhousebuilder.recipe.docker](https://pypi.org/project/birdhousebuilder.recipe.docker) | import birdhousebuilder |
| [# pipedream add-package birdhousebuilder.recipe.redis](https://pypi.org/project/birdhousebuilder.recipe.redis) | import birdhousebuilder |
| [# pipedream add-package birdhousebuilder.recipe.supervisor](https://pypi.org/project/birdhousebuilder.recipe.supervisor) | import birdhousebuilder |
| [# pipedream add-package pymeshio](https://pypi.org/project/pymeshio) | import blender26-meshio |
| [# pipedream add-package borg.localrole](https://pypi.org/project/borg.localrole) | import borg |
| [# pipedream add-package bagofwords](https://pypi.org/project/bagofwords) | import bow |
| [# pipedream add-package bpython](https://pypi.org/project/bpython) | import bpdb |
| [# pipedream add-package bisque\_api](https://pypi.org/project/bisque_api) | import bqapi |
| [# pipedream add-package django\_braces](https://pypi.org/project/django_braces) | import braces |
| [# pipedream add-package briefs\_caster](https://pypi.org/project/briefs_caster) | import briefscaster |
| [# pipedream add-package brisa\_media\_server\_plugins](https://pypi.org/project/brisa_media_server_plugins) | import brisa\_media\_server.plugins |
| [# pipedream add-package brkt\_sdk](https://pypi.org/project/brkt_sdk) | import brkt\_requests |
| [# pipedream add-package broadcast\_logging](https://pypi.org/project/broadcast_logging) | import broadcastlogging |
| [# pipedream add-package brocade\_tool](https://pypi.org/project/brocade_tool) | import brocadetool |
| [# pipedream add-package bronto\_python](https://pypi.org/project/bronto_python) | import bronto |
| [# pipedream add-package Brownie](https://pypi.org/project/Brownie) | import brownie |
| [# pipedream add-package browsermob\_proxy](https://pypi.org/project/browsermob_proxy) | import browsermobproxy |
| [# pipedream add-package brubeck\_mysql](https://pypi.org/project/brubeck_mysql) | import brubeckmysql |
| [# pipedream add-package brubeck\_oauth](https://pypi.org/project/brubeck_oauth) | import brubeckoauth |
| [# pipedream add-package brubeck\_service](https://pypi.org/project/brubeck_service) | import brubeckservice |
| [# pipedream add-package brubeck\_uploader](https://pypi.org/project/brubeck_uploader) | import brubeckuploader |
| [# pipedream add-package beautifulsoup4](https://pypi.org/project/beautifulsoup4) | import bs4 |
| [# pipedream add-package pymongo](https://pypi.org/project/pymongo) | import gridfs |
| [# pipedream add-package bst.pygasus.core](https://pypi.org/project/bst.pygasus.core) | import bst |
| [# pipedream add-package bst.pygasus.datamanager](https://pypi.org/project/bst.pygasus.datamanager) | import bst |
| [# pipedream add-package bst.pygasus.demo](https://pypi.org/project/bst.pygasus.demo) | import bst |
| [# pipedream add-package bst.pygasus.i18n](https://pypi.org/project/bst.pygasus.i18n) | import bst |
| [# pipedream add-package bst.pygasus.resources](https://pypi.org/project/bst.pygasus.resources) | import bst |
| [# pipedream add-package bst.pygasus.scaffolding](https://pypi.org/project/bst.pygasus.scaffolding) | import bst |
| [# pipedream add-package bst.pygasus.security](https://pypi.org/project/bst.pygasus.security) | import bst |
| [# pipedream add-package bst.pygasus.session](https://pypi.org/project/bst.pygasus.session) | import bst |
| [# pipedream add-package bst.pygasus.wsgi](https://pypi.org/project/bst.pygasus.wsgi) | import bst |
| [# pipedream add-package btable\_py](https://pypi.org/project/btable_py) | import btable |
| [# pipedream add-package bananatag\_api](https://pypi.org/project/bananatag_api) | import btapi |
| [# pipedream add-package btce\_api](https://pypi.org/project/btce_api) | import btceapi |
| [# pipedream add-package btce\_bot](https://pypi.org/project/btce_bot) | import btcebot |
| [# pipedream add-package btsync.py](https://pypi.org/project/btsync.py) | import btsync |
| [# pipedream add-package buck.pprint](https://pypi.org/project/buck.pprint) | import buck |
| [# pipedream add-package bud.nospam](https://pypi.org/project/bud.nospam) | import bud |
| [# pipedream add-package budy\_api](https://pypi.org/project/budy_api) | import budy |
| [# pipedream add-package buffer\_alpaca](https://pypi.org/project/buffer_alpaca) | import buffer |
| [# pipedream add-package bug.gd](https://pypi.org/project/bug.gd) | import buggd |
| [# pipedream add-package bugle\_sites](https://pypi.org/project/bugle_sites) | import bugle |
| [# pipedream add-package bug\_spots](https://pypi.org/project/bug_spots) | import bugspots |
| [# pipedream add-package python\_bugzilla](https://pypi.org/project/python_bugzilla) | import bugzilla |
| [# pipedream add-package bugzscout\_py](https://pypi.org/project/bugzscout_py) | import bugzscout |
| [# pipedream add-package ajk\_ios\_buildTools](https://pypi.org/project/ajk_ios_buildTools) | import buildTools |
| [# pipedream add-package BuildNotify](https://pypi.org/project/BuildNotify) | import buildnotifylib |
| [# pipedream add-package buildout.bootstrap](https://pypi.org/project/buildout.bootstrap) | import buildout |
| [# pipedream add-package buildout.disablessl](https://pypi.org/project/buildout.disablessl) | import buildout |
| [# pipedream add-package buildout.dumppickedversions](https://pypi.org/project/buildout.dumppickedversions) | import buildout |
| [# pipedream add-package buildout.dumppickedversions2](https://pypi.org/project/buildout.dumppickedversions2) | import buildout |
| [# pipedream add-package buildout.dumprequirements](https://pypi.org/project/buildout.dumprequirements) | import buildout |
| [# pipedream add-package buildout.eggnest](https://pypi.org/project/buildout.eggnest) | import buildout |
| [# pipedream add-package buildout.eggscleaner](https://pypi.org/project/buildout.eggscleaner) | import buildout |
| [# pipedream add-package buildout.eggsdirectories](https://pypi.org/project/buildout.eggsdirectories) | import buildout |
| [# pipedream add-package buildout.eggtractor](https://pypi.org/project/buildout.eggtractor) | import buildout |
| [# pipedream add-package buildout.extensionscripts](https://pypi.org/project/buildout.extensionscripts) | import buildout |
| [# pipedream add-package buildout.locallib](https://pypi.org/project/buildout.locallib) | import buildout |
| [# pipedream add-package buildout.packagename](https://pypi.org/project/buildout.packagename) | import buildout |
| [# pipedream add-package buildout.recipe.isolation](https://pypi.org/project/buildout.recipe.isolation) | import buildout |
| [# pipedream add-package buildout.removeaddledeggs](https://pypi.org/project/buildout.removeaddledeggs) | import buildout |
| [# pipedream add-package buildout.requirements](https://pypi.org/project/buildout.requirements) | import buildout |
| [# pipedream add-package buildout.sanitycheck](https://pypi.org/project/buildout.sanitycheck) | import buildout |
| [# pipedream add-package buildout.sendpickedversions](https://pypi.org/project/buildout.sendpickedversions) | import buildout |
| [# pipedream add-package buildout.threatlevel](https://pypi.org/project/buildout.threatlevel) | import buildout |
| [# pipedream add-package buildout.umask](https://pypi.org/project/buildout.umask) | import buildout |
| [# pipedream add-package buildout.variables](https://pypi.org/project/buildout.variables) | import buildout |
| [# pipedream add-package buildbot\_slave](https://pypi.org/project/buildbot_slave) | import buildslave |
| [# pipedream add-package pies2overrides](https://pypi.org/project/pies2overrides) | import xmlrpc |
| [# pipedream add-package bumper\_lib](https://pypi.org/project/bumper_lib) | import bumper |
| [# pipedream add-package bumple\_downloader](https://pypi.org/project/bumple_downloader) | import bumple |
| [# pipedream add-package bundesliga\_cli](https://pypi.org/project/bundesliga_cli) | import bundesliga |
| [# pipedream add-package bundlemanager](https://pypi.org/project/bundlemanager) | import bundlemaker |
| [# pipedream add-package burp\_ui](https://pypi.org/project/burp_ui) | import burpui |
| [# pipedream add-package busyflow.pivotal](https://pypi.org/project/busyflow.pivotal) | import busyflow |
| [# pipedream add-package buttercms\_django](https://pypi.org/project/buttercms_django) | import buttercms-django |
| [# pipedream add-package buzz\_python\_client](https://pypi.org/project/buzz_python_client) | import buzz |
| [# pipedream add-package buildout\_versions\_checker](https://pypi.org/project/buildout_versions_checker) | import bvc |
| [# pipedream add-package bvg\_grabber](https://pypi.org/project/bvg_grabber) | import bvggrabber |
| [# pipedream add-package BYONDTools](https://pypi.org/project/BYONDTools) | import byond |
| [# pipedream add-package Bugzilla\_ETL](https://pypi.org/project/Bugzilla_ETL) | import bzETL |
| [# pipedream add-package bugzillatools](https://pypi.org/project/bugzillatools) | import bzlib |
| [# pipedream add-package bzr](https://pypi.org/project/bzr) | import bzrlib |
| [# pipedream add-package bzr\_automirror](https://pypi.org/project/bzr_automirror) | import bzrlib |
| [# pipedream add-package bzr\_bash\_completion](https://pypi.org/project/bzr_bash_completion) | import bzrlib |
| [# pipedream add-package bzr\_colo](https://pypi.org/project/bzr_colo) | import bzrlib |
| [# pipedream add-package bzr\_killtrailing](https://pypi.org/project/bzr_killtrailing) | import bzrlib |
| [# pipedream add-package bzr\_pqm](https://pypi.org/project/bzr_pqm) | import bzrlib |
| [# pipedream add-package c2c.cssmin](https://pypi.org/project/c2c.cssmin) | import c2c |
| [# pipedream add-package c2c.recipe.closurecompile](https://pypi.org/project/c2c.recipe.closurecompile) | import c2c |
| [# pipedream add-package c2c.recipe.cssmin](https://pypi.org/project/c2c.recipe.cssmin) | import c2c |
| [# pipedream add-package c2c.recipe.jarfile](https://pypi.org/project/c2c.recipe.jarfile) | import c2c |
| [# pipedream add-package c2c.recipe.msgfmt](https://pypi.org/project/c2c.recipe.msgfmt) | import c2c |
| [# pipedream add-package c2c.recipe.pkgversions](https://pypi.org/project/c2c.recipe.pkgversions) | import c2c |
| [# pipedream add-package c2c.sqlalchemy.rest](https://pypi.org/project/c2c.sqlalchemy.rest) | import c2c |
| [# pipedream add-package c2c.versions](https://pypi.org/project/c2c.versions) | import c2c |
| [# pipedream add-package c2c.recipe.facts](https://pypi.org/project/c2c.recipe.facts) | import c2c\_recipe\_facts |
| [# pipedream add-package cabalgata\_silla\_de\_montar](https://pypi.org/project/cabalgata_silla_de_montar) | import cabalgata |
| [# pipedream add-package cabalgata\_zookeeper](https://pypi.org/project/cabalgata_zookeeper) | import cabalgata |
| [# pipedream add-package django\_cache\_utils](https://pypi.org/project/django_cache_utils) | import cache\_utils |
| [# pipedream add-package django\_recaptcha](https://pypi.org/project/django_recaptcha) | import captcha |
| [# pipedream add-package Cartridge](https://pypi.org/project/Cartridge) | import cartridge |
| [# pipedream add-package cassandra\_driver](https://pypi.org/project/cassandra_driver) | import cassandra |
| [# pipedream add-package CassandraLauncher](https://pypi.org/project/CassandraLauncher) | import cassandralauncher |
| [# pipedream add-package 42qucc](https://pypi.org/project/42qucc) | import cc42 |
| [# pipedream add-package Cerberus](https://pypi.org/project/Cerberus) | import cerberus |
| [# pipedream add-package cfn-lint](https://pypi.org/project/cfn-lint) | import cfnlint |
| [# pipedream add-package Chameleon](https://pypi.org/project/Chameleon) | import chameleon |
| [# pipedream add-package charm\_tools](https://pypi.org/project/charm_tools) | import charmtools |
| [# pipedream add-package PyChef](https://pypi.org/project/PyChef) | import chef |
| [# pipedream add-package c8d](https://pypi.org/project/c8d) | import chip8 |
| [# pipedream add-package python\_cjson](https://pypi.org/project/python_cjson) | import cjson |
| [# pipedream add-package django\_classy\_tags](https://pypi.org/project/django_classy_tags) | import classytags |
| [# pipedream add-package ConcurrentLogHandler](https://pypi.org/project/ConcurrentLogHandler) | import cloghandler |
| [# pipedream add-package virtualenv\_clone](https://pypi.org/project/virtualenv_clone) | import clonevirtualenv |
| [# pipedream add-package al\_cloudinsight](https://pypi.org/project/al_cloudinsight) | import cloud-insight |
| [# pipedream add-package adminapi](https://pypi.org/project/adminapi) | import cloud\_admin |
| [# pipedream add-package python\_cloudservers](https://pypi.org/project/python_cloudservers) | import cloudservers |
| [# pipedream add-package cerebrod](https://pypi.org/project/cerebrod) | import tasksitter |
| [# pipedream add-package django\_cms](https://pypi.org/project/django_cms) | import cms |
| [# pipedream add-package ba\_colander](https://pypi.org/project/ba_colander) | import colander |
| [# pipedream add-package ansicolors](https://pypi.org/project/ansicolors) | import colors |
| [# pipedream add-package bf\_lc3](https://pypi.org/project/bf_lc3) | import compile |
| [# pipedream add-package docker\_compose](https://pypi.org/project/docker_compose) | import compose |
| [# pipedream add-package django\_compressor](https://pypi.org/project/django_compressor) | import compressor |
| [# pipedream add-package futures](https://pypi.org/project/futures) | import concurrent |
| [# pipedream add-package ConfigArgParse](https://pypi.org/project/ConfigArgParse) | import configargparse |
| [# pipedream add-package PyContracts](https://pypi.org/project/PyContracts) | import contracts |
| [# pipedream add-package weblogo](https://pypi.org/project/weblogo) | import weblogolib |
| [# pipedream add-package Couchapp](https://pypi.org/project/Couchapp) | import couchapp |
| [# pipedream add-package CouchDB](https://pypi.org/project/CouchDB) | import couchdb |
| [# pipedream add-package couchdb\_python\_curl](https://pypi.org/project/couchdb_python_curl) | import couchdbcurl |
| [# pipedream add-package coursera\_dl](https://pypi.org/project/coursera_dl) | import courseradownloader |
| [# pipedream add-package cow\_framework](https://pypi.org/project/cow_framework) | import cow |
| [# pipedream add-package python\_creole](https://pypi.org/project/python_creole) | import creole |
| [# pipedream add-package Creoleparser](https://pypi.org/project/Creoleparser) | import creoleparser |
| [# pipedream add-package django\_crispy\_forms](https://pypi.org/project/django_crispy_forms) | import crispy\_forms |
| [# pipedream add-package python\_crontab](https://pypi.org/project/python_crontab) | import crontab |
| [# pipedream add-package tff](https://pypi.org/project/tff) | import ctff |
| [# pipedream add-package pycups](https://pypi.org/project/pycups) | import cups |
| [# pipedream add-package elasticsearch\_curator](https://pypi.org/project/elasticsearch_curator) | import curator |
| [# pipedream add-package pycurl](https://pypi.org/project/pycurl) | import curl |
| [# pipedream add-package python\_daemon](https://pypi.org/project/python_daemon) | import daemon |
| [# pipedream add-package DARE](https://pypi.org/project/DARE) | import dare |
| [# pipedream add-package python\_dateutil](https://pypi.org/project/python_dateutil) | import dateutil |
| [# pipedream add-package DAWG](https://pypi.org/project/DAWG) | import dawg |
| [# pipedream add-package python\_debian](https://pypi.org/project/python_debian) | import debian |
| [# pipedream add-package python-decouple](https://pypi.org/project/python-decouple) | import decouple |
| [# pipedream add-package webunit](https://pypi.org/project/webunit) | import demo |
| [# pipedream add-package PySynth](https://pypi.org/project/PySynth) | import pysynth\_samp |
| [# pipedream add-package juju\_deployer](https://pypi.org/project/juju_deployer) | import deployer |
| [# pipedream add-package filedepot](https://pypi.org/project/filedepot) | import depot |
| [# pipedream add-package tg.devtools](https://pypi.org/project/tg.devtools) | import devtools |
| [# pipedream add-package 2gis](https://pypi.org/project/2gis) | import dgis |
| [# pipedream add-package pyDHTMLParser](https://pypi.org/project/pyDHTMLParser) | import dhtmlparser |
| [# pipedream add-package python\_digitalocean](https://pypi.org/project/python_digitalocean) | import digitalocean |
| [# pipedream add-package discord.py](https://pypi.org/project/discord.py) | import discord |
| [# pipedream add-package ez\_setup](https://pypi.org/project/ez_setup) | import distribute\_setup |
| [# pipedream add-package Distutils2](https://pypi.org/project/Distutils2) | import distutils2 |
| [# pipedream add-package Django](https://pypi.org/project/Django) | import django |
| [# pipedream add-package amitu\_hstore](https://pypi.org/project/amitu_hstore) | import django\_hstore |
| [# pipedream add-package django\_bower](https://pypi.org/project/django_bower) | import djangobower |
| [# pipedream add-package django\_celery](https://pypi.org/project/django_celery) | import djcelery |
| [# pipedream add-package django\_kombu](https://pypi.org/project/django_kombu) | import djkombu |
| [# pipedream add-package djorm\_ext\_pgarray](https://pypi.org/project/djorm_ext_pgarray) | import djorm\_pgarray |
| [# pipedream add-package dnspython](https://pypi.org/project/dnspython) | import dns |
| [# pipedream add-package ansible\_docgenerator](https://pypi.org/project/ansible_docgenerator) | import docgen |
| [# pipedream add-package docker\_py](https://pypi.org/project/docker_py) | import docker |
| [# pipedream add-package dogpile.cache](https://pypi.org/project/dogpile.cache) | import dogpile |
| [# pipedream add-package dogpile.core](https://pypi.org/project/dogpile.core) | import dogpile |
| [# pipedream add-package dogapi](https://pypi.org/project/dogapi) | import dogshell |
| [# pipedream add-package pydot](https://pypi.org/project/pydot) | import dot\_parser |
| [# pipedream add-package pydot2](https://pypi.org/project/pydot2) | import dot\_parser |
| [# pipedream add-package pydot3k](https://pypi.org/project/pydot3k) | import dot\_parser |
| [# pipedream add-package python-dotenv](https://pypi.org/project/python-dotenv) | import dotenv |
| [# pipedream add-package dpkt\_fix](https://pypi.org/project/dpkt_fix) | import dpkt |
| [# pipedream add-package python\_ldap](https://pypi.org/project/python_ldap) | import ldif |
| [# pipedream add-package django\_durationfield](https://pypi.org/project/django_durationfield) | import durationfield |
| [# pipedream add-package datazilla](https://pypi.org/project/datazilla) | import dzclient |
| [# pipedream add-package easybuild\_framework](https://pypi.org/project/easybuild_framework) | import easybuild |
| [# pipedream add-package python\_editor](https://pypi.org/project/python_editor) | import editor |
| [# pipedream add-package azure\_elasticluster](https://pypi.org/project/azure_elasticluster) | import elasticluster |
| [# pipedream add-package azure\_elasticluster\_current](https://pypi.org/project/azure_elasticluster_current) | import elasticluster |
| [# pipedream add-package pyelftools](https://pypi.org/project/pyelftools) | import elftools |
| [# pipedream add-package Elixir](https://pypi.org/project/Elixir) | import elixir |
| [# pipedream add-package empy](https://pypi.org/project/empy) | import emlib |
| [# pipedream add-package pyenchant](https://pypi.org/project/pyenchant) | import enchant |
| [# pipedream add-package cssutils](https://pypi.org/project/cssutils) | import encutils |
| [# pipedream add-package python\_engineio](https://pypi.org/project/python_engineio) | import engineio |
| [# pipedream add-package enum34](https://pypi.org/project/enum34) | import enum |
| [# pipedream add-package pyephem](https://pypi.org/project/pyephem) | import ephem |
| [# pipedream add-package abl.errorreporter](https://pypi.org/project/abl.errorreporter) | import errorreporter |
| [# pipedream add-package beaker\_es\_plot](https://pypi.org/project/beaker_es_plot) | import esplot |
| [# pipedream add-package adrest](https://pypi.org/project/adrest) | import example |
| [# pipedream add-package tweepy](https://pypi.org/project/tweepy) | import examples |
| [# pipedream add-package pycassa](https://pypi.org/project/pycassa) | import ez\_setup |
| [# pipedream add-package Fabric](https://pypi.org/project/Fabric) | import fabric |
| [# pipedream add-package Faker](https://pypi.org/project/Faker) | import faker |
| [# pipedream add-package python\_fedora](https://pypi.org/project/python_fedora) | import fedora |
| [# pipedream add-package ailove\_django\_fias](https://pypi.org/project/ailove_django_fias) | import fias |
| [# pipedream add-package 51degrees\_mobile\_detector](https://pypi.org/project/51degrees_mobile_detector) | import fiftyone\_degrees |
| [# pipedream add-package five.customerize](https://pypi.org/project/five.customerize) | import five |
| [# pipedream add-package five.globalrequest](https://pypi.org/project/five.globalrequest) | import five |
| [# pipedream add-package five.intid](https://pypi.org/project/five.intid) | import five |
| [# pipedream add-package five.localsitemanager](https://pypi.org/project/five.localsitemanager) | import five |
| [# pipedream add-package five.pt](https://pypi.org/project/five.pt) | import five |
| [# pipedream add-package android\_flasher](https://pypi.org/project/android_flasher) | import flasher |
| [# pipedream add-package Flask](https://pypi.org/project/Flask) | import flask |
| [# pipedream add-package Frozen\_Flask](https://pypi.org/project/Frozen_Flask) | import flask\_frozen |
| [# pipedream add-package Flask\_And\_Redis](https://pypi.org/project/Flask_And_Redis) | import flask\_redis |
| [# pipedream add-package Flask\_Bcrypt](https://pypi.org/project/Flask_Bcrypt) | import flaskext |
| [# pipedream add-package vnc2flv](https://pypi.org/project/vnc2flv) | import flvscreen |
| [# pipedream add-package django\_followit](https://pypi.org/project/django_followit) | import followit |
| [# pipedream add-package pyforge](https://pypi.org/project/pyforge) | import forge |
| [# pipedream add-package FormEncode](https://pypi.org/project/FormEncode) | import formencode |
| [# pipedream add-package django\_formtools](https://pypi.org/project/django_formtools) | import formtools |
| [# pipedream add-package 4ch](https://pypi.org/project/4ch) | import fourch |
| [# pipedream add-package allegrordf](https://pypi.org/project/allegrordf) | import franz |
| [# pipedream add-package freetype\_py](https://pypi.org/project/freetype_py) | import freetype |
| [# pipedream add-package python\_frontmatter](https://pypi.org/project/python_frontmatter) | import frontmatter |
| [# pipedream add-package ftp\_cloudfs](https://pypi.org/project/ftp_cloudfs) | import ftpcloudfs |
| [# pipedream add-package librabbitmq](https://pypi.org/project/librabbitmq) | import funtests |
| [# pipedream add-package fusepy](https://pypi.org/project/fusepy) | import fuse |
| [# pipedream add-package Fuzzy](https://pypi.org/project/Fuzzy) | import fuzzy |
| [# pipedream add-package tiddlyweb](https://pypi.org/project/tiddlyweb) | import gabbi |
| [# pipedream add-package 3d\_wallet\_generator](https://pypi.org/project/3d_wallet_generator) | import gen\_3dwallet |
| [# pipedream add-package android\_gendimen](https://pypi.org/project/android_gendimen) | import gendimen |
| [# pipedream add-package Genshi](https://pypi.org/project/Genshi) | import genshi |
| [# pipedream add-package python\_geohash](https://pypi.org/project/python_geohash) | import quadtree |
| [# pipedream add-package GeoNode](https://pypi.org/project/GeoNode) | import geonode |
| [# pipedream add-package gsconfig](https://pypi.org/project/gsconfig) | import geoserver |
| [# pipedream add-package Geraldo](https://pypi.org/project/Geraldo) | import geraldo |
| [# pipedream add-package django\_getenv](https://pypi.org/project/django_getenv) | import getenv |
| [# pipedream add-package gevent\_websocket](https://pypi.org/project/gevent_websocket) | import geventwebsocket |
| [# pipedream add-package python\_gflags](https://pypi.org/project/python_gflags) | import gflags |
| [# pipedream add-package GitPython](https://pypi.org/project/GitPython) | import git |
| [# pipedream add-package PyGithub](https://pypi.org/project/PyGithub) | import github |
| [# pipedream add-package github3.py](https://pypi.org/project/github3.py) | import github3 |
| [# pipedream add-package git\_py](https://pypi.org/project/git_py) | import gitpy |
| [# pipedream add-package globusonline\_transfer\_api\_client](https://pypi.org/project/globusonline_transfer_api_client) | import globusonline |
| [# pipedream add-package protobuf](https://pypi.org/project/protobuf) | import google |
| [# pipedream add-package grace\_dizmo](https://pypi.org/project/grace_dizmo) | import grace\_dizmo |
| [# pipedream add-package anovelmous\_grammar](https://pypi.org/project/anovelmous_grammar) | import grammar |
| [# pipedream add-package graphenelib](https://pypi.org/project/graphenelib) | import grapheneapi |
| [# pipedream add-package scales](https://pypi.org/project/scales) | import greplin |
| [# pipedream add-package grokcore.component](https://pypi.org/project/grokcore.component) | import grokcore |
| [# pipedream add-package gsutil](https://pypi.org/project/gsutil) | import gslib |
| [# pipedream add-package PyHamcrest](https://pypi.org/project/PyHamcrest) | import hamcrest |
| [# pipedream add-package HARPy](https://pypi.org/project/HARPy) | import harpy |
| [# pipedream add-package PyHawk\_with\_a\_single\_extra\_commit](https://pypi.org/project/PyHawk_with_a_single_extra_commit) | import hawk |
| [# pipedream add-package django\_haystack](https://pypi.org/project/django_haystack) | import haystack |
| [# pipedream add-package mercurial](https://pypi.org/project/mercurial) | import hgext |
| [# pipedream add-package hg\_git](https://pypi.org/project/hg_git) | import hggit |
| [# pipedream add-package python\_hglib](https://pypi.org/project/python_hglib) | import hglib |
| [# pipedream add-package pisa](https://pypi.org/project/pisa) | import sx |
| [# pipedream add-package amarokHola](https://pypi.org/project/amarokHola) | import hola |
| [# pipedream add-package Hoover](https://pypi.org/project/Hoover) | import hoover |
| [# pipedream add-package python\_hostlist](https://pypi.org/project/python_hostlist) | import hostlist |
| [# pipedream add-package nosehtmloutput](https://pypi.org/project/nosehtmloutput) | import htmloutput |
| [# pipedream add-package django\_hvad](https://pypi.org/project/django_hvad) | import hvad |
| [# pipedream add-package hydra-core](https://pypi.org/project/hydra-core) | import hydra |
| [# pipedream add-package 199Fix](https://pypi.org/project/199Fix) | import i99fix |
| [# pipedream add-package python\_igraph](https://pypi.org/project/python_igraph) | import igraph |
| [# pipedream add-package IMDbPY](https://pypi.org/project/IMDbPY) | import imdb |
| [# pipedream add-package impyla](https://pypi.org/project/impyla) | import impala |
| [# pipedream add-package ambition\_inmemorystorage](https://pypi.org/project/ambition_inmemorystorage) | import inmemorystorage |
| [# pipedream add-package backport\_ipaddress](https://pypi.org/project/backport_ipaddress) | import ipaddress |
| [# pipedream add-package jaraco.timing](https://pypi.org/project/jaraco.timing) | import jaraco |
| [# pipedream add-package jaraco.util](https://pypi.org/project/jaraco.util) | import jaraco |
| [# pipedream add-package Jinja2](https://pypi.org/project/Jinja2) | import jinja2 |
| [# pipedream add-package jira\_cli](https://pypi.org/project/jira_cli) | import jiracli |
| [# pipedream add-package johnny\_cache](https://pypi.org/project/johnny_cache) | import johnny |
| [# pipedream add-package JPype1](https://pypi.org/project/JPype1) | import jpypex |
| [# pipedream add-package django\_jsonfield](https://pypi.org/project/django_jsonfield) | import jsonfield |
| [# pipedream add-package aino\_jstools](https://pypi.org/project/aino_jstools) | import jstools |
| [# pipedream add-package jupyter\_pip](https://pypi.org/project/jupyter_pip) | import jupyterpip |
| [# pipedream add-package PyJWT](https://pypi.org/project/PyJWT) | import jwt |
| [# pipedream add-package asana\_kazoo](https://pypi.org/project/asana_kazoo) | import kazoo |
| [# pipedream add-package line\_profiler](https://pypi.org/project/line_profiler) | import kernprof |
| [# pipedream add-package python\_keyczar](https://pypi.org/project/python_keyczar) | import keyczar |
| [# pipedream add-package django\_keyedcache](https://pypi.org/project/django_keyedcache) | import keyedcache |
| [# pipedream add-package python\_keystoneclient](https://pypi.org/project/python_keystoneclient) | import keystoneclient |
| [# pipedream add-package kickstart](https://pypi.org/project/kickstart) | import kickstarter |
| [# pipedream add-package krbV](https://pypi.org/project/krbV) | import krbv |
| [# pipedream add-package kss.core](https://pypi.org/project/kss.core) | import kss |
| [# pipedream add-package Kuyruk](https://pypi.org/project/Kuyruk) | import kuyruk |
| [# pipedream add-package AdvancedLangConv](https://pypi.org/project/AdvancedLangConv) | import langconv |
| [# pipedream add-package lava\_utils\_interface](https://pypi.org/project/lava_utils_interface) | import lava |
| [# pipedream add-package lazr.authentication](https://pypi.org/project/lazr.authentication) | import lazr |
| [# pipedream add-package lazr.restfulclient](https://pypi.org/project/lazr.restfulclient) | import lazr |
| [# pipedream add-package lazr.uri](https://pypi.org/project/lazr.uri) | import lazr |
| [# pipedream add-package adpasswd](https://pypi.org/project/adpasswd) | import ldaplib |
| [# pipedream add-package 2or3](https://pypi.org/project/2or3) | import lib2or3 |
| [# pipedream add-package 3to2](https://pypi.org/project/3to2) | import lib3to2 |
| [# pipedream add-package Aito](https://pypi.org/project/Aito) | import libaito |
| [# pipedream add-package bugs\_everywhere](https://pypi.org/project/bugs_everywhere) | import libbe |
| [# pipedream add-package bucket](https://pypi.org/project/bucket) | import libbucket |
| [# pipedream add-package apache\_libcloud](https://pypi.org/project/apache_libcloud) | import libcloud |
| [# pipedream add-package future](https://pypi.org/project/future) | import winreg |
| [# pipedream add-package generateDS](https://pypi.org/project/generateDS) | import libgenerateDS |
| [# pipedream add-package mitmproxy](https://pypi.org/project/mitmproxy) | import libmproxy |
| [# pipedream add-package 7lk\_ocr\_deploy](https://pypi.org/project/7lk_ocr_deploy) | import libsvm |
| [# pipedream add-package lisa\_server](https://pypi.org/project/lisa_server) | import lisa |
| [# pipedream add-package aspose\_words\_java\_for\_python](https://pypi.org/project/aspose_words_java_for_python) | import loadingandsaving |
| [# pipedream add-package locustio](https://pypi.org/project/locustio) | import locust |
| [# pipedream add-package Logbook](https://pypi.org/project/Logbook) | import logbook |
| [# pipedream add-package buildbot\_status\_logentries](https://pypi.org/project/buildbot_status_logentries) | import logentries |
| [# pipedream add-package logilab\_mtconverter](https://pypi.org/project/logilab_mtconverter) | import logilab |
| [# pipedream add-package python\_magic](https://pypi.org/project/python_magic) | import magic |
| [# pipedream add-package Mako](https://pypi.org/project/Mako) | import mako |
| [# pipedream add-package ManifestDestiny](https://pypi.org/project/ManifestDestiny) | import manifestparser |
| [# pipedream add-package marionette\_client](https://pypi.org/project/marionette_client) | import marionette |
| [# pipedream add-package Markdown](https://pypi.org/project/Markdown) | import markdown |
| [# pipedream add-package pytest\_marks](https://pypi.org/project/pytest_marks) | import marks |
| [# pipedream add-package MarkupSafe](https://pypi.org/project/MarkupSafe) | import markupsafe |
| [# pipedream add-package pymavlink](https://pypi.org/project/pymavlink) | import mavnative |
| [# pipedream add-package python\_memcached](https://pypi.org/project/python_memcached) | import memcache |
| [# pipedream add-package AllPairs](https://pypi.org/project/AllPairs) | import metacomm |
| [# pipedream add-package Metafone](https://pypi.org/project/Metafone) | import metaphone |
| [# pipedream add-package metlog\_py](https://pypi.org/project/metlog_py) | import metlog |
| [# pipedream add-package Mezzanine](https://pypi.org/project/Mezzanine) | import mezzanine |
| [# pipedream add-package sqlalchemy\_migrate](https://pypi.org/project/sqlalchemy_migrate) | import migrate |
| [# pipedream add-package python\_mimeparse](https://pypi.org/project/python_mimeparse) | import mimeparse |
| [# pipedream add-package minitage.paste](https://pypi.org/project/minitage.paste) | import minitage |
| [# pipedream add-package minitage.recipe.common](https://pypi.org/project/minitage.recipe.common) | import minitage |
| [# pipedream add-package android\_missingdrawables](https://pypi.org/project/android_missingdrawables) | import missingdrawables |
| [# pipedream add-package 2lazy2rest](https://pypi.org/project/2lazy2rest) | import mkrst\_themes |
| [# pipedream add-package mockredispy](https://pypi.org/project/mockredispy) | import mockredis |
| [# pipedream add-package python\_modargs](https://pypi.org/project/python_modargs) | import modargs |
| [# pipedream add-package django\_model\_utils](https://pypi.org/project/django_model_utils) | import model\_utils |
| [# pipedream add-package asposebarcode](https://pypi.org/project/asposebarcode) | import models |
| [# pipedream add-package asposestorage](https://pypi.org/project/asposestorage) | import models |
| [# pipedream add-package moksha.common](https://pypi.org/project/moksha.common) | import moksha |
| [# pipedream add-package moksha.hub](https://pypi.org/project/moksha.hub) | import moksha |
| [# pipedream add-package moksha.wsgi](https://pypi.org/project/moksha.wsgi) | import moksha |
| [# pipedream add-package py\_moneyed](https://pypi.org/project/py_moneyed) | import moneyed |
| [# pipedream add-package MongoAlchemy](https://pypi.org/project/MongoAlchemy) | import mongoalchemy |
| [# pipedream add-package MonthDelta](https://pypi.org/project/MonthDelta) | import monthdelta |
| [# pipedream add-package Mopidy](https://pypi.org/project/Mopidy) | import mopidy |
| [# pipedream add-package MoPyTools](https://pypi.org/project/MoPyTools) | import mopytools |
| [# pipedream add-package django\_mptt](https://pypi.org/project/django_mptt) | import mptt |
| [# pipedream add-package python-mpv](https://pypi.org/project/python-mpv) | import mpv |
| [# pipedream add-package mr.bob](https://pypi.org/project/mr.bob) | import mrbob |
| [# pipedream add-package msgpack\_python](https://pypi.org/project/msgpack_python) | import msgpack |
| [# pipedream add-package aino\_mutations](https://pypi.org/project/aino_mutations) | import mutations |
| [# pipedream add-package amazon\_mws](https://pypi.org/project/amazon_mws) | import mws |
| [# pipedream add-package mysql\_connector\_repackaged](https://pypi.org/project/mysql_connector_repackaged) | import mysql |
| [# pipedream add-package django\_native\_tags](https://pypi.org/project/django_native_tags) | import native\_tags |
| [# pipedream add-package ndg\_httpsclient](https://pypi.org/project/ndg_httpsclient) | import ndg |
| [# pipedream add-package trytond\_nereid](https://pypi.org/project/trytond_nereid) | import nereid |
| [# pipedream add-package baojinhuan](https://pypi.org/project/baojinhuan) | import nested |
| [# pipedream add-package Amauri](https://pypi.org/project/Amauri) | import nester |
| [# pipedream add-package abofly](https://pypi.org/project/abofly) | import nester |
| [# pipedream add-package bssm\_pythonSig](https://pypi.org/project/bssm_pythonSig) | import nester |
| [# pipedream add-package python\_novaclient](https://pypi.org/project/python_novaclient) | import novaclient |
| [# pipedream add-package alauda\_django\_oauth](https://pypi.org/project/alauda_django_oauth) | import oauth2\_provider |
| [# pipedream add-package oauth2client](https://pypi.org/project/oauth2client) | import oauth2client |
| [# pipedream add-package odfpy](https://pypi.org/project/odfpy) | import odf |
| [# pipedream add-package Parsley](https://pypi.org/project/Parsley) | import ometa |
| [# pipedream add-package python\_openid](https://pypi.org/project/python_openid) | import openid |
| [# pipedream add-package ali\_opensearch](https://pypi.org/project/ali_opensearch) | import opensearchsdk |
| [# pipedream add-package oslo.i18n](https://pypi.org/project/oslo.i18n) | import oslo\_i18n |
| [# pipedream add-package oslo.serialization](https://pypi.org/project/oslo.serialization) | import oslo\_serialization |
| [# pipedream add-package oslo.utils](https://pypi.org/project/oslo.utils) | import oslo\_utils |
| [# pipedream add-package alioss](https://pypi.org/project/alioss) | import oss |
| [# pipedream add-package aliyun\_python\_sdk\_oss](https://pypi.org/project/aliyun_python_sdk_oss) | import oss |
| [# pipedream add-package aliyunoss](https://pypi.org/project/aliyunoss) | import oss |
| [# pipedream add-package cashew](https://pypi.org/project/cashew) | import output |
| [# pipedream add-package OWSLib](https://pypi.org/project/OWSLib) | import owslib |
| [# pipedream add-package nwdiag](https://pypi.org/project/nwdiag) | import rackdiag |
| [# pipedream add-package paho\_mqtt](https://pypi.org/project/paho_mqtt) | import paho |
| [# pipedream add-package django\_paintstore](https://pypi.org/project/django_paintstore) | import paintstore |
| [# pipedream add-package django\_parler](https://pypi.org/project/django_parler) | import parler |
| [# pipedream add-package PasteScript](https://pypi.org/project/PasteScript) | import paste |
| [# pipedream add-package forked\_path](https://pypi.org/project/forked_path) | import path |
| [# pipedream add-package path.py](https://pypi.org/project/path.py) | import path |
| [# pipedream add-package patricia-trie](https://pypi.org/project/patricia-trie) | import patricia |
| [# pipedream add-package Paver](https://pypi.org/project/Paver) | import paver |
| [# pipedream add-package ProxyTypes](https://pypi.org/project/ProxyTypes) | import peak |
| [# pipedream add-package anderson.picasso](https://pypi.org/project/anderson.picasso) | import picasso |
| [# pipedream add-package django-picklefield](https://pypi.org/project/django-picklefield) | import picklefield |
| [# pipedream add-package pivotal\_py](https://pypi.org/project/pivotal_py) | import pivotal |
| [# pipedream add-package peewee](https://pypi.org/project/peewee) | import pwiz |
| [# pipedream add-package plivo](https://pypi.org/project/plivo) | import plivoxml |
| [# pipedream add-package plone.alterego](https://pypi.org/project/plone.alterego) | import plone |
| [# pipedream add-package plone.api](https://pypi.org/project/plone.api) | import plone |
| [# pipedream add-package plone.app.blob](https://pypi.org/project/plone.app.blob) | import plone |
| [# pipedream add-package plone.app.collection](https://pypi.org/project/plone.app.collection) | import plone |
| [# pipedream add-package plone.app.content](https://pypi.org/project/plone.app.content) | import plone |
| [# pipedream add-package plone.app.contentlisting](https://pypi.org/project/plone.app.contentlisting) | import plone |
| [# pipedream add-package plone.app.contentmenu](https://pypi.org/project/plone.app.contentmenu) | import plone |
| [# pipedream add-package plone.app.contentrules](https://pypi.org/project/plone.app.contentrules) | import plone |
| [# pipedream add-package plone.app.contenttypes](https://pypi.org/project/plone.app.contenttypes) | import plone |
| [# pipedream add-package plone.app.controlpanel](https://pypi.org/project/plone.app.controlpanel) | import plone |
| [# pipedream add-package plone.app.customerize](https://pypi.org/project/plone.app.customerize) | import plone |
| [# pipedream add-package plone.app.dexterity](https://pypi.org/project/plone.app.dexterity) | import plone |
| [# pipedream add-package plone.app.discussion](https://pypi.org/project/plone.app.discussion) | import plone |
| [# pipedream add-package plone.app.event](https://pypi.org/project/plone.app.event) | import plone |
| [# pipedream add-package plone.app.folder](https://pypi.org/project/plone.app.folder) | import plone |
| [# pipedream add-package plone.app.i18n](https://pypi.org/project/plone.app.i18n) | import plone |
| [# pipedream add-package plone.app.imaging](https://pypi.org/project/plone.app.imaging) | import plone |
| [# pipedream add-package plone.app.intid](https://pypi.org/project/plone.app.intid) | import plone |
| [# pipedream add-package plone.app.layout](https://pypi.org/project/plone.app.layout) | import plone |
| [# pipedream add-package plone.app.linkintegrity](https://pypi.org/project/plone.app.linkintegrity) | import plone |
| [# pipedream add-package plone.app.locales](https://pypi.org/project/plone.app.locales) | import plone |
| [# pipedream add-package plone.app.lockingbehavior](https://pypi.org/project/plone.app.lockingbehavior) | import plone |
| [# pipedream add-package plone.app.multilingual](https://pypi.org/project/plone.app.multilingual) | import plone |
| [# pipedream add-package plone.app.portlets](https://pypi.org/project/plone.app.portlets) | import plone |
| [# pipedream add-package plone.app.querystring](https://pypi.org/project/plone.app.querystring) | import plone |
| [# pipedream add-package plone.app.redirector](https://pypi.org/project/plone.app.redirector) | import plone |
| [# pipedream add-package plone.app.registry](https://pypi.org/project/plone.app.registry) | import plone |
| [# pipedream add-package plone.app.relationfield](https://pypi.org/project/plone.app.relationfield) | import plone |
| [# pipedream add-package plone.app.textfield](https://pypi.org/project/plone.app.textfield) | import plone |
| [# pipedream add-package plone.app.theming](https://pypi.org/project/plone.app.theming) | import plone |
| [# pipedream add-package plone.app.users](https://pypi.org/project/plone.app.users) | import plone |
| [# pipedream add-package plone.app.uuid](https://pypi.org/project/plone.app.uuid) | import plone |
| [# pipedream add-package plone.app.versioningbehavior](https://pypi.org/project/plone.app.versioningbehavior) | import plone |
| [# pipedream add-package plone.app.viewletmanager](https://pypi.org/project/plone.app.viewletmanager) | import plone |
| [# pipedream add-package plone.app.vocabularies](https://pypi.org/project/plone.app.vocabularies) | import plone |
| [# pipedream add-package plone.app.widgets](https://pypi.org/project/plone.app.widgets) | import plone |
| [# pipedream add-package plone.app.workflow](https://pypi.org/project/plone.app.workflow) | import plone |
| [# pipedream add-package plone.app.z3cform](https://pypi.org/project/plone.app.z3cform) | import plone |
| [# pipedream add-package plone.autoform](https://pypi.org/project/plone.autoform) | import plone |
| [# pipedream add-package plone.batching](https://pypi.org/project/plone.batching) | import plone |
| [# pipedream add-package plone.behavior](https://pypi.org/project/plone.behavior) | import plone |
| [# pipedream add-package plone.browserlayer](https://pypi.org/project/plone.browserlayer) | import plone |
| [# pipedream add-package plone.caching](https://pypi.org/project/plone.caching) | import plone |
| [# pipedream add-package plone.contentrules](https://pypi.org/project/plone.contentrules) | import plone |
| [# pipedream add-package plone.dexterity](https://pypi.org/project/plone.dexterity) | import plone |
| [# pipedream add-package plone.event](https://pypi.org/project/plone.event) | import plone |
| [# pipedream add-package plone.folder](https://pypi.org/project/plone.folder) | import plone |
| [# pipedream add-package plone.formwidget.namedfile](https://pypi.org/project/plone.formwidget.namedfile) | import plone |
| [# pipedream add-package plone.formwidget.recurrence](https://pypi.org/project/plone.formwidget.recurrence) | import plone |
| [# pipedream add-package plone.i18n](https://pypi.org/project/plone.i18n) | import plone |
| [# pipedream add-package plone.indexer](https://pypi.org/project/plone.indexer) | import plone |
| [# pipedream add-package plone.intelligenttext](https://pypi.org/project/plone.intelligenttext) | import plone |
| [# pipedream add-package plone.keyring](https://pypi.org/project/plone.keyring) | import plone |
| [# pipedream add-package plone.locking](https://pypi.org/project/plone.locking) | import plone |
| [# pipedream add-package plone.memoize](https://pypi.org/project/plone.memoize) | import plone |
| [# pipedream add-package plone.namedfile](https://pypi.org/project/plone.namedfile) | import plone |
| [# pipedream add-package plone.outputfilters](https://pypi.org/project/plone.outputfilters) | import plone |
| [# pipedream add-package plone.portlet.collection](https://pypi.org/project/plone.portlet.collection) | import plone |
| [# pipedream add-package plone.portlet.static](https://pypi.org/project/plone.portlet.static) | import plone |
| [# pipedream add-package plone.portlets](https://pypi.org/project/plone.portlets) | import plone |
| [# pipedream add-package plone.protect](https://pypi.org/project/plone.protect) | import plone |
| [# pipedream add-package plone.recipe.zope2install](https://pypi.org/project/plone.recipe.zope2install) | import plone |
| [# pipedream add-package plone.registry](https://pypi.org/project/plone.registry) | import plone |
| [# pipedream add-package plone.resource](https://pypi.org/project/plone.resource) | import plone |
| [# pipedream add-package plone.resourceeditor](https://pypi.org/project/plone.resourceeditor) | import plone |
| [# pipedream add-package plone.rfc822](https://pypi.org/project/plone.rfc822) | import plone |
| [# pipedream add-package plone.scale](https://pypi.org/project/plone.scale) | import plone |
| [# pipedream add-package plone.schema](https://pypi.org/project/plone.schema) | import plone |
| [# pipedream add-package plone.schemaeditor](https://pypi.org/project/plone.schemaeditor) | import plone |
| [# pipedream add-package plone.session](https://pypi.org/project/plone.session) | import plone |
| [# pipedream add-package plone.stringinterp](https://pypi.org/project/plone.stringinterp) | import plone |
| [# pipedream add-package plone.subrequest](https://pypi.org/project/plone.subrequest) | import plone |
| [# pipedream add-package plone.supermodel](https://pypi.org/project/plone.supermodel) | import plone |
| [# pipedream add-package plone.synchronize](https://pypi.org/project/plone.synchronize) | import plone |
| [# pipedream add-package plone.theme](https://pypi.org/project/plone.theme) | import plone |
| [# pipedream add-package plone.transformchain](https://pypi.org/project/plone.transformchain) | import plone |
| [# pipedream add-package plone.uuid](https://pypi.org/project/plone.uuid) | import plone |
| [# pipedream add-package plone.z3cform](https://pypi.org/project/plone.z3cform) | import plone |
| [# pipedream add-package plonetheme.barceloneta](https://pypi.org/project/plonetheme.barceloneta) | import plonetheme |
| [# pipedream add-package pypng](https://pypi.org/project/pypng) | import png |
| [# pipedream add-package django\_polymorphic](https://pypi.org/project/django_polymorphic) | import polymorphic |
| [# pipedream add-package python\_postmark](https://pypi.org/project/python_postmark) | import postmark |
| [# pipedream add-package bash\_powerprompt](https://pypi.org/project/bash_powerprompt) | import powerprompt |
| [# pipedream add-package django-prefetch](https://pypi.org/project/django-prefetch) | import prefetch |
| [# pipedream add-package AndrewList](https://pypi.org/project/AndrewList) | import printList |
| [# pipedream add-package progressbar2](https://pypi.org/project/progressbar2) | import progressbar |
| [# pipedream add-package progressbar33](https://pypi.org/project/progressbar33) | import progressbar |
| [# pipedream add-package django\_oauth2\_provider](https://pypi.org/project/django_oauth2_provider) | import provider |
| [# pipedream add-package pure\_sasl](https://pypi.org/project/pure_sasl) | import puresasl |
| [# pipedream add-package pylzma](https://pypi.org/project/pylzma) | import py7zlib |
| [# pipedream add-package pyAMI\_core](https://pypi.org/project/pyAMI_core) | import pyAMI |
| [# pipedream add-package arsespyder](https://pypi.org/project/arsespyder) | import pyarsespyder |
| [# pipedream add-package asdf](https://pypi.org/project/asdf) | import pyasdf |
| [# pipedream add-package aspell\_python\_ctypes](https://pypi.org/project/aspell_python_ctypes) | import pyaspell |
| [# pipedream add-package pybbm](https://pypi.org/project/pybbm) | import pybb |
| [# pipedream add-package pybloomfiltermmap](https://pypi.org/project/pybloomfiltermmap) | import pybloomfilter |
| [# pipedream add-package Pyccuracy](https://pypi.org/project/Pyccuracy) | import pyccuracy |
| [# pipedream add-package PyCK](https://pypi.org/project/PyCK) | import pyck |
| [# pipedream add-package python\_crfsuite](https://pypi.org/project/python_crfsuite) | import pycrfsuite |
| [# pipedream add-package PyDispatcher](https://pypi.org/project/PyDispatcher) | import pydispatch |
| [# pipedream add-package pygeocoder](https://pypi.org/project/pygeocoder) | import pygeolib |
| [# pipedream add-package Pygments](https://pypi.org/project/Pygments) | import pygments |
| [# pipedream add-package python\_graph\_core](https://pypi.org/project/python_graph_core) | import pygraph |
| [# pipedream add-package pyjon.utils](https://pypi.org/project/pyjon.utils) | import pyjon |
| [# pipedream add-package python\_jsonrpc](https://pypi.org/project/python_jsonrpc) | import pyjsonrpc |
| [# pipedream add-package Pykka](https://pypi.org/project/Pykka) | import pykka |
| [# pipedream add-package PyLogo](https://pypi.org/project/PyLogo) | import pylogo |
| [# pipedream add-package adhocracy\_Pylons](https://pypi.org/project/adhocracy_Pylons) | import pylons |
| [# pipedream add-package libmagic](https://pypi.org/project/libmagic) | import pymagic |
| [# pipedream add-package Amalwebcrawler](https://pypi.org/project/Amalwebcrawler) | import pymycraawler |
| [# pipedream add-package AbakaffeNotifier](https://pypi.org/project/AbakaffeNotifier) | import pynma |
| [# pipedream add-package Pyphen](https://pypi.org/project/Pyphen) | import pyphen |
| [# pipedream add-package AEI](https://pypi.org/project/AEI) | import pyrimaa |
| [# pipedream add-package adhocracy\_pysqlite](https://pypi.org/project/adhocracy_pysqlite) | import pysqlite2 |
| [# pipedream add-package pysqlite](https://pypi.org/project/pysqlite) | import pysqlite2 |
| [# pipedream add-package python\_gettext](https://pypi.org/project/python_gettext) | import pythongettext |
| [# pipedream add-package python\_json\_logger](https://pypi.org/project/python_json_logger) | import pythonjsonlogger |
| [# pipedream add-package PyUtilib](https://pypi.org/project/PyUtilib) | import pyutilib |
| [# pipedream add-package Cython](https://pypi.org/project/Cython) | import pyximport |
| [# pipedream add-package qserve](https://pypi.org/project/qserve) | import qs |
| [# pipedream add-package django\_quickapi](https://pypi.org/project/django_quickapi) | import quickapi |
| [# pipedream add-package nose\_quickunit](https://pypi.org/project/nose_quickunit) | import quickunit |
| [# pipedream add-package radical.pilot](https://pypi.org/project/radical.pilot) | import radical |
| [# pipedream add-package radical.utils](https://pypi.org/project/radical.utils) | import radical |
| [# pipedream add-package readability\_lxml](https://pypi.org/project/readability_lxml) | import readability |
| [# pipedream add-package gnureadline](https://pypi.org/project/gnureadline) | import readline |
| [# pipedream add-package django\_recaptcha\_works](https://pypi.org/project/django_recaptcha_works) | import recaptcha\_works |
| [# pipedream add-package RelStorage](https://pypi.org/project/RelStorage) | import relstorage |
| [# pipedream add-package django\_reportapi](https://pypi.org/project/django_reportapi) | import reportapi |
| [# pipedream add-package Requests](https://pypi.org/project/Requests) | import requests |
| [# pipedream add-package requirements\_parser](https://pypi.org/project/requirements_parser) | import requirements |
| [# pipedream add-package djangorestframework](https://pypi.org/project/djangorestframework) | import rest\_framework |
| [# pipedream add-package py\_restclient](https://pypi.org/project/py_restclient) | import restclient |
| [# pipedream add-package async\_retrial](https://pypi.org/project/async_retrial) | import retrial |
| [# pipedream add-package django\_reversion](https://pypi.org/project/django_reversion) | import reversion |
| [# pipedream add-package rhaptos2.common](https://pypi.org/project/rhaptos2.common) | import rhaptos2 |
| [# pipedream add-package robotframework](https://pypi.org/project/robotframework) | import robot |
| [# pipedream add-package django\_robots](https://pypi.org/project/django_robots) | import robots |
| [# pipedream add-package rosdep](https://pypi.org/project/rosdep) | import rosdep2 |
| [# pipedream add-package RSFile](https://pypi.org/project/RSFile) | import rsbackends |
| [# pipedream add-package ruamel.base](https://pypi.org/project/ruamel.base) | import ruamel |
| [# pipedream add-package pysaml2](https://pypi.org/project/pysaml2) | import xmlenc |
| [# pipedream add-package saga\_python](https://pypi.org/project/saga_python) | import saga |
| [# pipedream add-package aws-sam-translator](https://pypi.org/project/aws-sam-translator) | import samtranslator |
| [# pipedream add-package libsass](https://pypi.org/project/libsass) | import sassutils |
| [# pipedream add-package alex\_sayhi](https://pypi.org/project/alex_sayhi) | import sayhi |
| [# pipedream add-package scalr](https://pypi.org/project/scalr) | import scalrtools |
| [# pipedream add-package scikits.talkbox](https://pypi.org/project/scikits.talkbox) | import scikits |
| [# pipedream add-package scratchpy](https://pypi.org/project/scratchpy) | import scratch |
| [# pipedream add-package pyScss](https://pypi.org/project/pyScss) | import scss |
| [# pipedream add-package dict.sorted](https://pypi.org/project/dict.sorted) | import sdict |
| [# pipedream add-package android\_sdk\_updater](https://pypi.org/project/android_sdk_updater) | import sdk\_updater |
| [# pipedream add-package django\_sekizai](https://pypi.org/project/django_sekizai) | import sekizai |
| [# pipedream add-package pysendfile](https://pypi.org/project/pysendfile) | import sendfile |
| [# pipedream add-package pyserial](https://pypi.org/project/pyserial) | import serial |
| [# pipedream add-package astor](https://pypi.org/project/astor) | import setuputils |
| [# pipedream add-package pyshp](https://pypi.org/project/pyshp) | import shapefile |
| [# pipedream add-package Shapely](https://pypi.org/project/Shapely) | import shapely |
| [# pipedream add-package ahonya\_sika](https://pypi.org/project/ahonya_sika) | import sika |
| [# pipedream add-package pysingleton](https://pypi.org/project/pysingleton) | import singleton |
| [# pipedream add-package scikit\_bio](https://pypi.org/project/scikit_bio) | import skbio |
| [# pipedream add-package scikit\_learn](https://pypi.org/project/scikit_learn) | import sklearn |
| [# pipedream add-package slackclient](https://pypi.org/project/slackclient) | import slack |
| [# pipedream add-package unicode\_slugify](https://pypi.org/project/unicode_slugify) | import slugify |
| [# pipedream add-package smk\_python\_sdk](https://pypi.org/project/smk_python_sdk) | import smarkets |
| [# pipedream add-package ctypes\_snappy](https://pypi.org/project/ctypes_snappy) | import snappy |
| [# pipedream add-package gevent\_socketio](https://pypi.org/project/gevent_socketio) | import socketio |
| [# pipedream add-package sockjs\_tornado](https://pypi.org/project/sockjs_tornado) | import sockjs |
| [# pipedream add-package SocksiPy\_branch](https://pypi.org/project/SocksiPy_branch) | import socks |
| [# pipedream add-package solrpy](https://pypi.org/project/solrpy) | import solr |
| [# pipedream add-package Solution](https://pypi.org/project/Solution) | import solution |
| [# pipedream add-package sorl\_thumbnail](https://pypi.org/project/sorl_thumbnail) | import sorl |
| [# pipedream add-package South](https://pypi.org/project/South) | import south |
| [# pipedream add-package Sphinx](https://pypi.org/project/Sphinx) | import sphinx |
| [# pipedream add-package ATD\_document](https://pypi.org/project/ATD_document) | import sphinx\_pypi\_upload |
| [# pipedream add-package sphinxcontrib\_programoutput](https://pypi.org/project/sphinxcontrib_programoutput) | import sphinxcontrib |
| [# pipedream add-package SQLAlchemy](https://pypi.org/project/SQLAlchemy) | import sqlalchemy |
| [# pipedream add-package atlas](https://pypi.org/project/atlas) | import src |
| [# pipedream add-package auto\_mix\_prep](https://pypi.org/project/auto_mix_prep) | import src |
| [# pipedream add-package bw\_stats\_toolkit](https://pypi.org/project/bw_stats_toolkit) | import stats\_toolkit |
| [# pipedream add-package dogstatsd\_python](https://pypi.org/project/dogstatsd_python) | import statsd |
| [# pipedream add-package python\_stdnum](https://pypi.org/project/python_stdnum) | import stdnum |
| [# pipedream add-package StoneageHTML](https://pypi.org/project/StoneageHTML) | import stoneagehtml |
| [# pipedream add-package django\_storages](https://pypi.org/project/django_storages) | import storages |
| [# pipedream add-package mox](https://pypi.org/project/mox) | import stubout |
| [# pipedream add-package suds\_jurko](https://pypi.org/project/suds_jurko) | import suds |
| [# pipedream add-package python\_swiftclient](https://pypi.org/project/python_swiftclient) | import swiftclient |
| [# pipedream add-package pytabix](https://pypi.org/project/pytabix) | import test |
| [# pipedream add-package django\_taggit](https://pypi.org/project/django_taggit) | import taggit |
| [# pipedream add-package django\_tastypie](https://pypi.org/project/django_tastypie) | import tastypie |
| [# pipedream add-package teamcity\_messages](https://pypi.org/project/teamcity_messages) | import teamcity |
| [# pipedream add-package pyTelegramBotAPI](https://pypi.org/project/pyTelegramBotAPI) | import telebot |
| [# pipedream add-package Tempita](https://pypi.org/project/Tempita) | import tempita |
| [# pipedream add-package Tenjin](https://pypi.org/project/Tenjin) | import tenjin |
| [# pipedream add-package python\_termstyle](https://pypi.org/project/python_termstyle) | import termstyle |
| [# pipedream add-package treeherder\_client](https://pypi.org/project/treeherder_client) | import thclient |
| [# pipedream add-package django\_threaded\_multihost](https://pypi.org/project/django_threaded_multihost) | import threaded\_multihost |
| [# pipedream add-package 3color\_Press](https://pypi.org/project/3color_Press) | import threecolor |
| [# pipedream add-package pytidylib](https://pypi.org/project/pytidylib) | import tidylib |
| [# pipedream add-package 3lwg](https://pypi.org/project/3lwg) | import tlw |
| [# pipedream add-package toredis\_fork](https://pypi.org/project/toredis_fork) | import toredis |
| [# pipedream add-package tornado\_redis](https://pypi.org/project/tornado_redis) | import tornadoredis |
| [# pipedream add-package ansible\_tower\_cli](https://pypi.org/project/ansible_tower_cli) | import tower\_cli |
| [# pipedream add-package Trac](https://pypi.org/project/Trac) | import tracopt |
| [# pipedream add-package android\_localization\_helper](https://pypi.org/project/android_localization_helper) | import translation\_helper |
| [# pipedream add-package django\_treebeard](https://pypi.org/project/django_treebeard) | import treebeard |
| [# pipedream add-package trytond\_stock](https://pypi.org/project/trytond_stock) | import trytond |
| [# pipedream add-package tsuru\_circus](https://pypi.org/project/tsuru_circus) | import tsuru |
| [# pipedream add-package python\_tvrage](https://pypi.org/project/python_tvrage) | import tvrage |
| [# pipedream add-package tw2.core](https://pypi.org/project/tw2.core) | import tw2 |
| [# pipedream add-package tw2.d3](https://pypi.org/project/tw2.d3) | import tw2 |
| [# pipedream add-package tw2.dynforms](https://pypi.org/project/tw2.dynforms) | import tw2 |
| [# pipedream add-package tw2.excanvas](https://pypi.org/project/tw2.excanvas) | import tw2 |
| [# pipedream add-package tw2.forms](https://pypi.org/project/tw2.forms) | import tw2 |
| [# pipedream add-package tw2.jit](https://pypi.org/project/tw2.jit) | import tw2 |
| [# pipedream add-package tw2.jqplugins.flot](https://pypi.org/project/tw2.jqplugins.flot) | import tw2 |
| [# pipedream add-package tw2.jqplugins.gritter](https://pypi.org/project/tw2.jqplugins.gritter) | import tw2 |
| [# pipedream add-package tw2.jqplugins.ui](https://pypi.org/project/tw2.jqplugins.ui) | import tw2 |
| [# pipedream add-package tw2.jquery](https://pypi.org/project/tw2.jquery) | import tw2 |
| [# pipedream add-package tw2.sqla](https://pypi.org/project/tw2.sqla) | import tw2 |
| [# pipedream add-package Twisted](https://pypi.org/project/Twisted) | import twisted |
| [# pipedream add-package python\_twitter](https://pypi.org/project/python_twitter) | import twitter |
| [# pipedream add-package transifex\_client](https://pypi.org/project/transifex_client) | import txclib |
| [# pipedream add-package 115wangpan](https://pypi.org/project/115wangpan) | import u115 |
| [# pipedream add-package Unidecode](https://pypi.org/project/Unidecode) | import unidecode |
| [# pipedream add-package ansible\_universe](https://pypi.org/project/ansible_universe) | import universe |
| [# pipedream add-package pyusb](https://pypi.org/project/pyusb) | import usb |
| [# pipedream add-package useless.pipes](https://pypi.org/project/useless.pipes) | import useless |
| [# pipedream add-package auth\_userpass](https://pypi.org/project/auth_userpass) | import userpass |
| [# pipedream add-package automakesetup.py](https://pypi.org/project/automakesetup.py) | import utilities |
| [# pipedream add-package aino\_utkik](https://pypi.org/project/aino_utkik) | import utkik |
| [# pipedream add-package uWSGI](https://pypi.org/project/uWSGI) | import uwsgidecorators |
| [# pipedream add-package ab](https://pypi.org/project/ab) | import valentine |
| [# pipedream add-package configobj](https://pypi.org/project/configobj) | import validate |
| [# pipedream add-package chartio](https://pypi.org/project/chartio) | import version |
| [# pipedream add-package ar\_virtualenv\_api](https://pypi.org/project/ar_virtualenv_api) | import virtualenvapi |
| [# pipedream add-package brocade\_plugins](https://pypi.org/project/brocade_plugins) | import vyatta |
| [# pipedream add-package WebOb](https://pypi.org/project/WebOb) | import webob |
| [# pipedream add-package websocket\_client](https://pypi.org/project/websocket_client) | import websocket |
| [# pipedream add-package WebTest](https://pypi.org/project/WebTest) | import webtest |
| [# pipedream add-package Werkzeug](https://pypi.org/project/Werkzeug) | import werkzeug |
| [# pipedream add-package wheezy.caching](https://pypi.org/project/wheezy.caching) | import wheezy |
| [# pipedream add-package wheezy.core](https://pypi.org/project/wheezy.core) | import wheezy |
| [# pipedream add-package wheezy.http](https://pypi.org/project/wheezy.http) | import wheezy |
| [# pipedream add-package tiddlywebwiki](https://pypi.org/project/tiddlywebwiki) | import wikklytext |
| [# pipedream add-package pywinrm](https://pypi.org/project/pywinrm) | import winrm |
| [# pipedream add-package Alfred\_Workflow](https://pypi.org/project/Alfred_Workflow) | import workflow |
| [# pipedream add-package WSME](https://pypi.org/project/WSME) | import wsmeext |
| [# pipedream add-package WTForms](https://pypi.org/project/WTForms) | import wtforms |
| [# pipedream add-package wtf\_peewee](https://pypi.org/project/wtf_peewee) | import wtfpeewee |
| [# pipedream add-package pyxdg](https://pypi.org/project/pyxdg) | import xdg |
| [# pipedream add-package pytest\_xdist](https://pypi.org/project/pytest_xdist) | import xdist |
| [# pipedream add-package xmpppy](https://pypi.org/project/xmpppy) | import xmpp |
| [# pipedream add-package XStatic\_Font\_Awesome](https://pypi.org/project/XStatic_Font_Awesome) | import xstatic |
| [# pipedream add-package XStatic\_jQuery](https://pypi.org/project/XStatic_jQuery) | import xstatic |
| [# pipedream add-package XStatic\_jquery\_ui](https://pypi.org/project/XStatic_jquery_ui) | import xstatic |
| [# pipedream add-package PyYAML](https://pypi.org/project/PyYAML) | import yaml |
| [# pipedream add-package z3c.autoinclude](https://pypi.org/project/z3c.autoinclude) | import z3c |
| [# pipedream add-package z3c.caching](https://pypi.org/project/z3c.caching) | import z3c |
| [# pipedream add-package z3c.form](https://pypi.org/project/z3c.form) | import z3c |
| [# pipedream add-package z3c.formwidget.query](https://pypi.org/project/z3c.formwidget.query) | import z3c |
| [# pipedream add-package z3c.objpath](https://pypi.org/project/z3c.objpath) | import z3c |
| [# pipedream add-package z3c.pt](https://pypi.org/project/z3c.pt) | import z3c |
| [# pipedream add-package z3c.relationfield](https://pypi.org/project/z3c.relationfield) | import z3c |
| [# pipedream add-package z3c.traverser](https://pypi.org/project/z3c.traverser) | import z3c |
| [# pipedream add-package z3c.zcmlhook](https://pypi.org/project/z3c.zcmlhook) | import z3c |
| [# pipedream add-package pyzmq](https://pypi.org/project/pyzmq) | import zmq |
| [# pipedream add-package zopyx.textindexng3](https://pypi.org/project/zopyx.textindexng3) | import zopyx |
# Pause, resume, and rerun a workflow
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/rerun
You can use `pd.flow.suspend` and `pd.flow.rerun` to pause a workflow and resume it later.
This is useful when you want to:
* Pause a workflow until someone manually approves it
* Poll an external API until some job completes, and proceed with the workflow when it’s done
* Trigger an external API to start a job, pause the workflow, and resume it when the external API sends an HTTP callback
We’ll cover all of these examples below.
## `pd.flow.suspend`
Use `pd.flow.suspend` when you want to pause a workflow so the remaining steps run only after manual approval (or not at all, if cancelled).
For example, you can suspend a workflow and send yourself a link to manually resume or cancel the rest of the workflow:
```python
def handler(pd: 'pipedream'):
    urls = pd.flow.suspend()
    pd.send.email(
        subject="Please approve this important workflow",
        text=f"Click here to approve the workflow: {urls['resume_url']}, and cancel here: {urls['cancel_url']}"
    )
    # Pipedream suspends your workflow at the end of the step
```
You’ll receive an email with both links, and you can resume or cancel the rest of the workflow by clicking the appropriate one.
### `resume_url` and `cancel_url`
In general, calling `pd.flow.suspend` returns a `cancel_url` and `resume_url` that lets you cancel or resume paused executions. Since Pipedream pauses your workflow at the *end* of the step, you can pass these URLs to any external service before the workflow pauses. If that service accepts a callback URL, it can trigger the `resume_url` when its work is complete.
These URLs are specific to a single execution of your workflow. While the workflow is paused, you can load these in your browser or send any HTTP request to them:
* Sending an HTTP request to the `cancel_url` will cancel that execution
* Sending an HTTP request to the `resume_url` will resume that execution
If you resume a workflow, any data sent in the HTTP request is passed to the workflow and returned in the `$resume_data` [step export](/docs/workflows/#step-exports) of the suspended step. For example, if you call `pd.flow.suspend` within a step named `code`, the `$resume_data` export will contain the data sent in the `resume_url` request.
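For instance, you can resume an execution with an HTTP POST carrying a JSON body. A minimal sketch, assuming an illustrative payload and a placeholder URL (use the actual `resume_url` returned by `pd.flow.suspend`); it prepares the request locally so you can inspect the body the suspended step would receive in `$resume_data` — calling `requests.post(resume_url, json=payload)` instead would actually resume the execution:

```python
import json
import requests

# Placeholder — substitute the actual resume_url from pd.flow.suspend()
resume_url = "https://example.com/resume-placeholder"
payload = {"approved_by": "alice"}

# Prepare (but don't send) the request to inspect its JSON body
req = requests.Request("POST", resume_url, json=payload).prepare()
print(json.loads(req.body))  # → {'approved_by': 'alice'}
```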
### Default timeout of 24 hours
By default, `pd.flow.suspend` will automatically cancel the workflow after 24 hours. You can set your own timeout (in milliseconds) as the first argument:
```python
def handler(pd: 'pipedream'):
    # 7 days
    TIMEOUT = 1000 * 60 * 60 * 24 * 7
    pd.flow.suspend(TIMEOUT)
```
## `pd.flow.rerun`
Use `pd.flow.rerun` when you want to run a specific step of a workflow multiple times. This is useful when you need to start a job in an external API and poll for its completion, or have the service call back to the step and let you process the HTTP request within the step.
### Polling for the status of an external job
Sometimes you need to poll for the status of an external job until it completes. `pd.flow.rerun` lets you rerun a specific step multiple times:
```python
import requests

def handler(pd: 'pipedream'):
    MAX_RETRIES = 3
    # 10 seconds
    DELAY = 1000 * 10
    run = pd.context['run']
    print(pd.context)
    # pd.context['run']['runs'] starts at 1 and increments when the step is rerun
    if run['runs'] == 1:
        # pd.flow.rerun(delay, context (discussed below), max retries)
        pd.flow.rerun(DELAY, None, MAX_RETRIES)
    elif run['runs'] == MAX_RETRIES + 1:
        raise Exception("Max retries exceeded")
    else:
        # Poll external API for status
        response = requests.get("https://example.com/status")
        # If we're done, continue with the rest of the workflow
        if response.json()["status"] == "DONE":
            return response.json()
        # Else retry later
        pd.flow.rerun(DELAY, None, MAX_RETRIES)
```
`pd.flow.rerun` accepts the following arguments:
```python
pd.flow.rerun(
    delay,      # The number of milliseconds until the step will be rerun
    context,    # JSON-serializable data you need to pass between runs
    maxRetries, # The total number of times the step will rerun. Defaults to 10
)
```
### Accept an HTTP callback from an external service
When you trigger a job in an external service, and that service can send back data in an HTTP callback to Pipedream, you can process that data within the same step using `pd.flow.rerun`:
```python
import requests

def handler(pd: 'pipedream'):
    TIMEOUT = 86400 * 1000
    run = pd.context['run']
    # pd.context['run']['runs'] starts at 1 and increments when the step is rerun
    if run['runs'] == 1:
        links = pd.flow.rerun(TIMEOUT, None, 1)
        # links contains a dictionary with two entries: resume_url and cancel_url
        # Send resume_url to the external service
        requests.post("your callback URL", json=links)
    # When the external service calls back into the resume_url, you have access to
    # the callback data within pd.context['run']['callback_request']
    elif 'callback_request' in run:
        return run['callback_request']
```
### Passing `context` to `pd.flow.rerun`
Within a Python code step, `pd.context['run']['context']` contains the `context` passed from the prior call to `rerun`. This lets you pass data from one run to another. For example, if you call:
```python
pd.flow.rerun(1000, {"hello": "world"})
```
`pd.context['run']['context']` will contain `{"hello": "world"}` on the next run.
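Putting it together, a minimal sketch of reading that `context` on a later run (the delay and values are illustrative):

```python
def handler(pd: 'pipedream'):
    run = pd.context['run']
    if run['runs'] == 1:
        # Pass data to the next run via the context argument
        pd.flow.rerun(1000, {"hello": "world"})
    else:
        # On later runs, the context from the prior rerun call is available
        return run['context']
```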
### `maxRetries`
By default, `maxRetries` is **10**.
When you exceed `maxRetries`, the workflow proceeds to the next step. If you need to handle this case with an exception, `raise` an Exception from the step:
```python
def handler(pd: 'pipedream'):
    MAX_RETRIES = 3
    run = pd.context['run']
    if run['runs'] == 1:
        pd.flow.rerun(1000, None, MAX_RETRIES)
    elif run['runs'] == MAX_RETRIES + 1:
        raise Exception("Max retries exceeded")
```
## Behavior when testing
When you’re building a workflow and test a step with `pd.flow.suspend` or `pd.flow.rerun`, it will not suspend the workflow, and you’ll see a message like the following:
> Workflow execution canceled — this may be due to `pd.flow.suspend()` usage (not supported in test)
These functions will only suspend and resume when run in production.
## Credits usage when using `pd.flow.suspend` / `pd.flow.rerun`
You are not charged for the time your workflow is suspended during a `pd.flow.suspend` or `pd.flow.rerun`. Compute time counts toward [credit usage](/docs/pricing/#credits-and-billing) only while workflows are running. When a suspended workflow resumes, the credit counter resets, so each rerun or resumption is metered as a fresh credit.
# Using Data Stores
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/using-data-stores
You can store and retrieve data from [Data Stores](/docs/workflows/data-management/data-stores/) in Python without connecting to a 3rd party database.
Add a data store as an input to a Python step, then access it in your Python `handler` with `pd.inputs["data_store"]`.
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Store a value under a key
    data_store["key"] = "Hello World"
    # Retrieve the value and print it to the step's Logs
    print(data_store["key"])
```
## Adding a Data Store
Click *Add Data Store* near the top of a Python step:
This will add the selected data store to your Python code step.
## Saving data
Data stores are key-value stores. Saving data within a data store is just like setting a property on a dictionary:
```python
from datetime import datetime

def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Store a timestamp
    data_store["last_ran_at"] = datetime.now().isoformat()
```
### Setting expiration (TTL) for records
You can set an expiration time for a record by passing a TTL (Time-To-Live) option as the third argument to the `set` method. The TTL value is specified in seconds:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Store a temporary value that will expire after 1 hour (3600 seconds)
    data_store.set("temporaryToken", "abc123", ttl=3600)
    # Store a value that will expire after 1 day
    data_store.set("dailyMetric", 42, ttl=86400)
```
When the TTL period elapses, the record will be automatically deleted from the data store.
### Updating TTL for existing records
You can update the TTL for an existing record using the `set_ttl` method:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Update an existing record to expire after 30 minutes
    data_store.set_ttl("temporaryToken", ttl=1800)
    # Remove expiration from a record
    data_store.set_ttl("temporaryToken", ttl=None)
```
This is useful for extending the lifetime of temporary data or removing expiration from records that should now be permanent.
## Retrieving keys
Fetch all the keys in a given data store using the `keys` method:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Retrieve all keys in the data store
    keys = data_store.keys()
    # Print a comma-separated string of all keys
    print(*keys, sep=",")
```
The `data_store.keys()` method does not return a list; it returns a `Keys` iterable object. You cannot export a `data_store` or `data_store.keys()` from a Python code step at this time.
Instead, build a dictionary or list from the iterable when using the `data_store.keys()` method.
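For example, a minimal sketch that materializes the keys into a plain, exportable list:

```python
def handler(pd: "pipedream"):
    data_store = pd.inputs["data_store"]
    # Convert the Keys iterable into a list, which is JSON-serializable
    return list(data_store.keys())
```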
## Checking for the existence of specific keys
If you need to check whether a specific `key` exists in a data store, use `if` and `in` as a conditional:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Search for a key in a conditional
    if "last_ran_at" in data_store:
        print(f"Last ran at {data_store['last_ran_at']}")
```
## Retrieving data
Data stores are very performant at retrieving single records by key, and you can also iterate over keys to retrieve every record in a data store.
Data stores are intended to be a fast and convenient storage option for quickly adding persistence to your workflows without another database dependency.
However, if you need more advanced querying capabilities, such as filtering on record values or querying nested dictionaries, consider using a full-fledged database. Pipedream can integrate with MySQL, Postgres, DynamoDB, MongoDB, and more.
### Get a single record
You can retrieve single records from a data store by key:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Retrieve the timestamp value by the key name
    last_ran_at = data_store["last_ran_at"]
    # Print the timestamp
    print(f"Last ran at {last_ran_at}")
```
Alternatively, use the `data_store.get()` method to retrieve a specific key’s contents:
```python
def handler(pd: "pipedream"):
    # Access the data store under the pd.inputs
    data_store = pd.inputs["data_store"]
    # Retrieve the timestamp value by the key name
    last_ran_at = data_store.get("last_ran_at")
    # Print the timestamp
    print(f"Last ran at {last_ran_at}")
```
What’s the difference between `data_store["key"]` and `data_store.get("key")`?
* `data_store["key"]` will throw a `TypeError` if the key doesn’t exist in the data store.
* `data_store.get("key")` will instead return `None` if the key doesn’t exist in the data store.
* `data_store.get("key", "default_value")` will return `"default_value"` if the key doesn’t exist on the data store.
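The default argument makes `get` handy for read-modify-write patterns where the key may not exist yet. A minimal sketch, assuming an illustrative `visit_count` key:

```python
def handler(pd: "pipedream"):
    data_store = pd.inputs["data_store"]
    # Fall back to 0 the first time this workflow runs
    count = data_store.get("visit_count", 0)
    data_store["visit_count"] = count + 1
    return data_store["visit_count"]
```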
### Retrieving all records
You can retrieve all records within a data store by iterating over its items:
```python
def handler(pd: "pipedream"):
    data_store = pd.inputs["data_store"]
    records = {}
    for k, v in data_store.items():
        records[k] = v
    return records
```
This code step example exports all records within the data store as a dictionary.
## Deleting or updating values within a record
To delete or update the *value* of an individual record, assign a new value to its key, or assign `''` to clear the value while retaining the key.
```python
def handler(pd: "pipedream"):
# Access the data store under the pd.inputs
data_store = pd.inputs["data_store"]
# Assign a new value to the key
data_store["myKey"] = "newValue"
# Remove the value but retain the key
data_store["myKey"] = ""
```
### Working with nested dictionaries
You can store dictionaries within a record, which allows you to create complex records.
However, to update specific attributes within a nested dictionary, you’ll need to replace the record entirely.
For example, the code below will **not** update the `name` attribute of the dictionary stored under the key `pokemon`:
```python
def handler(pd: "pipedream"):
# The current dictionary looks like this:
# pokemon: {
# "name": "Charmander"
# "type": "fire"
# }
# You'll see "Charmander" in the logs
print(pd.inputs['data_store']['pokemon']['name'])
# attempting to overwrite the pokemon's name will not apply
pd.inputs['data_store']['pokemon']['name'] = 'Bulbasaur'
# Exports "Charmander"
return pd.inputs['data_store']['pokemon']['name']
```
Instead, *overwrite* the entire record to modify attributes:
```python
def handler(pd: "pipedream"):
    # retrieve the record by its key first
pokemon = pd.inputs['data_store']['pokemon']
# now update the record's attribute
pokemon['name'] = 'Bulbasaur'
    # and outright replace the record with the modified dictionary
pd.inputs['data_store']['pokemon'] = pokemon
# Now we'll see "Bulbasaur" exported
return pd.inputs['data_store']['pokemon']['name']
```
## Deleting specific records
To delete individual records in a data store, use the `del` operation for a specific `key`:
```python
def handler(pd: "pipedream"):
# Access the data store under the pd.inputs
data_store = pd.inputs["data_store"]
# Delete the last_ran_at timestamp key
del data_store["last_ran_at"]
```
## Deleting all records from a specific data store
If you need to delete all records in a given data store, you can use the `clear` method.
```python
def handler(pd: "pipedream"):
# Access the data store under the pd.inputs
data_store = pd.inputs["data_store"]
    # Delete the entire contents of the data store
data_store.clear()
```
`data_store.clear()` is an **irreversible** change, **even when testing code** in the workflow builder.
## Viewing store data
You can view the contents of your data stores in your [Pipedream dashboard](https://pipedream.com/stores).
From here you can also manually edit your data store’s data, rename stores, delete stores, or create new stores.
## Workflow counter example
You can use a data store as a counter. For example, this code counts the number of times the workflow runs:
```python
def handler(pd: "pipedream"):
# Access the data store under the pd.inputs
data_store = pd.inputs["data_store"]
# if the counter doesn't exist yet, start it at one
    if data_store.get("counter") is None:
data_store["counter"] = 1
# Otherwise, increment it by one
else:
count = data_store["counter"]
data_store["counter"] = count + 1
```
## Dedupe data example
Data Stores are also useful for storing data from prior runs to prevent acting on duplicate data, or data that’s been seen before.
For example, this workflow’s trigger contains an email address from a potential new customer. But we want to track all emails collected so we don’t send a welcome email twice:
```python
def handler(pd: "pipedream"):
# Access the data store
data_store = pd.inputs["data_store"]
# Reference the incoming email from the HTTP request
new_email = pd.steps["trigger"]["event"]["body"]["new_customer_email"]
# Retrieve the emails stored in our data store
emails = data_store.get('emails', [])
# If this email has been seen before, exit early
if new_email in emails:
print(f"Already seen {new_email}, exiting")
return False
# This email is new, append it to our list
else:
print(f"Adding new email to data store {new_email}")
emails.append(new_email)
data_store["emails"] = emails
return new_email
```
## TTL use case: temporary caching and rate limiting
TTL functionality is particularly useful for implementing temporary caching and rate limiting. Here’s an example of a simple rate limiter that prevents a user from making more than 5 requests per hour:
```python
def handler(pd: "pipedream"):
# Access the data store
data_store = pd.inputs["data_store"]
user_id = pd.steps["trigger"]["event"]["user_id"]
rate_key = f"ratelimit:{user_id}"
# Try to get current rate limit counter
    requests_num = data_store.get(rate_key)
if not requests_num:
# First request from this user in the time window
data_store.set(rate_key, 1, ttl=3600) # Expire after 1 hour
return { "allowed": True, "remaining": 4 }
if requests_num >= 5:
# Rate limit exceeded
return { "allowed": False, "error": "Rate limit exceeded", "retry_after": "1 hour" }
# Increment the counter
    data_store[rate_key] = requests_num + 1
return { "allowed": True, "remaining": 4 - requests_num }
```
This pattern can be extended for various temporary caching scenarios like:
* Session tokens with automatic expiration
* Short-lived feature flags
* Temporary access grants
* Time-based promotional codes
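The expiry behavior behind `data_store.set(key, value, ttl=...)` can be sketched locally with a plain dict and timestamps. `TTLCache` below is a hypothetical stand-in for illustration, not a Pipedream API:

```python
import time

class TTLCache:
    """Hypothetical local sketch of TTL expiry, mimicking data_store.set(key, value, ttl=...)."""

    def __init__(self):
        self._data = {}

    def set(self, key, value, ttl=None):
        # Record an absolute expiry time alongside the value
        expires_at = time.time() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    def get(self, key, default=None):
        if key not in self._data:
            return default
        value, expires_at = self._data[key]
        if expires_at is not None and time.time() >= expires_at:
            # Expired: behave as if the key was never set
            del self._data[key]
            return default
        return value

cache = TTLCache()
cache.set("session_token", "abc123", ttl=3600)  # expires after one hour
```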
### Supported data types
Data stores can hold any JSON-serializable data within the storage limits. This includes:
* Strings
* Dictionaries
* Lists
* Integers
* Floats
But you cannot serialize modules, functions, classes, or other complex objects.
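A quick local check with the standard `json` module illustrates the boundary:

```python
import json

# JSON-serializable values store fine
record = {
    "name": "Charmander",   # string
    "type": "fire",
    "level": 12,            # integer
    "weight": 8.5,          # float
    "moves": ["ember"],     # list
    "wild": False,          # boolean
}
serialized = json.dumps(record)

# Functions, modules, classes, and similar objects are not serializable
try:
    json.dumps({"fn": print})
    serializable = True
except TypeError:
    serializable = False
```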
# Working With The Filesystem In Python
Source: https://pipedream.com/docs/workflows/building-workflows/code/python/working-with-files
export const TMP_SIZE_LIMIT = '2GB';
You can work with files within a workflow, for instance downloading content from one service to upload to another. Here are some code samples for common file operations.
## The `/tmp` directory
Within a workflow, you have full read-write access to the `/tmp` directory. You have {TMP_SIZE_LIMIT} of available space in `/tmp` to save any file.
### Managing `/tmp` across workflow runs
The `/tmp` directory is stored on the virtual machine that runs your workflow. We call this the execution environment (“EE”). More than one EE may be created to handle high-volume workflows. And EEs can be destroyed at any time (for example, after about 10 minutes of receiving no events). This means that you should not expect to have access to files across executions. At the same time, files *may* remain, so you should clean them up to make sure that doesn’t affect your workflow. **Use [the `tempfile` module](https://docs.python.org/3/library/tempfile.html) to cleanup files after use, or [delete the files manually](/docs/workflows/building-workflows/code/python/working-with-files/#deleting-a-file).**
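For example, `tempfile.NamedTemporaryFile` removes the scratch file automatically when it’s closed. A minimal sketch (the `pd` argument is unused here):

```python
import os
import tempfile

def handler(pd: "pipedream"):
    # Create a scratch file in /tmp; it is deleted automatically on close
    with tempfile.NamedTemporaryFile(mode="w+", dir="/tmp", suffix=".txt") as f:
        f.write("scratch data")
        f.seek(0)
        contents = f.read()
        path = f.name
    # Once the with block exits, the file no longer exists
    return {"contents": contents, "cleaned_up": not os.path.exists(path)}
```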
## Writing a file to `/tmp`
```python
import requests
def handler(pd: "pipedream"):
# Download the Python logo
r = requests.get("https://www.python.org/static/img/python-logo@2x.png")
    # Create a new file python-logo.png in the /tmp directory
with open("/tmp/python-logo.png", "wb") as f:
# Save the content of the HTTP response into the file
f.write(r.content)
```
Now `/tmp/python-logo.png` holds the official Python logo.
## Reading a file from `/tmp`
You can also open files you have previously stored in the `/tmp` directory. Let’s open the `python-logo.png` file.
```python
import os
def handler(pd: "pipedream"):
    with open("/tmp/python-logo.png", "rb") as f:
# Store the contents of the file into a variable
file_data = f.read()
```
## Listing files in `/tmp`
If you need to check what files are currently in `/tmp` you can list them and print the results to the **Logs** section of **Results**:
```python
import os
def handler(pd: "pipedream"):
# Prints the files in the tmp directory
print(os.listdir("/tmp"))
```
## Deleting a file
```python
import os
def handler(pd: "pipedream"):
print(os.unlink("/tmp/your-file"))
```
## Downloading a file to `/tmp`
[See this example](/docs/workflows/building-workflows/code/python/http-requests/#downloading-a-file-to-the-tmp-directory) to learn how to download a file to `/tmp`.
## Uploading a file from `/tmp`
[See this example](/docs/workflows/building-workflows/code/python/http-requests/#uploading-a-file-from-the-tmp-directory) to learn how to upload a file from `/tmp` in an HTTP request.
## Downloading a file, uploading it in another `multipart/form-data` request
```python
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder
import os
def handler(pd: "pipedream"):
download_url = "https://example.com"
upload_url = "http://httpbin.org/post"
file_path = "/tmp/index.html"
content_type = "text/html"
# DOWNLOAD
with requests.get(download_url, stream=True) as response:
response.raise_for_status()
with open(file_path, "wb") as file:
for chunk in response.iter_content(chunk_size=8192):
file.write(chunk)
# UPLOAD
multipart_data = MultipartEncoder(fields={
'file': (os.path.basename(file_path), open(file_path, 'rb'), content_type)
})
response = requests.post(
upload_url,
data=multipart_data,
headers={'Content-Type': multipart_data.content_type}
)
```
## `/tmp` limitations
The `/tmp` directory can store up to {TMP_SIZE_LIMIT} of data. The storage may be wiped or may not persist between workflow executions.
To avoid errors, assume that the `/tmp` directory is empty between workflow runs. Refer to the [disk limits](/docs/workflows/limits/#disk) for details.
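In practice, that means guarding any read from `/tmp`. A minimal sketch (`/tmp/cache.json` is a hypothetical file name):

```python
import os

def handler(pd: "pipedream"):
    path = "/tmp/cache.json"
    # /tmp may be wiped between executions, so never assume a file is present
    if not os.path.exists(path):
        return {"cached": False}
    with open(path) as f:
        return {"cached": True, "data": f.read()}
```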
Are File Stores helpers available for Python to download, upload and manage files?
At this time, no. Only Node.js includes a helper to interact with the [File Store](/docs/workflows/data-management/file-stores/) programmatically within workflows.
# Overview
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow
Pipedream is adding powerful control flow operators so you can build and run non-linear workflows to unlock use cases that require advanced orchestration.
## Operators
| Operator | Description |
| ----------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| [If/Else (beta)](/docs/workflows/building-workflows/control-flow/ifelse/) | Supports single-path, logical branching orchestration. |
| [Delay](/docs/workflows/building-workflows/control-flow/delay/) | Add a delay from 1 millisecond to 1 year before the next step of your workflow proceeds. |
| [Filter](/docs/workflows/building-workflows/control-flow/filter/) | Define rules to stop or continue workflow execution. |
| [End Workflow](/docs/workflows/building-workflows/control-flow/end-workflow/) | Terminate the workflow prior to the last step. |
More operators (including parallel and looping) are coming soon.
## Key Capabilities
* Orchestrate execution of linear and non-linear workflows
* Normalize results and continue after a non-linear operation
* Nest control flow operators for advanced use cases
* Return HTTP responses during or after most non-linear operations
* Execute long running workflows (workflow timeout resets at each control flow boundary)
## Execution Path
### Context
The execution path represents the specific steps (and the order of steps) that run when a workflow is triggered.
* Simple linear workflows are executed from top to bottom — every step is in the execution path.
* With the introduction of non-linear workflows, steps may or may not be executed depending on the rules configured for control flow operators and the results exported from prior steps.
Therefore, we introduced new patterns to signal the execution path and help you build, test and inspect workflows.
### Executed Path
Step borders, backgrounds, and connectors now highlight the **executed path**: the steps that were actually executed. If a step outside the execution path is tested, it will not be reflected as being on the execution path.
### Building and Testing in an Unknown or Non-Execution Path
You may add and test steps in any path. However, if a step is outside the executed path, Pipedream highlights that its results may not be reliable: they may not match the outcome if the steps were in a known execution path, and may be invalid or misleading.
### Signaling Steps are “Out of Date”
If prior steps in a workflow are modified or retested, Pipedream marks later steps in the execution path as *stale* to signal that the results may be out of date. In the non-linear model, Pipedream only marks steps that are in the confirmed execution path as stale.
* If a change is made to a prior step, then the executed path is cleared.
* Steps in the known execution path are immediately marked as stale
* State within conditional blocks is not updated until the start phase is tested and execution path is identified.
### Test State vs Execution Path
Steps may be tested whether or not they are in the execution path. The test state for a step reflects whether a step was successfully tested or needs attention (the step may have errored, the results may be out of date, etc) and is denoted by the icon at the top left of each step.
* Last test was successful
* Results may be stale, step may be untested, etc
* **Step has an error or is not configured**
## Workflow Segments
### Context
Workflow segments are a linear series of steps with no control flow operators.
* A simple linear workflow is composed of a single workflow segment.
* When a control flow operator is introduced, the workflow contains multiple segments. For example, when a Delay operator is added to the simple linear workflow above, the workflow goes from 1 to 2 segments.
* The following example using If/Else contains 5 workflow segments. However, since only 1 branch within the If/Else control flow block is run on each workflow execution, the maximum number of segments that will be executed for each trigger event is 3.
### Billing
Pipedream compiles each workflow segment into an executable function to optimize performance and reduce credit usage. Credit usage is calculated per workflow segment, independent of the number of steps (rather than per step, as on many other platforms).
* For example, the two workflow segments below both use a single credit:
* **Trigger + 1 step workflow segment (1 credit)**
* **Trigger + 5 step workflow segment (1 credit)**
* The If/Else example above contains 5 workflow segments, but since at most 3 segments are in any given execution path, it incurs only 3 credits of usage per execution.
### Timeout and Memory
For the preview, all workflow segments inherit the global timeout and memory settings. In the future, Pipedream will support customization of timeout and memory settings for each segment. For example, if you need expanded memory for select steps, you will be able to restrict higher memory execution to that segment instead of having to run the entire workflow with the higher memory settings. This can help you reduce credit usage.
### Long Running Workflows
Users may create long-running workflows that greatly exceed the upper bound timeout of 12 minutes, since each workflow segment has its own 12-minute upper bound.
### `/tmp` directory access
`/tmp` directory access is scoped per workflow segment (since each segment is executed as an independent function). If you need to persist files across multiple segments (or workflow executions), use File Stores.
### Warm Workers
Warm workers are globally configured per segment. For example, if you have a workflow with 3 segments and you configure your workflow to use 1 warm worker per segment, 3 warm workers will be used.
### Segment Relationships
Steps may only reference prior steps in the same workflow segment or its direct ancestors.
| Type | Description |
| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Root | The root segment is the top level for a workflow — it may have children but no parents. If you do not include any control flow blocks in your workflow, your entire workflow definition is contained within the root segment. |
| Parent | A segment that has a child. |
| Child | A segment that has a parent. |
## Blocks
### Context
**Blocks** are compound steps that are composed of a **start** and an **end** phase. Blocks may contain one or more workflow segments between the phases.
* Most non-linear control flow operators will be structured as blocks (vs. standard steps)
* You may add steps or blocks to [workflow segments](/docs/workflows/building-workflows/control-flow/#workflow-segments) between start and end phases of a block
* The start and end phases are independently testable
* The start phase evaluates the rules/configuration for a block; the results may influence the execution path
* The end phase exports results from the control flow block that can be referenced in future workflow steps
* For example, for the If/Else control flow operator, the start phase evaluates the branching rules while the end phase exports the results from the executed branch.
### Testing
When building a workflow with a control flow block, we recommend testing the start phase, followed by steps in the execution path followed by the end phase.
For a conditional operator like if/else, we then recommend generating events that trigger alternate conditions and testing those.
#### Passing data to steps in a control flow block
Steps may only reference prior steps in the same workflow segment or its direct ancestors. In the following example, `step_c` and `step_d` (within the if/else control flow block) can directly reference any exports from `trigger`, `step_a`, or `step_b` (in the parent/root workflow segment) via the steps object. `step_c` and `step_d` are siblings and cannot reference exports from each other.
#### Referencing data from steps in a previous block
Steps after the end phase may not directly reference steps within a control flow block (between the start and end phases). E.g., in the following workflow there are two branches:
In this example, `step_f` is executed after a control flow block. It can directly reference prior steps in the root workflow segment (`trigger`, `step_a` and `step_b` using the `steps` object).
However, `step_f` cannot directly reference data exported by `step_c` or `step_d`. Due to the non-linear execution, `step_c` and `step_d` are not guaranteed to execute for every event. **To reference data from a control flow block, reference the exports of the end phase.** Refer to the documentation to understand how data is exported for each control flow operator (e.g., for If/Else, the exports of the last step in the executed branch are returned as the exports of the end phase; you can easily normalize the results across branches using a code step).
In this example, `step_f` can reference the exported data for an executed branch by referencing `steps.ifelse.$return_value`.
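For instance, a code step after the end phase might normalize the executed branch’s exports onto one schema. A hedged sketch, in which the branch export shapes are hypothetical:

```python
def normalize(branch_export):
    # Hypothetical: one branch exports {"email": ...}, another nests it
    # under {"contact": {"email": ...}}; map both onto a single shape.
    if "email" in branch_export:
        return {"email": branch_export["email"]}
    return {"email": branch_export.get("contact", {}).get("email")}

def handler(pd: "pipedream"):
    # steps.ifelse.$return_value holds the exports of the executed branch
    return normalize(pd.steps["ifelse"]["$return_value"])
```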
### Nesting
Control flow blocks may be nested within other control flow blocks:
* If/Else blocks may be nested within other If/Else blocks
* In the future, Loops may be nested within If/Else blocks and vice versa.
There is currently no limit on the number of nested elements.
## Rule Builder
### Context
Pipedream is introducing a rule builder for comparative operations. The rule builder is currently only supported by the If/Else operator, but it will be extended to other operators including Switch and Filter.
### Simple conditions
Compare values using supported operators.
### Combine multiple conditions using AND / OR
Click “Add condition” using the menu on the right to add multiple conditions. Click on AND / OR to toggle the operator.
### Test for multiple conditions using Groups
Create and manage groups using the menu options to “Add group”, “Make condition into group”, “Nest group” and “Remove group”.
### Supported Operators
* Exists
* Doesn’t exist
* String
* Equals
* Doesn’t equal
* Is blank
* Is not blank
* Starts with
* Contains
* Ends with
* Number
* Equals
* Does not equal
* Is greater than
* Is greater than or equal to
* Is less than
* Is less than or equal to
* Boolean (equals)
* Type checks
* Is null
* Is not null
* Is string
* Is not a string
* Is a number
* Is not a number
* Is a boolean
* Is not a boolean
* Value checks
* Is true
* Is false
# Delay
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/delay
export const DELAY_MIN_MAX_TIME = 'You can pause your workflow for as little as one millisecond, or as long as one year';
### Delay
Sometimes you need to wait a specific amount of time before the next step of your workflow proceeds. Pipedream supports this in one of two ways:
1. The built-in **Delay** actions
2. The `$.flow.delay` function in Node.js
{DELAY_MIN_MAX_TIME}. For example, we at Pipedream use this functionality to delay welcome emails to our customers until they’ve had a chance to use the product.
#### Delay actions
You can pause your workflow without writing code using the **Delay** actions:
1. Click the **+** button below any step
2. Search for the **Delay** app
3. Select the **Delay Workflow** action
4. Configure the action to delay any amount of time, up to one year
#### `$.flow.delay`
If you need to delay a workflow within Node.js code, or you need detailed control over how delays occur, [see the docs on `$.flow.delay`](/docs/workflows/building-workflows/code/nodejs/delay/).
#### The state of delayed executions
Delayed executions can hold one of three states:
* **Paused**: The execution is within the delay window, and the workflow is still paused
* **Resumed**: The workflow has been resumed at the end of its delay window automatically, or resumed manually
* **Cancelled**: The execution was cancelled manually
You’ll see the current state of an execution by [viewing its event data](/docs/workflows/building-workflows/inspect/).
#### Cancelling or resuming execution manually
The [**Delay** actions](/docs/workflows/building-workflows/control-flow/delay/#delay-actions) and [`$.flow.delay`](/docs/workflows/building-workflows/code/nodejs/delay/) return two URLs each time they run:
These URLs are specific to a single execution of your workflow. While the workflow is paused, you can load these in your browser or send an HTTP request to either:
* Hitting the `cancel_url` will immediately cancel that execution
* Hitting the `resume_url` will immediately resume that execution early
If you use [`$.flow.delay`](/docs/workflows/building-workflows/code/nodejs/delay/), you can send these URLs to your own system to handle cancellation / resumption. You can even email your customers to let them cancel / resume workflows that run on their behalf.
# End Workflow
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/end-workflow
To terminate the workflow prior to the last step, use the **End Workflow** pre-built action or `$.flow.exit()` in code.
## End Workflow Using a Pre-Built Action
* Select and configure the End Workflow action from the step selector
* When the step runs, the workflow execution will stop
* You may configure an optional reason for ending the workflow execution. This reason will be surfaced when inspecting the event execution.
## End Workflow in Code
Check the reference for your preferred language to learn how to end the workflow execution in code.
* [Ending a workflow in Node.js](/docs/workflows/building-workflows/code/nodejs/#ending-a-workflow-early)
* [Ending a workflow in Python](/docs/workflows/building-workflows/code/python/#ending-a-workflow-early)
# Filter
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/filter
### Filter
Use the Filter action to quickly stop or continue workflows on certain conditions.
Add a filter action to your workflow by searching for the **Filter** app in a new step.
The **Filter** app includes several built-in actions: Continue Workflow on Condition, Exit Workflow on Condition and Exit Workflow on Custom Condition.
In each of these actions, the **Value** is the subject of the condition, and the **Condition** defines the comparison to apply to that value.
For example, to only process orders with a `status` of `ready`:
#### Continue Workflow on Condition
With this action, the workflow will only continue to execute the steps after this filter when the value *passes* the set condition.
# If/Else
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/ifelse
## Overview
**If/Else** is a single-path branching operator. You can create multiple execution branches, but Pipedream will execute the **first** branch that matches the configured rules. **The order in which rules are defined will affect the path of execution.**
The If/Else operator is useful when you need to branch based on the value of **multiple input variables**. You must define both the input variable and the condition to evaluate for every rule. If you only need to test the value of a **single input variable** (e.g., if you are branching based on the path of an inbound request), the [Switch operator](/docs/workflows/building-workflows/control-flow/switch/) may be a better choice.
## Capabilities
* Define rules to conditionally execute one of many branches
* Evaluate one or more expressions for each condition (use boolean operators to combine multiple rules)
* Use the **Else** condition as a fallback
* Merge and continue execution in the parent flow after the branching operation
If you disable the **Else** branch and there are no matching cases, the workflow will continue execution in the parent workflow after the **end** phase of the If/Else block.
The If/Else operator is a control flow **Block** with **start** and **end** phases. [Learn more about Blocks](/docs/workflows/building-workflows/control-flow/#blocks).
## Demo
## Getting Started
Add a trigger and generate an event to help you build and test your workflow:
Click the + button to add a step to the canvas and select If/Else from the Control Flow section on the right. In the “start” phase, configure rules for each branch (optionally toggle the else branch) and then test the step.
**IMPORTANT:** If you disable the **Else** condition and an event does not match any of the rules, the workflow will continue to the next step after the **If/Else** section. If you want to end workflow execution if no other conditions evaluate to `true`, enable the Else condition and add a **Terminate Workflow** action.
Add a step to the success branch and test it
Test the end phase to export results from the If/Else control flow block.
Add a step and reference the exports from `ifelse` using the steps object.
Generate or select an alternate event to generate data to help you test other branches as you build. When you select a new event, the steps in the root workflow segments go stale. Steps in control flow blocks will only go stale if they are in the known path of execution; i.e., if you test a start phase, the steps in the success path will become stale.
Build, test and deploy the workflow.
Generate test events to trigger the deployed workflow and inspect the executions.
# Parallel
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/parallel
## Overview
**Parallel** is a multi-path branching operator. It allows you to create multiple execution branches with optional filtering rules, and Pipedream will execute **all** matching branches. Unlike [Switch](/docs/workflows/building-workflows/control-flow/switch/) and [If/Else](/docs/workflows/building-workflows/control-flow/ifelse/), the order in which rules are defined will not affect the path of execution.
## Capabilities
* Create non-linear workflows that execute steps in parallel branches
* Define when branches run — always, conditionally or never (to disable a branch)
* Merge and continue execution in the parent flow after the branching operation
The Parallel operator is a control flow **Block** with **start** and **end** phases. [Learn more about Blocks](/docs/workflows/building-workflows/control-flow/#blocks).
### Add Parallel operator to workflow
Select **Parallel** from the **Control Flow** section of the step selector:
### Create Branches
To create new branches, click the `+` button:
### Rename Branches
Edit the branch’s name slug on the canvas or in the right pane after selecting the **Start** phase of the parallel block. The name slug communicates the branch’s purpose and affects workflow execution: the end phase exports an object with each key corresponding to a branch name.
### Export Data to the Parent Flow
You can export data from a parallel operation and continue execution in the parent flow.
* The parallel block exports data as a JSON object
* Branch exports are assigned to a key corresponding to the branch name slug (in the object exported from the block)
* Only the exports from the last step of each executed branch are included in the parallel block’s return value
* To preview the exported data, test the **End** phase of the parallel block
### Beta Limitations
Workflow queue settings (concurrency, execution rate) may not work as expected with workflows using the parallel operator.
## Getting Started
Add a trigger and generate an event to help you build and test your workflow:
Click the + button to add a step to the canvas and select Parallel from the Control Flow section on the right. You can optionally add or remove branches and configure conditions defining when each branch should run.
Test the **Start** phase to identify which branches will execute for the current event.
Add steps to the branches. These steps will be executed in parallel when the workflow runs.
Test the **End** phase to export the results of the last step of each branch that was executed. This makes data from the branches available to reference in the parent flow.
Optionally add steps after the parallel block and use data from individual branches by referencing the return value of the **End** phase.
Deploy the workflow and trigger it to inspect the executions.
# Switch
Source: https://pipedream.com/docs/workflows/building-workflows/control-flow/switch
## Overview
**Switch** is a single-path branching operator. You can create multiple execution branches, but Pipedream will execute the **first** branch that matches the configured rules. **The order in which rules are defined will affect the path of execution.**
Switch is useful when you need to make a branching decision based on the value of a **single input variable** (e.g., based on the path of an inbound request). You can define the input variable once and then branch based on the value(s). If you need to branch based on the values of **multiple input variables** use the [If/Else operator](/docs/workflows/building-workflows/control-flow/ifelse/).
## Capabilities
* Define cases to conditionally execute one of many branches
* Define the expression to evaluate once and configure cases to compare values (use boolean operators to combine multiple rules for each case)
* Use the **Default** case as a fallback
* Merge and continue execution in the parent flow after the branching operation
If you disable the **Default** branch and there are no matching cases, the workflow will continue execution in the parent workflow after the **end** phase of the Switch block.
The Switch operator is a control flow **Block** with **start** and **end** phases. [Learn more about Blocks](/docs/workflows/building-workflows/control-flow/#blocks).
## Getting Started
Add a trigger and generate an event to help you build and test your workflow:
Click the + button to add a step to the canvas and select Switch from the Control Flow section on the right. In the “start” phase, configure rules for a case.
**IMPORTANT:** If you disable the **Default** condition and an event does not match any of the rules, the workflow will continue to the next step after the **Switch** section. If you want to end workflow execution if no other conditions evaluate to `true`, enable the Default condition and add a **Terminate Workflow** action.
To add additional cases, click the **+** button.
Test the **start** phase and add a step to the branch in the execution path.
Test the **end** phase to export the results of the last step in the execution path. This makes them available to reference in the parent flow.
You may add steps to alternate paths and test them. Pipedream will signal that the results may not be reliable if the branch is not in the execution path.
Generate or select alternate events to trigger and validate alternate paths.
Deploy the workflow and trigger it to inspect the executions.
# Handling Errors
Source: https://pipedream.com/docs/workflows/building-workflows/errors
Two types of errors are raised in Pipedream workflows:
* **Workflow errors** — Errors in the workflow execution environment, like [Timeouts](/docs/troubleshooting/#timeout) or [Out of Memory](/docs/troubleshooting/#out-of-memory) errors. Often, you can change your workflow’s configuration to fix them. You can find more details on these errors [in our troubleshooting guide](/docs/troubleshooting/).
* **Step errors** — Errors raised by individual [code](/docs/workflows/building-workflows/code/) or [action](/docs/workflows/building-workflows/actions/) steps. These can be syntax errors, errors raised by the Node or Python runtime, errors with input data, and more. Pipedream will surface details about the error and the stack trace, and you can even [debug these errors with AI](/docs/workflows/building-workflows/errors/#debug-with-ai).
Both types of errors will trigger [error notifications](/docs/workflows/building-workflows/errors/#error-notifications), can be handled by [custom error handlers](/docs/workflows/building-workflows/errors/#handle-errors-with-custom-logic), and will show up in [the REST API](/docs/workflows/building-workflows/errors/#poll-the-rest-api-for-workflow-errors).
## Auto-retry
You can [automatically retry events](/docs/workflows/building-workflows/settings/#auto-retry-errors) that yield an error. This can help for transient errors that occur when making API requests, like when a service is down or your request times out.
## Apply conditional logic
Many errors result from the data you’re processing. You might only receive certain data from a webhook under certain conditions, or have malformed data in the payload that causes an error.
You can apply conditional logic in code, or using the [If / Else operator](/docs/workflows/building-workflows/control-flow/ifelse/), handling these conditions accordingly.
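In code, this often takes the form of a guard clause that returns early instead of throwing on malformed data. An illustrative sketch (the payload shape here is hypothetical):

```javascript
// Illustrative guard clause: skip processing when a payload field is
// missing or malformed, rather than letting the step throw an error.
function processPayload(body) {
  if (!body || typeof body.email !== "string") {
    return { skipped: true, reason: "missing or invalid email" };
  }
  return { skipped: false, email: body.email.toLowerCase() };
}
```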
## Error notifications
By default, [Pipedream sends an email](/docs/workflows/building-workflows/errors/#default-system-emails) when a workflow throws an unhandled error. But you can:
* Send error notifications to Slack
* Handle errors from one workflow in a specific way
* Fetch errors asynchronously using the REST API, instead of handling the event in real-time
These docs describe the default error behavior, and how to handle custom use cases like these.
Email notifications are sent to the address specified in your [workspace settings](https://pipedream.com/settings/account) under the **Notifications** section. We recommend using a group email address so everyone can monitor workflow errors.
Before you jump into the examples below, remember that all Pipedream workflows are just code. You can always use the built-in error handling logic native to your programming language, for example: using JavaScript’s [`try / catch` statement](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/try...catch). In the future, Pipedream plans to support this kind of error-handling for built-in actions, as well.
### Default system emails
Any time your workflow throws an unhandled error, you’ll receive an email like this:
This email includes a link to the error so you can see the data and logs associated with the run. When you inspect the data in the Pipedream UI, you’ll see details on the error below the step that threw the error, e.g. the full stack trace.
#### Duplicate errors do not trigger duplicate emails
High-volume errors can lead to lots of notifications, so Pipedream only sends at most one email, per error, per workflow, per 24 hour period.
For example, if your workflow throws a `TypeError`, we’ll send you an email, but if it continues to throw that same `TypeError`, we won’t email you about the duplicate errors until the next day. If a different workflow throws a `TypeError`, you **will** receive an email about that.
### Test mode vs. live mode
When you’re editing and testing your workflow, any unhandled errors will **not** raise errors as emails, nor are they forwarded to [error listeners](/docs/workflows/building-workflows/errors/#handle-errors-with-custom-logic). Error notifications are only sent when a deployed workflow encounters an error on a live event.
## Debug with AI
You can debug errors in [code](/docs/workflows/building-workflows/code/) or [action](/docs/workflows/building-workflows/actions/) steps with AI by pressing the **Debug with AI** button at the bottom of any error.
### Data we send with errors
When you debug an error with AI, Pipedream sends the following information to OpenAI:
* The error code, message, and stack trace
* The step’s code
* The input added to the step configuration. This **does not** contain the event data that triggered your workflow, just the static input entered in the step configuration, like the URL of an HTTP request, or the names of [step exports](/docs/workflows/#step-exports).
We explicitly **do not** send the event data that triggered the error, or any other information about your account or workflow.
## Handle errors with custom logic
Pipedream exposes a global stream of all errors, raised from all workflows. You can subscribe to this stream, triggering a workflow on every event. This lets you handle errors in a custom way. Instead of sending all errors to email, you can send them to Slack, Discord, AWS, or any other service, and handle them in any custom way.
To do this:
1. Create a new workflow.
2. Add a new trigger. Search for the `Pipedream` app.
3. Select the custom source `Workspace $error events`.
4. Generate an error in a live version of any workflow (errors raised while you’re testing your workflow [do not send errors to the `$errors` stream](/docs/workflows/building-workflows/errors/#test-mode-vs-live-mode)). You should see this error trigger the workflow in step #1. From there, you can build any logic you want to handle errors across workflows.
### Duplicate errors *do* trigger duplicate error events on custom workflows
Unlike [the default system emails](/docs/workflows/building-workflows/errors/#duplicate-errors-do-not-trigger-duplicate-emails), duplicate errors are sent to any workflow listeners.
## Poll the REST API for workflow errors
Pipedream provides a REST API endpoint to [list the most recent 100 workflow errors](/docs/rest-api/#get-workflow-errors) for any given workflow. For example, to list the errors from workflow `p_abc123`, run:
```sh
curl 'https://api.pipedream.com/v1/workflows/p_abc123/$errors/event_summaries?expand=event' \
-H 'Authorization: Bearer '
```
By including the `expand=event` query string param, Pipedream will return the full error data, along with the original event that triggered your workflow:
```json
{
"page_info": {
"total_count": 100,
"start_cursor": "1606370816223-0",
"end_cursor": "1606370816223-0",
"count": 1
},
"data": [
{
"id": "1606370816223-0",
"indexed_at_ms": 1606370816223,
"event": {
"original_event": {
"name": "Luke",
"title": "Jedi"
},
"original_context": {
"id": "2po8fyMMKF4SZFrOThm0Ex4zv6M",
"ts": "2024-12-05T17:52:54.117Z",
"pipeline_id": null,
"workflow_id": "p_abc1234",
"deployment_id": "d_abc1234",
"source_type": "COMPONENT",
"verified": false,
"hops": null,
"test": false,
"replay": false,
"owner_id": "o_abc1234",
"platform_version": "3.50.4",
"workflow_name": "error",
"resume": null,
"emitter_id": "hi_abc1234",
"external_user_id": null,
"external_user_environment": null,
"trace_id": "2po8fwtzKHVr0VZpJc3EUmdTAms",
"project_id": "proj_abc1234"
},
"error": {
"code": "InternalFailure",
"cellId": "c_abc123",
"ts": "2020-11-26T06:06:56.077Z",
"stack": " at Request.extractError ..."
}
},
"metadata": {
"emitter_id": "p_abc123",
"emit_id": "1kodKnAdWGeJyhqYbqyW6lEXVAo",
"name": "$errors"
}
}
]
}
```
By listing these errors, you may be able to replay them against your workflow programmatically. For example, if your workflow is triggered by HTTP requests, you can send an HTTP request with the data found in `event.original_event` (see the example above) for every event that errored.
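For example, you could extract the `original_event` payloads from the API response shown above and re-send each one to your workflow's HTTP endpoint. A sketch of the extraction step (the replay transport is up to you):

```javascript
// Given the JSON returned by the $errors endpoint (with expand=event),
// collect the original trigger events so they can be re-sent.
function collectReplayEvents(apiResponse) {
  return (apiResponse.data || [])
    .map((entry) => entry.event && entry.event.original_event)
    .filter(Boolean);
}
```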
# HTTP
Source: https://pipedream.com/docs/workflows/building-workflows/http
Integrate and automate any API using Pipedream workflows. Use app-specific pre-built actions, or the HTTP request action for a no-code interface. If you need more control over error handling, use the same connected accounts with code in Node.js or Python.
## Pre-built actions
Pre-built actions are the most convenient option for integrating your workflow with an API. Pre-built actions can use your connected accounts to perform API requests, and are configured through props.
Pre-built actions are the fastest way to get started building workflows, but they may not fit your use case if a prop is missing or is handling data in a way that doesn’t fit your needs.
For example, to send a message using Slack just search for Slack and use the **Send Message to a Public Channel** action:
Then connect your Slack account, select a channel and write your message:
Now with a few clicks and some text you’ve integrated Slack into a Pipedream workflow.
Pre-built actions are open source
All pre-built actions are published from the [Pipedream Component Registry](/docs/components/contributing/), so you can read and modify their source code. You can even publish your own from [Node.js code steps privately to your own workspace](/docs/workflows/building-workflows/code/nodejs/sharing-code/).
## HTTP Request Action
The HTTP request action is the next most convenient option. Use a Postman-like interface to configure an HTTP request - including the headers, body, and even connecting an account.
Selecting this action will display an HTTP request builder, with an app slot to connect your Slack account.
Once you connect your account to the step, it will automatically configure the authorization headers to match.
For example, the Slack API expects a Bearer token with the `Authorization` header. So Pipedream automatically configures this HTTP request to pass your token to that specific header:
The configuration of the request and management of your token is automatically handled for you. So you can simply modify the request to match the API endpoint you’re seeking to interact with.
### Adding apps to an HTTP request builder action
You can also attach apps to the *Send any HTTP Request* action from the action selection menu. After adding a new step to your workflow, select the *Send any HTTP Request* action:
Then within the HTTP request builder, click the *Authorization Type* dropdown to select a method, and click **Select an app**:
Then you can choose **Slack** as the service to connect the HTTP request with:
The HTTP request action will automatically be configured with the Slack connection, you’ll just need to select your account to finish the authentication.
Then update the URL to the Slack endpoint for sending a message, which is [`https://slack.com/api/chat.postMessage`](https://api.slack.com/methods/chat.postMessage):
Finally modify the body of the request to specify the `channel` and `message` for the request:
HTTP Request actions can be used to quickly scaffold API requests, but are not as flexible as code for a few reasons:
* Conditionally sending requests - the HTTP request action will always send the request; to send requests conditionally, you’ll need to use code.
* Workflow execution halts - if an HTTP request fails, the entire workflow cancels.
* Automatic retries - `$.flow.rerun` isn’t available in the HTTP request action, so you can’t automatically retry a failed request.
* Error handling - it’s not possible to set up a secondary action to run if an HTTP request fails.
## HTTP Requests in code
When you need more control, use code. You can use your connected accounts with Node.js or Python code steps.
This gives you the flexibility to catch errors, use retries, or send multiple API requests in a single step.
First, connect your account to the code step:
* [Connecting any account to a Node.js step](/docs/workflows/building-workflows/code/nodejs/auth/#accessing-connected-account-data-with-thisappnameauth)
* [Connecting any account to a Python step](/docs/workflows/building-workflows/code/python/auth/)
### Conditionally sending an API Request
You may only want to send a Slack message under certain conditions. In this example, we’ll only send a Slack message if the HTTP request triggering the workflow passes a special variable: `steps.trigger.event.body.send_message`
```javascript
import { axios } from "@pipedream/platform"
export default defineComponent({
props: {
slack: {
type: "app",
app: "slack",
}
},
async run({steps, $}) {
// only send the Slack message if the HTTP request has a `send_message` property in the body
if (steps.trigger.event.body.send_message) {
return await axios($, {
headers: {
Authorization: `Bearer ${this.slack.$auth.oauth_access_token}`,
},
url: `https://slack.com/api/chat.postMessage`,
method: 'post',
data: {
channel: 'C123456',
text: 'Hi from a Pipedream Node.js code step'
}
})
}
},
})
```
### Error Handling
The other advantage of using code is handling error messages using `try...catch` blocks. In this example, we’ll only send a Slack message if another API request fails:
```javascript
import { axios } from "@pipedream/platform"
export default defineComponent({
props: {
openai: {
type: "app",
app: "openai"
},
slack: {
type: "app",
app: "slack",
}
},
async run({steps, $}) {
try {
return await axios($, {
url: `https://api.openai.com/v1/completions`,
method: 'post',
headers: {
Authorization: `Bearer ${this.openai.$auth.api_key}`,
},
data: {
"model": "text-davinci-003",
"prompt": "Say this is a test",
"max_tokens": 7,
"temperature": 0
}
})
} catch(error) {
return await axios($, {
url: `https://slack.com/api/chat.postMessage`,
method: 'post',
headers: {
Authorization: `Bearer ${this.slack.$auth.oauth_access_token}`,
},
data: {
channel: 'C123456',
text: `OpenAI returned an error: ${error}`
}
})
}
},
})
```
Subscribing to all errors
[You can use a subscription](/docs/rest-api/#subscriptions) to subscribe a workflow to all errors through the `$errors` channel, instead of handling each error individually.
### Automatically retrying an HTTP request
You can leverage `$.flow.rerun` within a `try...catch` block in order to retry a failed API request.
[See the example in the `$.flow.rerun` docs](/docs/workflows/building-workflows/code/nodejs/rerun/#pause-resume-and-rerun-a-workflow) for Node.js.
## Platform axios
### Why `@pipedream/platform` axios?
`axios` is an HTTP client for Node.js ([see these docs](/docs/workflows/building-workflows/code/nodejs/http-requests/) for usage examples).
`axios` has a simple programming API and works well for most use cases. But its default error handling behavior isn’t easy to use. When you make an HTTP request and the server responds with an error code in the 4XX or 5XX range of status codes, `axios` returns this stack trace:
This only communicates the error code, and not any other information (like the body or headers) returned from the server.
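With plain `axios`, the server’s response is still attached to the thrown error object, so you can surface the body and status yourself if you prefer not to use the wrapper. A small helper sketch (shape per the axios error-handling docs):

```javascript
// Plain axios attaches the server response to the thrown error as
// `error.response`. This helper pulls out the useful parts for logging.
function summarizeAxiosError(error) {
  if (error.response) {
    const { status, data } = error.response;
    return { status, body: data };
  }
  // No response received (network error, timeout, etc.)
  return { status: null, body: error.message };
}
```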
Pipedream publishes an `axios` wrapper as a part of [the `@pipedream/platform` package](https://github.com/PipedreamHQ/pipedream/tree/master/platform). This presents the same programming API as `axios`, but implements two helpful features:
1. When the HTTP request succeeds (response code \< `400`), it returns only the `data` property of the response object — the HTTP response body. This is typically what users want to see when they make an HTTP request:
2. When the HTTP request *fails* (response code >= `400`), it displays a detailed error message in the Pipedream UI (the HTTP response body), and returns the whole `axios` response object so users can review details on the HTTP request and response:
### Using `@pipedream/platform` axios in component actions
To use `@pipedream/platform` axios in component actions, import it:
```javascript
import { axios } from "@pipedream/platform"
```
`@pipedream/platform` axios uses methods [provided by the `$` object](/docs/components/contributing/api/#actions), so you’ll need to pass that as the first argument to `axios` when making HTTP requests, and pass the [standard `axios` request config](https://github.com/axios/axios#request-config) as the second argument.
Here’s an example action:
```javascript
import { axios } from "@pipedream/platform"
export default {
key: "my-test-component",
name: "My Test component",
version: "0.0.1",
type: "action",
async run({ $ }) {
return await axios($, {
url: "https://httpstat.us/200",
})
}
}
```
# Inspect Events
Source: https://pipedream.com/docs/workflows/building-workflows/inspect
[The inspector](/docs/workflows/building-workflows/inspect/#the-inspector) lists the events you send to a [workflow](/docs/workflows/building-workflows/). Once you choose a [trigger](/docs/workflows/building-workflows/triggers/) and send events to it, you’ll see those events in the inspector, to the left of your workflow.
Clicking on an event from the list lets you [review the incoming event data and workflow execution logs](/docs/workflows/building-workflows/triggers/#examining-event-data) for that event.
You can use the inspector to replay events, delete them, and more.
## The inspector
The inspector lists your workflow’s events:
## Event Duration
The duration shown when clicking an individual event notes the time it took to run your code, in addition to the time it took Pipedream to handle the execution of that code and manage its output. Specifically,
**Duration = Time Your Code Ran + Pipedream Execution Time**
## Replaying and deleting events
Hover over an event, and you’ll see two buttons:
The blue button with the arrow **replays** the event against the newest version of your workflow. The red button with the X **deletes** the event.
## Messages
Any `console.log()` statements or other output of code steps is attached to the associated code cells. But [`$.flow.exit()`](/docs/workflows/building-workflows/code/nodejs/#ending-a-workflow-early) or [errors](/docs/workflows/building-workflows/code/nodejs/#errors) end a workflow’s execution, so their details appear in the inspector.
## Limits
Pipedream retains a limited history of events for a given workflow. See the [limits docs](/docs/workflows/limits/#event-history) for more information.
# Settings
Source: https://pipedream.com/docs/workflows/building-workflows/settings
export const WARM_WORKERS_CREDITS_PER_INTERVAL = '5';
export const WARM_WORKERS_INTERVAL = '10 minutes';
export const MEMORY_ABSOLUTE_LIMIT = '10GB';
export const MEMORY_LIMIT = '256MB';
You can control workflow-specific settings in your workflow’s **Settings**:
1. Visit your workflow
2. Click on Workflow Settings on the top left:
You can also open the workflow settings using `CMD` + `,` on Mac, or `Ctrl` + `,` on Windows.
## Enable Workflow
If you’d like to pause your workflow from executing completely, you can disable or re-enable it here.
## Error Handling
By default, you’ll receive notifications when your workflow throws an unhandled error. See the [error docs](/docs/workflows/building-workflows/errors/) for more detail on these notifications.
You can disable these notifications for your workflow by disabling the **Notify me on errors** toggle:
## Auto-retry Errors
**Out of Memory and Timeout Errors**
Pipedream will not automatically retry if an execution fails due to an Out of Memory (OOM) error or a timeout. If you encounter these errors frequently, consider increasing the configuration settings for your workflow.
Customers on the [Advanced Plan](https://pipedream.com/pricing) can automatically retry workflows on errors. If any step in your workflow throws an error, Pipedream will retry the workflow from that failed step, re-running the step up to 8 times over a 10-hour span with an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) strategy.
On error, the step will export a `$summary` property that tells you how many times the step has been retried, and an `$attempt` object with the following properties:
1. `error` — All the details of the error the step threw — the error, the stack, etc.
2. `cancel_url` — You can call this URL to cancel the retry
3. `rerun_url` — You can call this URL to proceed with the execution immediately
4. `resume_ts` — An ISO 8601 timestamp that tells you the timestamp of the next retry
If the step execution succeeds during any retry, the execution will proceed to the next step of the workflow.
If the step fails on all 8 retries and throws a final error, you’ll receive [an error notification](/docs/workflows/building-workflows/errors/) through your standard notification channel.
### Send error notifications on the first error
By default, if a step fails on all 8 retries, and throws a final error, you’ll receive [an error notification](/docs/workflows/building-workflows/errors/) through your standard notification channel. But sometimes you need to investigate errors as soon as they happen. If you’re connecting to your database, and receive an error that the DB is down, you may want to investigate that immediately.
On any workflow with auto-retry enabled, you can optionally choose to **Send notification on first error**. This is disabled by default so you don’t get emails for transient errors, but you can enable it for critical workflows where you want visibility into all errors.
For custom control over error handling, you can implement error logic in code steps (e.g. `try` / `catch` statements in Node.js code), or [create your own custom error workflow](/docs/workflows/building-workflows/errors/#handle-errors-with-custom-logic).
## Data Retention Controls
By default, Pipedream stores exports, logs, and other data tied to workflow executions. You can view these logs in two places:
1. [The workflow inspector](/docs/workflows/building-workflows/inspect/#the-inspector)
2. [Event History](/docs/workflows/event-history/)
But if you’re processing sensitive data, you may not want to store those logs. You can **Disable data retention** in your workflow settings to disable **all** logging. Since Pipedream stores no workflow logs with this setting enabled, you’ll see no logs in the inspector or event history UI.
Refer to our [pricing page](https://pipedream.com/pricing) to understand the latest limits based on your plan.
### Constraints
* **Data Retention Controls do not apply to sources**: Even with data retention disabled on your workflow, Pipedream will still log inbound events for the source.
* **No events will be shown in the UI**: When data retention is disabled for your workflow, the Pipedream UI will not show any new events in the inspector or Event History for that workflow.
**Avoid surfacing events in the builder**
Even with data retention disabled on your workflow, the builder will still surface inbound events when in build mode. To avoid surfacing potentially sensitive data here as well, refer to [these docs](/docs/workflows/building-workflows/triggers/#pipedream-specific-request-parameters).
## Execution Controls
### Execution Timeout Limit
Workflows have a default [execution limit](/docs/workflows/limits/#time-per-execution), which defines the time the workflow can run for a single execution until it’s timed out.
If your workflow times out, and needs to run for longer than the [default limit](/docs/workflows/limits/#time-per-execution), you can change that limit here.
### Memory
By default, workflows run with {MEMORY_LIMIT} of memory. If you’re processing a lot of data in memory, you might need to raise that limit. Here, you can increase the memory of your workflow up to {MEMORY_ABSOLUTE_LIMIT}.
Increasing your workflow’s memory gives you a proportional increase in CPU, so increasing your workflow’s memory can reduce its overall runtime and make it more performant.
**How can my workflow run faster?**
See [our guide on running workflows faster](/docs/troubleshooting/#how-can-my-workflow-run-faster).
**Pipedream charges credits proportional to your memory configuration**. When you modify your memory settings, Pipedream will show you the number of credits you’ll be charged per execution. [Read more here](/docs/pricing/faq/#how-does-workflow-memory-affect-credits).
### Concurrency and Throttling
[Manage the concurrency and rate](/docs/workflows/building-workflows/settings/concurrency-and-throttling/) at which events from a source trigger your workflow code.
## Eliminate cold starts
A **cold start** refers to the delay between the invocation of a workflow and the execution of the workflow code. Cold starts happen when Pipedream spins up a new [execution environment](/docs/privacy-and-security/#execution-environment) to handle incoming events.
Specifically, cold starts occur on the first request to your workflow after a period of inactivity (roughly 5 minutes), or if your initial worker is already busy and a new worker needs to be initialized. In these cases, Pipedream creates a new execution environment to process your event. **Initializing this environment takes a few seconds, which delays the execution of this first event**.
You can reduce cold starts by configuring a number of dedicated **workers**:
1. Visit your workflow’s **Settings**
2. Under **Execution Controls**, select the toggle to **Eliminate cold starts**
3. Configure [the appropriate number of workers](/docs/workflows/building-workflows/settings/#how-many-workers-should-i-configure) for your use case
When you configure workers for a specific workflow, Pipedream initializes dedicated workers — virtual machines that run Pipedream’s [execution environment](/docs/privacy-and-security/#execution-environment). [It can take a few minutes](/docs/workflows/building-workflows/settings/#how-long-does-it-take-to-spin-up-a-dedicated-worker) for new dedicated workers to deploy. Once deployed, these workers are available at all times to respond to workflow executions, with no cold starts.
You may need to configure [multiple dedicated workers](/docs/workflows/building-workflows/settings/#how-many-workers-should-i-configure) to handle multiple, concurrent requests.
Pipedream also performs some initialization operations on new workflow runs, so you may still observe a small startup time (typically around 50ms per workflow step) on dedicated workers.
### When should I configure dedicated workers?
You should configure dedicated workers when you need to process requests as soon as possible, with no latency.
For example, you may build an HTTP-triggered workflow that returns a synchronous HTTP response to a user, without delay. Or you may be building a Slack bot and need to respond to Slack’s webhook within a few seconds. Since these workflows need to respond quickly, they’re good cases to use dedicated workers.
### How many workers should I configure?
Incoming requests are handled by a single worker, one at a time. If you only receive one request a minute, and the workflow finishes execution in a few seconds, you may only need one worker.
But you might have a higher-volume app that receives two concurrent requests. In that case, Pipedream spins up **two** workers to handle each request.
For many user-facing (even internal) applications, the number of requests over time can be modeled with a [Poisson distribution](https://en.wikipedia.org/wiki/Poisson_distribution). You can use that distribution to estimate the number of workers you need on average, or set it higher if you want to ensure a specific percentage of requests hit a dedicated worker. You can also save a record of all workflow runs to your own database, with the timestamp they ran ([see `steps.trigger.context.ts`](/docs/workflows/building-workflows/triggers/#stepstriggercontext)), and look at your own pattern of requests to compute the optimal number of workers.
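As a rough sketch: if requests arrive at an average rate of λ per workflow-duration window, you can size workers so the chance of exceeding `k` concurrent requests stays below a threshold. This is illustrative math only, not a Pipedream API:

```javascript
// Poisson pmf: P(N = n) = e^-lambda * lambda^n / n!
function poissonPmf(lambda, n) {
  let fact = 1;
  for (let i = 2; i <= n; i++) fact *= i;
  return Math.exp(-lambda) * lambda ** n / fact;
}

// Smallest worker count k such that P(N > k) < tail.
// Example: for an average of 1 concurrent request and a 1% tail,
// this suggests provisioning 4 workers.
function workersNeeded(lambda, tail = 0.01) {
  let k = 0;
  let cdf = poissonPmf(lambda, 0);
  while (1 - cdf >= tail) {
    k += 1;
    cdf += poissonPmf(lambda, k);
  }
  return k;
}
```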
### Do compute budgets apply to dedicated workers?
No. Compute budgets do not apply to dedicated workers; they only apply to credits incurred by compute from running workflows, sources, etc.
### How long does it take to spin up a dedicated worker?
It can take 5-10 minutes for Pipedream to fully configure a new dedicated worker. Before that time, you may still observe cold starts with new incoming requests.
### Pricing for dedicated workers
You’re charged {WARM_WORKERS_CREDITS_PER_INTERVAL} credits for every {WARM_WORKERS_INTERVAL} a dedicated worker is live, per worker, per {MEMORY_LIMIT} memory. You can view the credits used by dedicated workers [in your billing settings](https://pipedream.com/settings/billing):
For example, if you run a single dedicated worker for 24 hours, that would cost 720 credits:
```python
5 credits per 10 min
* 6 10-min periods per hour
* 24 hours
= 720 credits
```
{WARM_WORKERS_INTERVAL} is the *minimum* interval that Pipedream charges for usage. If you have a dedicated worker live for 1 minute, Pipedream will still charge {WARM_WORKERS_CREDITS_PER_INTERVAL} credits.
Additionally, any change to dedicated worker configuration (including workflow deploys) will result in an extra {WARM_WORKERS_CREDITS_PER_INTERVAL} credits of usage.
### Limits
Each attachment is limited to `25MB` in size. The total size of all attachments within a single workflow cannot exceed `200MB`.
# Concurrency and Throttling
Source: https://pipedream.com/docs/workflows/building-workflows/settings/concurrency-and-throttling
export const MAX_WORKFLOW_QUEUE_SIZE = '10,000';
export const DEFAULT_WORKFLOW_QUEUE_SIZE = '100';
Pipedream makes it easy to manage the concurrency and rate at which events trigger your workflow code using execution controls.
## Overview
Workflows listen for events and execute as soon as they are triggered. While this behavior is expected for many use cases, there can be unintended consequences.
### Concurrency
Without restricting concurrency, events can be processed in parallel and there is no guarantee that they will execute in the order in which they were received. This can cause race conditions.
For example, if two workflow events try to add data to Google Sheets simultaneously, they may both attempt to write data to the same row. As a result, one event can overwrite data from another event. The diagram below illustrates this example — both `Event 1` and `Event 2` attempt to write data to Google Sheets concurrently — as a result, they will both write to the same row and the data for one event will be overwritten and lost (and no error will be thrown).
You can avoid race conditions like this by limiting workflow concurrency to a single “worker”. This means that only one event will be processed at a time, and the next event will not start processing until the first is complete (unprocessed events will be maintained in a queue and processed by the workflow in order). The following diagram illustrates how the events in the last diagram would execute if concurrency were limited to a single worker.
While the first example resulted in only two rows of data in Google Sheets, this time data for all three events are recorded to three separate rows.
### Throttling
If your workflow integrates with any APIs, then you may need to limit the rate at which your workflow executes to avoid hitting rate limits from your API provider. Since event-driven workflows are stateless, you can’t manage the rate of execution from within your workflow code. Pipedream’s execution controls solve this problem by allowing you to control the maximum number of times a workflow may be invoked over a specific period of time (e.g., up to 1 event every second).
Pipedream controls the frequency of workflow invocations over a specified time interval using fixed window throttling. For example, if the execution rate limit is set at 1 execution every 5 seconds, this means that your workflow will be invoked no more than once within a fixed 5-second time box. This doesn’t mean that executions will occur 5 seconds apart.
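A minimal sketch of the fixed-window approach described above (the function names and structure are illustrative, not Pipedream's implementation):

```javascript
// Fixed-window throttling sketch: allow at most `limit` invocations
// per `windowMs` time box. The counter resets when a new window starts.
function makeFixedWindowLimiter(limit, windowMs) {
  let windowStart = Date.now();
  let count = 0;
  return function tryInvoke() {
    const now = Date.now();
    if (now - windowStart >= windowMs) {
      windowStart = now; // start a new fixed window
      count = 0;
    }
    if (count < limit) {
      count++;
      return true; // invoke the workflow
    }
    return false; // hold the event in the queue
  };
}

const limiter = makeFixedWindowLimiter(1, 5000); // 1 execution per 5 seconds
console.log(limiter()); // true  — first event in the window runs
console.log(limiter()); // false — queued until the window resets
```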
## Usage
Events emitted from a source to a workflow are placed in a queue, and Pipedream triggers your workflow with events from the queue based on your concurrency and throttling settings. These settings may be customized per workflow (so the same events may be processed at different rates by different workflows).
The maximum number of events Pipedream will queue per workflow depends on your account type.
* Up to 100 events will be queued per workflow for workspaces on the free tier.
* Workflows owned by paid plans may have custom limits. If you need a larger queue size, [see here](/docs/workflows/building-workflows/settings/concurrency-and-throttling/#increasing-the-queue-size-for-a-workflow).
**IMPORTANT:** If the number of events emitted to a workflow exceeds the queue size, events will be lost. If that happens, you’ll see an error in your workflow, and you’ll receive an error email.
To learn more about how the feature works and technical details, check out our [engineering blog post](https://blog.pipedream.com/concurrency-controls-design/).
### Where Do I Manage Concurrency and Throttling?
Concurrency and throttling can be managed in the **Execution Controls** section of your **Workflow Settings**. Event queues are currently supported for any workflow that is triggered by an event source. Event queues are not currently supported for native workflow triggers (native HTTP, cron, SDK and email).
### Managing Event Concurrency
Concurrency controls define how many events can be executed in parallel. To enforce serialized, in-order execution, limit concurrency to `1` worker. This guarantees that each event will only be processed once the execution for the previous event is complete.
To execute events in parallel, increase the number of workers (the number of workers defines the maximum number of concurrent events that may be processed), or disable concurrency controls for unlimited parallelization.
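The single-worker behavior can be sketched in plain Node.js (illustrative only): each event is chained onto the previous one, so processing is serialized and order is preserved.

```javascript
// Sketch of concurrency = 1: each event starts only after the
// previous one completes, so events finish in arrival order.
const processed = [];

async function handle(event) {
  await new Promise((r) => setTimeout(r, 5)); // simulated work
  processed.push(event);
}

// A one-worker queue: chain each event onto the previous promise
let queue = Promise.resolve();
for (const event of ["Event 1", "Event 2", "Event 3"]) {
  queue = queue.then(() => handle(event));
}
await queue;
console.log(processed); // [ 'Event 1', 'Event 2', 'Event 3' ]
```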
### Throttling Workflow Execution
To throttle workflow execution, enable it in your workflow settings and configure the **limit** and **interval**.
The limit defines how many events (from `0-10000`) to process in a given time period.
The interval defines the time period over which the limit will be enforced. You may specify the time period as a number of seconds, minutes, or hours (ranging from `1-10000`).
### Applying Concurrency and Throttling Together
The conditions for both concurrency and throttling must be met in order for a new event to trigger a workflow execution. Here are some examples:
| Concurrency | Throttling | Result |
| ----------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Off | Off | Events will trigger your workflow **as soon as they are received**. Events may execute in parallel. |
| 1 Worker | Off | Events will trigger your workflow in a **serialized pattern** (a maximum of 1 event will be processed at a time). As soon as one event finishes processing, the next event in the queue will be processed. |
| 1 Worker | 1 Event per Second | Events will trigger your workflow in a **serialized pattern** at a **maximum rate** of 1 event per second. If an event takes *less* than one second to finish processing, the next event in the queue will not begin processing until 1 second from the start of the most recently processed event. If an event takes *longer* than one second to process, the next event in the queue will begin processing as soon as it completes. |
| 1 Worker | 10 Events per Minute | Events will trigger your workflow in a **serialized pattern** at a **maximum rate** of 10 events per minute. If an event takes *less* than one minute to finish processing, the next event in the queue will immediately begin processing. If 10 events have been processed in less than one minute, the remaining events will be queued until 1 minute from the start of the initial event. |
| 5 Workers | Off | Up to 5 events will trigger your workflow in parallel as soon as they are received. If more events arrive while 5 events are being processed, they will be queued and executed in order as soon as an event completes processing. |
### Pausing Workflow Execution
To stop the queue from invoking your workflow, throttle workflow execution and set the limit to `0`.
### Increasing the queue size for a workflow
By default, your workflow can hold up to {DEFAULT_WORKFLOW_QUEUE_SIZE} events in its queue at once. Any events that arrive once the queue is full will be dropped, and you’ll see an [Event Queue Full](/docs/troubleshooting/#event-queue-full) error.
For example, if you serialize the execution of your workflow by setting a concurrency of `1`, but receive 200 events from your workflow’s event source at once, the workflow’s queue can only hold the first 100 events. The last 100 events will be dropped.
Users on [paid tiers](https://pipedream.com/pricing) can increase their queue size up to {MAX_WORKFLOW_QUEUE_SIZE} events for a given workflow, just below the **Concurrency** section of your **Execution Controls** settings:
# Sharing Workflows
Source: https://pipedream.com/docs/workflows/building-workflows/sharing
Pipedream provides a few ways to share your workflow:
1. [Share a workflow as a link with anyone](/docs/workflows/building-workflows/sharing/#creating-a-share-link-for-a-workflow)
2. [Publish as a public template](/docs/workflows/building-workflows/sharing/#publish-to-the-templates-gallery)
You can share your workflows as templates with other Pipedream accounts via a unique shareable link.
Creating a share link for your workflow will allow anyone with the link to create a template version of your workflow in their own Pipedream account. This will allow others to use your workflow with their own Pipedream account and also their own connected accounts.
[Here’s an example of a workflow](https://pipedream.com/new?h=tch_OYWfjz) that sends you a daily SMS message with today’s schedule:
Click **Deploy to Pipedream** below to create a copy of this workflow in your own Pipedream account.
Deploy to Pipedream
The copied workflow includes the same trigger, steps, and connected account configuration, but it has a separate event history and versioning from the original.
Steps that are paused within your workflow will be omitted from the generated share link.
## Creating a share link for a workflow
To share a workflow, open the workflow in your browser. Then in the top right menu, select **Create Share Link**.
Now you can define which prop values should be included in this shareable link.
### Including props
Optionally, you can include the actual individual prop configurations as well. This helps speed up workflow development if the workflow relies on specific prop values to function properly.
You can choose to **Include all** prop values if you’d like, or only select specific props.
For the daily schedule reminder workflow, we included the props for filtering Google Calendar events, but we did *not* include the SMS number to send the message to. This is because the end user of this workflow will use their own phone number instead:
**Your connected accounts are not shared.** When other users configure your workflow from the shared link they’ll be prompted to connect their own accounts.
### Versioning
* When you create a shared link for your workflow, that link is frozen to the version of your workflow at the time the link was created
* If you make changes to the original workflow, those changes will *not* be included in the shared workflow link, nor in any workflows copied from the original shared link
* To push out new changes to a workflow, you’ll need to generate a new share link
**Share links persist**. You can create multiple share links for the same workflow with different prop configurations, or even different steps. Share links do not expire, nor do newly created links overwrite existing ones.
## Publish to the templates gallery
We’re excited to highlight the various use cases our community and partners have enabled using Pipedream. To do this, we’ve created a [Templates Gallery](https://pipedream.com/templates/) with a growing number of high-quality templates to help you discover your next workflow.
The process to publish your own workflow to the Templates Gallery is similar to [creating a share link](/docs/workflows/building-workflows/sharing/#creating-a-share-link-for-a-workflow):
To get started, open the workflow in your browser. Then in the top right menu, select **Publish as a template**.
Follow the same steps as above to select the prop input values you’d like to include in the template, then click **Continue**
On the next screen, you’ll be prompted for additional information about your template:
* **Developer name**: This is probably you; this name will be displayed as the author of the template.
* **Template name**: The name of the template.
* **Brief description**: A short description of the template, which will be displayed on the listing page (maximum 256 characters). [See here](https://pipedream.com/templates) for examples.
* **Longer description**: Use Markdown to create a more in-depth description. We recommend including distinct sections as H2s, for you to provide an **Overview**, **Steps**, and **Use Cases**. This will be displayed on the details page for the template. Here’s an example: [Notion Voice Notes (Google Drive)](https://pipedream.com/templates/notion-voice-notes-google-drive-version-mpt_2WRFKY).
* **Use cases**: Select one or more categories that align with the use cases for your template to help users discover it.
* **Affiliate token**: If you’re a [Pipedream affiliate](https://pipedream.com/affiliates), you can enter your unique token here to earn commissions on any users who sign up for Pipedream after using your template.
* Once you’ve filled out the required information, click **Submit**.
* We’ll review your template and will email you once it goes live!
## FAQ
### If changes are made to the original workflow, will copied versions of the workflow from the shared link also change?
No, workflows copied from a shared link will have separate version histories from the original workflow. You can modify your original workflow and it will not affect copied workflows.
### Will my connected accounts be shared with the workflow?
No, your connected accounts are not shared. Instead, copied workflows display a slot in actions that require a connected account, so the user of the copied workflow can provide their own accounts instead.
For example, if one of your steps relies on a Slack connected account to send a message, then the copied workflow will display the need to connect a Slack account.
### I haven’t made any changes to my workflow, but if I generate another shared link will it override my original link?
No, if the steps and prop configuration of the workflow are exactly the same, then the shared link URL will also be exactly the same.
The shared workflow link is determined by the configuration of your workflow; it's not a randomly generated ID.
### Will generating new shared links disable or delete old links?
No, each link you generate will be available even if you create new versions based on changes or included props from the original workflow.
### What plan is this feature available on?
Sharing workflows via link is available on all plans, including the Free plan.
### Do users of my workflow need to have a subscription?
To copy a workflow, a subscription is not required. However, the copied workflow is subject to the current workspace’s plan limits.
For example, if a workflow requires more connected accounts than what’s available on the [Free plan](/docs/pricing/#free-plan), then users of your workflow will require a plan to run the workflow properly.
### Will copies of my workflow use my credits?
No. Copied workflows have entirely separate versioning, connected accounts, and billing. Sharing workflow copies is free, and the user of the copy is responsible for its credit usage. Your original workflow is entirely separate from the copy.
### How can I transfer all of my workflows from one account to another?
At this time, it’s only possible to share a single workflow at a time via a link.
If you’re trying to migrate all resources from one workspace to another [please contact us for help](mailto:support@pipedream.com).
### Are step notes included when I share a workflow?
Yes, any [step notes](/docs/workflows/#step-notes) you’ve added to your workflow are included in the copied version.
# Triggers
Source: https://pipedream.com/docs/workflows/building-workflows/triggers
export const FUNCTION_PAYLOAD_LIMIT = '6MB';
export const EMAIL_PAYLOAD_SIZE_LIMIT = '30MB';
export const PAYLOAD_SIZE_LIMIT = '512KB';
export const ENDPOINT_BASE_URL = '*.m.pipedream.net';
**Triggers** define the type of event that runs your workflow. For example, HTTP triggers expose a URL where you can send any HTTP requests. Pipedream will run your workflow on each request. The Scheduler trigger runs your workflow on a schedule, like a cron job.
Today, we support the following triggers:
* [Triggers for apps like Twitter, GitHub, and more](/docs/workflows/building-workflows/triggers/#app-based-triggers)
* [HTTP / Webhook](/docs/workflows/building-workflows/triggers/#http)
* [Schedule](/docs/workflows/building-workflows/triggers/#schedule)
* [Email](/docs/workflows/building-workflows/triggers/#email)
* [RSS](/docs/workflows/building-workflows/triggers/#rss)
If there’s a specific trigger you’d like supported, please [let us know](https://pipedream.com/support/).
## App-based Triggers
You can trigger a workflow on events from apps like Twitter, Google Calendar, and more using [event sources](/docs/workflows/building-workflows/triggers/). Event sources run as separate resources from your workflow, which allows you to trigger *multiple* workflows using the same source. Here, we’ll refer to event sources and workflow triggers interchangeably.
When you create a workflow, click **Add Trigger** to view the available triggers:
This will open a new menu to search and choose a trigger for your workflow:
Search by **app name** to find triggers associated with your app. For Google Calendar, for example, you can run your workflow every time a new event is **added** to your calendar, each time an event **starts**, **ends**, and more:
Once you select your trigger, you’ll be asked to connect any necessary accounts (for example, Google Calendar sources require you authorize Pipedream access to your Google account), and enter the values for any configuration settings.
Some sources are configured to retrieve an initial set of events when they’re created. Others require you to generate events in the app to trigger your workflow. If your source generates an initial set of events, you’ll see them appear in the **Select events** dropdown in the **Select Event** step:
Then you can select a specific test event and manually trigger your workflow with that event data by clicking **Send Test Event**. Now you’re ready to build your workflow with the selected test event.
### What’s the difference between an event source and a trigger?
You’ll notice the docs use the terms **event source** and **trigger** interchangeably above. It’s useful to clarify the distinction in the context of workflows.
[Event sources](/docs/workflows/building-workflows/triggers/) run code that collects events from some app or service and emits events as the source produces them. An event source can be used to **trigger** any number of workflows.
For example, you might create a single source to listen for new Twitter mentions for a keyword, then trigger multiple workflows each time a new tweet is found: one to [send new tweets to Slack](https://pipedream.com/@pravin/twitter-mentions-slack-p_dDCA5e/edit), another to [save those tweets to an Amazon S3 bucket](https://pipedream.com/@dylan/twitter-to-s3-p_KwCZGA/readme), etc.
**This model allows you to separate the data produced by a service (the event source) from the logic to process those events in different contexts (the workflow)**.
Moreover, you can access events emitted by sources using Pipedream’s [SSE](/docs/workflows/data-management/destinations/sse/) and [REST APIs](/docs/rest-api/). This allows you to access these events in your own app, outside Pipedream’s platform.
### Can I add multiple triggers to a workflow?
Yes, you can add any number of triggers to a workflow. Click the top right menu in the trigger step and select **Add trigger**.
### Shape of the `steps.trigger.event` object
In all workflows, you have access to [event data](/docs/workflows/building-workflows/triggers/#event-format) using the variable `steps.trigger.event`.
The shape of the event is specific to the source. For example, RSS sources produce events with a `url` and `title` property representing the data provided by new items from a feed. Google Calendar sources produce events with a meeting title, start date, etc.
## HTTP
When you select the **HTTP** trigger:
Pipedream creates a URL endpoint specific to your workflow:
You can send any HTTP requests to this endpoint, from anywhere on the web. You can configure the endpoint as the destination URL for a webhook or send HTTP traffic from your application - we’ll accept any [valid HTTP request](/docs/workflows/building-workflows/triggers/#valid-requests).
Pipedream also supports [custom domains](/docs/workflows/domains/). This lets you host endpoints on `https://endpoint.yourdomain.com` instead of the default \`{ENDPOINT_BASE_URL}\` domain.
### Accessing HTTP request data
You can access properties of the HTTP request, like the method, payload, headers, and more, in [the `event` object](/docs/workflows/building-workflows/triggers/#event-format), accessible in any [code](/docs/workflows/building-workflows/code/) or [action](/docs/components/contributing/#actions) step.
### Valid Requests
You can send a request to your endpoint using any valid HTTP method: `GET`, `POST`, `HEAD`, and more.
We default to generating HTTPS URLs in the UI, but will accept HTTP requests against the same endpoint URL.
You can send data to any path on this host, with any query string parameters. You can access the full URL in the `event` object if you’d like to write code that interprets requests with different URLs differently.
You can send data of any [Media Type](https://www.iana.org/assignments/media-types/media-types.xhtml) in the body of your request.
The primary limit we impose is on the size of the request body: we’ll issue a `413 Payload Too Large` status when the body [exceeds our specified limit](/docs/workflows/building-workflows/triggers/#request-entity-too-large).
### Authorizing HTTP requests
By default, HTTP triggers are public and require no authorization to invoke. Anyone with the endpoint URL can trigger your workflow. When possible, we recommend adding authorization.
HTTP triggers support two built-in authorization types in the **Authorization** section of the HTTP trigger configuration: a [static, custom token](/docs/workflows/building-workflows/triggers/#custom-token) and [OAuth](/docs/workflows/building-workflows/triggers/#oauth).
#### Custom token
To configure a static, custom token for HTTP auth:
1. Open the **Configure** section of the HTTP trigger
2. Select **Custom token**.
3. Enter whatever secret you’d like and click **Save and Continue**.
When making HTTP requests, pass the custom token as a `Bearer` token in the `Authorization` header:
```shell
curl -H 'Authorization: Bearer <custom token>' https://myendpoint.m.pipedream.net
```
#### OAuth
You can also authorize requests using [Pipedream OAuth clients](/docs/rest-api/auth/#oauth):
1. Open the **Configure** section of the HTTP trigger.
2. Select **OAuth**.
3. If you don’t have an existing OAuth client, [create one in your workspace’s API settings](/docs/rest-api/auth/#creating-an-oauth-client).
Next, you’ll need to [generate an OAuth access token](/docs/rest-api/auth/#how-to-get-an-access-token).
When making HTTP requests, pass the OAuth access token as a `Bearer` token in the `Authorization` header:
```shell
curl -H 'Authorization: Bearer <access token>' https://myendpoint.m.pipedream.net
```
You can use the Pipedream SDK to automatically refresh access tokens and invoke workflows, or make HTTP requests directly to the workflow’s URL:
```javascript
import { PipedreamClient } from "@pipedream/sdk";
// These secrets should be saved securely and passed to your environment
const client = new PipedreamClient({
clientId: "{oauth_client_id}",
clientSecret: "{oauth_client_secret}",
projectId: "{project_id}",
projectEnvironment: "development" // or "production"
});
await client.workflows.invokeForExternalUser(
"enabc123", // pass the endpoint ID or full URL here
"{external_user_id}", // The end user's ID in your system
"POST", // HTTP method
{
key: "value",
} // request body
)
```
```shell
# First, obtain an OAuth access token
curl -X POST https://api.pipedream.com/v1/oauth/token \
-H "Content-Type: application/json" \
-d '{
"grant_type": "client_credentials",
"client_id": "{oauth_client_id}",
"client_secret": "{oauth_client_secret}"
}'
# The response will include an access_token. Use it in the Authorization header below.
curl -X POST https://{your-endpoint-url} \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {access_token}" \
-d '{
"message": "Hello, world"
}'
```
#### Implement your own authorization logic
Since you have access to the entire request object, and can issue any HTTP response from a workflow, you can implement custom logic to validate requests.
For example, you could require JWT tokens and validate those tokens using the [`jsonwebtoken` package](https://www.npmjs.com/package/jsonwebtoken) at the start of your workflow.
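As a simpler illustration of the same idea, here's a sketch of shared-secret validation using Node's built-in `crypto` module. The `x-api-key` header name and the secret value are assumptions for illustration; in a real workflow you'd read the headers from `steps.trigger.event.headers` and return an error response when validation fails.

```javascript
// Sketch: constant-time shared-secret check (header name is an assumption)
import { timingSafeEqual } from "node:crypto";

function isAuthorized(headers, secret) {
  const provided = headers["x-api-key"] ?? "";
  const a = Buffer.from(provided);
  const b = Buffer.from(secret);
  // timingSafeEqual requires equal-length buffers; a length mismatch fails
  return a.length === b.length && timingSafeEqual(a, b);
}

console.log(isAuthorized({ "x-api-key": "s3cret" }, "s3cret")); // true
console.log(isAuthorized({ "x-api-key": "wrong!" }, "s3cret")); // false
```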
### Custom domains
To configure endpoints on your own domain, e.g. `endpoint.yourdomain.com` instead of the default `*.m.pipedream.net` domain, see the [custom domains](/docs/workflows/domains/) docs.
### How Pipedream handles JSON payloads
When you send JSON in the HTTP payload, or when JSON data is sent in the payload from a webhook provider, **Pipedream converts that JSON to its equivalent JavaScript object**. The trigger data can be referenced using [the `steps` object](/docs/workflows/building-workflows/triggers/#shape-of-the-stepstriggerevent-object).
In the [Inspector](/docs/workflows/building-workflows/inspect/), we present `steps.trigger.event` cleanly, indenting nested properties, to make the payload easy to read. Since `steps.trigger.event` is a JavaScript object, it’s easy to reference and manipulate properties of the payload using dot-notation.
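For example, a JSON payload parses to a plain object whose nested properties you can reference directly with dot-notation (the event is simulated here outside a workflow):

```javascript
// Simulated parsed trigger event; in a workflow this would be steps.trigger.event
const event = { body: { user: { name: "Leia", rank: "General" } } };

// Reference nested payload properties with dot-notation
console.log(event.body.user.name); // "Leia"
console.log(event.body.user.rank); // "General"
```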
### How Pipedream handles `multipart/form-data`
When you send [form data](https://ec.haxx.se/http/http-multipart) to Pipedream using a `Content-Type` of `multipart/form-data`, Pipedream parses the payload and converts it to a JavaScript object with a property per form field. For example, if you send a request with two fields:
```shell
curl -F 'name=Leia' -F 'title=General' https://myendpoint.m.pipedream.net
```
Pipedream will convert that to a JavaScript object, `event.body`, with the following shape:
```javascript
{
name: "Leia",
title: "General",
}
```
### How Pipedream handles HTTP headers
HTTP request headers are available in your downstream steps via the `steps.trigger.event.headers` export.
Pipedream will automatically lowercase header keys for consistency.
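Because header keys are lowercased, always read them with lowercase names regardless of how the client sent them. The lowercasing is simulated here with a plain object:

```javascript
// Headers as a client might send them
const sent = { "Content-Type": "application/json", "X-Request-Id": "abc123" };

// Pipedream-style normalization: lowercase every header key
const headers = Object.fromEntries(
  Object.entries(sent).map(([key, value]) => [key.toLowerCase(), value])
);

console.log(headers["content-type"]); // "application/json"
console.log(headers["x-request-id"]); // "abc123"
```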
### Pipedream-specific request parameters
These params can be set as headers or query string parameters on any request to a Pipedream HTTP endpoint.
#### `x-pd-nostore`
Set to `1` to prevent logging any data for this execution. Pipedream will execute all steps of the workflow, but no data will be logged to Pipedream. No event will show up in the inspector or the Event History UI.
If you need to disable logging for *all* requests, use the workflow’s [Data Retention controls](/docs/workflows/building-workflows/settings/#data-retention-controls), instead.
#### `x-pd-notrigger`
Set to `1` to send an event to the workflow for testing. Pipedream will **not** trigger the production version of the workflow, but will display the event in the [list of test events](/docs/workflows/building-workflows/triggers/#selecting-a-test-event) on the HTTP trigger.
#### Limits
You can send any content, up to the [HTTP payload size limit](/docs/workflows/limits/#http-request-body-size), as a part of the form request. The content of uploaded images or other binary files does not contribute to this limit — the contents of the file will be uploaded at a Pipedream URL you have access to within your source or workflow. See the section on [Large File Support](/docs/workflows/building-workflows/triggers/#large-file-support) for more detail.
### Sending large payloads
*If you’re uploading files, like images or videos, you should use the [large file upload interface](/docs/workflows/building-workflows/triggers/#large-file-support), instead*.
By default, the body of HTTP requests sent to a source or workflow is limited to {PAYLOAD_SIZE_LIMIT}. **But you can send an HTTP payload of any size to a [workflow](/docs/workflows/building-workflows/) or an [event source](/docs/workflows/building-workflows/triggers/) by including the `pipedream_upload_body=1` query string or an `x-pd-upload-body: 1` HTTP header in your request**.
```shell
curl -d '{ "name": "Yoda" }' \
https://endpoint.m.pipedream.net\?pipedream_upload_body\=1
curl -d '{ "name": "Yoda" }' \
-H "x-pd-upload-body: 1" \
https://endpoint.m.pipedream.net
```
In workflows, Pipedream saves the raw payload data in a file whose URL you can reference in the variable `steps.trigger.event.body.raw_body_url`.
Within your workflow, you can download the contents of this data using the **Send HTTP Request** action, or [by saving the data as a file to the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/).
#### Example: Download the HTTP payload using the Send HTTP Request action
*Note: you can only download payloads at most {FUNCTION_PAYLOAD_LIMIT} in size using this method. Otherwise, you may encounter a [Function Payload Limit Exceeded](/docs/troubleshooting/#function-payload-limit-exceeded) error.*
You can download the large HTTP payload using the **Send HTTP Request** action. [Copy this workflow to see how this works](https://pipedream.com/new?h=tch_egfAby).
The payload from the trigger of the workflow is exported to the variable `steps.retrieve_large_payload.$return_value`:
#### Example: Download the HTTP payload to the `/tmp` directory
[This workflow](https://pipedream.com/new?h=tch_5ofXkX) downloads the HTTP payload, saving it as a file to the [`/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/#the-tmp-directory).
```javascript
import stream from "stream";
import { promisify } from "util";
import fs from "fs";
import got from "got";
export default defineComponent({
async run({ steps, $ }) {
const pipeline = promisify(stream.pipeline);
await pipeline(
got.stream(steps.trigger.event.body.raw_body_url),
fs.createWriteStream(`/tmp/raw_body`)
);
},
})
```
You can [read this file](/docs/workflows/building-workflows/code/nodejs/working-with-files/#reading-a-file-from-tmp) in subsequent steps of your workflow.
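A sketch of reading the saved payload back with Node's `fs` module (the initial write here simulates the previous step's output to `/tmp/raw_body`):

```javascript
import fs from "fs";

// Simulate the previous step having saved the raw payload to /tmp
fs.writeFileSync("/tmp/raw_body", JSON.stringify({ name: "Yoda" }));

// In a later step, read the file back and parse it (if the payload was JSON)
const body = JSON.parse(fs.readFileSync("/tmp/raw_body", "utf8"));
console.log(body.name); // "Yoda"
```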
#### How the payload data is saved
Your raw payload is saved to a Pipedream-owned [Amazon S3 bucket](https://aws.amazon.com/s3/). Pipedream generates a [signed URL](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html) that allows you to access that file for up to 30 minutes. After 30 minutes, the signed URL will be invalidated, and the file will be deleted.
#### Limits
**You can upload payloads up to 5TB in size**. However, payloads that large may trigger [other Pipedream limits](/docs/workflows/limits/). Please [reach out](https://pipedream.com/support/) with any specific questions or issues.
### Large File Support
*This interface is best used for uploading large files, like images or videos. If you’re sending JSON or other data directly in the HTTP payload, and encountering a **Request Entity Too Large** error, review the section above for [sending large payloads](/docs/workflows/building-workflows/triggers/#sending-large-payloads)*.
You can upload any file to a [workflow](/docs/workflows/building-workflows/) or an [event source](/docs/workflows/building-workflows/triggers/) by making a `multipart/form-data` HTTP request with the file as one of the form parts. **Pipedream saves that file to a Pipedream-owned [Amazon S3 bucket](https://aws.amazon.com/s3/), generating a [signed URL](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html) that allows you to access that file for up to 30 minutes**. After 30 minutes, the signed URL will be invalidated, and the file will be deleted.
In workflows, these file URLs are provided in the `steps.trigger.event.body` variable, so you can download the file using the URL within your workflow, or pass the URL on to another third-party system for it to process.
Within your workflow, you can download the contents of this data using the **Send HTTP Request** action, or [by saving the data as a file to the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/).
#### Example: upload a file using `cURL`
For example, you can upload an image to a workflow using `cURL`:
```shell
curl -F 'image=@my_image.png' https://myendpoint.m.pipedream.net
```
The `-F` tells `cURL` we’re sending form data, with a single “part”: a field named `image`, with the content of the image as the value (the `@` allows `cURL` to reference a local file).
When you send this image to a workflow, Pipedream [parses the form data](/docs/workflows/building-workflows/triggers/#how-pipedream-handles-multipartform-data) and converts it to a JavaScript object, `event.body`. Select the event from the [inspector](/docs/workflows/building-workflows/inspect/#the-inspector), and you’ll see the `image` property under `event.body`:
When you upload a file as a part of the form request, Pipedream saves it to a Pipedream-owned [Amazon S3 bucket](https://aws.amazon.com/s3/), generating a [signed URL](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html) that allows you to access that file for up to 30 minutes. After 30 minutes, the signed URL will be invalidated, and the file will be deleted.
Within the `image` property of `event.body`, you’ll see the value of this URL in the `url` property, along with the `filename` and `mimetype` of the file. Within your workflow, you can download the file, or pass the URL to a third party system to handle, and more.
#### Example: Download this file to the `/tmp` directory
[This workflow](https://pipedream.com/@dylburger/example-download-an-image-to-tmp-p_KwC2Ad/edit) downloads an image passed in the `image` field in the form request, saving it to the [`/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/#the-tmp-directory).
```javascript
import stream from "stream";
import { promisify } from "util";
import fs from "fs";
import got from "got";

export default defineComponent({
  async run({ steps, $ }) {
    const pipeline = promisify(stream.pipeline);
    await pipeline(
      got.stream(steps.trigger.event.body.image.url),
      fs.createWriteStream(`/tmp/${steps.trigger.event.body.image.filename}`)
    );
  },
});
```
#### Example: Upload image to your own Amazon S3 bucket
[This workflow](https://pipedream.com/@dylburger/example-save-uploaded-file-to-amazon-s3-p_o7Cm9z/edit) streams the uploaded file to an Amazon S3 bucket you specify, allowing you to save the file to long-term storage.
#### Limits
Since large files are uploaded using a `Content-Type` of `multipart/form-data`, the limits that apply to [form data](/docs/workflows/building-workflows/triggers/#how-pipedream-handles-multipartform-data) also apply here.
The content of the file itself does not contribute to the HTTP payload limit imposed for forms. **You can upload files up to 5TB in size**. However, files that large may trigger [other Pipedream limits](/docs/workflows/limits/). Please [reach out](https://pipedream.com/support/) with any specific questions or issues.
### Cross-Origin HTTP Requests
We return the following headers on HTTP `OPTIONS` requests:
```
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
```
Thus, your endpoint will accept [cross-origin HTTP requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) from any domain, using any standard HTTP method.
### HTTP Responses
#### Default HTTP response
By default, when you send a [valid HTTP request](/docs/workflows/building-workflows/triggers/#valid-requests) to your endpoint URL, you should expect to receive a `200 OK` status code with the following payload:
```
Success!
To customize this response, check out our docs here
```
When you’re processing HTTP requests, you often don’t need to issue any special response to the client. We issue this default response so you don’t have to write any code to do it yourself.
**How can my workflow run faster?**
See [our guide on running workflows faster](/docs/troubleshooting/#how-can-my-workflow-run-faster).
#### Customizing the HTTP response
If you need to issue a custom HTTP response from a workflow, you can either:
* Use the **Return HTTP response** action, available on the **HTTP / Webhook** app, or
* **Use the `$.respond()` function in a Code or Action step**.
#### Using the HTTP Response Action
The HTTP Response action lets you return HTTP responses without the need to write code. You can customize the response status code, and optionally specify response headers and body.
This action uses `$.respond()` and will always [respond immediately](/docs/workflows/building-workflows/triggers/#returning-a-response-immediately) when called in your workflow. A [response error](/docs/workflows/building-workflows/triggers/#errors-with-http-responses) will still occur if your workflow throws an Error before this action runs.
#### Using custom code with `$.respond()`
You can return HTTP responses in Node.js code with the `$.respond()` function.
`$.respond()` takes a single argument: an object with properties that specify the body, headers, and HTTP status code you’d like to respond with:
```javascript
defineComponent({
async run({ steps, $ }) {
await $.respond({
status: 200,
headers: { "my-custom-header": "value" },
body: { message: "My custom response" }, // This can be any string, object, Buffer, or Readable stream
});
},
});
```
The value of the `body` property can be a string, an object, a [Buffer](https://nodejs.org/api/buffer.html#buffer_buffer) (binary data), or a [Readable stream](https://nodejs.org/api/stream.html#stream_readable_streams). Attempting to return any other data may yield an error.
In the case where you return a Readable stream:
* You must `await` the `$.respond` function (`await $.respond({ ... })`)
* The stream must close and be finished reading within your [workflow execution timeout](/docs/workflows/limits/#time-per-execution).
* You cannot return a Readable and use the [`immediate: true`](/docs/workflows/building-workflows/triggers/#returning-a-response-immediately) property of `$.respond`.
#### Timing of `$.respond()` execution
You may notice some response latency calling workflows that use `$.respond()` from your HTTP client. By default, `$.respond()` is called at the end of your workflow, after all other code is done executing, so it may take some time to issue the response back.
If you need to issue an HTTP response in the middle of a workflow, see the section on [returning a response immediately](/docs/workflows/building-workflows/triggers/#returning-a-response-immediately).
#### Returning a response immediately
You can issue an HTTP response within a workflow, and continue the rest of the workflow execution, by setting the `immediate` property to `true`:
```javascript
defineComponent({
async run({ steps, $ }) {
await $.respond({
immediate: true,
status: 200,
headers: { "my-custom-header": "value" },
body: { message: "My custom response" },
});
},
});
```
Passing `immediate: true` tells `$.respond()` to issue a response back to the client at this point in the workflow. After the HTTP response has been issued, the remaining code in your workflow runs.
This can be helpful, for example, when you’re building a Slack bot. When you send a message to a bot, Slack requires a `200 OK` response be issued immediately, to confirm receipt:
```javascript
defineComponent({
async run({ steps, $ }) {
await $.respond({
immediate: true,
status: 200,
body: "",
});
},
});
```
Once you issue the response, you’ll probably want to process the message from the user and respond back with another message or data requested by the user.
[Here’s an example workflow](https://pipedream.com/@dylburger/issue-http-response-immediately-continue-running-workflow-p_pWCWGJ) that shows how to use `immediate: true` and run code after the HTTP response is issued.
#### Errors with HTTP Responses
If you use `$.respond()` in a workflow, **you must always make sure `$.respond()` is called in your code**. If you make an HTTP request to a workflow, and run code where `$.respond()` is *not* called, your endpoint URL will issue a `400 Bad Request` error with the following body:
```
No $.respond called in workflow
```
This might happen if:
* You call `$.respond()` conditionally, where it does not run under certain conditions.
* Your workflow throws an Error before you run `$.respond()`.
* You return data in the `body` property that isn’t a string, object, or Buffer.
If you can’t handle the `400 Bad Request` error in the application calling your workflow, you can implement `try` / `finally` logic to ensure `$.respond()` always gets called with some default message. For example:
```javascript
defineComponent({
async run({ steps, $ }) {
try {
// Your code here that might throw an exception or not run
throw new Error("Whoops, something unexpected happened.");
} finally {
await $.respond({
status: 200,
body: {
msg: "Default response",
},
});
}
},
});
```
### Errors
Occasionally, you may encounter errors when sending requests to your endpoint:
#### Request Entity Too Large
The endpoint will issue a `413 Payload Too Large` status code when the body of your request exceeds {PAYLOAD_SIZE_LIMIT}.
In this case, the request will still appear in the inspector, with information on the error.
#### API key does not exist
Your API key is the host part of the endpoint, e.g. the `eniqtww30717` in `eniqtww30717.m.pipedream.net`. If you attempt to send a request to an endpoint that does not exist, we’ll return a `404 Not Found` error.
We’ll also issue a `404` response for workflows whose HTTP trigger has been disabled.
#### Too Many Requests
If you send too many requests to your HTTP source within a small period of time, we may issue a `429 Too Many Requests` response. [Review our limits](/docs/workflows/limits/) to understand the conditions where you might be throttled.
You can also [reach out](https://pipedream.com/support/) to inquire about raising this rate limit.
If you control the application sending requests, you should implement [a backoff strategy](https://medium.com/clover-platform-blog/conquering-api-rate-limiting-dcac5552714d) to temporarily slow the rate of events.
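A backoff can be as simple as doubling the wait after each `429` until you hit a cap. This is a generic sketch, not a Pipedream API — the delay base, cap, and retry count are illustrative:

```javascript
// Generic exponential backoff with a cap — a sketch, not a Pipedream API.
// delay = min(base * 2^attempt, maxDelay), in milliseconds.
function backoffDelay(attempt, base = 500, maxDelay = 30000) {
  return Math.min(base * 2 ** attempt, maxDelay);
}

async function sendWithBackoff(url, body, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, { method: "POST", body: JSON.stringify(body) });
    if (res.status !== 429) return res;
    // Throttled: wait before retrying
    await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
  }
  throw new Error("Still throttled after retries");
}
```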
## Schedule
Pipedream allows you to run hosted scheduled jobs — commonly referred to as “cron jobs” — [for free](/docs/pricing/). You can think of these workflows as scripts that run on a schedule.
You can write a scheduled job to send an HTTP request, send a scheduled email, run any Node.js or Python code, connect to any API, and much more. Pipedream manages the servers where these jobs run, so you don’t have to set up a server of your own or operate a service just to run code on a schedule. You write the workflow, we take care of the rest.
### Choosing a Schedule trigger
To create a new scheduled job, create a new workflow and search for the **Schedule** trigger:
By default, your trigger will be turned **Off**. **To enable it, select either of the scheduling options**:
* **Every** : run the job every N days, hours, minutes (e.g. every 1 day, every 3 hours).
* **Cron Expression** : schedule your job using a cron expression. For example, the expression `0 0 * * *` will run the job every day at midnight. Cron expressions can be tied to any timezone.
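For reference, a standard cron expression has five fields, read left to right. Here's a small helper to decode them (`describeCron` is written for this example, not a Pipedream API):

```javascript
// The five fields of a standard cron expression, left to right.
// "0 0 * * *" → minute 0, hour 0, any day-of-month, any month, any day-of-week
// → runs daily at midnight (in the schedule's configured timezone).
const FIELDS = ["minute", "hour", "dayOfMonth", "month", "dayOfWeek"];

function describeCron(expr) {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== FIELDS.length) throw new Error("Expected 5 cron fields");
  return Object.fromEntries(FIELDS.map((name, i) => [name, parts[i]]));
}

// describeCron("0 0 * * *")
// → { minute: "0", hour: "0", dayOfMonth: "*", month: "*", dayOfWeek: "*" }
```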
### Testing a scheduled job
If you’re running a scheduled job once a day, you probably don’t want to wait until the next day’s run to test your new code. You can manually run the workflow associated with a scheduled job at any time by pressing the **Run Now** button.
### Job History
You’ll see the history of job executions under the **Job History** section of the [Inspector](/docs/workflows/building-workflows/inspect/).
Clicking on a specific job shows the execution details for that job — all the logs and observability associated with that run of the workflow.
### Trigger a notification to an external service (email, Slack, etc.)
You can send yourself a notification — for example, an email or a Slack message — at any point in a workflow by using the relevant [Action](/docs/components/contributing/#actions) or [Destination](/docs/workflows/data-management/destinations/).
If you’d like to email yourself when a job finishes successfully, you can use the [Email Destination](/docs/workflows/data-management/destinations/email/). You can send yourself a Slack message using the Slack Action, or trigger an [HTTP request](/docs/workflows/data-management/destinations/http/) to an external service.
You can also [write code](/docs/workflows/building-workflows/code/) to trigger any complex notification logic you’d like.
### Troubleshooting your scheduled jobs
When you run a scheduled job, you may need to troubleshoot errors or other execution issues. Pipedream offers built-in, step-level logs that show you detailed execution information that should aid troubleshooting.
Any time a scheduled job runs, you’ll see a new execution appear in the [Inspector](/docs/workflows/building-workflows/inspect/). This shows you when the job ran, how long it took to run, and any errors that might have occurred. **Click on any of these lines in the Inspector to view the details for a given run**.
Code steps show [logs](/docs/workflows/building-workflows/code/nodejs/#logs) below the step itself. Any time you run `console.log()` or other functions that print output, you should see the logs appear directly below the step where the code ran.
[Actions](/docs/components/contributing/#actions) and [Destinations](/docs/workflows/data-management/destinations/) also show execution details relevant to the specific Action or Destination. For example, when you use the [HTTP Destination](/docs/workflows/data-management/destinations/http/) to make an HTTP request, you’ll see the HTTP request and response details tied to that Destination step:
## Email
When you select the **Email** trigger:
Pipedream creates an email address specific to your workflow. Any email sent to this address triggers your workflow:
As soon as you send an email to the workflow-specific address, Pipedream parses its body, headers, and attachments into a JavaScript object it exposes in the `steps.trigger.event` variable that you can access within your workflow. This transformation can take a few seconds to perform. Once done, Pipedream will immediately trigger your workflow with the transformed payload.
[Read more about the shape of the email trigger event](/docs/workflows/building-workflows/triggers/#email).
### Sending large emails
By default, you can send emails up to {EMAIL_PAYLOAD_SIZE_LIMIT} in total size (content, headers, attachments). Emails over this size will be rejected, and you will not see them appear in your workflow.
**You can send emails up to `{EMAIL_PAYLOAD_SIZE_LIMIT}` in size by sending emails to `[YOUR EMAIL ENDPOINT]@upload.pipedream.net`**. If your workflow-specific email address is `endpoint@pipedream.net`, your “large email address” is `endpoint@upload.pipedream.net`.
Emails delivered to this address are uploaded to a private URL you have access to within your workflow, at the variable `steps.trigger.event.mail.content_url`. You can download and parse the email within your workflow using that URL. This content contains the *raw* email. Unlike the standard email interface, you must parse this email on your own - see the examples below.
#### Example: Download the email using the Send HTTP Request action
*Note: you can only download emails at most {FUNCTION_PAYLOAD_LIMIT} in size using this method. Otherwise, you may encounter a [Function Payload Limit Exceeded](/docs/troubleshooting/#function-payload-limit-exceeded) error.*
You can download the email using the **Send HTTP Request** action. [Copy this workflow to see how this works](https://pipedream.com/new?h=tch_1AfMyl).
This workflow also parses the contents of the email and exposes it as a JavaScript object using the [`mailparser` library](https://nodemailer.com/extras/mailparser/):
```javascript
import { simpleParser } from "mailparser";
export default defineComponent({
async run({ steps, $ }) {
return await simpleParser(steps.get_large_email_content.$return_value);
},
});
```
#### Example: Download the email to the `/tmp` directory, read it and parse it
[This workflow](https://pipedream.com/new?h=tch_jPfaEJ) downloads the email, saving it as a file to the [`/tmp` directory](/docs/workflows/building-workflows/code/nodejs/working-with-files/#the-tmp-directory). Then it reads the same file (as an example), and parses it using the [`mailparser` library](https://nodemailer.com/extras/mailparser/):
```javascript
import stream from "stream";
import { promisify } from "util";
import fs from "fs";
import got from "got";
import { simpleParser } from "mailparser";
// To use previous step data, pass the `steps` object to the run() function
export default defineComponent({
async run({ steps, $ }) {
const pipeline = promisify(stream.pipeline);
await pipeline(
got.stream(steps.trigger.event.mail.content_url),
fs.createWriteStream(`/tmp/raw_email`)
);
// Now read the file and parse its contents into the `parsed` variable
// See https://nodemailer.com/extras/mailparser/ for parsing options
const f = fs.readFileSync(`/tmp/raw_email`);
return await simpleParser(f);
},
});
```
#### How the email is saved
Your email is saved to a Pipedream-owned [Amazon S3 bucket](https://aws.amazon.com/s3/). Pipedream generates a [signed URL](https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html) that allows you to access to that file for up to 30 minutes. After 30 minutes, the signed URL will be invalidated, and the file will be deleted.
### Email attachments
You can attach any files to your email, up to [the total email size limit](/docs/workflows/limits/#email-triggers).
Attachments are stored in `steps.trigger.event.attachments`, which provides an array of attachment objects. Each attachment in that array exposes key properties:
* `contentUrl`: a URL that hosts your attachment. You can [download this file to the `/tmp` directory](/docs/workflows/building-workflows/code/nodejs/http-requests/#download-a-file-to-the-tmp-directory) and process it in your workflow.
* `content`: If the attachment contains text-based content, Pipedream renders the attachment in `content`, up to 10,000 bytes.
* `contentTruncated`: `true` if the attachment contained text-based content larger than 10,000 bytes. If `true`, the data in `content` will be truncated, and you should fetch the full attachment from `contentUrl`.
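Putting those properties together, a Node.js code step might read each attachment like this. A sketch — `attachmentText` is a hypothetical helper, and the `attachment` shape follows the list above:

```javascript
// Sketch: choose the inline content or the hosted file for each attachment.
async function attachmentText(attachment) {
  if (!attachment.contentTruncated) {
    // Small, text-based attachment: the data is already inline
    return attachment.content;
  }
  // Truncated: fetch the full file from the signed contentUrl instead
  const res = await fetch(attachment.contentUrl);
  return await res.text();
}
```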
### Appending metadata to the incoming email address with `+data`
Pipedream provides a way to append metadata to incoming emails by adding a `+` sign to the incoming email key, followed by any arbitrary string:
```
myemailaddr+test@pipedream.net
```
Emails sent to your workflow-specific email address resolve to that address and trigger your workflow, no matter what data you add after the `+` sign. For example, both of these addresses trigger the workflow tied to `myemailaddr@pipedream.net`:
```
myemailaddr+test@pipedream.net
myemailaddr+unsubscribe@pipedream.net
```
This allows you to implement conditional logic in your workflow based on the data in that string.
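A sketch of extracting that string in a code step (`emailTag` is a helper written for this example; in a workflow you'd read the delivered-to address from the trigger event):

```javascript
// Sketch: extract the metadata after "+" in the local part of an address.
function emailTag(address) {
  const local = address.split("@")[0];
  const plus = local.indexOf("+");
  return plus === -1 ? null : local.slice(plus + 1);
}

// emailTag("myemailaddr+unsubscribe@pipedream.net") → "unsubscribe"
// emailTag("myemailaddr@pipedream.net") → null
```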
### Troubleshooting
#### I’m receiving an `Expired Token` error when trying to read an email attachment
Email attachments are saved to S3, and are accessible in your workflows over [pre-signed URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html).
If the presigned URL for the attachment has expired, then you’ll need to send another email to create a brand new pre-signed URL.
If you’re using email attachments in combination with [`$.flow.delay`](/docs/workflows/building-workflows/code/nodejs/delay/) or [`$.flow.rerun`](/docs/workflows/building-workflows/code/nodejs/rerun/) which introduces a gap of time between steps in your workflow, then there’s a chance the email attachment’s URL will expire.
To overcome this, we suggest uploading your email attachments to your Project’s [File Store](/docs/workflows/data-management/file-stores/) for persistent storage.
## RSS
Choose the RSS trigger to watch an RSS feed for new items:
This will create an RSS [event source](/docs/workflows/building-workflows/triggers/) that polls the feed for new items on the schedule you select. Every time a new item is found, your workflow will run.
## Events
Events trigger workflow executions. The event that triggers your workflow depends on the trigger you select for your workflow:
* [HTTP triggers](/docs/workflows/building-workflows/triggers/#http) invoke your workflow on HTTP requests.
* [Cron triggers](/docs/workflows/building-workflows/triggers/#schedule) invoke your workflow on a time schedule (e.g., on an interval).
* [Email triggers](/docs/workflows/building-workflows/triggers/#email) invoke your workflow on inbound emails.
* [Event sources](/docs/workflows/building-workflows/triggers/#app-based-triggers) invoke your workflow on events from apps like Twitter, Google Calendar, and more.
### Selecting a test event
When you test any step in your workflow, Pipedream passes the test event you select in the trigger step:
You can select any event you’ve previously sent to your trigger as your test event, or send a new one.
### Examining event data
When you select an event, you’ll see [the incoming event data](/docs/workflows/building-workflows/triggers/#event-format) and the [event context](/docs/workflows/building-workflows/triggers/#stepstriggercontext) for that event:
Pipedream parses your incoming data and exposes it in the variable [`steps.trigger.event`](/docs/workflows/building-workflows/triggers/#event-format), which you can access in any [workflow step](/docs/workflows/#steps).
### Copying references to event data
When you’re [examining event data](/docs/workflows/building-workflows/triggers/#examining-event-data), you’ll commonly want to copy the name of the variable that points to the data you need to reference in another step.
Hover over the property whose data you want to reference, and click the **Copy Path** button to its right:
### Copying the values of event data
You can also copy the value of specific properties of your event data. Hover over the property whose data you want to copy, and click the **Copy Value** button to its right:
### Event format
When you send an event to your workflow, Pipedream takes the trigger data — for example, the HTTP payload, headers, etc. — and adds our own Pipedream metadata to it.
**This data is exposed in the `steps.trigger.event` variable. You can reference this variable in any step of your workflow**.
You can reference your event data in any [code](/docs/workflows/building-workflows/code/) or [action](/docs/components/contributing/#actions) step. See those docs or the general [docs on passing data between steps](/docs/workflows/#steps) for more information.
The specific shape of `steps.trigger.event` depends on the trigger type:
#### HTTP
| Property | Description |
| ----------- | ----------------------------------------------------- |
| `body` | A string or object representation of the HTTP payload |
| `client_ip` | IP address of the client that made the request |
| `headers` | HTTP headers, represented as an object |
| `method` | HTTP method |
| `path` | HTTP request path |
| `query` | Query string |
| `url` | Request host + path |
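For example, given an incoming `POST` with a JSON body, a code step could read these properties like so (the `event` object below is hand-written sample data matching the table, not a real captured event):

```javascript
// Sample HTTP trigger event — in a workflow this is steps.trigger.event.
const event = {
  body: { name: "Luke" },
  client_ip: "127.0.0.1",
  headers: { "content-type": "application/json" },
  method: "POST",
  path: "/",
  query: {},
  url: "https://myendpoint.m.pipedream.net/",
};

const greeting = `Hello, ${event.body.name}! You sent a ${event.method} request.`;
// → "Hello, Luke! You sent a POST request."
```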
#### Cron Scheduler
| Property | Description |
| --------------------- | ----------------------------------------------------------------------------------------------- |
| `interval_seconds` | The number of seconds between scheduled executions |
| `cron` | When you’ve configured a custom cron schedule, the cron string |
| `timestamp` | The epoch timestamp when the workflow ran |
| `timezone_configured` | An object with formatted datetime data for the given execution, tied to the schedule’s timezone |
| `timezone_utc` | An object with formatted datetime data for the given execution, tied to the UTC timezone |
#### Email
We use Amazon SES to receive emails for the email trigger. You can find the shape of the event in the [SES docs](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-notifications-contents.html).
### `steps.trigger.context`
`steps.trigger.event` contains your event’s **data**. `steps.trigger.context` contains *metadata* about the workflow and the execution tied to this event.
You can use the data in `steps.trigger.context` to uniquely identify the Pipedream event ID, the timestamp at which the event invoked the workflow, and more:
| Property | Description |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `deployment_id` | A globally-unique string representing the current version of the workflow |
| `emitter_id` | The ID of the workflow trigger that emitted this event, e.g. the [event source](/docs/workflows/building-workflows/triggers/) ID. |
| `id` | A unique, Pipedream-provided identifier for the event that triggered this workflow |
| `owner_id` | The Pipedream-assigned [workspace ID](/docs/workspaces/#finding-your-workspaces-id) that owns the workflow |
| `platform_version` | The version of the Pipedream execution environment this event ran on |
| `replay` | A boolean, whether the event was replayed via the UI |
| `trace_id` | Holds the same value for all executions tied to an original event. [See below for more details](/docs/workflows/building-workflows/triggers/#how-do-i-retrieve-the-execution-id-for-a-workflow). |
| `ts` | The ISO 8601 timestamp at which the event invoked the workflow |
| `workflow_id` | The workflow ID |
| `workflow_name` | The workflow name |
#### How do I retrieve the execution ID for a workflow?
Pipedream exposes two identifiers for workflow executions: one for the execution itself, and one for the “trace”.
`steps.trigger.context.id` should be unique for every execution of a workflow.
`steps.trigger.context.trace_id` holds the same value for all executions tied to the same original event. For example, if auto-retry is enabled and it retries a workflow three times, the `id` changes on each retry, but the `trace_id` remains the same. Similarly, if you call `$.flow.suspend()` on a workflow, we run a new execution after the suspend, so you’d see two total executions: `id` is unique before and after the suspend, but `trace_id` is the same.
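One practical use: skip side effects that already ran for an earlier attempt of the same event by keying on `trace_id`. A sketch — the in-memory `Set` stands in for persistent storage such as a data store:

```javascript
// Deduplicate work across retries of the same original event.
// `seen` is an in-memory stand-in — in a real workflow, use a data store.
const seen = new Set();

function shouldProcess(context) {
  if (seen.has(context.trace_id)) return false;
  seen.add(context.trace_id);
  return true;
}

// Two retries of the same event share a trace_id but have distinct ids:
shouldProcess({ id: "evt_1", trace_id: "tr_A" }); // true — first time
shouldProcess({ id: "evt_2", trace_id: "tr_A" }); // false — retry of the same event
```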
You may notice other properties in `context`. These are used internally by Pipedream, and are subject to change.
### Event retention
On the Free and Basic plans, each workflow retains at most 100 events or 7 days of history.
* After 100 events have been processed, Pipedream deletes the oldest event data as new events arrive, keeping only the most recent 100 events.
* Events older than 7 days are also deleted.
Other paid plans have longer retention. [See the pricing page](https://pipedream.com/pricing) for details.
Events are also stored in [event history](/docs/workflows/event-history/) for up to 30 days, depending on your plan. [See the pricing page](https://pipedream.com/pricing) for the retention on your plan.
Events that are [delayed](/docs/workflows/building-workflows/control-flow/delay/) or [suspended](/docs/glossary/#suspend) are retained for the duration of the delay. After the delay, the workflow is executed, and the event data is retained according to the rules above.
For an extended history of events across all of your workflows, including processed events, with the ability to filter by status and time range, see [Event History](/docs/workflows/event-history/).
## Don’t see a trigger you need?
If you don’t see a trigger you’d like us to support, please [let us know](https://pipedream.com/support/).
# Using Props
Source: https://pipedream.com/docs/workflows/building-workflows/using-props
Props are fields that can be added to code steps in a workflow to abstract data from the code and improve reusability. Most actions use props to capture user input (e.g., to allow users to customize the URL, method and payload for the Send HTTP Request action). Props support the entry of simple values (e.g., `hello world` or `123`) or expressions in `{{ }}` that can reference objects in scope or run basic Node.js code.
## Entering Expressions
Expressions make it easy to pass data exported from previous steps into a code step or action via props. For example, if your workflow is triggered on new Tweets and you want to send the Tweet content to an HTTP or webhook destination, you would reference `{{steps.trigger.event.body}}` to do that.
While the data expected by each input depends on the data type (e.g., string, integer, array, etc) and the data entry mode (structured or non-structured — if applicable), the format for entering expressions is always the same; expressions are always enclosed in `{{ }}`.
There are three ways to enter expressions in a prop field — you can use the object explorer, enter it manually, or paste a reference from a step export.
### Use the object explorer
When you click into a prop field, an object explorer expands below the input. You can explore all the objects in scope, filter for keywords (e.g., a key name), and then select the element to insert into the form as an expression.
### Manually enter or edit an expression
To manually enter or edit an expression, just enter or edit a value between double curly braces `{{ }}`. Pipedream provides auto-complete support as soon as you type.
You can also run Node.js code in `{{ }}`. For example, if `event.foo` is a JSON object and you want to pass it to a param as a string, you can run `{{JSON.stringify(event.foo)}}`.
### Paste a reference from a step export
To paste a reference from a step export, find the reference you want to use, click **Copy Path** and then paste it into the input.
# Data Stores
Source: https://pipedream.com/docs/workflows/data-management/data-stores
**Data stores** are Pipedream’s built-in key-value store.
Data stores are useful for:
* Storing and retrieving data at a specific key
* Setting automatic expiration times for temporary data (TTL)
* Counting or summing values over time
* Retrieving JSON-serializable data across workflow executions
* Caching and rate limiting
* And any other case where you’d use a key-value store
You can connect to the same data store across workflows, so they’re also great for sharing state across different services.
You can use pre-built, no-code actions to store, update, and clear data, or interact with data stores programmatically in [Node.js](/docs/workflows/building-workflows/code/nodejs/using-data-stores/) or [Python](/docs/workflows/building-workflows/code/python/using-data-stores/).
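As a sketch of the counting use case, here's the read-increment-write pattern against a minimal key-value stand-in. A plain `Map` mimics the `get`/`set` interface here — in a real workflow you'd use an actual data store, per the Node.js and Python docs linked above:

```javascript
// Read-increment-write: the core pattern for counting values over time.
// `db` is a plain Map standing in for a data store's get/set interface.
const db = new Map();

async function incrementCounter(db, key) {
  const current = (await db.get(key)) ?? 0;
  await db.set(key, current + 1);
  return current + 1;
}
```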
## Using pre-built Data Store actions
Pipedream provides several pre-built actions to set, get, delete, and perform other operations with data stores.
### Inserting data
To insert data into a data store:
1. Add a new step to your workflow.
2. Search for the **Data Stores** app and select it.
3. Select the **Add or update a single record** pre-built action.
Configure the action:
1. **Select or create a Data Store** — create a new data store or choose an existing data store.
2. **Key** - the unique ID for this data that you’ll use for lookup later
3. **Value** - The data to store at the specified `key`
4. **Time to Live (TTL)** - (Optional) The number of seconds until this record expires and is automatically deleted. Leave blank for records that should not expire.
For example, to store the timestamp when the workflow was initially triggered, set the **Key** to **Triggered At** and the **Value** to `{{steps.trigger.context.ts}}`.
The **Key** must evaluate to a string. You can pass a static string, reference [exports](/docs/workflows/#step-exports) from a previous step, or use [any valid expression](/docs/workflows/building-workflows/using-props/#entering-expressions).
Need to store multiple records in one action? Use the **Add or update multiple records** action instead.
### Retrieving Data
The **Get record** action will retrieve the latest value of a data point in one of your data stores.
1. Add a new step to your workflow.
2. Search for the **Data Stores** app and select it.
3. Select the **Get record** pre-built action.
Configure the action:
1. **Select or create a Data Store** — create a new data store or choose an existing data store.
2. **Key** - the unique ID for this data that you’ll use for lookup later
3. **Create new record if key is not found** - if the specified key isn’t found, you can create a new record
4. **Value** - The data to store at the specified `key`
### Setting or updating record expiration (TTL)
You can set automatic expiration times for records using the **Update record expiration** action:
1. Add a new step to your workflow.
2. Search for the **Data Stores** app and select it.
3. Select the **Update record expiration** pre-built action.
Configure the action:
1. **Select a Data Store** - select the data store containing the record to modify
2. **Key** - the key for the record you want to update the expiration for
3. **Expiration Type** - choose from preset expiration times (1 hour, 1 day, 1 week, etc.) or select “Custom value” to enter a specific time in seconds
4. **Custom TTL (seconds)** - (only if “Custom value” is selected) enter the number of seconds until the record expires
To remove expiration from a record, select “No expiration” as the expiration type.
### Deleting Data
To delete a single record from your data store, use the **Delete a single record** action in a step:
Then configure the action:
1. **Select a Data Store** - select the data store that contains the record to be deleted
2. **Key** - the key that identifies the individual record
For example, you can delete the data at the **Triggered At** key that we’ve created in the steps above:
Deleting a record does not delete the entire data store. [To delete an entire data store, use the Pipedream Data Stores Dashboard](/docs/workflows/data-management/data-stores/#deleting-data-stores).
## Managing data stores
You can view the contents of your data stores at any time in the [Pipedream Data Stores dashboard](https://pipedream.com/data-stores/). You can also add, edit, or delete data store records manually from this view.
### Editing data store values manually
1. Select the data store
2. Click the pencil icon on the far right of the record you want to edit. This will open a text box that will allow you to edit the contents of the value. When you’re finished with your edits, save by clicking the checkmark icon.
### Deleting data stores
You can delete a data store from this dashboard as well. On the far right in the data store row, click the trash can icon.
**Deleting a data store is irreversible**.
If the **Delete** option is greyed out and unclickable, you have workflows using the data store in a step. Click the **>** to the left of the data store’s name to expand the linked workflows.
Then remove the data store from any linked steps.
## Using data stores in code steps
Refer to the [Node.js](/docs/workflows/building-workflows/code/nodejs/using-data-stores/) and [Python](/docs/workflows/building-workflows/code/python/using-data-stores/) data store docs to learn how to use data stores in code steps. You can get, set, delete and perform any other data store operations in code. You cannot use data stores in [Bash](/docs/workflows/building-workflows/code/bash/) or [Go](/docs/workflows/building-workflows/code/go/) code steps.
## Compression
Data saved in data stores is [Brotli-compressed](https://github.com/google/brotli), minimizing storage. The total compression ratio depends on the data being compressed. To test this on your own data, run it through a package that supports Brotli compression and measure the size of the data before and after.
## Data store limits
Depending on your plan, Pipedream sets limits on:
1. The total number of data stores
2. The total number of keys across all data stores
3. The total storage used across all data stores, [after compression](/docs/workflows/data-management/data-stores/#compression)
You’ll find your workspace’s limits in the **Data Stores** section of the usage dashboard in the bottom-left of [https://pipedream.com](https://pipedream.com).
## Atomic operations
Data store operations are not atomic or transactional, which can lead to race conditions. To ensure atomic operations, be sure to limit access to a data store key to a [single workflow with a single worker](/docs/workflows/building-workflows/settings/concurrency-and-throttling/) or use a service that supports atomic operations from among our [integrated apps](https://pipedream.com/apps).
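To see why read-modify-write sequences are unsafe without a single worker, consider this simulated lost update. The in-memory class below is an illustrative stand-in for a data store, not Pipedream’s API:

```javascript
// An in-memory stand-in for a data store (illustrative only)
class FakeStore {
  constructor() {
    this.data = new Map();
  }
  async get(key) {
    return this.data.get(key);
  }
  async set(key, value) {
    this.data.set(key, value);
  }
}

// Classic read-modify-write: not atomic, so concurrent runs can lose updates
async function increment(store, key) {
  const current = (await store.get(key)) ?? 0; // read
  await new Promise((r) => setTimeout(r, 10)); // latency between read and write
  await store.set(key, current + 1);           // write
}

(async () => {
  const store = new FakeStore();
  // Two "workflow executions" run concurrently
  await Promise.all([increment(store, "count"), increment(store, "count")]);
  console.log(await store.get("count")); // 1, not 2: one increment was lost
})();
```

Both executions read the same starting value before either writes, so one increment overwrites the other. Limiting the workflow to a single worker serializes these operations.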
## Supported data types
Data stores can hold any JSON-serializable data within the storage limits. Supported data types include:
* Strings
* Objects
* Arrays
* Dates
* Integers
* Floats
But you cannot serialize functions, classes, sets, maps, or other complex objects.
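A quick way to check whether a value will survive storage is to round-trip it through `JSON.stringify`. Note that Dates are stored as ISO strings, while Sets and Maps silently collapse to empty objects and functions are dropped:

```javascript
// JSON-serializable values round-trip cleanly
const ok = { name: "Leia", tags: ["rebel", "general"], count: 2 };
console.log(JSON.parse(JSON.stringify(ok)).count); // 2

// Dates serialize to ISO strings: you get a string back, not a Date
console.log(JSON.stringify(new Date(0))); // "1970-01-01T00:00:00.000Z"

// Sets and Maps collapse to {}, and function properties are dropped entirely
const bad = { s: new Set([1, 2]), m: new Map([["a", 1]]), fn: () => 1 };
console.log(JSON.stringify(bad)); // {"s":{},"m":{}}
```

If you need to store a Set or Map, convert it to an array first (e.g. `[...mySet]` or `Object.fromEntries(myMap)`).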
## Exporting data to an external service
In order to stay within the [data store limits](/docs/workflows/data-management/data-stores/#data-store-limits), you may need to export the data in your data store to an external service.
The following Node.js example action will export the data in chunks via an HTTP POST request. You may need to adapt the code to your needs. Click on [this link](https://pipedream.com/new?h=tch_egfAMv) to create a copy of the workflow in your workspace.
If the data contained in each key is large, consider lowering the `chunkSize` value.
* Adjust your [workflow memory and timeout settings](/docs/workflows/building-workflows/settings/) according to the size of the data in your data store. Set the memory at 512 MB and timeout to 60 seconds and adjust higher if needed.
* Monitor the exports of this step after each execution for any potential errors preventing a full export. Run the step as many times as needed until all your data is exported.
This action deletes the keys that were successfully exported. It is advisable to first run a test without deleting the keys. In case of any unforeseen errors, your data will still be safe.
```javascript
import { axios } from "@pipedream/platform";
export default defineComponent({
props: {
dataStore: {
type: "data_store",
},
chunkSize: {
type: "integer",
label: "Chunk Size",
description: "The number of items to export in one request",
default: 100,
},
shouldDeleteKeys: {
type: "boolean",
label: "Delete keys after export",
description: "Whether the data store keys will be deleted after export",
default: true,
},
},
methods: {
async *chunkAsyncIterator(asyncIterator, chunkSize) {
let chunk = [];
for await (const item of asyncIterator) {
chunk.push(item);
if (chunk.length === chunkSize) {
yield chunk;
chunk = [];
}
}
if (chunk.length > 0) {
yield chunk;
}
},
},
async run({ steps, $ }) {
const iterator = this.chunkAsyncIterator(this.dataStore, this.chunkSize);
for await (const chunk of iterator) {
try {
// export data to external service
await axios($, {
url: "https://external_service.com",
method: "POST",
data: chunk,
// may need to add authentication
});
// delete exported keys and values
if (this.shouldDeleteKeys) {
await Promise.all(chunk.map(([key]) => this.dataStore.delete(key)));
}
console.log(
`number of remaining keys: ${(await this.dataStore.keys()).length}`
);
} catch (e) {
// an error occurred, don't delete keys
console.log(`error exporting data: ${e}`);
}
}
},
});
```
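The `chunkAsyncIterator` helper above works with any async iterable, so you can test the batching logic outside Pipedream. Here’s a standalone sketch with an invented generator standing in for the data store’s `[key, value]` entries:

```javascript
// Same chunking helper as in the action above
async function* chunkAsyncIterator(asyncIterator, chunkSize) {
  let chunk = [];
  for await (const item of asyncIterator) {
    chunk.push(item);
    if (chunk.length === chunkSize) {
      yield chunk;
      chunk = [];
    }
  }
  if (chunk.length > 0) yield chunk;
}

// Simulate a data store's [key, value] entries
async function* fakeEntries() {
  for (let i = 0; i < 7; i++) yield [`key-${i}`, { n: i }];
}

(async () => {
  for await (const chunk of chunkAsyncIterator(fakeEntries(), 3)) {
    console.log(chunk.length); // 3, 3, then 1
  }
})();
```

Seven entries with a chunk size of 3 yield two full batches and one final partial batch, which is exactly how the export action paginates its POST requests.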
# Connecting To Databases
Source: https://pipedream.com/docs/workflows/data-management/databases
Connecting to a database is essential for developing production workflows. Whether you’re storing application data, querying user information, or analyzing event logs, most workflows and serverless functions require querying data at some point.
Pipedream workflows run in the AWS `us-east-1` network, sending requests from standard AWS IP ranges.
## Connecting to Restricted Databases
**Unless your database is publicly accessible, you’ll likely need to add specific IPs to its allow-list.** To do this, you can configure your database connection to use either a shared or dedicated static IP address from Pipedream:
### Create a Dedicated Static IP for Outbound Traffic
* [Virtual Private Clouds (VPCs)](/docs/workflows/vpc/) in Pipedream let you deploy any workflow to a private network and is the most secure and recommended approach to using a static IP.
* Once configured, the VPC will give you a dedicated egress IP that’s unique to your workspace, and is available to any workflow within your workspace.
### Send Requests from a Shared Static IP
* When configuring your database connection as a [connected account](/docs/apps/connected-accounts/) to Pipedream, you can choose to route network requests through a static IP block for [any app that’s supported by Pipedream’s SQL Proxy](/docs/workflows/data-management/databases/#supported-databases)
* Pipedream’s SQL Proxy routes requests to your database from the IP block below.
#### Supported Databases
Pipedream’s SQL Proxy, which enables the shared static IP, currently supports [MySQL](https://pipedream.com/apps/mysql), [PostgreSQL](https://pipedream.com/apps/postgresql), and [Snowflake](https://pipedream.com/apps/snowflake). Please let us know if you’d like to see support for other database types.
#### Enabling the Shared Static IP
Connect your account for one of the [supported database apps](/docs/workflows/data-management/databases/#supported-databases) and set **Use Shared Static IP** to **TRUE**, then click **Test connection** to ensure Pipedream can successfully connect to your database.
#### Shared Static IP Block
Add the following IP block to your database allow-list:
```
44.223.89.56/29
```
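For reference, a `/29` covers eight addresses (here `44.223.89.56` through `44.223.89.63`). If you want to verify that an observed egress address falls inside the block, a small check like the following works. This is an illustrative helper, not part of any Pipedream SDK:

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer
function ipToInt(ip) {
  return ip.split(".").reduce((acc, oct) => (acc << 8) + Number(oct), 0) >>> 0;
}

// Check whether an address falls inside a CIDR block
function inCidr(ip, cidr) {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

console.log(inCidr("44.223.89.58", "44.223.89.56/29")); // true
console.log(inCidr("44.223.89.64", "44.223.89.56/29")); // false
```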
## FAQ
### What’s the difference between using a shared static IP with the SQL Proxy vs a dedicated IP using a VPC?
Both the SQL Proxy and VPCs enable secure database connections from a static IP.
* VPCs offer enhanced isolation and security by providing a **dedicated** static IP for workflows within your workspace
* The SQL proxy routes requests to your database connections through a set of **shared** static IPs
# Working With SQL
Source: https://pipedream.com/docs/workflows/data-management/databases/working-with-sql
Pipedream makes it easy to interact with SQL databases within your workflows. You can securely connect to your database and use either pre-built no-code triggers and actions to interact with your database, or execute custom SQL queries.
## SQL Editor
The built-in SQL editor provides linting and auto-complete features typical of modern SQL editors.
## Schema Explorer
When querying a database, you need to understand the schema of the tables you’re working with. The schema explorer provides a visual interface to explore the tables in your database, view their columns, and understand the relationships between them.
* Once you connect your account with one of the [supported database apps](/docs/workflows/data-management/databases/working-with-sql/#supported-databases), we automatically fetch and display the details of the database schema below
* You can **view the columns of a table**, their data types, and relationships between tables
* You can also **search and filter** the set of tables that are listed in your schema based on table or column name
## Prepared Statements
Prepared statements let you safely execute SQL queries with dynamic inputs that are automatically defined as parameters, in order to help prevent SQL injection attacks.
To reference dynamic data in a SQL query, simply use the standard `{{ }}` notation just like any other code step in Pipedream. For example,
```sql
select *
from products
where name = {{steps.get_product_info.$return_value.name}}
and created_at > {{steps.get_product_info.$return_value.created_at}}
```
**Prepared statement:**
**Computed statement:**
When you include step references in your SQL query, Pipedream automatically converts your query to a prepared statement using placeholders with an array of params.
Below your query input, you can toggle between the computed and prepared statements:
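Conceptually, the conversion works like this: each `{{ }}` reference is pulled out into an ordered params array and replaced with a positional placeholder. The sketch below illustrates the idea only; it is not Pipedream’s actual implementation, and the placeholder syntax varies by database (`$1` for PostgreSQL, `?` for MySQL):

```javascript
// Turn a templated query into a prepared statement: each {{ }} reference
// becomes a positional placeholder, and its resolved value joins the params array.
function toPreparedStatement(query, resolve) {
  const params = [];
  const sql = query.replace(/\{\{(.+?)\}\}/g, (_, ref) => {
    params.push(resolve(ref.trim()));
    return `$${params.length}`;
  });
  return { sql, params };
}

// Hypothetical step exports used to resolve the references
const stepExports = { "steps.get_product_info.$return_value.name": "X-Wing" };
const { sql, params } = toPreparedStatement(
  "select * from products where name = {{steps.get_product_info.$return_value.name}}",
  (ref) => stepExports[ref]
);
console.log(sql);    // select * from products where name = $1
console.log(params); // [ 'X-Wing' ]
```

Because the dynamic value travels in the params array rather than being spliced into the SQL text, a malicious input like `'; drop table products; --` is treated as data, not as part of the query.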
## Getting Started
* From the step selector in the builder, select the **Query a Database** action for the [relevant database app](/docs/workflows/data-management/databases/working-with-sql/#supported-databases)
* If you already have a connected account, select it from the dropdown. Otherwise, click **Connect Account** to configure the database connection.
* Follow the prompts to connect your account, then click **Test connection** to ensure Pipedream can successfully connect to your database
* Once you’ve successfully connected your account, you can explore the [database schema](/docs/workflows/data-management/databases/working-with-sql/#schema-explorer) to understand the tables and columns in your database
* Write your SQL query in the editor — read more about [prepared statements](/docs/workflows/data-management/databases/working-with-sql/#prepared-statements) above to reference dynamic data in your query
### Supported Databases
The [SQL editor](/docs/workflows/data-management/databases/working-with-sql/#sql-editor), [schema explorer](/docs/workflows/data-management/databases/working-with-sql/#schema-explorer), and support for [prepared statements](/docs/workflows/data-management/databases/working-with-sql/#prepared-statements) are currently supported for these database apps:
* [MySQL](https://pipedream.com/apps/mysql)
* [PostgreSQL](https://pipedream.com/apps/postgresql)
* [Snowflake](https://pipedream.com/apps/snowflake)
Need to query a different database type? Let us know!
# Destinations
Source: https://pipedream.com/docs/workflows/data-management/destinations
**Destinations**, like [actions](/docs/components/contributing/#actions), abstract the delivery and connection logic required to send events to services like Amazon S3, or targets like HTTP and email.
However, Destinations differ from actions in two ways:
* Events are delivered to the Destinations asynchronously, after your workflow completes. This means you don’t wait for network I/O (e.g. for HTTP requests or connection overhead for data warehouses) within your workflow code, so you can process more events faster.
* In the case of data stores like S3, you typically don’t want to send every event on its own. This can be costly and carries little benefit. Instead, you typically want to batch a collection of events together, sending the batch at some frequency. Destinations handle that batching for relevant services.
The docs below discuss features common to all Destinations. See the [docs for a given destination](/docs/workflows/data-management/destinations/#available-destinations) for information specific to those destinations.
## Available Destinations
* [HTTP](/docs/workflows/data-management/destinations/http/)
* [Email](/docs/workflows/data-management/destinations/email/)
* [S3](/docs/workflows/data-management/destinations/s3/)
* [SSE](/docs/workflows/data-management/destinations/sse/)
* [Emit to another listener](/docs/workflows/data-management/destinations/emit/)
## Using destinations
### Using destinations in workflows
You can send data to Destinations in [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/), too, using `$.send` functions.
`$.send` is an object provided by Pipedream that exposes destination-specific functions like `$.send.http()`, `$.send.s3()`, and more. This allows you to send data to destinations programmatically, if you need more control than the default actions provide.
Let’s use `$.send.http()` to send an HTTP POST request like we did in the Action example above. [Add a new action](/docs/workflows/building-workflows/actions/), then search for “**Run custom code**”:
Create a new HTTP endpoint URL (try creating a new Pipedream workflow and adding an HTTP trigger), and add the code below to your code step, with the URL you created:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.http({
method: "POST",
url: "[YOUR URL HERE]",
data: {
name: "Luke Skywalker",
},
});
}
})
```
See the docs for the [HTTP destination](/docs/workflows/data-management/destinations/http/) to learn more about all the options you can pass to the `$.send.http()` function.
Again, it’s important to remember that **Destination delivery is asynchronous**. If you iterate over an array of values and send an HTTP request for each:
```javascript
export default defineComponent({
async run({ steps, $ }) {
const names = ["Luke", "Han", "Leia", "Obi Wan"];
for (const name of names) {
$.send.http({
method: "POST",
url: "[YOUR URL HERE]",
data: {
name,
},
});
}
}
})
```
you won’t have to `await` the execution of the HTTP requests in your workflow. We’ll collect every `$.send.http()` call and defer those HTTP requests, sending them after your workflow finishes.
### Using destinations in actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can deliver data to destinations, too. The `$` object passed to an action’s `run` method exposes the same destination-specific `$.send` functions available in workflow code steps:
```javascript
export default {
name: "Action Demo",
key: "action_demo",
version: "0.0.1",
type: "action",
async run({ $ }) {
$.send.http({
method: "POST",
url: "[YOUR URL HERE]",
data: {
name: "Luke Skywalker",
},
});
}
}
```
[See the component action API docs](/docs/components/contributing/api/#actions) for more details.
## Asynchronous Delivery
Events are delivered to destinations *asynchronously* — that is, separate from the execution of your workflow. **This means you’re not waiting for network or connection I/O in the middle of your function, which can be costly**.
Some destination payloads, like HTTP, are delivered within seconds. For other destinations, like S3 and SQL, we collect individual events into a batch and send the batch to the destination. See the [docs for a specific destination](/docs/workflows/data-management/destinations/#available-destinations) for the relevant batch delivery frequency.
# Email
Source: https://pipedream.com/docs/workflows/data-management/destinations/email
The Email Destination allows you to send an email to *yourself* — the email address tied to the account you signed up with — at any step of a workflow.
You can use this to email yourself when you receive a specific event, for example when a user signs up on your app. You can send yourself an email when a cron job finishes running, or when a job fails. Anywhere you need an email notification, you can use the Email Destination!
## Adding an Email Destination
### Adding an Email Action
1. Add a new step to your workflow
2. Select the **Send Yourself an Email** Action. You can modify the **Subject** and the message (either **Plain Text** or **HTML**) however you want.
### Using `$.send.email` in workflows
You can send data to an Email Destination in [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/), too, using the `$.send.email()` function. **This allows you to send emails to yourself programmatically, if you need more control than actions provide**.
`$.send.email()` takes the same parameters as the corresponding action:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.email({
subject: "Your subject",
text: "Plain text email body",
html: "HTML email body"
});
}
});
```
The `html` property is optional. If you include both the `text` and `html` properties, email clients that support HTML will prefer the HTML version over the plaintext version.
Like with any `$.send` function, you can use `$.send.email()` conditionally, within a loop, or anywhere you’d use a function normally in Node.js.
### Using `$.send.email` in component actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can deliver data to an email destination using `$.send.email`.
`$.send.email` functions the same as [`$.send.email` in workflow code steps](/docs/workflows/data-management/destinations/email/#using-sendemail-in-workflows):
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.email({
subject: "Your subject",
text: "Plain text email body",
html: "HTML email body"
});
}
})
```
## Delivery details
All emails come from **[notifications@pipedream.com](mailto:notifications@pipedream.com)**.
# Emit Events
Source: https://pipedream.com/docs/workflows/data-management/destinations/emit
Like [event sources](/docs/workflows/building-workflows/triggers/), workflows can emit events. These events can trigger other workflows, or be consumed using Pipedream’s [REST API](/docs/rest-api/#get-workflow-emits).
## Using `$.send.emit()` in workflows
You can emit arbitrary events from any [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/) using `$.send.emit()`.
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.emit({
name: "Yoda",
});
}
});
```
`$.send.emit()` accepts an object with the following properties:
```javascript
$.send.emit(
event, // An object that contains the event you'd like to emit
channel, // Optional, a string specifying the channel
);
```
## Emitting events to channels
By default, events are emitted to the default channel. You can optionally emit events to a different channel, and listening sources or workflows can subscribe to events on this channel, running the source or workflow only on events emitted to that channel.
Pass the channel as the second argument to `$.send.emit()`:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.emit(
{
name: "Yoda",
},
'channel_name'
);
}
});
```
## Using `$.send.emit()` in component actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can emit data using `$.send.emit()`.
`$.send.emit()` functions the same as [`$.send.emit()` in workflow code steps](/docs/workflows/data-management/destinations/emit/#using-sendemit-in-workflows):
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.emit({
name: "Yoda",
});
}
})
```
**Destination delivery is asynchronous**: emits are sent after your workflow finishes.
You can call `$.send.emit()` multiple times within a workflow, for example: to iterate over an array of values and emit an event for each.
```javascript
export default defineComponent({
async run({ steps, $ }) {
const names = ["Luke", "Han", "Leia", "Obi Wan"];
for (const name of names) {
$.send.emit({
name,
});
}
}
});
```
## Trigger a workflow from emitted events
We call the events you emit from a workflow **emitted events**. Sometimes, you’ll want emitted events to trigger another workflow. This can be helpful when:
* You process events from different workflows in the same way. For example, you want to log events from many workflows to Amazon S3 or a logging service. You can write one workflow that handles logging, then `$.send.emit()` events from other workflows that are consumed by the single, logging workflow. This helps remove duplicate logic from the other workflows.
* Your workflow is complex and you want to separate it into multiple workflows to group logical functions together. You can `$.send.emit()` events from one workflow to another to chain the workflows together.
Here’s how to configure a workflow to listen for emitted events.
1. Currently, you can’t select emitted events as a workflow trigger from the Pipedream UI. We’ll show you how add the trigger via API. First, pick an existing workflow where you’d like to receive emitted events. **If you want to start with a [new workflow](https://pipedream.com/new), just select the HTTP / Webhook trigger**.
2. This workflow is called the **listener**. The workflow where you’ll use `$.send.emit()` is called the **emitter**. If you haven’t created the emitter workflow yet, [do that now](https://pipedream.com/new).
3. Get the workflow IDs of both the listener and emitter workflows. **You’ll find the workflow ID in the workflow’s URL in your browser bar — it’s the `p_abc123` in `https://pipedream.com/@username/p_abc123/`**.
4. You can use the Pipedream REST API to configure the listener to receive events from the emitter. We call this [creating a subscription](/docs/rest-api/#listen-for-events-from-another-source-or-workflow). If your listener’s ID is `p_abc123` and your emitter’s ID is `p_def456`, you can run the following command to create this subscription:
```sh
curl "https://api.pipedream.com/v1/subscriptions?emitter_id=p_def456&listener_id=p_abc123" \
  -X POST \
  -H "Authorization: Bearer <api_key>" \
  -H "Content-Type: application/json"
```
5. Run your emitter workflow, emitting an event using `$.send.emit()`:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.emit({
name: "Yoda",
});
}
});
```
This should trigger your listener, and you should see the same event in [the event inspector](/docs/workflows/building-workflows/inspect/#the-inspector).
**Note**: Please upvote [this issue](https://github.com/PipedreamHQ/pipedream/issues/682) to see support for *adding* emitted events as a workflow trigger in the UI.
## Consuming emitted events via REST API
`$.send.emit()` can emit any data you’d like. You can retrieve that data using Pipedream’s REST API endpoint for [retrieving emitted events](/docs/rest-api/#get-workflow-emits).
This can be helpful when you want to process workflow results asynchronously: save them with `$.send.emit()`, then retrieve them in batch via the REST API when you need them.
## Emit logs / troubleshooting
Below your code step, you’ll see the data that was sent in the emit. If you ran `$.send.emit()` multiple times within the same code step, you’ll see the data that was emitted for each.
# HTTP
Source: https://pipedream.com/docs/workflows/data-management/destinations/http
HTTP Destinations allow you to send data to another HTTP endpoint URL outside of Pipedream. This can be an endpoint you own and operate, or a URL tied to a service you use (for example, a [Slack Incoming Webhook](https://api.slack.com/incoming-webhooks)).
## Using `$.send.http` in workflows
You can send HTTP requests in [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/) using `$.send.http()`.
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.http({
method: "POST",
url: "[YOUR URL HERE]",
data: {
name: "Luke Skywalker",
},
});
}
});
```
`$.send.http()` accepts an object with all of the following properties:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.http({
method, // Required, HTTP method, a string, e.g. POST, GET
url, // Required, the URL to send the HTTP request to
data, // HTTP payload
headers, // An object containing custom headers, e.g. { "Content-Type": "application/json" }
params, // An object containing query string parameters as key-value pairs
auth, // An object that contains a username and password property, for HTTP basic auth
});
}
});
```
**Destination delivery is asynchronous**: the HTTP requests are sent after your workflow finishes. This means **you cannot write code that operates on the HTTP response**. These HTTP requests **do not** count against your workflow’s compute time.
If you iterate over an array of values and send an HTTP request for each:
```javascript
export default defineComponent({
async run({ steps, $ }) {
const names = ["Luke", "Han", "Leia", "Obi Wan"];
names.forEach((name) => {
$.send.http({
method: "POST",
url: "[YOUR URL HERE]",
data: {
name,
},
});
});
}
});
```
you won’t have to `await` the execution of the HTTP requests in your workflow. We’ll collect every `$.send.http()` call and defer those HTTP requests, sending them after your workflow finishes.
## Using `$.send.http` in component actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can deliver data to an HTTP destination using `$.send.http`.
`$.send.http` functions the same as [`$.send.http` in workflow code steps](/docs/workflows/data-management/destinations/http/#using-sendhttp-in-workflows):
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.http({
method: "GET",
url: "https://example.com"
})
}
});
```
## HTTP Destination delivery
HTTP Destination delivery is handled asynchronously, separate from the execution of a workflow. However, we deliver the specified payload to HTTP destinations for every event sent to your workflow.
Generally, this means it should only take a few seconds for us to send the event to the destination you specify. In some cases, delivery will take longer.
The time it takes to make HTTP requests sent with `$.send.http()` does not count against your workflow quota.
## HTTP request and response logs
Below your code step, you’ll see both the data that was sent in the HTTP request, and the HTTP response that was issued. If you issue multiple HTTP requests, we’ll show the request and response data for each.
## What if I need to access the HTTP response in my workflow?
Since HTTP requests sent with `$.send.http()` are sent asynchronously, after your workflow runs, **you cannot access the HTTP response in your workflow**.
If you need to access the HTTP response data in your workflow, [use `axios`](/docs/workflows/building-workflows/code/nodejs/http-requests/) or another HTTP client.
## Timeout
The timeout on HTTP requests sent with `$.send.http()` is currently **5 seconds**. This time includes DNS resolution, connecting to the host, writing the request body, server processing, and reading the response body.
Any requests that exceed 5 seconds will yield a `timeout` error.
## Retries
Currently, Pipedream will not retry any failed request. If your HTTP destination endpoint is down, or returns an error response, we’ll display that response in the destination logs for the relevant step.
## IP addresses for Pipedream HTTP requests
These IP addresses are tied to **requests sent with `$.send.http` only, not other HTTP requests made from workflows**. To whitelist standard HTTP requests from Pipedream workflows, [use VPCs](/docs/workflows/vpc/).
When you make an HTTP request using `$.send.http()`, the traffic will come from one of the following IP addresses:
```
3.208.254.105
3.212.246.173
3.223.179.131
3.227.157.189
3.232.105.5
3.234.187.126
18.235.13.182
34.225.84.31
52.2.233.8
52.23.40.208
52.202.86.9
52.207.145.190
54.86.100.50
54.88.18.81
54.161.28.250
107.22.76.172
```
This list may change over time. If you’ve previously whitelisted these IP addresses and are having trouble sending HTTP requests to your target service, please check to ensure this list matches your firewall rules.
# Amazon S3
Source: https://pipedream.com/docs/workflows/data-management/destinations/s3
[Amazon S3](https://aws.amazon.com/s3/) — the Simple Storage Service — is a common place to dump data for long-term storage on AWS. Pipedream supports delivery to S3 as a first-class Destination.
## Using `$.send.s3` in workflows
You can send data to an S3 Destination in [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/) using `$.send.s3()`.
`$.send.s3()` takes the following parameters:
```javascript
$.send.s3({
  bucket: "your-bucket-here",
  prefix: "your-prefix/",
  payload: steps.trigger.event.body,
});
```
Like with any `$.send` function, you can use `$.send.s3()` conditionally, within a loop, or anywhere you’d use a function normally.
## Using `$.send.s3` in component actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can deliver data to an S3 destination using `$.send.s3`.
`$.send.s3` functions the same as [`$.send.s3` in workflow code steps](/docs/workflows/data-management/destinations/s3/#using-sends3-in-workflows):
```javascript
export default defineComponent({
  async run({ steps, $ }) {
    $.send.s3({
      bucket: "your-bucket-here",
      prefix: "your-prefix/",
      payload: steps.trigger.event.body,
    });
  }
});
```
## S3 Bucket Policy
In order for us to deliver objects to your S3 bucket, you need to modify its [bucket policy](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/add-bucket-policy.html) to allow Pipedream to upload objects.
**Replace `[your bucket name]` with the name of your bucket** near the bottom of the policy.
```json
{
"Version": "2012-10-17",
"Id": "allow-pipedream-limited-access",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::203863770927:role/Pipedream"
},
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:ListBucketMultipartUploads"
],
"Resource": [
"arn:aws:s3:::[your bucket name]",
"arn:aws:s3:::[your bucket name]/*"
]
}
]
}
```
This bucket policy provides the minimum set of permissions necessary for Pipedream to deliver objects to your bucket. We use the [Multipart Upload API](https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html) to upload objects, and require the [relevant permissions](https://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html).
## S3 Destination delivery
S3 Destination delivery is handled asynchronously, separate from the execution of a workflow. **Moreover, events sent to an S3 bucket are batched and delivered once a minute**. For example, if you sent 30 events to an S3 Destination within a particular minute, we would collect all 30 events, delimit them with newlines, and write them to a single S3 object.
In some cases, delivery will take longer than a minute.
## S3 object format
We upload objects using the following format:
```
[PREFIX]/YYYY/MM/DD/HH/YYYY-MM-DD-HH-MM-SS-IDENTIFIER.gz
```
That is — we write objects first to your prefix, then within folders specific to the current date and hour, then upload the object with the same date information in the object, so that it’s easy to tell when it was uploaded by object name alone.
For example, if I were writing data to a prefix of `test/`, I might see an object in S3 at this path:
```
test/2019/05/25/16/2019-05-25-16-14-58-8f25b54462bf6eeac3ee8bde512b6c59654c454356e808167a01c43ebe4ee919.gz
```
As noted above, a given object contains all payloads delivered to an S3 Destination within a specific minute. Multiple events within a given object are newline-delimited.
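Since each object is a gzipped bundle of newline-delimited JSON payloads, parsing one after you download it is straightforward. Here's a minimal sketch using Node's built-in `zlib`; the function name is our own, not part of any Pipedream SDK:

```javascript
import zlib from "zlib";

// Parse a gzipped, newline-delimited JSON object delivered by an S3 Destination.
// Accepts the raw Buffer of the downloaded .gz object, returns an array of events.
function parseS3DestinationObject(gzBuffer) {
  const text = zlib.gunzipSync(gzBuffer).toString("utf8");
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0) // skip the trailing newline
    .map((line) => JSON.parse(line));
}
```

Each element of the returned array is one event payload sent to the Destination during that minute.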
## Limiting S3 Uploads by IP
S3 provides a mechanism to [limit operations only from specific IP addresses](https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-3). If you’d like to apply that filter, uploads using `$.send.s3()` should come from one of the following IP addresses:
```
3.208.254.105
3.212.246.173
3.223.179.131
3.227.157.189
3.232.105.55
3.234.187.126
18.235.13.18
34.225.84.31
52.2.233.85
52.23.40.208
52.202.86.9
52.207.145.190
54.86.100.50
54.88.18.81
54.161.28.250
107.22.76.172
```
This list may change over time. If you’ve previously whitelisted these IP addresses and are having trouble uploading S3 objects, please check to ensure this list matches your firewall rules.
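If you manage firewall rules in code, it can help to keep this allowlist as data and diff it against your rules. A sketch (the constant and function names are our own, and you should verify the list against the current docs before relying on it):

```javascript
// Pipedream's published source IPs for $.send.s3() uploads.
// This list may change over time — check the docs before relying on it.
const PIPEDREAM_S3_IPS = [
  "3.208.254.105", "3.212.246.173", "3.223.179.131", "3.227.157.189",
  "3.232.105.55", "3.234.187.126", "18.235.13.18", "34.225.84.31",
  "52.2.233.85", "52.23.40.208", "52.202.86.9", "52.207.145.190",
  "54.86.100.50", "54.88.18.81", "54.161.28.250", "107.22.76.172",
];

// Return any Pipedream IPs missing from your firewall's allowlist
function missingFromAllowlist(firewallIps) {
  const allowed = new Set(firewallIps);
  return PIPEDREAM_S3_IPS.filter((ip) => !allowed.has(ip));
}
```

Running `missingFromAllowlist` against your current firewall rules returns exactly the IPs you still need to add.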
# Server Sent Events (SSE)
Source: https://pipedream.com/docs/workflows/data-management/destinations/sse
Pipedream supports [Server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) (SSE) as a destination, enabling you to send events from a workflow directly to a client subscribed to the event stream.
## What is SSE?
[Server-sent Events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) (SSE) is a specification that allows servers to send events directly to clients that subscribe to those events, similar to [WebSockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) and related server to client push technologies.
Unlike WebSockets, SSE enables one-way communication from server to client (WebSockets enable bidirectional communication between server and client, allowing you to pass messages back and forth). If you only need a client to subscribe to events from a server and don't require bidirectional communication, SSE is a simple way to make that happen.
## What can I do with the SSE destination?
SSE is typically used by web developers to update a webpage with new events in real-time, without forcing a user to reload a page to fetch new data. If you’d like to update data on a webpage in that manner, you can subscribe to your workflow’s event stream and handle new events as they come in.
Beyond web browsers, any program that’s able to create an [`EventSource` interface](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) can listen for server-sent events delivered from Pipedream. You can run a Node.js script or a Ruby on Rails app that receives server-sent events, for example.
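For example, here's a sketch of a Node.js consumer. It assumes the third-party `eventsource` npm package (`npm install eventsource`), which implements the browser `EventSource` API, and a hypothetical workflow ID; the stream URL format is covered under [Receiving events](/docs/workflows/data-management/destinations/sse/#receiving-events) below.

```javascript
// Build the event stream URL for a workflow ID (e.g. "p_aBcDeF")
function streamUrl(workflowId) {
  return `http://sdk.m.pipedream.net/pipelines/${workflowId}/sse`;
}

// Subscribe to a channel and pass each parsed payload to a handler.
// Uses the third-party "eventsource" package; some versions export
// EventSource as a named export, others as the default, so handle both.
async function subscribe(workflowId, channel, onPayload) {
  const mod = await import("eventsource");
  const EventSource = mod.EventSource ?? mod.default;
  const source = new EventSource(streamUrl(workflowId));
  source.addEventListener(channel, (event) => {
    onPayload(JSON.parse(event.data));
  });
  return source; // call source.close() when done
}
```

`subscribe("p_aBcDeF", "events", console.log)` would then log each payload as it arrives.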
## Sending data to an SSE Destination in workflows
You can send data to an SSE Destination in [Node.js code steps](/docs/workflows/building-workflows/code/nodejs/) using the `$.send.sse()` function.
1. Add a new step to your workflow
2. Select the option to **Run custom code** and choose the Node.js runtime.
3. Add this code to that step:
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.sse({
channel: "events", // Required, corresponds to the event in the SSE spec
payload: { // Required, the event payload
name: "Luke Skywalker"
}
});
}
});
```
**See [this workflow](https://pipedream.com/new?h=tch_mp7f6q)** for an example of how to use `$.send.sse()`.
Send a test event to your workflow, then review the section on [Receiving events](/docs/workflows/data-management/destinations/sse/#receiving-events) to see how you can set up an `EventSource` to retrieve events sent to the SSE Destination.
**Destination delivery is asynchronous**. If you iterate over an array of values and send an SSE for each:
```javascript
export default defineComponent({
async run({ steps, $ }) {
const names = ["Luke", "Han", "Leia", "Obi Wan"];
names.forEach(name => {
$.send.sse({
channel: "names",
payload: {
name
}
});
});
}
});
```
you won’t have to `await` the execution of the SSE Destination requests in your workflow. We’ll collect every `$.send.sse()` call and defer those requests, sending them after your workflow finishes.
## Using `$.send.sse` in component actions
If you’re authoring a [component action](/docs/components/contributing/#actions), you can send events to an SSE destination using `$.send.sse`.
`$.send.sse` functions the same as [`$.send.sse` in workflow code steps](/docs/workflows/data-management/destinations/sse/#sending-data-to-an-sse-destination-in-workflows):
```javascript
export default defineComponent({
async run({ steps, $ }) {
$.send.sse({
channel: "events",
payload: {
name: "Luke Skywalker"
}
});
}
});
```
## Receiving events
Once you’ve sent events to an SSE Destination, you can start receiving a stream of those events in a client by configuring an [`EventSource`](https://developer.mozilla.org/en-US/docs/Web/API/EventSource) that connects to the Pipedream SSE stream.
### Retrieving your workflow’s event stream URL
First, it's important to note that all events sent to an SSE destination within a workflow are sent to an SSE event stream specific to that workflow. The event stream is tied to the workflow's ID, which you can find by examining your workflow's URL in the Pipedream UI. For example, the `p_aBcDeF` in the URL is the workflow ID.
**Note that the `p_` prefix is part of the workflow ID**.
Once you have the workflow ID, you can construct the event source URL for your SSE destination. That URL is of the following format:
```
http://sdk.m.pipedream.net/pipelines/[YOUR WORKFLOW ID]/sse
```
In the example above, the URL of our event stream would be:
```
http://sdk.m.pipedream.net/pipelines/p_aBcDeF/sse
```
You should be able to open that URL in your browser. Most modern browsers support connecting to an event stream directly, and will stream events without any work on your part to help you confirm that the stream is working.
If you’ve already sent events to your SSE destination, you should see those events here! We’ll return the most recent 100 events delivered to the corresponding SSE destination immediately. This allows your client to catch up with events previously sent to the destination. Then, any new events sent to the SSE destination while you’re connected will be delivered to the client.
### Sample code to connect to your event stream
It's easy to set up a simple webpage that uses `console.log()` to print every event from an event stream. You can find many more examples of working with SSE on the web, but this should help you understand the basic concepts.
Create an `index.html` file in a directory on your machine:
**index.html**
```html
<!DOCTYPE html>
<html>
  <head>
    <title>SSE test</title>
  </head>
  <body>
    <script>
      // Connect to your workflow's event stream (replace [YOUR WORKFLOW ID])
      const source = new EventSource(
        "http://sdk.m.pipedream.net/pipelines/[YOUR WORKFLOW ID]/sse"
      );
      // Log events sent to the "events" channel of your SSE Destination
      source.addEventListener("events", (event) => {
        console.log(JSON.parse(event.data));
      });
    </script>
  </body>
</html>
```
**Make sure to add your workflow ID and the name of the channel you specified in your SSE Destination**. Then, open the `index.html` page in your browser. In your browser's developer tools JavaScript console, you should see new events appear as you send them.
Note that the `addEventListener` code will listen specifically for events sent to the **events** `channel` specified in our SSE destination. You can listen for multiple types of events at once by adding multiple event listeners on the client.
**Try triggering more test events from your workflow while this page is open to see how this works end-to-end**.
## `:keepalive` messages
[The SSE spec](https://www.w3.org/TR/2009/WD-eventsource-20090421/#notes) notes that
> Legacy proxy servers are known to, in certain cases, drop HTTP connections after a short timeout. To protect against such proxy servers, authors can include a comment line (one starting with a ’:’ character) every 15 seconds or so.
Roughly every 15 seconds, we’ll send a message with the `:keepalive` comment to keep open SSE connections alive. These comments should be ignored when you’re listening for messages using the `EventSource` interface.
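`EventSource` clients discard comment lines automatically, so there's usually nothing to do. If you parse the raw stream yourself, though, you'll want to skip them; per the spec, any line beginning with `:` is a comment. A minimal sketch:

```javascript
// Per the SSE spec, lines beginning with ":" are comments (e.g. ":keepalive")
// and carry no event data
function isSseComment(line) {
  return line.startsWith(":");
}

// Keep only meaningful lines from a chunk of a raw SSE stream
function stripSseComments(chunk) {
  return chunk.split("\n").filter((line) => !isSseComment(line));
}
```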
# File Stores
Source: https://pipedream.com/docs/workflows/data-management/file-stores
In Preview
File Stores are available in Preview. There may be changes to allowed limits in the future.
If you have any feedback on File Stores, please let us know in our [community](https://pipedream.com/support).
A File Store is a filesystem scoped to a Project. All workflows within the same Project have access to the Project's File Store.
You can interact with these files through the Pipedream Dashboard or programmatically through your Project’s workflows.
Unlike files stored in a workflow's `/tmp` directory, which are subject to deletion between executions, File Stores are separate cloud storage. Files in a File Store can be stored long term and remain accessible to your workflows.
## Managing File Stores from the Dashboard
You can access a File Store by opening the Project and selecting the *File Store* on the left hand navigation menu.
### Uploading files to the File Store
To upload a file, select *New* then select *File*:
Then, in the pop-up, either drag and drop or browse your computer to stage a file for upload:
Once the file(s) are staged for upload, click *Upload* to upload them:
Finally, click *Done* to close the upload pop-up:
You should now see your file is uploaded and available for use within your Project:
### Deleting files from the File Store
You can delete individual files from a File Store by clicking the three dot menu on the far right of the file and selecting *Delete*.
After confirming that you want to delete the file, it will be permanently deleted.
File deletion is permanent
Once a file is deleted, it’s not possible to recover it. Please take care when deleting files from File Stores.
## Managing File Stores from Workflows
Files uploaded to a File Store are accessible by workflows within that same project.
You can access these files programmatically using the `$.files` helper within Node.js code steps.
File Stores are scoped to Projects
Only workflows within the same project as the File Store can access the files. Workflows outside of the project will not be able to access that project’s File Store.
### Listing files in the File Store
The `$.files.dir()` method allows you to list files and directories within the Project’s File Store. By default it will list the files at the root directory.
Here’s an example of how to iterate over the files in the root directory and open them as `File` instances:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// list all contents of the root File Stores directory in this project
const dirs = $.files.dir();
let files = [];
for await(const dir of dirs) {
// if this is a file, let's open it
if(dir.isFile()) {
files.push(dir.path)
}
}
return files
},
})
```
### Opening files
To interact with a file uploaded to the File Store, you’ll first need to open it.
Given there’s a file in the File Store called `example.png`, you can open it using the `$.files.open()` method:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Open the file by its path in the File Store
const file = $.files.open('example.png')
// Log the S3 url to access the file publicly
return await file.toUrl()
},
})
```
Once the file has been opened, you can [read, write, delete the file and more](/docs/workflows/data-management/file-stores/reference/).
### Uploading files to File Stores
You can upload files using Node.js code in your workflows: from URLs, from the `/tmp` directory in your workflows, or directly from streams for memory efficiency.
#### Uploading files from URLs
`File.fromUrl()` can upload a file from a public URL to the File Store.
First open a new file at a specific path in the File Store, and then pass a URL to the `fromUrl` method on that new file:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Upload a file to the File Store by a URL
const file = await $.files.open('pipedream.png').fromUrl('https://res.cloudinary.com/pipedreamin/image/upload/t_logo48x48/v1597038956/docs/HzP2Yhq8_400x400_1_sqhs70.jpg')
// display the uploaded file's URL from the File Store:
console.log(await file.toUrl())
},
})
```
#### Uploading files from the workflow’s `/tmp` directory
`File.fromFile()` can upload a file stored within the workflow’s `/tmp` directory to the File Store.
First open a new file at a specific path in the File Store, and then pass the `/tmp` path to the `fromFile` method on that new file:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Upload a file to the File Store from the local /tmp/ directory
const file = await $.files.open('recording.mp3').fromFile('/tmp/recording.mp3')
// Display the URL to the File Store hosted file
console.log(await file.toUrl())
},
})
```
#### Uploading files using streams
File Stores also support streaming to write large files. `File.createWriteStream()` creates a write stream for the file to upload to. Then you can pair this stream with a download stream from another remote location:
```javascript
import { pipeline } from 'stream/promises';
import got from 'got'
export default defineComponent({
async run({ steps, $ }) {
const writeStream = await $.files.open('logo.png').createWriteStream()
const readStream = got.stream('https://pdrm.co/logo')
await pipeline(readStream, writeStream);
},
})
```
Additionally, you can pass a `ReadableStream` instance directly to a File instance:
```javascript
import got from 'got'
export default defineComponent({
async run({ steps, $ }) {
// Start a new read stream
const readStream = got.stream('https://pdrm.co/logo')
// Populate the file's content from the read stream
await $.files.open("logo.png").fromReadableStream(readStream)
},
})
```
(Recommended) Pass the contentLength if possible
If possible, pass a `contentLength` argument so the File Store can stream the upload and use less memory. Without a `contentLength` argument, the entire file needs to be downloaded to `/tmp` before it can be uploaded to the File Store.
### Downloading files
File Stores live in cloud storage by default, but files can be downloaded to your workflows individually.
#### Downloading files to the workflow’s `/tmp` directory
First open the file by its path in the File Store, and then call the `toFile()` method with a destination path to download it:
```javascript
import fs from 'fs';
export default defineComponent({
async run({ steps, $ }) {
// Download a file from the File Store to the local /tmp/ directory
const file = await $.files.open('recording.mp3').toFile('/tmp/recording.mp3')
// confirm the downloaded copy exists in /tmp and return its size in bytes
return (await fs.promises.stat('/tmp/recording.mp3')).size
},
})
```
Only the `/tmp/` directory is readable and writable
Make sure the path you pass to `toFile(path)` is inside the `/tmp/` directory.
### Passing files between steps
Files can be passed between steps. Pipedream automatically serializes the file as a JSON *description* of the file. When you access that step export in a downstream Node.js code step, it's parsed back into a `File` instance that you can interact with directly.
For example, if you have a file stored at the path `logo.png` within your File Store, then within a Node.js code step you can open it:
```javascript
// "open_file" Node.js code step
export default defineComponent({
async run({ steps, $ }) {
// Return data to use it in future steps
const file = $.files.open('logo.png')
return file
},
})
```
Then in a downstream code step, you can use it via the `steps` path:
```javascript
// "get_file_url" Node.js code step
export default defineComponent({
async run({ steps, $ }) {
// steps.open_file.$return_value is automatically parsed back into a File instance:
return await steps.open_file.$return_value.toUrl()
},
})
```
File descriptions are compatible with other workflow helpers
Files can also be used with `$.flow.suspend()` and `$.flow.delay()`.
#### Handling lists of files
One limitation of the automatic parsing of files between steps is that it doesn't currently handle lists of files.
For example, if you have a step that returns an array of `File` instances:
```javascript
// "open_files" Node.js code step
export default defineComponent({
async run({ steps, $ }) {
// Return data to use it in future steps
const file1 = $.files.open('vue-logo.svg')
const file2 = $.files.open('react-logo.svg')
return [file1, file2]
},
})
```
Then you’ll need to use `$.files.openDescriptor` to parse the JSON definition of the files back into `File` instances:
```typescript
// "parse_files" Node.js code step
export default defineComponent({
async run({ steps, $ }) {
const files = steps.open_files.$return_value.map(object => $.files.openDescriptor(object))
// log the URL to the first File
console.log(await files[0].toUrl());
},
})
```
### Deleting files
You can call `delete()` on the file to delete it from the File Store.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Open the file and delete it
await $.files.open('example.png').delete()
console.log('File deleted.')
},
})
```
Deleting files is irreversible
It’s not possible to restore deleted files. Please take care when deleting files.
## FAQ
### Are there size limits for files within File Stores?
At this time, no. But File Stores are in preview, and limits are subject to change.
### Are helpers available for Python to download, upload and manage files?
At this time, no. Only Node.js includes a helper to interact with the File Store programmatically within workflows.
### Are File Stores generally available?
At this time, File Stores are only available to workspaces on the Advanced plan or above. You can change your plan on the [pricing page](https://pipedream.com/pricing).
# File Stores Node.js Reference
Source: https://pipedream.com/docs/workflows/data-management/file-stores/reference
The File Stores Node.js helper allows you to manage files within Code Steps and Action components.
## `$.files`
The `$.files` helper is the main module for interacting with the Project's File Store. It can instantiate new files, open files from descriptors, and list the contents of the File Store.
### `$.files.open(path)`
*Sync.* Opens a file from the relative `path`. If the file doesn’t exist, a new empty file is created.
### `$.files.openDescriptor(fileDescriptor)`
*Sync.* Creates a new `File` from the JSON friendly description of a file. Useful for recreating a `File` from a step export.
For example, export a `File` as a step export which will render the `File` as JSON:
```javascript
// create_file
// Creates a new Project File and uploads an image to it
export default defineComponent({
async run({ steps, $ }) {
// create the new file and upload the contents to it from a URL
const file = await $.files.open("imgur.png").fromUrl("https://i.imgur.com/TVIPgNq.png")
// return the file as a step export
return file
},
})
```
Then, in a downstream step, recreate the `File` instance from the export-friendly *description*:
```javascript
// download_file
// Opens a file downloaded from a previous step, and saves it.
export default defineComponent({
async run({ steps, $ }) {
// Convert the description of the file back into a File instance
const file = $.files.openDescriptor(steps.create_file.$return_value)
// Download the file to the local /tmp directory
await file.toFile('/tmp/example.png')
console.log("File downloaded to /tmp")
},
})
```
### `$.files.dir(?path)`
*Sync.* Lists the files & directories at the given `path`. By default it will list the files at the root directory.
Here’s an example of how to iterate over the files in the root directory and open them as `File` instances:
```javascript
export default defineComponent({
async run({ steps, $ }) {
// list all contents of the root File Stores directory in this project
const dirs = $.files.dir();
let files = [];
for await(const dir of dirs) {
// if this is a file, let's open it
if(dir.isFile()) {
files.push(await $.files.open(dir.path))
}
}
return files
},
})
```
Each entry yielded by `$.files.dir()` will contain the following properties:
* `isDirectory()` - `true` if this instance is a directory.
* `isFile()` - `true` if this instance is a file.
* `path` - The path to the file.
* `size` - The size of the file in bytes.
* `modifiedAt` - The last modified at timestamp.
## `File`
This class describes an instance of a single file within a File Store.
When using `$.files.open` or `$.files.openDescriptor`, you'll create a new instance of a `File` with helper methods that give you more flexibility to perform programmatic actions with the file.
### `File.toUrl()`
*Async.* Returns a pre-signed GET URL for retrieving the file.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Retrieve the pre-signed GET URL for logo.png
const url = await $.files.open('logo.png').toUrl()
return url
},
})
```
Pre-signed GET URLs are short lived.
URLs returned by `File.toUrl()` expire after 30 minutes.
### `File.toFile(path)`
*Async.* Downloads the file to the local path in the current workflow. If the file doesn’t exist, a new one will be created at the path specified.
Only `/tmp` is writable in workflow environments
Only the `/tmp` directory is writable in your workflow's execution environment, so you must download files to `/tmp`.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Download the file in the File Store to the workflow's /tmp/ directory
await $.files.open('logo.png').toFile("/tmp/logo.png")
},
})
```
### `File.toBuffer()`
*Async.* Downloads the file's contents as a Buffer, which you can read directly or use to create readable or writable streams.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// opens a file at the path "hello.txt" and downloads it as a Buffer
const buffer = await $.files.open('hello.txt').toBuffer()
// Logs the contents of the Buffer as a string
console.log(buffer.toString())
},
})
```
### `File.fromFile(localFilePath, ?contentType)`
*Async.* Uploads a file from a local path under `/tmp`. For example, if `localFilePath` is `/tmp/recording.mp3`, that file is uploaded to the current File Store `File` instance.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Upload a file to the File Store from the local /tmp/ directory
const file = await $.files.open('recording.mp3').fromFile('/tmp/recording.mp3')
console.log(await file.toUrl())
},
})
```
### `File.fromUrl(url)`
*Async.* Accepts a `url` to read from.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Upload a file to the File Store by a URL
const file = await $.files.open('pipedream.png').fromUrl('https://res.cloudinary.com/pipedreamin/image/upload/t_logo48x48/v1597038956/docs/HzP2Yhq8_400x400_1_sqhs70.jpg')
console.log(await file.toUrl())
},
})
```
### `File.createWriteStream(?contentType, ?contentLength)`
*Async.* Creates a write stream to populate the file with.
Pass the content length if possible
The `contentLength` argument is optional, however we do recommend passing it. Otherwise the entire file will need to be written to the local `/tmp` before it can be uploaded to the File store.
```javascript
import { pipeline } from 'stream/promises';
import got from 'got'
export default defineComponent({
async run({ steps, $ }) {
const writeStream = await $.files.open('logo.png').createWriteStream("image/png", 2153)
const readStream = got.stream('https://pdrm.co/logo')
await pipeline(readStream, writeStream);
},
})
```
### `File.fromReadableStream(readableStream, ?contentType, ?contentLength)`
*Async.* Populates a file’s contents from the `ReadableStream`.
Pass the content length if possible
The `contentLength` argument is optional, however we do recommend passing it. Otherwise the entire file will need to be written to the local `/tmp` before it can be uploaded to the File store.
```javascript
import got from 'got'
export default defineComponent({
async run({ steps, $ }) {
// Start a new read stream
const readStream = got.stream('https://pdrm.co/logo')
// Populate the file's content from the read stream
await $.files.open("logo.png").fromReadableStream(readStream, "image/png", 2153)
},
})
```
### `File.delete()`
*Async.* Deletes the Project File.
```javascript
export default defineComponent({
async run({ steps, $ }) {
// Open the Project File and delete it
await $.files.open('example.png').delete()
console.log('File deleted.')
},
})
```
Deleting files is irreversible
It’s not possible to restore deleted files. Please take care when deleting files.
# Custom Domains
Source: https://pipedream.com/docs/workflows/domains
export const ENDPOINT_BASE_URL = '*.m.pipedream.net';
By default, all new [Pipedream HTTP endpoints](/docs/workflows/building-workflows/triggers/#http) are hosted on the **{ENDPOINT_BASE_URL}** domain. But you can configure any domain you want: instead of `https://endpoint.m.pipedream.net`, the endpoint would be available on `https://endpoint.example.com`.
## Configuring a new custom domain
### 1. Choose your domain
You can configure any domain you own to work with Pipedream HTTP endpoints. For example, you can host Pipedream HTTP endpoints on a dedicated subdomain on your core domain, like `*.eng.example.com` or `*.marketing.example.com`. This can be any domain or subdomain you own.
In this example, endpoints would look like:
```
[endpoint_id].eng.example.com
[endpoint_id_1].eng.example.com
...
```
where `[endpoint_id]` is a uniquely-generated hostname specific to your Pipedream HTTP endpoint.
If you own a domain that you want to *completely* redirect to Pipedream, you can also configure `*.example.com` to point to Pipedream. In this example, endpoints would look like:
```
[endpoint_id].example.com
[endpoint_id_1].example.com
...
```
Since all traffic on `*.example.com` points to Pipedream, we can assign hosts on the root domain. This also means that **you cannot use other hosts like [www.example.com](http://www.example.com)** without conflicting with Pipedream endpoints. Choose this option only if you’re serving all traffic from `example.com` from Pipedream.
Before you move on, make sure you have access to manage DNS records for your domain. If you don’t, please coordinate with the team at your company that manages DNS records, and feel free to [reach out to our Support team](https://pipedream.com/support) with any questions.
#### A note on domain wildcards
Note that the records referenced above use the wildcard (`*`) for the host portion of the domain. When you configure DNS records in [step 3](/docs/workflows/domains/#3-add-your-dns-records), this allows you to point all traffic for a specific domain to Pipedream and create any number of Pipedream HTTP endpoints that will work with your domain.
### 2. Reach out to Pipedream Support
Once you’ve chosen your domain and are in an [eligible plan](https://pipedream.com/pricing), [reach out to Pipedream Support](https://pipedream.com/support) and let us know what domain you’d like to configure for your workspace. We’ll configure a TLS/SSL certificate for that domain, and give you two DNS CNAME records to add for that domain in [step 3](/docs/workflows/domains/#3-add-your-dns-records).
### 3. Add your DNS records
Once we configure your domain, we’ll ask you to create two DNS CNAME records:
* [One record to prove ownership of your domain](/docs/workflows/domains/#add-the-cname-validation-record) (a `CNAME` record)
* [Another record to point traffic on your domain to Pipedream](/docs/workflows/domains/#add-the-dns-cname-wildcard-record) (a `CNAME` record)
#### Add the CNAME validation record
Pipedream uses [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/) to create the TLS certificate for your domain. To validate the certificate, you need to add a specific DNS record provided by Certificate Manager. Pipedream will provide the name and value.
For example, if you requested `*.eng.example.com` as your custom domain, Pipedream will provide the details of the record, like in this example:
* **Type**: `CNAME`
* **Name**: `_2kf9s72kjfskjflsdf989234nsd0b.eng.example.com`
* **Value**: `_7ghslkjsdfnc82374kshflasfhlf.vvykbvdtpk.acm-validations.aws.`
* **TTL (seconds)**: 300
Consult the docs for your DNS service for more information on adding CNAME records. Here’s an example configuration using AWS’s Route53 service:
#### Add the DNS CNAME wildcard record
Now you’ll need to add the wildcard record that points all traffic for your domain to Pipedream. Pipedream will also provide the details of this record, like in this example:
* **Type**: `CNAME`
* **Name**: `*.eng.example.com`
* **Value**: `id123.cd.pdrm.net.`
* **TTL (seconds)**: 300
Once you’ve finished adding these DNS records, please **reach out to the Pipedream team**. We’ll validate the records and finalize the configuration for your domain.
### 4. Send a test request to your custom domain
Any traffic to existing **{ENDPOINT_BASE_URL}** endpoints will continue to work uninterrupted.
To confirm traffic to your new domain works, take any Pipedream endpoint URL and replace the **{ENDPOINT_BASE_URL}** with your custom domain. For example, if you configured a custom domain of `eng.example.com` and have an existing endpoint at
```
https://[endpoint_id].m.pipedream.net
```
Try making a test request to
```
https://[endpoint_id].eng.example.com
```
## Security
### How Pipedream manages the TLS/SSL certificate
See our [TLS/SSL security docs](/docs/privacy-and-security/#encryption-of-data-in-transit-tls-ssl-certificates) for more detail on how we create and manage the certificates for custom domains.
### Requests to custom domains are allowed only for your endpoints
Custom domains are mapped directly to customer endpoints. This means no other customer can send requests to their endpoints on *your* custom domain. Requests to `example.com` are restricted specifically to HTTP endpoints in your Pipedream workspace.
# Environment Variables
Source: https://pipedream.com/docs/workflows/environment-variables
Environment variables (env vars) enable you to separate secrets and other static configuration data from your code.
You shouldn’t include API keys or other sensitive data directly in your workflow’s code. By referencing the value of an environment variable instead, your workflow includes a reference to that variable — for example, `process.env.API_KEY` instead of the API key itself.
You can reference env vars and secrets in [workflow code](/docs/workflows/building-workflows/code/) or in the object explorer when passing data to steps, and you can define them either globally for the entire workspace, or scope them to individual projects.
| Scope | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Workspace** | All environment variables are available to all workflows within the workspace. All workspace members can manage workspace-wide variables [in the UI](https://pipedream.com/settings/env-vars). |
| **Project** | Environment variables defined within a project are only accessible to the workflows within that project. Only workspace members who have [access to the project](/docs/projects/access-controls/) can manage project variables. |
## Creating and updating environment variables
* To manage **global** environment variables for the workspace, navigate to **Settings**, then click **Environment Variables**: [https://pipedream.com/settings/env-vars](https://pipedream.com/settings/env-vars)
* To manage environment variables within a project, open the project, then click **Variables** from the project nav on the left
Click **New Variable** to add a new environment variable or secret:
**Configure the required fields**:
| Input field | Description |
| --------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| **Key** | Name of the variable — for example, `CLIENT_ID` |
| **Value** | The value, which can contain any string with a max limit of 64KB |
| **Description** | Optionally add a description of the variable. This is only visible in the UI, and is not accessible within a workflow. |
| **Secret** | New variables default to **secret**. If configured as a secret, the value is never exposed in the UI and cannot be modified. |
To edit an environment variable, click the **Edit** button from the three dots to the right of a specific variable.
* Updates to environment variables will be made available to your workflows as soon as the save operation is complete — typically a few seconds after you click **Save**.
* If you update the value of an environment variable in the UI, your workflow should automatically use that new value where it’s referenced.
* If you delete a variable in the UI, any deployed workflows that reference it will return `undefined`.
## Referencing environment variables in code
You can reference the value of any environment variable using the object [`process.env`](https://nodejs.org/dist/latest-v10.x/docs/api/process.html#process_process_env). This object contains environment variables as key-value pairs.
For example, let’s say you have an environment variable named `API_KEY`. You can reference its value in Node.js using `process.env.API_KEY`:
```javascript
const url = `http://yourapi.com/endpoint/?api_key=${process.env.API_KEY}`;
```
Reference the same environment variable in Python:
```python
import os
print(os.environ["API_KEY"])
```
Variable names are case-sensitive. Use the key name you defined when referencing your variable in `process.env`.
Referencing an environment variable that doesn’t exist returns `undefined` in Node.js. For example, if you reference `process.env.API_KEY` without first defining the `API_KEY` environment variable in the UI, its value will be `undefined`.
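Because a missing variable silently yields `undefined`, it can be useful to fail fast instead of letting that value leak into an API request. Here’s a minimal sketch — the `requireEnv` helper is illustrative, not a Pipedream API:

```javascript
// Illustrative helper: throw immediately if a required env var is missing,
// instead of letting `undefined` flow into an API call downstream
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage in a workflow step (assumes API_KEY is defined in the UI):
// const url = `http://yourapi.com/endpoint/?api_key=${requireEnv("API_KEY")}`;
```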
### Using autocomplete to reference env vars
When referencing env vars directly in code within your Pipedream workflow, you can also take advantage of autocomplete:
Logging the value of any environment variables — for example, using `console.log` — will include that value in the logs associated with the cell. Please keep this in mind and take care not to print the values of sensitive secrets.
Properties of `process.env` will always return `undefined` when referenced outside of the `defineComponent` export.
## Referencing environment variables in actions
[Actions](/docs/components/contributing/#actions) are pre-built code steps that let you provide input in a form, selecting the correct params to send to the action.
You can reference the value of environment variables using `{{process.env.YOUR_ENV_VAR}}`. You’ll see a list of your environment variables in the object explorer when selecting a variable to pass to a step.
[Private components](/docs/components/contributing/#using-components) (actions or triggers) do not have direct access to workspace or project variables the way public components or code steps do. Add a prop specifically for the variable you need. For sensitive data like API keys, [configure the prop as a secret](/docs/components/contributing/api/#props). In your prop configuration, set the value to `{{process.env.YOUR_ENV_VAR}}` to securely reference the environment variable.
## FAQ
### What if I define the same variable key in my workspace env vars and project env vars?
The project-scoped variable will take priority if the same variable key exists at both the workspace and project level. If a workflow *outside* of the relevant project references that variable, it’ll use the value of the environment variable defined for the workspace.
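The shadowing behavior can be modeled with a simple object merge — an illustrative sketch only, not Pipedream’s internals:

```javascript
// Illustrative model of variable resolution (not Pipedream internals):
// project-level definitions shadow workspace-level ones inside the project
const workspaceVars = { API_KEY: "workspace-value", LOG_LEVEL: "info" };
const projectVars = { API_KEY: "project-value" };

// Inside the project, both sets merge, with project values winning
const inProject = { ...workspaceVars, ...projectVars };

// Outside the project, only workspace values apply
const outsideProject = { ...workspaceVars };
```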
### What happens if I share a workflow that references an environment variable?
If you [share a workflow](/docs/workflows/building-workflows/sharing/) that references an environment variable, **only the reference is included, and not the actual value**.
## Limits
* Currently, environment variables are only exposed in Pipedream workflows, [not event sources](https://github.com/PipedreamHQ/pipedream/issues/583).
* The value of any environment variable may be no longer than `64KB`.
* The names of environment variables must start with a letter or underscore.
* Pipedream reserves environment variables that start with `PIPEDREAM_` for internal use. You cannot create an environment variable that begins with that prefix.
# Event History
Source: https://pipedream.com/docs/workflows/event-history
Monitor all workflow events and their stack traces in one centralized view under the [**Event History**](https://pipedream.com/event-history) section in the dashboard.
Within the **Event History**, you’ll be able to filter your events by workflow, execution status, or a specific time range.
Workspace admins are able to view events for all workflows, but members are required to select a workflow since they might not have [access to certain projects](/docs/projects/access-controls/#permissions).
## Filtering Events
The filters at the top of the screen allow you to search all events processed by your workflows.
You can filter by the event’s **Status**, **time of initiation** or by the **Workflow name**.
The filters are scoped to the current [workspace](/docs/workspaces/). If you’re not seeing the events or workflow you’re expecting, try [switching workspaces](/docs/workspaces/#switching-between-workspaces).
### Filtering by status
* The **Status** filter controls which events are shown by their status
* For example, selecting the **Success** status shows all workflow events that executed successfully
#### All failed workflow executions
* You can view all failed workflow executions by applying the **Error** status filter
* This will only display the failed workflow executions in the selected time period
* This view in particular is helpful for identifying trends of errors, or workflows with persistent problems
#### All paused workflow executions
* Workflow executions that are currently in a suspended state from `$.flow.delay` or `$.flow.suspend` will be shown when this filter is selected
If you’re using `setTimeout` or `sleep` in Node.js or Python steps, the event will not be considered **Paused**. Those language-native execution controls leave your workflow in an **Executing** state.
### Within a timeframe
* Filtering by time frame will include workflow events that *began* execution within the defined range
* Using this dropdown, you can select between convenient time ranges, or specify a custom range on the right side
### Filtering by workflow
You can also filter events by a specific workflow. You can search by the workflow’s name in the search bar in the top right.
Alternatively, you can filter by workflow from a specific event: open the menu on the far right, then select **Filter By Workflow**. Only events processed by that workflow will appear.
## Inspecting events
* Clicking on an individual event will open a panel that displays the steps executed, their individual configurations, as well as the overall performance and results of the entire workflow.
* The top of the event history panel will display the overall status of that particular execution, along with any errors.
* If there is an error message, the link at the bottom of the error message will link to the corresponding workflow step that threw the error.
* From here you can easily **Build from event** or **Replay event**
## Bulk actions
You can select multiple events and perform bulk actions on them.
* **Replay**: Replays the selected events. This is useful for example when you have multiple errored events that you want to execute again after fixing a bug in your workflow.
* **Delete**: Deletes the selected events. This may be useful if you have certain events you want to scrub from the event history, or when you’ve successfully replayed events that had originally errored.
When you replay multiple events at once, they’ll be replayed in the order they were originally executed. This means the first event that came in will be replayed first, followed by the second, and so on.
## Limits
The number of events recorded and available for viewing in the Event History depends on your plan. [Please see the pricing page](https://pipedream.com/pricing) for more details.
## FAQ
### Is Event History available on all plans?
Yes, event history is available for all workspace plans, including free plans. However, the length of searchable or viewable history changes depending on your plan. [Please see the pricing page](https://pipedream.com/pricing) for more details.
# GitHub Sync
Source: https://pipedream.com/docs/workflows/git
export const PD_EGRESS_IP_RANGE = '44.223.89.56/29';
When GitHub Sync is enabled on your project, Pipedream will serialize your workflows and synchronize changes to a GitHub repo.
Capabilities include:
* Bi-directional GitHub sync (push and pull changes)
* Edit in development branches
* Track commit and merge history
* Link users to commits
* Merge from Pipedream or create PRs and merge from GitHub
* Edit in Pipedream or use a local editor and synchronize via GitHub (e.g., edit code, find and replace across multiple steps or workflows)
* Organize workflows into projects with support for nested folders
## Getting Started
### Create a new project and enable GitHub Sync
A project may contain one or more workflows and may be further organized using nested folders. Each project may be synchronized to a single GitHub repo.
* Go to `https://pipedream.com/projects`
* Create a new project
* Enter a project name and check the box to **Configure GitHub Sync**
* To use **OAuth**
* Select a connected account, GitHub scope and repo name
* Pipedream will automatically create a new, empty repo in GitHub
* To use **Deploy Keys**
* Create a new repo in GitHub
* Follow the instructions to configure the deploy key
* Test your setup and create a new project
### Create a branch to edit a project
Branches are required to make changes
All changes to resources in a project must be made in a development branch.
Examples of changes include creating, editing, deleting, enabling, disabling and renaming workflows. This also includes changing workflow settings like concurrency, VPC assignment and auto-retries.
To edit a git-backed project you must create a development branch by clicking **Edit > Create Branch**
Next, name the branch and click **Create**:
To exit development mode without merging to production, click **Exit Development Mode**:
Your changes will be saved to the branch so you can revisit them later.
### Merge changes to production
Once you’ve committed your changes, you can deploy your changes by merging them into the `production` branch through the Pipedream UI or GitHub.
When you merge a Git-backed project to production, all modified resources in the project will be deployed. Multiple workflows may be deployed, modified, or deleted in production through a single merge action.
#### Merge via the Pipedream UI
To merge changes to production, click on **Merge to production:**
Pipedream will present a diff between the development branch and the `production` branch. Validate your changes and click **Merge to production** to complete the merge:
#### Create a Pull Request in GitHub
To create a pull request, choose **Open GitHub pull request** from the Git Actions menu in Pipedream, or open one directly in GitHub:
You can also review and merge changes directly from GitHub using the standard pull request process.
Pull request reviews cannot be required
PR reviews cannot be required. That feature is on the roadmap for the Business tier.
### Commit changes
To commit changes without merging to production, select **Commit Changes** from the Git Actions menu:
You can review the diff and enter a commit message:
### Pull changes and resolve conflicts
If remote changes are detected, you’ll be prompted to pull the changes:
Pipedream will attempt to automatically merge changes. If there are conflicts, you’ll be prompted to resolve them manually:
### Move existing workflows to projects
Not available for v1 workflows
Legacy (v1) workflows are not supported in projects.
First, select the workflow(s) you want to move from the [workflows listing page](https://pipedream.com/workflows) and click **Move** in the top action menu:
Then, select the project to move the selected workflows to:
Undeployed changes are automatically assigned a development branch
If any moved workflows have undeployed changes, those changes will be staged in a branch prefixed with `undeployed-changes` (e.g., `undeployed-changes-27361`).
### Use the changelog
The changelog tracks all git activity (for projects with GitHub sync enabled). If you encounter an error merging your project, go to the changelog and explore the log details to help you troubleshoot issues in your workflows:
### Local development
Projects that use GitHub sync may be edited outside of Pipedream. You can edit and commit directly via GitHub’s UI or clone the repo locally and use your preferred editor (e.g., VSCode).
To test external edits in Pipedream:
1. Commit and push local changes to your development branch in GitHub
2. Open the project in Pipedream’s UI and load your development branch
3. Use the Git Actions menu to pull changes from GitHub
## Known Issues
Below is a list of known issues that don’t currently have solutions but are in progress:
* Project branches on Pipedream cannot be deleted.
* If a workflow uses an action that has been deprecated, merging to production will fail.
* Legacy (v1) workflows are not supported in projects.
* Self-hosted GitHub Server instances are not yet supported. [Please contact us for help](https://pipedream.com/support).
* Workflow attachments are not supported.
## GitHub Enterprise Cloud
If your repository is hosted on a GitHub Enterprise account, you can allowlist Pipedream’s IP address range to sync your project changes.
[Follow the directions here](https://docs.github.com/en/enterprise-cloud@latest/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-allowed-ip-addresses-for-your-organization) and add the following IP range:
{PD_EGRESS_IP_RANGE}
GitHub Sync is available on the Business plan
To use this public IP address and connect to GitHub Enterprise Cloud hosted repositories, you’ll need to have a Pipedream Business plan. [View our plans](https://pipedream.com/pricing).
## FAQ
### How are Pipedream workflows synchronized to GitHub?
Pipedream will serialize your project’s workflows and their configuration into a standard YAML format for storage in GitHub.
Then Pipedream will commit your changes to your connected GitHub account.
### Do you have a definition of this YAML?
Not yet, please stay tuned!
### Can I sync multiple workflows to a single GitHub Repository?
Yes, *projects* are synced to a single GitHub repository, which lets you store multiple workflows in one repo for easier organization and management.
### Can I use this feature to develop workflows locally?
Yes, you can use GitHub Sync to develop your workflows from the YAML files checked into your Pipedream-connected GitHub repository.
Pushing changes to the `production` branch will then trigger a deploy of your Pipedream workflows.
### Why am I seeing the error “could not resolve step\[index].uses: component-key\@version” when merging to production?
This error occurs when a workflow references a [private component](/docs/components/contributing/#using-private-actions) without properly prefixing the component key with your workspace name in the `workflow.yaml` configuration file. Pipedream requires this prefix to correctly identify and resolve components specific to your workspace.
For example, if you modified a [registry action](/docs/components/contributing/) and published it privately, the correct component key should be formatted as `@workspacename/component-key@version` (e.g., `@pipedream/github-update-issue@0.1.0`).
To resolve this error:
1. Clone your repository locally and create a development branch.
2. Locate the error in your `workflow.yaml` file where the component key is specified.
3. Add your workspace name prefix to the component key, ensuring it follows the format `@workspacename/component-key@version`.
4. Commit your changes and push them to your repository.
5. Open your project in the Pipedream UI and select your development branch.
6. Click on **Merge to Production** and verify the deployment success in the [Changelog](/docs/workflows/git/#use-the-changelog).
7. If the issue persists, [reach out to Pipedream Support](https://pipedream.com/support) for further assistance.
### Why am I seeing an error about “private auth mismatch” when trying to merge a branch to production?
This error occurs when **both** of the below conditions are met:
1. The referenced workflow is using a connected account that’s not shared with the entire workspace
2. The change was merged from outside the Pipedream UI (via github.com or locally)
Since Pipedream can’t verify that the person who merged the change should have access to use the connected account in a workflow, we block these deploys.
To resolve this error:
1. Make sure all the connected accounts in the project’s workflows are [accessible to the entire workspace](/docs/apps/connected-accounts/#access-control)
2. Re-trigger a sync with Pipedream by making a nominal change to the workflow **from outside the Pipedream UI** (via github.com or locally), then merge that change to production
### Can I sync an existing GitHub Repository with workflows to a new Pipedream Project?
No, at this time it’s not possible because of how resources are connected during the bootstrapping process from the workflow YAML specification. However, this is on our roadmap, [please subscribe to this issue](https://github.com/PipedreamHQ/pipedream/issues/9255) for the latest details.
### Migrating GitHub Repositories
You can migrate a Pipedream project’s GitHub repository to a new repository while preserving history. You may want to do this when moving a repository from a personal GitHub account to an organization account, without affecting the workflows within the Pipedream project.
#### Assumptions
* **Current GitHub Repository**: `previous_github_repo`
* **New GitHub Repository**: `new_github_repo`
* Basic familiarity with git and GitHub
* Access to a local terminal (e.g., Bash, Zsh, PowerShell)
* Necessary permissions to modify both the Pipedream project and associated GitHub repositories
#### Steps
1. **Access Project Settings in Pipedream:**
* Navigate to your Pipedream project.
* Use the dropdown menu on the “Edit” button in the top right corner to access `previous_github_repo` in GitHub.
2. **Clone the Current Repository Locally:**
```bash
git clone previous_github_repo_clone_url
```
3. Reset GitHub Sync in Pipedream:
* In Pipedream, go to your project settings.
* Click on “Reset GitHub Connection”.
4. Set Up New repository connection:
* Configure the project’s GitHub repository to use `new_github_repo`.
5. Clone the new repository locally:
```bash
git clone new_github_repo_clone_url
cd new_github_repo
```
6. Link to the old repository:
```bash
git remote add old_github_repo previous_github_repo_clone_url
git fetch --all
```
7. Prepare for migration:
* Create and switch to a new branch for migration:
```bash
git checkout -b migration
```
* Merge the main branch of `old_github_repo` into migration, allowing for unrelated histories:
```bash
git merge --allow-unrelated-histories old_github_repo/production
# Resolve any conflicts, such as in README.md
git commit
```
8. Finalize the migration:
* Optionally push the `migration` branch to the remote:
```bash
git push --set-upstream origin migration
```
* Switch to the `production` branch and merge:
```bash
git checkout production
git merge --no-ff migration
git push
```
9. Cleanup:
* Remove the connection to the old repository:
```bash
git remote remove old_github_repo
```
* Optionally, you may now safely delete `previous_github_repo` from GitHub.
### How does the `production` branch work?
Anything merged to the `production` branch will be deployed to your production workflows on Pipedream.
From a design perspective, we want to let you manage any branching strategy on your end, since you may be making commits to the repo outside of Pipedream. Once we support managing Pipedream workflows in a monorepo, where you may have other changes, we wanted to use a branch that didn’t conflict with a conventional main branch (like `main` or `master`).
In the future, we also plan to support you changing the default branch name.
# Limits
Source: https://pipedream.com/docs/workflows/limits
export const MAX_WORKFLOW_EXECUTION_LIMIT = '750';
export const FREE_INSPECTOR_EVENT_LIMIT = '7 days of events';
export const TMP_SIZE_LIMIT = '2GB';
export const FUNCTION_PAYLOAD_LIMIT = '6MB';
export const INSPECTOR_EVENT_EXPIRY_DAYS = '365';
export const DAILY_TESTING_LIMIT = '30 minutes';
export const EMAIL_PAYLOAD_SIZE_LIMIT = '30MB';
export const MEMORY_ABSOLUTE_LIMIT = '10GB';
export const MEMORY_LIMIT = '256MB';
export const PAYLOAD_SIZE_LIMIT = '512KB';
Pipedream imposes limits on source and workflow execution, the events you send to Pipedream, and other properties. You’ll receive an error if you encounter these limits. See our [troubleshooting guide](/docs/troubleshooting/) for more information on these specific errors.
Some of these limits apply only on the free tier. For example, Pipedream limits the number of credits and active workflows you can use on the free tier. **On paid tiers, you can use an unlimited number of credits for any amount of execution time (usage charges apply)**.
Other limits apply across the free and paid tiers. Please see the details on each limit below.
**These limits are subject to change at any time**.
## Number of Workflows
The limit of active workflows depends on your current plan. [See our pricing page](https://pipedream.com/pricing) for more details.
## Number of Event Sources
**You can run an unlimited number of event sources**, as long as each operates under the limits below.
## Execution Credits
Free Pipedream accounts have a limit on the number of execution credits. Paid plans are not capped but are subject to additional usage charges (you can manually set a usage cap if you want to manage costs).

You can view your credits usage at the bottom-left of [the Pipedream UI](https://pipedream.com).
You can also see more detailed usage in [Billing and Usage Settings](https://pipedream.com/settings/billing). Here you’ll find your usage for the last 30 days, broken out by day, by resource (e.g. your source / workflow).
### Included Credits Usage Notifications
| Tier | Notifications |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Free tiers | You’ll receive an email when you reach 100% of your usage. |
| Paid tiers | You’ll receive an email at 80% and 100% of your [included credits](/docs/pricing/#included-credits) for your [billing period](/docs/pricing/#billing-period). |
## Daily workflow testing limit
You **do not** use credits testing workflows, but workspaces on the **Free** plan are limited to {DAILY_TESTING_LIMIT} of test runtime per day. If you exceed this limit when testing in the builder, you’ll see a **Runtime Quota Exceeded** error.
## Data stores
Depending on your plan, Pipedream sets limits on:
1. The total number of data stores
2. The total number of keys across all data stores
3. The total storage used across all data stores
You’ll find your workspace’s limits in the **Data Stores** section of usage dashboard in the bottom-left of [the Pipedream UI](https://pipedream.com).
## HTTP Triggers
The following limits apply to [HTTP triggers](/docs/workflows/building-workflows/triggers/#http).
### HTTP Request Body Size
By default, the body of HTTP requests sent to a source or workflow is limited to {PAYLOAD_SIZE_LIMIT}.
Your endpoint will issue a `413 Payload Too Large` status code when the body of your request exceeds {PAYLOAD_SIZE_LIMIT}.
**Pipedream supports two different ways to bypass this limit**. Both of these interfaces support uploading data up to `5TB`, though you may encounter other platform limits.
* You can send large HTTP payloads by passing the `pipedream_upload_body=1` query string or an `x-pd-upload-body: 1` HTTP header in your HTTP request. [Read more here](/docs/workflows/building-workflows/triggers/#sending-large-payloads).
* You can upload multiple large files, like images and videos, using the [large file upload interface](/docs/workflows/building-workflows/triggers/#large-file-support).
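For example, a Node.js client that opts into the large-payload interface might look like this sketch (the endpoint URL is a placeholder for your workflow’s trigger URL):

```javascript
// Sketch: opt into large-payload handling with the x-pd-upload-body header.
// Alternatively, append ?pipedream_upload_body=1 to the trigger URL.
const options = {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-pd-upload-body": "1",
  },
  body: JSON.stringify({ data: "a payload larger than the default limit" }),
};

// Placeholder URL; substitute your workflow's HTTP trigger endpoint:
// await fetch("YOUR_ENDPOINT_URL", options);
```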
### QPS (Queries Per Second)
Generally, the rate of HTTP requests sent to an endpoint is quantified by QPS, or *queries per second*. A query refers to a single HTTP request.
**You can send an average of 10 requests per second to your HTTP trigger**. Any requests that exceed that threshold may trigger rate limiting. If you’re rate limited, we’ll return a `429 Too Many Requests` response. If you control the application sending requests, you should retry the request with [exponential backoff](https://cloud.google.com/storage/docs/exponential-backoff) or a similar technique.
We’ll also accept short bursts of traffic, as long as you remain close to an average of 10 QPS (e.g. sending a batch of 50 requests every 30 seconds should not trigger rate limiting).
**This limit can be raised for paying customers**. To request an increase, [reach out to our Support team](https://pipedream.com/support/) with the HTTP endpoint whose QPS you’d like to increase, with the new, desired limit.
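If you control the client, a retry loop with exponential backoff can look like this sketch (`sendRequest` is a placeholder for whatever HTTP call you make to your trigger URL):

```javascript
// Sketch: retry on 429 responses with exponential backoff plus jitter.
// `sendRequest` is any async function returning a { status } response object.
async function withBackoff(sendRequest, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await sendRequest();
    if (res.status !== 429) return res;
    // Wait 2^attempt * 100ms, plus up to 100ms of jitter, before retrying
    const delayMs = 2 ** attempt * 100 + Math.random() * 100;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error("Rate limited: retries exhausted");
}
```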
## Email Triggers
Currently, most of the [limits that apply to HTTP triggers](/docs/workflows/limits/#http-triggers) also apply to [email triggers](/docs/workflows/building-workflows/triggers/#email).
The only limit that differs between email and HTTP triggers is the payload size: the total size of an email sent to a workflow - its body, headers, and attachments - is limited to {EMAIL_PAYLOAD_SIZE_LIMIT}.
## Memory
By default, workflows run with {MEMORY_LIMIT} of memory. You can modify a workflow’s memory [in your workflow’s Settings](/docs/workflows/building-workflows/settings/#memory), up to {MEMORY_ABSOLUTE_LIMIT}.
Increasing your workflow’s memory gives you a proportional increase in CPU. If your workflow is limited by memory or compute, increasing your workflow’s memory can reduce its overall runtime and make it more performant.
**Pipedream charges credits proportional to your memory configuration**. [Read more here](/docs/pricing/faq/#how-does-workflow-memory-affect-credits).
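As a rough illustration, assuming one credit per 30 seconds of runtime at the base 256MB, with credits scaling linearly with memory (see the pricing docs for the authoritative formula), the math looks like:

```javascript
// Rough illustration only; the pricing docs are the authoritative source.
// Assumes 1 credit per 30s of runtime at 256MB, scaling linearly with memory.
function estimateCredits(runtimeSeconds, memoryMB) {
  const intervals = Math.ceil(runtimeSeconds / 30);
  const memoryMultiplier = Math.ceil(memoryMB / 256);
  return intervals * memoryMultiplier;
}

// e.g. a 45-second run at 512MB: 2 intervals * 2x memory = 4 credits
```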
## Disk
Your code, or a third party library, may need access to disk during the execution of your workflow or event source. **You have access to {TMP_SIZE_LIMIT} of disk in the `/tmp` directory**.
This limit cannot be raised.
## Workflows
### Time per execution
Every event sent to a workflow triggers a new execution of that workflow. Workflows have a default execution limit that varies with the trigger type:
* HTTP and Email-triggered workflows default to **30 seconds** per execution.
* Cron-triggered workflows default to **60 seconds** per execution.
If your code exceeds your workflow-level limit, we’ll throw a **Timeout** error and stop your workflow. Any partial logs and observability associated with code cells that ran successfully before the timeout will be attached to the event in the UI, so you can examine the state of your workflow and troubleshoot where it may have failed.
You can increase the timeout limit, up to a max value set by your plan:
| Tier | Maximum time per execution |
| ---------- | -------------------------- |
| Free tiers | 300 seconds (5 min) |
| Paid tiers | 750 seconds (12.5 min) |
Events that trigger a **Timeout** error will appear in red in the [Inspector](/docs/workflows/building-workflows/inspect/). You’ll see the timeout error, also in red, in the cell at which the code timed out.
### Event History
The [Inspector](/docs/workflows/building-workflows/inspect/#the-inspector) shows the execution history for a given workflow. Events have a limited retention period, depending on your plan:
| Tier | Events retained per workflow |
| ---------- | -------------------------------------------------------------------------------- |
| Free tiers | {FREE_INSPECTOR_EVENT_LIMIT} |
| Paid tiers | [View breakdown of events history per paid plan](https://pipedream.com/pricing/) |
The execution details for a specific event also expires after {INSPECTOR_EVENT_EXPIRY_DAYS} days.
### Logs, Step Exports, and other observability
The total size of `console.log()` statements, [step exports](/docs/workflows/#step-exports), and the original event data sent to the workflow cannot exceed a combined size of {FUNCTION_PAYLOAD_LIMIT}. If you produce logs or step exports larger than this - for example, passing around large API responses, CSVs, or other data - you may encounter a **Function Payload Limit Exceeded** error in your workflow.
This limit cannot be raised.
## Acceptable Use
We ask that you abide by our [Acceptable Use](https://pipedream.com/terms/#b-acceptable-use) policy. In short this means: don’t use Pipedream to break the law; don’t abuse the platform; and don’t use the platform to harm others.
# Workflow Development
Source: https://pipedream.com/docs/workflows/quickstart
Sign up for a [free Pipedream account](https://pipedream.com/auth/signup) (no credit card required) and complete this quickstart guide to learn the basic patterns for workflow development:
Workflows must be created in **Projects**. Projects make it easy to organize your workflows and collaborate with your team.
Go to [https://pipedream.com/projects](https://pipedream.com/projects) and click on **Create Project**.
Next, enter a project name and click **Create Project**. For this example, we’ll name our project **Getting Started**. You may also click the icon to the right to generate a random project name.
[Configure GitHub Sync](/docs/workflows/git/) for projects to enable git-based version control and unlock the ability to develop in branches, commit to or pull changes from GitHub, view diffs, create PRs and more.
After the project is created, use the **New** button to create a new workflow.
Name the workflow and click **Create Workflow** to use the default settings. For this example, we’ll name the workflow **Pipedream Quickstart**.
Next, Pipedream will launch the workflow builder and prompt you to add a trigger.
Clicking the trigger opens a menu to select the trigger type. For this example, select **New HTTP / Webhook Requests**.
Click **Save and continue** in the step editor on the right to accept the default settings.
Pipedream will generate a unique URL to trigger this workflow. Once your workflow is deployed, your workflow will run on every request to this URL.
Next, generate a test event to help you build the workflow.
The test event will be used to provide autocomplete suggestions as you build your workflow. The data will also be used when testing later steps. You may generate or select a different test event at any time while building a workflow.
For this example, let’s use the following test event data:
```json
{
"message": "Pipedream is awesome!"
}
```
Pipedream makes it easy to generate test events for your HTTP trigger. Click on **Generate Test Event** to open the HTTP request builder. Copy and paste the JSON data above into the **Raw Request Body** field and click **Send HTTP Request**.
Pipedream will automatically select and display the contents of the selected event. Validate that the `message` was received as part of the event `body`.
You may also send live data to the unique URL for your workflow using your favorite HTTP tool or by running a `cURL` command, e.g.,
```sh
curl -d '{"message": "Pipedream is awesome!"}' \
  -H "Content-Type: application/json" \
  YOUR_ENDPOINT_URL
```
Before we send data to Google Sheets, let’s use the npm [`sentiment`](https://www.npmjs.com/package/sentiment) package to generate a sentiment score for our message. To do that, click **Continue** or the **+** button.
That will open the **Add a step** menu. Select **Run custom code**.
Pipedream will add a Node.js code step to the workflow.
Rename the step to **sentiment**.
Next, add the following code to the code step:
```javascript
import Sentiment from "sentiment"

export default defineComponent({
  async run({ steps, $ }) {
    let sentiment = new Sentiment()
    return sentiment.analyze(steps.trigger.event.body.message)
  },
})
```
This code imports the npm package, passes the message we sent to our trigger to the `analyze()` function by referencing `steps.trigger.event.body.message` and then returns the result.
To use any npm package on Pipedream, just `import` it. There’s no `npm install` or `package.json` required.
Any data you `return` from a step is exported so it can be inspected and referenced in future steps via the `steps` object. In this example, return values will be exported to `steps.sentiment.$return_value` because we renamed the step to **sentiment**.
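To make the export mechanism concrete, here's a simplified sketch (not Pipedream's actual runtime; `runStep` is a hypothetical helper) of how a step's return value lands on the shared `steps` object:

```javascript
// Sketch only: how a step's return value becomes available to later
// steps via a shared `steps` object keyed by step name.
const steps = {};

// Simulate running a named step and exporting its return value.
function runStep(name, fn) {
  const result = fn({ steps });
  steps[name] = { $return_value: result };
  return result;
}

// Simulated "sentiment" step returning an analysis result.
runStep("sentiment", () => ({ score: 3, comparative: 0.75 }));

// A later step can now reference the export:
console.log(steps.sentiment.$return_value.score); // 3
```

In a real workflow, Pipedream manages this object for you; you only reference paths like `steps.sentiment.$return_value` in later steps.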
Your code step should now look like the screenshot below. To run the step and test the code, click the **Test** button.
You should see the results of the sentiment analysis when the test is complete.
When you **Test** a step, only the current step is executed. Use the caret to test different ranges of steps including the entire workflow.
Next, create a Google Sheet and add **Timestamp**, **Message** and **Sentiment Score** to the first row. These labels act as our column headers and will help us configure the Google Sheets step of the workflow.
Next, let’s add a step to the workflow to send the data to Google Sheets. First, click **+** after the `sentiment` code step and select the **Google Sheets** app.
Then select the **Add Single Row** action.
Click to connect your Google Sheets account to Pipedream (or select it from the dropdown if you previously connected an account).
Pipedream will open Google’s sign in flow in a new window. Sign in with the account you want to connect.
If prompted, you must check the box for Pipedream to **See, edit, create and delete all of your Google Drive files**. These permissions are required to configure and use the pre-built actions for Google Sheets.
Learn more about Pipedream’s [privacy and security policy](/docs/privacy-and-security/).
When you complete connecting your Google account, the window should close and you should return to Pipedream. Your connected account should automatically be selected. Next, select your spreadsheet from the dropdown menu:
Then select the sheet name (the default sheet name in Google Sheets is **Sheet1**):
Next, select whether the spreadsheet has headers in the first row. When a header row exists, Pipedream will automatically retrieve the header labels to make it easy to enter data (if not, you can manually construct an array of values). Since the sheet for this example contains headers, select **Yes**.
Pipedream will retrieve the headers and generate a form to enter data in your sheet:
First, let’s use the object explorer to pass the timestamp for the workflow event as the value for the first column. This data can be found in the context object on the trigger.
When you click into the **Timestamp** field, Pipedream will display an object explorer to make it easy to find data. Scroll or search to find the `ts` key under `steps.trigger.context`.
Click **select path** to insert a reference to `steps.trigger.context.ts`:
Next, let’s use autocomplete to enter a value for the **Message** column. First, add double braces `{{` — Pipedream will automatically add the closing braces `}}`.
Then, type `steps.trigger.event.body.message` between the pairs of braces. Pipedream will provide autocomplete suggestions as you type. Press **Tab** to use a suggestion and then click `.` to get suggestions for the next key. The final value in the **Message** field should be `steps.trigger.event.body.message`.
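Under the hood, a `{{ ... }}` expression is just a reference resolved against the `steps` object. Here's a simplified sketch (`resolveExpression` is a hypothetical helper, not Pipedream's implementation) of how such a path lookup works:

```javascript
// Sketch only: resolve a dotted path like
// "steps.trigger.event.body.message" against a steps object.
const steps = {
  trigger: { event: { body: { message: "Pipedream is awesome!" } } },
};

function resolveExpression(expr, scope) {
  return expr
    .replace(/^steps\./, "")
    .split(".")
    .reduce((obj, key) => obj?.[key], scope.steps);
}

console.log(resolveExpression("steps.trigger.event.body.message", { steps }));
// "Pipedream is awesome!"
```

In the builder you never write this resolution logic yourself; autocomplete and the object explorer generate the path expressions for you.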
Finally, let’s copy a reference from a previous step. Click on the `sentiment` step to open the results in the editor:
Next, click the **Copy Path** link next to the score.
Click the Google Sheets step or click its open tab in the editor. Then paste the value into the **Sentiment Score** field — Pipedream will automatically wrap the reference in double braces `{{ }}`.
Now that the configuration is complete, click **Test** to validate the configuration for this step. When the test is complete, you will see a success message and a summary of the action performed:
If you load your spreadsheet, you should see the data Pipedream inserted.
Next, return to your workflow and click **Deploy** to run your workflow on every trigger event.
When your workflow deploys, you will be redirected to the **Inspector**. Your workflow is now live.
To validate your workflow is working as expected, send a new request to it. You can edit and run the following `cURL` command:
```sh
curl -d '{ "message": "Pipedream is awesome!" }' \
  -H "Content-Type: application/json" \
  YOUR-TRIGGER-URL
```
The event will instantly appear in the event list. Select it to inspect the workflow execution.
Finally, you can return to Google Sheets to validate that the new data was automatically inserted.
## Next Steps
Congratulations! You completed the quickstart and should now understand the basic patterns for workflow development. Next, try creating your own [workflows](/docs/workflows/building-workflows/), learn how to [build and run workflows for your users](/docs/connect/workflows/) or check out the rest of the [docs](/docs/)!
# Virtual Private Clouds
Source: https://pipedream.com/docs/workflows/vpc
Pipedream VPCs enable you to run workflows in dedicated and isolated networks with static outbound egress IP addresses that are unique to your workspace (unlike other platforms that provide static IPs common to all customers on the platform).
Outbound network requests from workflows that run in a VPC will originate from these static IP addresses, so you can whitelist access to sensitive resources (like databases and APIs) with confidence that the requests will only originate from the Pipedream workflows in your workspace.
## Getting started
### Create a new VPC
1. Open the [Virtual Private Clouds tab](https://pipedream.com/settings/networks):
2. Click on **New VPC** in the upper right of the page:
3. Enter a network name and click **Create**:
4. It may take 5-10 minutes to complete setting up your network. The status will change to **Available** when complete:
### Run workflows within a VPC
To run workflows in a VPC, check the **Run in Private Network** option in workflow settings and select the network you created. All outbound network requests for the workflow will originate from the static egress IP for the VPC (both when testing a workflow or when running the workflow in production).
If you don’t see the network listed, the network setup may still be in progress. If the issue persists longer than 10 minutes, please [contact support](https://pipedream.com/support).
### Find the static outbound IP address for a VPC
You can view and copy the static outbound IP address for each VPC in your workspace from the [Virtual Private Cloud settings](https://pipedream.com/settings/networks). If you need to restrict access to sensitive resources (e.g., a database) by IP address, copy this address and configure it in your application with the `/32` CIDR block. Network requests from workflows running in the VPC will originate from this address.
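For example, if the protected resource is a Postgres database, the allowlist entry might look like the following `pg_hba.conf` fragment. This is an illustrative sketch: the IP address, database, and user names are placeholders, not values from your workspace.

```
# pg_hba.conf — allow connections only from the VPC's static egress IP.
# 203.0.113.10 is a documentation placeholder; copy the actual address
# from your Pipedream VPC settings. The /32 CIDR matches exactly one IP.
hostssl  mydb  myuser  203.0.113.10/32  scram-sha-256
```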
## Managing a VPC
To rename or delete a VPC, navigate to the [Virtual Private Cloud settings](https://pipedream.com/settings/networks) for your workspace and select the option from the menu at the right of the VPC you want to manage.
## Self-hosting and VPC peering
If you’re interested in running Pipedream workflows in your own infrastructure, or configure VPC peering to allow Pipedream to communicate to resources in a private network, please reach out to our [Sales team](mailto:sales@pipedream.com).
## Limitations
* Only workflows can run in VPCs; other resources, like [sources](/docs/workflows/building-workflows/triggers/) and data stores, are not currently supported.
* Creating a new network can take up to 5 minutes. Deploying your first workflow into a new network and testing that workflow for the first time can take up to 1 min. Subsequent operations should be as fast as normal.
* VPCs only provide static IPs for outbound network requests. This feature does not provide a static IP for or otherwise restrict inbound requests.
* You can’t set a default network for all new workflows in a workspace or project (you must select the network every time you create a new workflow). Please [reach out](https://pipedream.com/support) if you’re interested in imposing controls like this in your workspace.
* Workflows running in a VPC will still route certain requests through [the shared Pipedream network](/docs/workflows/data-management/destinations/http/#ip-addresses-for-pipedream-http-requests):
* [`$.send.http()`](/docs/workflows/data-management/destinations/http/) requests
* Async options requests (requests made to populate options in dropdown menus for actions while building a workflow, e.g., the option to “select a Google Sheet” when using the “add row to Google Sheets” action)
## FAQ
### Will HTTP requests sent from Node.js, Python and the HTTP request steps use the assigned static IP address?
Yes, all steps that send HTTP requests from a workflow assigned to a VPC will use that VPC’s IP address to send HTTP requests.
This includes requests sent with `axios`, `requests`, `fetch`, or any other HTTP client in your language of choice.
The only exceptions are requests sent by `$.send.http()` and the HTTP requests used to populate async options that power props like “Select a Google Sheet” or “Select a Slack channel”. These requests route through the [standard set of Pipedream IP addresses](/docs/privacy-and-security/#hosting-details).
### Can a single workflow live within multiple VPCs?
No, a VPC can contain many workflows, but a single workflow can only belong to one VPC.
### Can I modify my VPC’s IP address to another address?
No, IP addresses are assigned to VPCs for you, and they are not changeable.
### How much will VPCs cost?
VPCs are available on the **Business** plan. [Upgrade your plan here](https://pipedream.com/pricing).
# Managing workspaces
Source: https://pipedream.com/docs/workspaces
When you sign up for Pipedream, you’ll either create a new workspace or join an existing one if you signed up from an invitation.
You can create and join any number of workspaces. For example, you can create one to work alone and another to collaborate with your team. You can also start working alone, then easily add others into your existing workspace to work together on workflows you’ve already built out.
Once you’ve created a new workspace, you can invite your team to create and edit workflows together, and organize them within projects and folders.
## Creating a new workspace
To create a new workspace,
1. Open the dropdown menu in the top left of the Pipedream dashboard
2. Select **New workspace**
3. You’ll be prompted to name the workspace (you can [change the name later](/docs/workspaces/#renaming-a-workspace))
## Workspace settings
Find your [workspace settings](https://pipedream.com/settings/account) under the **Settings** item in the left-hand navigation menu. This is where you can manage the workspace name, members, and member permissions.
### Inviting others to join a workspace
After opening your workspace settings, open the [Membership](https://pipedream.com/settings/users) tab.
* Invite people to your workspace by entering their email address and then clicking **Send**
* Or create an invite link to more easily share with a larger group (you can limit access to only specific email domains)
### Managing member permissions
By default, new workspace members are assigned the **Member** level permission.
**Members** will be able to perform general tasks like viewing, developing, and deploying workflows.
However, only **Admins** will be able to manage workspace level settings, like changing member roles, renaming workspaces, and modifying Slack error notifications.
#### Promoting a member to admin
To promote a member to an admin level account in your workspace, click the 3 dots to the right of their email and select “Make Admin”.
#### Demoting an admin to a member
To demote an admin back to a member, click the 3 dots to the right of their email address and select “Remove Admin”.
### Finding your workspace’s ID
Visit your [workspace settings](https://pipedream.com/settings/account) and scroll down to the **API** section. You’ll see your workspace ID here.
### Requiring Two-Factor Authentication
As a workspace admin or owner on the [Business plan](https://pipedream.com/pricing), you’re able to **require** that all members in your workspace must enable 2FA on their account.
1. Open the Authentication tab in your [workspace settings](https://pipedream.com/settings/authentication) (you must be an admin or owner to make changes here)
2. Make sure you’re in the [correct workspace](/docs/workspaces/#switching-between-workspaces)
3. Click the toggle under **Require 2FA** — this will open a confirmation modal with some additional information
4. Once you enable the change in the modal, **all workspace members (including admins and owners) will immediately be required to configure 2FA on their account**. All new and existing workspace members will be required to set up 2FA the next time they sign in.
Anyone who is currently logged in to Pipedream will be temporarily signed out until they set up 2FA
If anyone is actively making changes to a workflow, their session may be interrupted. We recommend enabling the 2FA requirement in off hours.
### Configuring Single Sign-On (SSO)
Workspaces on the Business plan can configure Single Sign-On, so your users can login to Pipedream using your identity provider.
Pipedream supports SSO with Google, Okta, and any provider that supports the SAML protocol. See the guides below to configure SSO for your identity provider:
* [Okta](/docs/workspaces/sso/okta/)
* [Google](/docs/workspaces/sso/google/)
* [Other SAML provider](/docs/workspaces/sso/saml/)
### SCIM
Pipedream supports provisioning user accounts from your IdP via SCIM. Any workspace on the Business plan can configure Single Sign-On with SCIM.
### Renaming a workspace
To rename a workspace, open your [workspace settings](https://pipedream.com/settings/account) and navigate to the **General** tab.
Edit the name, then click the save button to save your changes.
This action is only available to workspace **admins**.
### Deleting a workspace
To delete a workspace, open your workspace settings and navigate to the **Danger Zone**.
Click the **Delete workspace** button and confirm the action by entering in your workspace name and `delete my workspace` into the text prompt.
Deleting a workspace will delete all **sources**, **workflows**, and other resources in your workspace.
Deleting a workspace is **irreversible** and permanent.
## Switching between workspaces
To switch between workspaces, open the dropdown menu in the top left of the Pipedream dashboard.
Select which workspace you’d like to start working within, and your Pipedream dashboard context will change to that workspace.
# Domain Verification
Source: https://pipedream.com/docs/workspaces/domain-verification
Pipedream requires that you verify ownership of your email domain in order to [configure SAML SSO](/docs/workspaces/sso/) for your workspace. If your email is `foo@example.com`, you need to verify ownership of `example.com`. If configuring Google OAuth (not SAML), you can disregard this section.
## Getting started
1. Navigate to the **[Verified Domains](https://pipedream.com/settings/domains)** section of your workspace settings
2. Enter the domain you’d like to use then click **Add Domain**
3. You’ll see a modal with instructions for adding a `TXT` record in the DNS configuration for your domain
4. DNS changes may take between a few minutes and up to 72 hours to propagate. Once they’re live, click the **Verify** button for the domain you’ve entered
5. Once Pipedream verifies the `TXT` record, we’ll show a green checkmark on the domain
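As an illustration, the record you add will look something like the following zone-file fragment. The record name and token value here are placeholders: copy the exact name and value Pipedream shows you in the modal.

```
; Example DNS zone fragment (placeholder values only; use the exact
; record name and value shown in the Pipedream verification modal)
example.com.  3600  IN  TXT  "pipedream-verification=abc123def456"
```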
Make sure to verify all your domains. There’s no limit on the number of domains you can verify for SSO, so if you use `example.com`, `example.net`, and `foo.example.com`, make sure to verify each one.
# Single Sign On Overview
Source: https://pipedream.com/docs/workspaces/sso
Pipedream supports Single Sign-On (SSO) with [Okta](/docs/workspaces/sso/okta/), [Google](/docs/workspaces/sso/google/), or [any provider](/docs/workspaces/sso/saml/) that supports SAML or Google OAuth, which allows IT and workspace administrators easier controls to manage access and security.
Using SSO with your Identity Provider (IdP) centralizes user login management and provides a single point of control for IT teams and employees.
## Requirements for SSO
* Your workspace must be on a [Business plan](https://pipedream.com/pricing)
* If using SAML, your Identity Provider must support SAML 2.0
* Only workspace admins and owners can configure SSO
* Your workspace admin or owner must [verify ownership](/docs/workspaces/sso/#verifying-your-email-domain) of the SSO email domain
The below content is for workspace admins and owners. Only workspace admins and owners have access to add verified domains, set up SSO, and configure workspace login methods.
## Verifying your Email Domain
In order to configure SAML SSO for your workspace, you first need to verify ownership of the email domain. If configuring Google OAuth (not SAML), you can skip this section.
[Refer to the guide here](/docs/workspaces/domain-verification/) to verify your email domain.
## Setting up SSO
Navigate to the [Authentication section](https://pipedream.com/settings/domains) in your workspace settings to get started.
### SAML SSO
1. First, make sure you’ve verified the domain(s) you intend to use for SSO ([see above](/docs/workspaces/sso/#verifying-your-email-domain))
2. Click the **Enable SSO** toggle and select **SAML**
3. If setting up SAML SSO, you’ll need to enter a metadata URL, which contains all the necessary configuration for Pipedream. Refer to the provider-specific docs for the detailed walk-through ([Okta](/docs/workspaces/sso/okta/), [Google Workspace](/docs/workspaces/sso/google/), [any other SAML provider](/docs/workspaces/sso/saml/)).
4. Click **Save**
### Google OAuth
1. Click the **Enable SSO** toggle and select **Google**
2. Enter the domain that you use with Google OAuth. For example, `vandalayindustries.com`
3. Click **Save**
## Restricting Login Methods
Once you’ve configured SSO for your workspace, you can restrict the allowed login methods for [non-workspace owners](/docs/workspaces/sso/#workspace-owners-can-always-sign-in-using-any-login-method).
| Login Method | Description |
| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Any login method** | Everyone in the workspace can sign in either using SSO or via the login method they used to create their account (email and password, Google OAuth, GitHub) |
| **SSO only** | Workspace members and admins must [sign in using SSO](https://pipedream.com/auth/sso) |
| **SSO with guests** | When signing in using a verified email domain, members and admins must [sign in using SSO](https://pipedream.com/auth/sso). If signing in with a different domain (`gmail.com` for example), members (guests) can sign in using any login method. |
### Workspace owners can always sign in using any login method
In order to ensure you don’t get locked out of your Pipedream workspace in the event of an outage with your identity provider, workspace owners can always sign in via the login method they used to create the account (email and password, Google, or GitHub).
### Login methods are enforced when signing in to pipedream.com
This means if you are a member of 2 workspaces and one of them allows **any login method** but the other **requires SSO**, you will be required to sign in to Pipedream using SSO every time, independent of the workspace you are trying to access.
# Configure SSO With Google Workspace
Source: https://pipedream.com/docs/workspaces/sso/google
Pipedream supports Single Sign-On (SSO) with Google Workspace. This guide shows you how to configure SSO in Pipedream to authenticate with your Google org.
## Requirements
* SSO is only supported for [workspaces](/docs/workspaces/) on the Business plan. Visit the [Pipedream pricing page](https://pipedream.com/pricing) to upgrade.
* You need an administrator of your Pipedream workspace and someone who can [create SAML apps in Google Workspace](https://apps.google.com/supportwidget/articlehome?hl=en\&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F6087519%3Fhl%3Den\&assistant_id=generic-unu\&product_context=6087519\&product_name=UnuFlow\&trigger_context=a) to configure SSO.
## Configuration
To configure SSO in Pipedream, you need to set up a [SAML application](https://apps.google.com/supportwidget/articlehome?hl=en\&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F6087519%3Fhl%3Den\&assistant_id=generic-unu\&product_context=6087519\&product_name=UnuFlow\&trigger_context=a) in Google Workspace. If you’re a Google Workspace admin, you’re all set. Otherwise, coordinate with a Google Workspace admin before you continue.
In your **Google Workspace** admin console, select **Apps** > **Web and Mobile apps**
In the **Add app** menu, select the option to **Add custom SAML app**:
First, add **Pipedream** as the app name, and an app description that makes sense for your organization:
In the **Service provider details**, provide the following values:
* **ACS URL** — `https://api.pipedream.com/auth/saml/consume`
* **Entity ID** — Pipedream
* **Start URL** — `https://api.pipedream.com/auth/saml/`
replacing `` with the workspace name at [https://pipedream.com/settings/account](https://pipedream.com/settings/account). For example, if your workspace name is `example-workspace`, your start URL will be `https://api.pipedream.com/auth/saml/example-workspace`.
In the **Name ID** section, provide these values:
* **Name ID format** — `EMAIL`
* **Name ID** — Basic Information > Primary email
then press **Continue**.
Once the app is configured, visit the **User access** section to add Google Workspace users to your Pipedream SAML app. See [step 14 of the Google Workspace SAML docs](https://apps.google.com/supportwidget/articlehome?hl=en\&article_url=https%3A%2F%2Fsupport.google.com%2Fa%2Fanswer%2F6087519%3Fhl%3Den\&assistant_id=generic-unu\&product_context=6087519\&product_name=UnuFlow\&trigger_context=a) for more detail.
Pipedream requires access to SAML metadata at a publicly-accessible URL. This communicates public metadata about the identity provider (Google Workspace) that Pipedream can use to configure the SAML setup in Pipedream.
First, click the **Download Metadata** button on the left of the app configuration page:
**Host this file on a public web server where Pipedream can access it via URL**, for example: `https://example.com/metadata.xml`. You’ll use that URL in the next step.
In Pipedream, visit your workspace’s [authentication settings](https://pipedream.com/settings/authentication).
In the **Single Sign-On** section, select **SAML**, and add the URL from step 7 above in the **Metadata URL** field, then click Save.
Any user in your workspace can now log into Pipedream at [https://pipedream.com/auth/sso](https://pipedream.com/auth/sso) by entering your workspace’s name (found in your [Settings](https://pipedream.com/settings/account)). You can also access your SSO sign in URL directly by visiting [https://pipedream.com/auth/org/your-workspace-name](https://pipedream.com/auth/org), where `your-workspace-name` is the name of your workspace.
## Important details
Before you configure the application in Google, make sure all your users have matching email addresses for their Pipedream user profile and their Google Workspace profile. Once SSO is enabled, they will not be able to change their Pipedream email address.
If a user’s Pipedream email does not match the email in their Google profile, they will not be able to log in.
If existing users signed up for Pipedream using an email and password, they will no longer be able to do so. They will only be able to sign in using SSO.
# Configure SSO With Okta
Source: https://pipedream.com/docs/workspaces/sso/okta
Pipedream supports Single Sign-On (SSO) with Okta. This guide shows you how to configure SSO in Pipedream to authenticate with your Okta org.
## Requirements
* SSO is only supported for [workspaces](/docs/workspaces/) on the Business plan. Visit the [Pipedream pricing page](https://pipedream.com/pricing) to upgrade.
* You must be an administrator of your Pipedream workspace
* You must have an Okta account
## Configuration
In your Okta **Admin** dashboard, select the **Applications** section and click **Applications** below that:
Click **Browse App Catalog**:
Search for “Pipedream” and select the Pipedream app:
Fill out the **General Settings** that Okta presents, and click **Done**:
Select the **Sign On** tab, and click **Edit** at the top right:
Scroll down to the **SAML 2.0** settings. In the **Default Relay State** section, enter `organization_username`:
Set any other configuration options you need in that section or in the **Credentials Details** section, and click **Save**.
In the **Sign On** section, you’ll see a section that includes the setup instructions for SAML:
Click the **Identity Provider metadata** link and copy the URL from your browser’s address bar:
Visit your [Pipedream workspace authentication settings](https://pipedream.com/settings/authentication). Click the toggle to **Enable SSO**, then click **Edit SSO Configuration**, and add the metadata URL in the **SAML** section and click **Save**:
Back in Okta, click on the **Assignments** tab of the Pipedream application. Click on the **Assign** dropdown and select **Assign to People**:
Assign the application to the relevant users in Okta, and Pipedream will configure the associated accounts on our end.
Users configured in your Okta app can log into Pipedream at [https://pipedream.com/auth/sso](https://pipedream.com/auth/sso) by entering your workspace’s name (found in your [Settings](https://pipedream.com/settings/account)). You can also access your SSO sign in URL directly by visiting [https://pipedream.com/auth/org/your-workspace-name](https://pipedream.com/auth/org), where `your-workspace-name` is the name of your workspace.
## Important details
Before you configure the application in Okta, make sure all your users have matching email addresses for their Pipedream user profile and their Okta profile. Once SSO is enabled, they will not be able to change their Pipedream email address.
If a user’s Pipedream email does not match the email in their IdP profile, they will not be able to log in.
If existing users signed up for Pipedream using an email and password, they will no longer be able to do so. They will only be able to sign in using SSO.
# Configure SSO With Another SAML Provider
Source: https://pipedream.com/docs/workspaces/sso/saml
Pipedream supports Single Sign-On (SSO) with any identity provider that supports SAML 2.0. This guide shows you how to configure SSO in Pipedream to authenticate with your SAML provider.
If you use [Okta](/docs/workspaces/sso/okta/) or [Google Workspace](/docs/workspaces/sso/google/), please review the guides for those apps.
## Requirements
* SSO is only supported for [workspaces](/docs/workspaces/) on the Business plan. Visit the [Pipedream pricing page](https://pipedream.com/pricing) to upgrade.
* You need an administrator of your Pipedream workspace and someone who can create SAML apps in your identity provider to configure SSO.
## SAML metadata
| Name | Other names | Value |
| --------------------------------------- | -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| SP Entity ID | Audience, Audience Restriction, SP URL | `Pipedream` |
| SP Assertion Consumer Service (ACS) URL | Reply or destination URL | `https://api.pipedream.com/auth/saml/consume` |
| SP Single Sign-on URL | Start URL | `https://api.pipedream.com/auth/saml/` replacing `` with the workspace name at [https://pipedream.com/settings/account](https://pipedream.com/settings/account). For example, if your workspace name is `example-workspace`, your start URL will be `https://api.pipedream.com/auth/saml/example-workspace`. |
## SAML attributes
* `NameID` —email
## Providing SAML metadata to Pipedream
Pipedream requires access to SAML metadata at a publicly-accessible URL. This communicates public metadata about the identity provider (your SSO provider) that Pipedream can use to configure the SAML setup in Pipedream.
Most SSO providers will provide a publicly-accessible metadata URL. If not, they should provide a mechanism to download the SAML metadata XML file. **Once you’ve configured your SAML app using the settings above, host this file on a public web server where Pipedream can access it via URL**, for example: `https://example.com/metadata.xml`.
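For example, if you serve the file with nginx, a minimal configuration fragment might look like the following. This is a hypothetical sketch: the path and filename are placeholders for wherever you choose to host the metadata.

```
# nginx fragment (example only): serve the downloaded SAML metadata
# file at a stable public URL, e.g. https://example.com/metadata.xml
location = /metadata.xml {
    root /var/www/saml;  # serves /var/www/saml/metadata.xml
}
```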
Once you have a publicly-accessible URL that hosts your SAML metadata, visit your workspace’s [authentication settings](https://pipedream.com/settings/authentication) in Pipedream. In the **Single Sign-On** section, select **SAML**, and add your metadata URL to the **Metadata URL** field, then click **Save**.
Any user in your workspace can now log into Pipedream at [https://pipedream.com/auth/sso](https://pipedream.com/auth/sso) by entering your workspace’s name (found in your [Settings](https://pipedream.com/settings/account)). You can also access your SSO sign in URL directly by visiting [https://pipedream.com/auth/org/your-workspace-name](https://pipedream.com/auth/org), where `your-workspace-name` is the name of your workspace.
## Important details
Before you configure the application in your IdP, make sure all your users have matching email addresses for their Pipedream user profile and their IdP profile. Once SSO is enabled, they will not be able to change their Pipedream email address.
If a user’s Pipedream email does not match the email in their IdP profile, they will not be able to log in.
If existing users signed up for Pipedream using an email and password, they will no longer be able to do so. They will only be able to sign in using SSO.