
Create Job with Databricks API on New Pub/Sub Messages from Google Cloud API

Pipedream makes it easy to connect APIs for Databricks, Google Cloud and 3,000+ other apps remarkably fast.

Trigger workflow on: New Pub/Sub Messages from the Google Cloud API
Next, do this: Create Job with the Databricks API

Getting Started

This integration creates a workflow with a Google Cloud trigger and Databricks action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Pub/Sub Messages trigger
    1. Connect your Google Cloud account
    2. Select a Type
  3. Configure the Create Job action
    1. Connect your Databricks account
    2. Configure Tasks
    3. (Optional) Configure Job Name
    4. (Optional) Configure Tags
    5. (Optional) Configure Job Clusters
    6. (Optional) Configure Email Notifications
    7. (Optional) Configure Webhook Notifications
    8. (Optional) Configure Timeout Seconds
    9. (Optional) Configure Schedule
    10. (Optional) Configure Max Concurrent Runs
    11. (Optional) Configure Git Source
    12. (Optional) Configure Access Control List
  4. Deploy the workflow
  5. Send a test event to validate your setup (see the example publish after this list)
  6. Turn on the trigger
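One way to send a test event is to publish a message to the watched topic with the same `@google-cloud/pubsub` client the trigger uses. A minimal sketch, assuming your Google Cloud credentials are available locally (for example, application default credentials) and that the project ID, topic name, and payload below are replaced with your own values:

```javascript
import { PubSub } from "@google-cloud/pubsub";

// Publish a test message to the topic the trigger watches.
// Project ID and topic name are placeholders.
const pubSubClient = new PubSub({ projectId: "my-project" });

const messageId = await pubSubClient
  .topic("my-topic")
  .publishMessage({
    // Pub/Sub message bodies are raw bytes; the trigger decodes them as UTF-8
    // and tries to parse JSON, falling back to the raw string.
    data: Buffer.from(JSON.stringify({ hello: "world" })),
  });

console.log(`Published test message ${messageId}`);
```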

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit new messages from a Pub/Sub topic in your GCP account. Messages published to this topic are emitted from the Pipedream source.
Version: 0.1.7
Key: google_cloud-new-pubsub-messages

Google Cloud Overview

The Google Cloud API opens a world of possibilities for enhancing cloud operations and automating tasks. It empowers you to manage, scale, and fine-tune various services within the Google Cloud Platform (GCP) programmatically. With Pipedream, you can harness this power to create intricate workflows, trigger cloud functions based on events from other apps, manage resources, and analyze data, all in a serverless environment. The ability to interconnect GCP services with numerous other apps enriches automation, making it easier to synchronize data, streamline development workflows, and deploy applications efficiently.

Trigger Code

import { PubSub } from "@google-cloud/pubsub";
import googleCloud from "../../google_cloud.app.mjs";

export default {
  key: "google_cloud-new-pubsub-messages",
  name: "New Pub/Sub Messages",
  description: "Emit new Pub/Sub topic in your GCP account. Messages published to this topic are emitted from the Pipedream source.",
  version: "0.1.7",
  type: "source",
  dedupe: "unique", // Dedupe on Pub/Sub message ID
  props: {
    googleCloud,
    http: "$.interface.http",
    db: "$.service.db",
    topicType: {
      type: "string",
      label: "Type",
      description: "Do you have an existing Pub/Sub topic, or would you like to create a new one?",
      options: [
        "existing",
        "new",
      ],
      reloadProps: true,
    },
  },
  async additionalProps() {
    const topic = {
      type: "string",
      label: "Pub/Sub Topic Name",
      description: "Select a Pub/Sub topic from your GCP account to watch",
      options: async () => {
        return this.getTopics();
      },
    };
    if (this.topicType === "new") {
      topic.description = "**Pipedream will create a Pub/Sub topic with this name in your account**, converting it to a [valid Pub/Sub topic name](https://cloud.google.com/pubsub/docs/admin#resource_names).";
      delete topic.options;
    }
    return {
      topic,
    };
  },
  methods: {
    _getTopicName() {
      return this.db.get("topicName");
    },
    _setTopicName(topicName) {
      this.db.set("topicName", topicName);
    },
    _getSubscriptionName() {
      return this.db.get("subscriptionName");
    },
    _setSubscriptionName(subscriptionName) {
      this.db.set("subscriptionName", subscriptionName);
    },
    async getTopics() {
      const sdkParams = this.googleCloud.sdkParams();
      const pubSubClient = new PubSub(sdkParams);
      const topics = (await pubSubClient.getTopics())[0];
      if (topics.length > 0) {
        return topics.map((topic) => topic.name);
      }
      return [];
    },
    convertNameToValidPubSubTopicName(name) {
      // For valid names, see https://cloud.google.com/pubsub/docs/admin#resource_names
      return name
        // Must not start with `goog`. We add a `pd-` at the beginning if that's the case.
        .replace(/(^goog.*)/g, "pd-$1")
        // Must start with a letter, otherwise we add `pd-` at the beginning.
        .replace(/^(?![a-zA-Z]+)/, "pd-")
        // Only certain characters are allowed, the rest will be replaced with a `-`.
        .replace(/[^a-zA-Z0-9_\-.~+%]+/g, "-");
    },
  },
  hooks: {
    async activate() {
      const sdkParams = this.googleCloud.sdkParams();
      const pubSubClient = new PubSub(sdkParams);

      const currentTopic = {
        name: this.topic,
      };
      if (this.topicType === "new") {
        const topicName = this.convertNameToValidPubSubTopicName(this.topic);
        console.log(`Creating Pub/Sub topic ${topicName}`);
        const [
          topic,
        ] = await pubSubClient.createTopic(topicName);
        currentTopic.name = topic.name;
      }
      this._setTopicName(currentTopic.name);

      const pushEndpoint = this.http.endpoint;
      const subscriptionName = this.convertNameToValidPubSubTopicName(pushEndpoint);
      const subscriptionOptions = {
        pushConfig: {
          pushEndpoint,
        },
      };
      console.log(
        `Subscribing this source's URL to the Pub/Sub topic: ${pushEndpoint}
        (under name ${subscriptionName}).`,
      );
      const [
        subscriptionResult,
      ] = await pubSubClient
        .topic(currentTopic.name)
        .createSubscription(subscriptionName, subscriptionOptions);
      this._setSubscriptionName(subscriptionResult.name);
    },
    async deactivate() {
      const sdkParams = this.googleCloud.sdkParams();
      const pubSubClient = new PubSub(sdkParams);

      const subscriptionName = this._getSubscriptionName();
      if (subscriptionName) {
        await pubSubClient.subscription(subscriptionName).delete();
      }

      if (this.topicType === "new") {
        const topicName = this._getTopicName();
        if (topicName) {
          await pubSubClient.topic(topicName).delete();
        }
      }
    },
  },
  async run(event) {
    const {
      data,
      messageId,
      publishTime,
    } = event.body.message;

    if (!data) {
      console.warn("No message present, exiting");
      return;
    }
    const dataString = Buffer.from(data, "base64").toString("utf-8");
    const metadata = {
      id: messageId,
      summary: dataString,
      ts: +new Date(publishTime),
    };

    let dataObj;
    try {
      dataObj = JSON.parse(dataString);
    } catch (err) {
      console.error(
        `Couldn't parse message as JSON. Emitting raw message. Error: ${err}`,
      );
      dataObj = {
        rawMessage: dataString,
      };
    }
    this.$emit(dataObj, metadata);
  },
};
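For reference, the `run()` handler above receives the standard Pub/Sub push-delivery body, where `data` is the base64-encoded message payload. A representative event (values are illustrative) looks like this:

```javascript
// Shape of the push-delivery event handled by run() above (values illustrative).
const exampleEvent = {
  body: {
    message: {
      // Base64-encoded payload; decodes to {"hello":"world"}
      data: "eyJoZWxsbyI6IndvcmxkIn0=",
      messageId: "1234567890",
      publishTime: "2024-01-01T12:00:00.000Z",
    },
    subscription: "projects/my-project/subscriptions/pd-example",
  },
};
```

The emitted event is the parsed JSON payload (or `{ rawMessage }` if parsing fails), and the Pub/Sub `messageId` is used for deduplication.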

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
Label | Prop | Type | Description
Google Cloud | googleCloud | app | This component uses the Google Cloud app.
N/A | http | $.interface.http | This component uses $.interface.http to generate a unique URL when the component is first instantiated. Each request to the URL will trigger the run() method of the component.
N/A | db | $.service.db | This component uses $.service.db to maintain state between executions.
Type | topicType | string | Select a value from the drop-down menu: existing or new.

Trigger Authentication

Google Cloud uses API keys for authentication. When you connect your Google Cloud account, Pipedream securely stores the keys so you can easily authenticate to Google Cloud APIs in both code and no-code steps.

  1. Create a service account in GCP and set the permissions you need for Pipedream workflows.
  2. Generate a service account key.
  3. Download the key details in JSON format.
  4. Upload the key when you connect your Google Cloud account in Pipedream.
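Before uploading the key, you can sanity-check it locally with the same client library the trigger uses. A minimal sketch, assuming the key was downloaded to `./service-account.json` and the service account has Pub/Sub permissions (the project ID is a placeholder):

```javascript
import { PubSub } from "@google-cloud/pubsub";

// Authenticate with the downloaded service account key and list Pub/Sub topics.
const pubSubClient = new PubSub({
  projectId: "my-project",
  keyFilename: "./service-account.json",
});

const [topics] = await pubSubClient.getTopics();
console.log(topics.map((topic) => topic.name));
```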

About Google Cloud

The Google Cloud Platform, including BigQuery

Action

Description: Create a job. [See the documentation](https://docs.databricks.com/api/workspace/jobs/create)
Version: 0.0.3
Key: databricks-create-job

Databricks Overview

The Databricks API allows you to interact programmatically with Databricks services, enabling you to manage clusters, jobs, notebooks, and other resources within Databricks environments. Through Pipedream, you can leverage these APIs to create powerful automations and integrate with other apps for enhanced data processing, transformation, and analytics workflows. This unlocks possibilities like automating cluster management, dynamically running jobs based on external triggers, and orchestrating complex data pipelines with ease.

Action Code

import app from "../../databricks.app.mjs";
import utils from "../../common/utils.mjs";

export default {
  key: "databricks-create-job",
  name: "Create Job",
  description: "Create a job. [See the documentation](https://docs.databricks.com/api/workspace/jobs/create)",
  version: "0.0.3",
  annotations: {
    destructiveHint: false,
    openWorldHint: true,
    readOnlyHint: false,
  },
  type: "action",
  props: {
    app,
    tasks: {
      type: "string[]",
      label: "Tasks",
      description: `A list of task specifications to be executed by this job. JSON string format. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#tasks) for task specification details.

**Example:**
\`\`\`json
[
  {
    "notebook_task": {
      "notebook_path": "/Workspace/Users/sharky@databricks.com/weather_ingest"
    },
    "task_key": "weather_ocean_data"
  }
]
\`\`\`
      `,
    },
    name: {
      type: "string",
      label: "Job Name",
      description: "An optional name for the job",
      optional: true,
    },
    tags: {
      type: "object",
      label: "Tags",
      description: "A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags",
      optional: true,
    },
    jobClusters: {
      type: "string[]",
      label: "Job Clusters",
      description: `A list of job cluster specifications that can be shared and reused by tasks of this job. JSON string format. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#job_clusters) for job cluster specification details.

**Example:**
\`\`\`json
[
  {
    "job_cluster_key": "auto_scaling_cluster",
    "new_cluster": {
      "autoscale": {
        "max_workers": 16,
        "min_workers": 2
      },
      "node_type_id": null,
      "spark_conf": {
        "spark.speculation": true
      },
      "spark_version": "7.3.x-scala2.12"
    }
  }
]
\`\`\`
      `,
      optional: true,
    },
    emailNotifications: {
      type: "string",
      label: "Email Notifications",
      description: `An optional set of email addresses to notify when runs of this job begin, complete, or when the job is deleted. Specify as a JSON object with keys for each notification type. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#email_notifications) for details on each field.

**Example:**
\`\`\`json
{
  "on_start": ["user1@example.com"],
  "on_success": ["user2@example.com"],
  "on_failure": ["user3@example.com"],
  "on_duration_warning_threshold_exceeded": ["user4@example.com"],
  "on_streaming_backlog_exceeded": ["user5@example.com"]
}
\`\`\`
`,
      optional: true,
    },
    webhookNotifications: {
      type: "string",
      label: "Webhook Notifications",
      description: `A collection of system notification IDs to notify when runs of this job begin, complete, or encounter specific events. Specify as a JSON object with keys for each notification type. Each key accepts an array of objects with an \`id\` property (system notification ID). A maximum of 3 destinations can be specified for each property.

Supported keys:
- \`on_start\`: Notified when the run starts.
- \`on_success\`: Notified when the run completes successfully.
- \`on_failure\`: Notified when the run fails.
- \`on_duration_warning_threshold_exceeded\`: Notified when the run duration exceeds the specified threshold.
- \`on_streaming_backlog_exceeded\`: Notified when streaming backlog thresholds are exceeded.

[See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#webhook_notifications) for details.

**Example:**
\`\`\`json
{
  "on_success": [
    { "id": "https://eoiqkb8yzox6u2n.m.pipedream.net" }
  ],
  "on_failure": [
    { "id": "https://another-webhook-url.com/notify" }
  ]
}
\`\`\`
`,
      optional: true,
    },
    timeoutSeconds: {
      type: "integer",
      label: "Timeout Seconds",
      description: "An optional timeout applied to each run of this job. The default behavior is to have no timeout",
      optional: true,
    },
    schedule: {
      type: "string",
      label: "Schedule",
      description: `An optional periodic schedule for this job, specified as a JSON object. By default, the job only runs when triggered manually or via the API. The schedule object must include:

- \`quartz_cron_expression\` (**required**): A Cron expression using Quartz syntax that defines when the job runs. [See Cron Trigger details](https://docs.databricks.com/api/workspace/jobs/create#schedule).
- \`timezone_id\` (**required**): A Java timezone ID (e.g., "Europe/London") that determines the timezone for the schedule. [See Java TimeZone details](https://docs.databricks.com/api/workspace/jobs/create#schedule).
- \`pause_status\` (optional): Set to \`"UNPAUSED"\` (default) or \`"PAUSED"\` to control whether the schedule is active.

**Example:**
\`\`\`json
{
  "quartz_cron_expression": "0 0 12 * * ?",
  "timezone_id": "Asia/Ho_Chi_Minh",
  "pause_status": "UNPAUSED"
}
\`\`\`
`,
      optional: true,
    },
    maxConcurrentRuns: {
      type: "integer",
      label: "Max Concurrent Runs",
      description: "An optional maximum allowed number of concurrent runs of the job. Defaults to 1",
      optional: true,
    },
    gitSource: {
      type: "string",
      label: "Git Source",
      description: `An optional specification for a remote Git repository containing the source code used by tasks. Provide as a JSON string.

This enables version-controlled source code for notebook, dbt, Python script, and SQL File tasks. If \`git_source\` is set, these tasks retrieve files from the remote repository by default (can be overridden per task by setting \`source\` to \`WORKSPACE\`). **Note:** dbt and SQL File tasks require \`git_source\` to be defined. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#git_source) for more details.

**Fields:**
- \`git_url\` (**required**): URL of the repository to be cloned (e.g., "https://github.com/databricks/databricks-cli").
- \`git_provider\` (**required**): Service hosting the repository. One of: \`gitHub\`, \`bitbucketCloud\`, \`azureDevOpsServices\`, \`gitHubEnterprise\`, \`bitbucketServer\`, \`gitLab\`, \`gitLabEnterpriseEdition\`, \`awsCodeCommit\`.
- \`git_branch\`: Name of the branch to check out (cannot be used with \`git_tag\` or \`git_commit\`).
- \`git_tag\`: Name of the tag to check out (cannot be used with \`git_branch\` or \`git_commit\`).
- \`git_commit\`: Commit hash to check out (cannot be used with \`git_branch\` or \`git_tag\`).

**Example:**
\`\`\`json
{
  "git_url": "https://github.com/databricks/databricks-cli",
  "git_provider": "gitHub",
  "git_branch": "main"
}
\`\`\`
`,
      optional: true,
    },
    accessControlList: {
      type: "string[]",
      label: "Access Control List",
      description: `A list of permissions to set on the job, specified as a JSON array of objects. Each object can define permissions for a user, group, or service principal. 

Each object may include:
- \`user_name\`: Name of the user.
- \`group_name\`: Name of the group.
- \`service_principal_name\`: Application ID of a service principal.
- \`permission_level\`: Permission level. One of: \`CAN_MANAGE\`, \`IS_OWNER\`, \`CAN_MANAGE_RUN\`, \`CAN_VIEW\`.

**Example:**
\`\`\`json
[
  {
    "permission_level": "IS_OWNER",
    "user_name": "jorge.c@turing.com"
  },
  {
    "permission_level": "CAN_VIEW",
    "group_name": "data-scientists"
  }
]
\`\`\`
[See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#access_control_list) for more details.`,
      optional: true,
    },
  },
  async run({ $ }) {
    const {
      app,
      tasks,
      name,
      tags,
      jobClusters,
      emailNotifications,
      webhookNotifications,
      timeoutSeconds,
      schedule,
      maxConcurrentRuns,
      gitSource,
      accessControlList,
    } = this;

    const response = await app.createJob({
      $,
      data: {
        name,
        tags,
        tasks: utils.parseJsonInput(tasks),
        job_clusters: utils.parseJsonInput(jobClusters),
        email_notifications: utils.parseJsonInput(emailNotifications),
        webhook_notifications: utils.parseJsonInput(webhookNotifications),
        timeout_seconds: timeoutSeconds,
        schedule: utils.parseJsonInput(schedule),
        max_concurrent_runs: maxConcurrentRuns,
        git_source: utils.parseJsonInput(gitSource),
        access_control_list: utils.parseJsonInput(accessControlList),
      },
    });

    $.export("$summary", `Successfully created job with ID \`${response.job_id}\``);

    return response;
  },
};
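The `utils.parseJsonInput` helper imported above is not shown here. As a rough, hypothetical sketch of what such a helper does (an assumption, not the actual Pipedream implementation), it accepts values that may already be objects, JSON strings, or arrays of JSON strings, and returns parsed data while leaving unset optional props untouched:

```javascript
// Hypothetical sketch of a parseJsonInput-style helper; the real Pipedream
// implementation may differ.
function parseJsonInput(value) {
  if (value === undefined || value === null) {
    return undefined; // optional props stay unset and are omitted from the request
  }
  const parseItem = (item) => (typeof item === "string"
    ? JSON.parse(item)
    : item); // already an object or array
  return Array.isArray(value)
    ? value.map(parseItem)
    : parseItem(value);
}
```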

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

Label | Prop | Type | Description
Databricks | app | app | This component uses the Databricks app.
Tasks | tasks | string[] | A list of task specifications to be executed by this job, in JSON string format.
Job Name | name | string | An optional name for the job.
Tags | tags | object | A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags.
Job Clusters | jobClusters | string[] | A list of job cluster specifications that can be shared and reused by tasks of this job, in JSON string format.
Email Notifications | emailNotifications | string | An optional set of email addresses to notify when runs of this job begin, complete, or when the job is deleted, specified as a JSON object.
Webhook Notifications | webhookNotifications | string | A collection of system notification IDs to notify when runs of this job begin, complete, or encounter specific events, specified as a JSON object. A maximum of 3 destinations can be specified for each property.
Timeout Seconds | timeoutSeconds | integer | An optional timeout applied to each run of this job. The default behavior is to have no timeout.
Schedule | schedule | string | An optional periodic schedule for this job, specified as a JSON object with required quartz_cron_expression and timezone_id fields and an optional pause_status field.
Max Concurrent Runs | maxConcurrentRuns | integer | An optional maximum allowed number of concurrent runs of the job. Defaults to 1.
Git Source | gitSource | string | An optional specification for a remote Git repository containing the source code used by tasks, provided as a JSON string. Required for dbt and SQL File tasks.
Access Control List | accessControlList | string[] | A list of permissions to set on the job, specified as a JSON array of objects for users, groups, or service principals.

Full field-level details and JSON examples appear in the prop descriptions in the Action Code above and in the linked Databricks API documentation.

Action Authentication

Databricks uses API keys for authentication. When you connect your Databricks account, Pipedream securely stores the keys so you can easily authenticate to Databricks APIs in both code and no-code steps.
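Under the hood, the Create Job action calls the Databricks Jobs API with your stored credentials. For reference, a minimal sketch of the equivalent raw request (Node 18+ global `fetch`; the workspace host, token environment variable, and job payload are placeholders):

```javascript
// Create a Databricks job directly via the Jobs 2.1 API (illustrative values).
const response = await fetch(
  "https://my-workspace.cloud.databricks.com/api/2.1/jobs/create",
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.DATABRICKS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      name: "My job",
      tasks: [
        {
          task_key: "my_task",
          notebook_task: {
            notebook_path: "/Workspace/Users/me@example.com/my_notebook",
          },
        },
      ],
    }),
  },
);

const { job_id } = await response.json();
console.log(`Created job ${job_id}`);
```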

About Databricks

Databricks is the lakehouse company, helping data teams solve the world’s toughest problems.

More Ways to Connect Databricks + Google Cloud

  • List Runs with Databricks API on BigQuery - New Row from Google Cloud API
  • List Runs with Databricks API on BigQuery - Query Results from Google Cloud API
  • List Runs with Databricks API on New Pub/Sub Messages from Google Cloud API
  • Get Run Output with Databricks API on BigQuery - New Row from Google Cloud API
  • Get Run Output with Databricks API on BigQuery - Query Results from Google Cloud API

Other Google Cloud triggers and actions:

  • New Pub/Sub Messages from the Google Cloud API: Emit new messages from a Pub/Sub topic in your GCP account. Messages published to this topic are emitted from the Pipedream source.
  • BigQuery - New Row from the Google Cloud API: Emit new events when a new row is added to a table.
  • BigQuery - Query Results from the Google Cloud API: Emit new events with the results of an arbitrary query.
  • BigQuery Insert Rows with the Google Cloud API: Inserts rows into a BigQuery table. See the docs for an example.
  • Create Bucket with the Google Cloud API: Creates a bucket on Google Cloud Storage. See the docs.
  • Create Scheduled Query with the Google Cloud API: Creates a scheduled query in Google Cloud. See the documentation.
  • Get Bucket Metadata with the Google Cloud API: Gets Google Cloud Storage bucket metadata. See the docs.
  • Get Object with the Google Cloud API: Downloads an object from a Google Cloud Storage bucket. See the docs.
