← HTTP / Webhook + Databricks integrations

Create Job with Databricks API on New Requests (Payload Only) from HTTP / Webhook API

Pipedream makes it easy to connect APIs for Databricks, HTTP / Webhook, and 2,800+ other apps remarkably fast.

Trigger workflow on
New Requests (Payload Only) from the HTTP / Webhook API
Next, do this
Create Job with the Databricks API

Trusted by 1,000,000+ developers from startups to Fortune 500 companies



Getting Started

This integration creates a workflow with an HTTP / Webhook trigger and a Databricks action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Requests (Payload Only) trigger
    1. Connect your HTTP / Webhook account
  3. Configure the Create Job action
    1. Connect your Databricks account
    2. Configure Tasks
    3. Optional: Configure Job Name
    4. Optional: Configure Tags
    5. Optional: Configure Job Clusters
    6. Optional: Configure Email Notifications
    7. Optional: Configure Webhook Notifications
    8. Optional: Configure Timeout Seconds
    9. Optional: Configure Schedule
    10. Optional: Configure Max Concurrent Runs
    11. Optional: Configure Git Source
    12. Optional: Configure Access Control List
  4. Deploy the workflow
  5. Send a test event to validate your setup (see the sketch after this list)
  6. Turn on the trigger
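
For step 5, you can send a test event from any HTTP client. Here's a minimal sketch in Node.js (18+), where the endpoint URL is a placeholder for the unique URL Pipedream generates for your deployed trigger:

// Send a test JSON payload to the workflow's unique endpoint
const endpoint = "https://YOUR-ENDPOINT.m.pipedream.net"; // placeholder

const res = await fetch(endpoint, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ hello: "world" }),
});

// The trigger responds with a 200 and echoes the payload back
console.log(res.status, await res.json());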

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Get a URL and emit the HTTP body as an event on every request
Version: 0.1.1
Key: http-new-requests-payload-only

HTTP / Webhook Overview

Build, test, and send HTTP requests without code using your Pipedream workflows. The HTTP / Webhook action is a tool to build HTTP requests with a Postman-like graphical interface.

[Screenshot: Pipedream's HTTP request builder showing a GET request with fields for the request URL, authorization type, params, headers, and body, plus an "Include Response Headers" option and a "Configure to test" button.]

Point and click HTTP requests

Define the target URL, HTTP verb, headers, query parameters, and payload body without writing custom code.

[Screenshot: a GET request to https://api.openai.com/v1/models with headers User-Agent: pipedream/1 and Authorization: Bearer {{openai_api_key}}, showing the connected OpenAI account's API key injected automatically.]

Here's an example workflow that uses the HTTP / Webhook action to send an authenticated API request to OpenAI.

Focus on integrating, not authenticating

This action can also use your connected accounts with third-party APIs. Selecting an integrated app will automatically update the request’s headers to authenticate with the app properly, and even inject your token dynamically.

[GIF: choosing an app from the Authorization Type dropdown on the Auth tab to authenticate the HTTP request.]

Pipedream integrates with thousands of APIs, but if you can't find a Pipedream integration for your app, you can use environment variables in your request headers to authenticate.
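
For example, a Node.js code step can attach a key stored as a Pipedream environment variable to the request headers. A minimal sketch, assuming a variable named MY_API_KEY and a hypothetical API URL:

export default defineComponent({
  async run({ steps, $ }) {
    // MY_API_KEY is set under Settings -> Environment Variables in Pipedream;
    // the URL below is hypothetical
    const res = await fetch("https://api.example.com/v1/items", {
      headers: {
        Authorization: `Bearer ${process.env.MY_API_KEY}`,
      },
    });
    return await res.json();
  },
});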

Compatible with no code actions or Node.js and Python

The HTTP/Webhook action exports HTTP response data for use in subsequent workflow steps, enabling easy data transformation, further API calls, database storage, and more.

Response data is available for both coded (Node.js, Python) and no-code steps within your workflow.

[Screenshot: the HTTP / Webhook action's response data exported as steps.custom_request1.return_value, expandable in the Pipedream UI with Copy Path and Copy Value options.]
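
Downstream code steps read those exports off the steps object. A minimal sketch of a Node.js step consuming the export shown above (the step name custom_request1 and the shape of its return value are taken from the example and may differ in your workflow):

export default defineComponent({
  async run({ steps }) {
    // Step exports from the HTTP / Webhook action
    const models = steps.custom_request1.$return_value.data;
    // e.g., pick out the "whisper-1" model from the example above
    return models.find((model) => model.id === "whisper-1");
  },
});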

Trigger Code

import http from "../../http.app.mjs";

// Core HTTP component
// Returns a 200 OK response, emits the HTTP payload as an event
export default {
  key: "http-new-requests-payload-only",
  name: "New Requests (Payload Only)",
  // eslint-disable-next-line
  description: "Get a URL and emit the HTTP body as an event on every request",
  version: "0.1.1",
  type: "source",
  props: {
    // eslint-disable-next-line
    httpInterface: {
      type: "$.interface.http",
      customResponse: true,
    },
    http,
  },
  async run(event) {
    const { body } = event;
    this.httpInterface.respond({
      status: 200,
      body,
    });
    // Emit the HTTP payload
    this.$emit({
      body,
    });
  },
};

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
Label | Prop | Type | Description
N/A | httpInterface | $.interface.http | This component uses $.interface.http to generate a unique URL when the component is first instantiated. Each request to the URL triggers the run() method of the component.
HTTP / Webhook | http | app | This component uses the HTTP / Webhook app.

Trigger Authentication

The HTTP / Webhook API does not require authentication.

About HTTP / Webhook

Get a unique URL where you can send HTTP or webhook requests

Action

Description: Create a job. [See the documentation](https://docs.databricks.com/api/workspace/jobs/create)
Version: 0.0.1
Key: databricks-create-job

Databricks Overview

The Databricks API allows you to interact programmatically with Databricks services, enabling you to manage clusters, jobs, notebooks, and other resources within Databricks environments. Through Pipedream, you can leverage these APIs to create powerful automations and integrate with other apps for enhanced data processing, transformation, and analytics workflows. This unlocks possibilities like automating cluster management, dynamically running jobs based on external triggers, and orchestrating complex data pipelines with ease.
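
Under the hood these are plain REST calls. A minimal sketch of the request this action wraps, assuming a workspace host and a personal access token stored in environment variables (both names are placeholders), using the documented POST /api/2.1/jobs/create endpoint:

// Create a Databricks job directly via the Jobs API
// DATABRICKS_HOST (e.g. https://<workspace>.cloud.databricks.com) and
// DATABRICKS_TOKEN are placeholder environment variable names
const res = await fetch(`${process.env.DATABRICKS_HOST}/api/2.1/jobs/create`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.DATABRICKS_TOKEN}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "weather_ingest_job",
    tasks: [
      {
        task_key: "weather_ocean_data",
        notebook_task: {
          notebook_path: "/Workspace/Users/sharky@databricks.com/weather_ingest",
        },
      },
    ],
  }),
});

const { job_id } = await res.json();
console.log("Created job", job_id);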

Action Code

import app from "../../databricks.app.mjs";
import utils from "../../common/utils.mjs";

export default {
  key: "databricks-create-job",
  name: "Create Job",
  description: "Create a job. [See the documentation](https://docs.databricks.com/api/workspace/jobs/create)",
  version: "0.0.1",
  type: "action",
  props: {
    app,
    tasks: {
      type: "string[]",
      label: "Tasks",
      description: `A list of task specifications to be executed by this job. JSON string format. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#tasks) for task specification details.

**Example:**
\`\`\`json
[
  {
    "notebook_task": {
      "notebook_path": "/Workspace/Users/sharky@databricks.com/weather_ingest"
    },
    "task_key": "weather_ocean_data"
  }
]
\`\`\`
      `,
    },
    name: {
      type: "string",
      label: "Job Name",
      description: "An optional name for the job",
      optional: true,
    },
    tags: {
      type: "object",
      label: "Tags",
      description: "A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags",
      optional: true,
    },
    jobClusters: {
      type: "string[]",
      label: "Job Clusters",
      description: `A list of job cluster specifications that can be shared and reused by tasks of this job. JSON string format. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#job_clusters) for job cluster specification details.

**Example:**
\`\`\`json
[
  {
    "job_cluster_key": "auto_scaling_cluster",
    "new_cluster": {
      "autoscale": {
        "max_workers": 16,
        "min_workers": 2
      },
      "node_type_id": null,
      "spark_conf": {
        "spark.speculation": true
      },
      "spark_version": "7.3.x-scala2.12"
    }
  }
]
\`\`\`
      `,
      optional: true,
    },
    emailNotifications: {
      type: "string",
      label: "Email Notifications",
      description: `An optional set of email addresses to notify when runs of this job begin, complete, or when the job is deleted. Specify as a JSON object with keys for each notification type. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#email_notifications) for details on each field.

**Example:**
\`\`\`json
{
  "on_start": ["user1@example.com"],
  "on_success": ["user2@example.com"],
  "on_failure": ["user3@example.com"],
  "on_duration_warning_threshold_exceeded": ["user4@example.com"],
  "on_streaming_backlog_exceeded": ["user5@example.com"]
}
\`\`\`
`,
      optional: true,
    },
    webhookNotifications: {
      type: "string",
      label: "Webhook Notifications",
      description: `A collection of system notification IDs to notify when runs of this job begin, complete, or encounter specific events. Specify as a JSON object with keys for each notification type. Each key accepts an array of objects with an \`id\` property (system notification ID). A maximum of 3 destinations can be specified for each property.

Supported keys:
- \`on_start\`: Notified when the run starts.
- \`on_success\`: Notified when the run completes successfully.
- \`on_failure\`: Notified when the run fails.
- \`on_duration_warning_threshold_exceeded\`: Notified when the run duration exceeds the specified threshold.
- \`on_streaming_backlog_exceeded\`: Notified when streaming backlog thresholds are exceeded.

[See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#webhook_notifications) for details.

**Example:**
\`\`\`json
{
  "on_success": [
    { "id": "https://eoiqkb8yzox6u2n.m.pipedream.net" }
  ],
  "on_failure": [
    { "id": "https://another-webhook-url.com/notify" }
  ]
}
\`\`\`
`,
      optional: true,
    },
    timeoutSeconds: {
      type: "integer",
      label: "Timeout Seconds",
      description: "An optional timeout applied to each run of this job. The default behavior is to have no timeout",
      optional: true,
    },
    schedule: {
      type: "string",
      label: "Schedule",
      description: `An optional periodic schedule for this job, specified as a JSON object. By default, the job only runs when triggered manually or via the API. The schedule object must include:

- \`quartz_cron_expression\` (**required**): A Cron expression using Quartz syntax that defines when the job runs. [See Cron Trigger details](https://docs.databricks.com/api/workspace/jobs/create#schedule).
- \`timezone_id\` (**required**): A Java timezone ID (e.g., "Europe/London") that determines the timezone for the schedule. [See Java TimeZone details](https://docs.databricks.com/api/workspace/jobs/create#schedule).
- \`pause_status\` (optional): Set to \`"UNPAUSED"\` (default) or \`"PAUSED"\` to control whether the schedule is active.

**Example:**
\`\`\`json
{
  "quartz_cron_expression": "0 0 12 * * ?",
  "timezone_id": "Asia/Ho_Chi_Minh",
  "pause_status": "UNPAUSED"
}
\`\`\`
`,
      optional: true,
    },
    maxConcurrentRuns: {
      type: "integer",
      label: "Max Concurrent Runs",
      description: "An optional maximum allowed number of concurrent runs of the job. Defaults to 1",
      optional: true,
    },
    gitSource: {
      type: "string",
      label: "Git Source",
      description: `An optional specification for a remote Git repository containing the source code used by tasks. Provide as a JSON string.

This enables version-controlled source code for notebook, dbt, Python script, and SQL File tasks. If \`git_source\` is set, these tasks retrieve files from the remote repository by default (can be overridden per task by setting \`source\` to \`WORKSPACE\`). **Note:** dbt and SQL File tasks require \`git_source\` to be defined. [See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#git_source) for more details.

**Fields:**
- \`git_url\` (**required**): URL of the repository to be cloned (e.g., "https://github.com/databricks/databricks-cli").
- \`git_provider\` (**required**): Service hosting the repository. One of: \`gitHub\`, \`bitbucketCloud\`, \`azureDevOpsServices\`, \`gitHubEnterprise\`, \`bitbucketServer\`, \`gitLab\`, \`gitLabEnterpriseEdition\`, \`awsCodeCommit\`.
- \`git_branch\`: Name of the branch to check out (cannot be used with \`git_tag\` or \`git_commit\`).
- \`git_tag\`: Name of the tag to check out (cannot be used with \`git_branch\` or \`git_commit\`).
- \`git_commit\`: Commit hash to check out (cannot be used with \`git_branch\` or \`git_tag\`).

**Example:**
\`\`\`json
{
  "git_url": "https://github.com/databricks/databricks-cli",
  "git_provider": "gitHub",
  "git_branch": "main"
}
\`\`\`
`,
      optional: true,
    },
    accessControlList: {
      type: "string[]",
      label: "Access Control List",
      description: `A list of permissions to set on the job, specified as a JSON array of objects. Each object can define permissions for a user, group, or service principal. 

Each object may include:
- \`user_name\`: Name of the user.
- \`group_name\`: Name of the group.
- \`service_principal_name\`: Application ID of a service principal.
- \`permission_level\`: Permission level. One of: \`CAN_MANAGE\`, \`IS_OWNER\`, \`CAN_MANAGE_RUN\`, \`CAN_VIEW\`.

**Example:**
\`\`\`json
[
  {
    "permission_level": "IS_OWNER",
    "user_name": "jorge.c@turing.com"
  },
  {
    "permission_level": "CAN_VIEW",
    "group_name": "data-scientists"
  }
]
\`\`\`
[See the API documentation](https://docs.databricks.com/api/workspace/jobs/create#access_control_list) for more details.`,
      optional: true,
    },
  },
  async run({ $ }) {
    const {
      app,
      tasks,
      name,
      tags,
      jobClusters,
      emailNotifications,
      webhookNotifications,
      timeoutSeconds,
      schedule,
      maxConcurrentRuns,
      gitSource,
      accessControlList,
    } = this;

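    // Props accepted as JSON strings (tasks, job clusters, notifications, etc.)
    // are parsed into objects/arrays before being sent to the Jobs API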
    const response = await app.createJob({
      $,
      data: {
        name,
        tags,
        tasks: utils.parseJsonInput(tasks),
        job_clusters: utils.parseJsonInput(jobClusters),
        email_notifications: utils.parseJsonInput(emailNotifications),
        webhook_notifications: utils.parseJsonInput(webhookNotifications),
        timeout_seconds: timeoutSeconds,
        schedule: utils.parseJsonInput(schedule),
        max_concurrent_runs: maxConcurrentRuns,
        git_source: utils.parseJsonInput(gitSource),
        access_control_list: utils.parseJsonInput(accessControlList),
      },
    });

    $.export("$summary", `Successfully created job with ID \`${response.job_id}\``);

    return response;
  },
};
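
Because the trigger exports the incoming HTTP body, you can reference it when configuring this action in the workflow builder. For instance, the Tasks prop could forward a field from the webhook payload to the notebook via base_parameters. A sketch, where run_date is a hypothetical field of your test payload:

[
  {
    "task_key": "weather_ocean_data",
    "notebook_task": {
      "notebook_path": "/Workspace/Users/sharky@databricks.com/weather_ingest",
      "base_parameters": {
        "run_date": "{{steps.trigger.event.body.run_date}}"
      }
    }
  }
]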

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

Label | Prop | Type | Description

Databricks | app | app
This component uses the Databricks app.

Tasks | tasks | string[]

A list of task specifications to be executed by this job. JSON string format. See the API documentation for task specification details.

Example:

[
  {
    "notebook_task": {
      "notebook_path": "/Workspace/Users/sharky@databricks.com/weather_ingest"
    },
    "task_key": "weather_ocean_data"
  }
]

Job Name | name | string

An optional name for the job

Tags | tags | object

A map of tags associated with the job. These are forwarded to the cluster as cluster tags for jobs clusters, and are subject to the same limitations as cluster tags

Job Clusters | jobClusters | string[]

A list of job cluster specifications that can be shared and reused by tasks of this job. JSON string format. See the API documentation for job cluster specification details.

Example:

[
  {
    "job_cluster_key": "auto_scaling_cluster",
    "new_cluster": {
      "autoscale": {
        "max_workers": 16,
        "min_workers": 2
      },
      "node_type_id": null,
      "spark_conf": {
        "spark.speculation": true
      },
      "spark_version": "7.3.x-scala2.12"
    }
  }
]

Email Notifications | emailNotifications | string

An optional set of email addresses to notify when runs of this job begin, complete, or when the job is deleted. Specify as a JSON object with keys for each notification type. See the API documentation for details on each field.

Example:

{
  "on_start": ["user1@example.com"],
  "on_success": ["user2@example.com"],
  "on_failure": ["user3@example.com"],
  "on_duration_warning_threshold_exceeded": ["user4@example.com"],
  "on_streaming_backlog_exceeded": ["user5@example.com"]
}

Webhook Notifications | webhookNotifications | string

A collection of system notification IDs to notify when runs of this job begin, complete, or encounter specific events. Specify as a JSON object with keys for each notification type. Each key accepts an array of objects with an id property (system notification ID). A maximum of 3 destinations can be specified for each property.

Supported keys:

  • on_start: Notified when the run starts.
  • on_success: Notified when the run completes successfully.
  • on_failure: Notified when the run fails.
  • on_duration_warning_threshold_exceeded: Notified when the run duration exceeds the specified threshold.
  • on_streaming_backlog_exceeded: Notified when streaming backlog thresholds are exceeded.

See the API documentation for details.

Example:

{
  "on_success": [
    { "id": "https://eoiqkb8yzox6u2n.m.pipedream.net" }
  ],
  "on_failure": [
    { "id": "https://another-webhook-url.com/notify" }
  ]
}

Timeout Seconds | timeoutSeconds | integer

An optional timeout applied to each run of this job. The default behavior is to have no timeout

Schedule | schedule | string

An optional periodic schedule for this job, specified as a JSON object. By default, the job only runs when triggered manually or via the API. The schedule object must include:

  • quartz_cron_expression (required): A Cron expression using Quartz syntax that defines when the job runs. See Cron Trigger details.
  • timezone_id (required): A Java timezone ID (e.g., "Europe/London") that determines the timezone for the schedule. See Java TimeZone details.
  • pause_status (optional): Set to "UNPAUSED" (default) or "PAUSED" to control whether the schedule is active.

Example:

{
  "quartz_cron_expression": "0 0 12 * * ?",
  "timezone_id": "Asia/Ho_Chi_Minh",
  "pause_status": "UNPAUSED"
}

Max Concurrent Runs | maxConcurrentRuns | integer

An optional maximum allowed number of concurrent runs of the job. Defaults to 1

Git Source | gitSource | string

An optional specification for a remote Git repository containing the source code used by tasks. Provide as a JSON string.

This enables version-controlled source code for notebook, dbt, Python script, and SQL File tasks. If git_source is set, these tasks retrieve files from the remote repository by default (can be overridden per task by setting source to WORKSPACE). Note: dbt and SQL File tasks require git_source to be defined. See the API documentation for more details.

Fields:

  • git_url (required): URL of the repository to be cloned (e.g., "https://github.com/databricks/databricks-cli").
  • git_provider (required): Service hosting the repository. One of: gitHub, bitbucketCloud, azureDevOpsServices, gitHubEnterprise, bitbucketServer, gitLab, gitLabEnterpriseEdition, awsCodeCommit.
  • git_branch: Name of the branch to check out (cannot be used with git_tag or git_commit).
  • git_tag: Name of the tag to check out (cannot be used with git_branch or git_commit).
  • git_commit: Commit hash to check out (cannot be used with git_branch or git_tag).

Example:

{
  "git_url": "https://github.com/databricks/databricks-cli",
  "git_provider": "gitHub",
  "git_branch": "main"
}

Access Control List | accessControlList | string[]

A list of permissions to set on the job, specified as a JSON array of objects. Each object can define permissions for a user, group, or service principal.

Each object may include:

  • user_name: Name of the user.
  • group_name: Name of the group.
  • service_principal_name: Application ID of a service principal.
  • permission_level: Permission level. One of: CAN_MANAGE, IS_OWNER, CAN_MANAGE_RUN, CAN_VIEW.

Example:

[
  {
    "permission_level": "IS_OWNER",
    "user_name": "jorge.c@turing.com"
  },
  {
    "permission_level": "CAN_VIEW",
    "group_name": "data-scientists"
  }
]

See the API documentation for more details.

Action Authentication

Databricks uses API keys for authentication. When you connect your Databricks account, Pipedream securely stores the keys so you can easily authenticate to Databricks APIs in both code and no-code steps.
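
In code steps, you can reference the connected account instead of pasting keys. A minimal sketch, assuming an app prop named databricks; the exact $auth field names (domain and access_token below) are assumptions, so confirm them against your connected account's fields in Pipedream:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    databricks: {
      type: "app",
      app: "databricks",
    },
  },
  async run({ $ }) {
    // $auth field names are assumptions -- confirm them for your account
    return await axios($, {
      url: `https://${this.databricks.$auth.domain}/api/2.1/jobs/list`,
      headers: {
        Authorization: `Bearer ${this.databricks.$auth.access_token}`,
      },
    });
  },
});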

About Databricks

Databricks is the lakehouse company, helping data teams solve the world’s toughest problems.

More Ways to Connect Databricks + HTTP / Webhook

Get Run Output with Databricks API on New Requests (Payload Only) from HTTP / Webhook API
Get Run Output with Databricks API on New Requests from HTTP / Webhook API
List Runs with Databricks API on New Requests (Payload Only) from HTTP / Webhook API
List Runs with Databricks API on New Requests from HTTP / Webhook API
Run Job Now with Databricks API on New Requests (Payload Only) from HTTP / Webhook API
New Requests from the HTTP / Webhook API

Get a URL and emit the full HTTP event on every request (including headers and query parameters). You can also configure the HTTP response code, body, and more.

New Requests (Payload Only) from the HTTP / Webhook API

Get a URL and emit the HTTP body as an event on every request.

New event when the content of the URL changes from the HTTP / Webhook API

Emit a new event when the content of the URL changes.

Send any HTTP Request with the HTTP / Webhook API

Send an HTTP request using any method and URL. Optionally configure query string parameters, headers, and basic auth.

Send GET Request with the HTTP / Webhook API

Send an HTTP GET request to any URL. Optionally configure query string parameters, headers, and basic auth.

Send POST Request with the HTTP / Webhook API

Send an HTTP POST request to any URL. Optionally configure query string parameters, headers, and basic auth.

Send PUT Request with the HTTP / Webhook API

Send an HTTP PUT request to any URL. Optionally configure query string parameters, headers, and basic auth.

Return HTTP Response with the HTTP / Webhook API

Use with an HTTP trigger that uses "Return a custom response from your workflow" as its HTTP Response.

Explore Other Apps

1-24 of 2,800+ apps, by most popular:

HTTP / Webhook: Get a unique URL where you can send HTTP or webhook requests.
Node: Anything you can do with Node.js, you can do in a Pipedream workflow. This includes using most of npm's 400,000+ packages.
Python: Anything you can do in Python can be done in a Pipedream workflow. This includes using any of the 350,000+ PyPI packages available in your Python-powered workflows.
Schedule: Trigger workflows on an interval or cron schedule.
Pipedream Utils: Utility functions to use within your Pipedream workflows.
Notion: Notion is a new tool that blends your everyday work apps into one. It's the all-in-one workspace for you and your team.
OpenAI (ChatGPT): OpenAI is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity. They are the makers of popular models like ChatGPT, DALL-E, and Whisper.
Anthropic (Claude): AI research and products that put safety at the frontier. Introducing Claude, a next-generation AI assistant for your tasks, no matter the scale.
Google Sheets: Use Google Sheets to create and edit online spreadsheets. Get insights together with secure sharing in real-time and from any device.
Telegram: Telegram is a cloud-based, cross-platform, encrypted instant messaging (IM) service.
Google Drive: Google Drive is a file storage and synchronization service which allows you to create and share your work online, and access your documents from anywhere.
Google Calendar: With Google Calendar, you can quickly schedule meetings and events and get reminders about upcoming activities, so you always know what's next.
Shopify: Shopify is a complete commerce platform that lets anyone start, manage, and grow a business. You can use Shopify to build an online store, manage sales, market to customers, and accept payments in digital and physical locations.
Supabase: Supabase is an open source Firebase alternative.
MySQL: MySQL is an open-source relational database management system.
PostgreSQL: PostgreSQL is a free and open-source relational database management system emphasizing extensibility and SQL compliance.
AWS (Premium): Amazon Web Services (AWS) offers reliable, scalable, and inexpensive cloud computing services.
Twilio SendGrid (Premium): Send marketing and transactional email through the Twilio SendGrid platform with the Email API, proprietary mail transfer agent, and infrastructure for scalable delivery.
Amazon SES: Amazon SES is a cloud-based email service provider that can integrate into any application for high-volume email automation.
Klaviyo (Premium): Email marketing and SMS marketing platform.
Zendesk (Premium): Zendesk is award-winning customer service software trusted by 200K+ customers. Make customers happy via text, mobile, phone, email, live chat, social media.
ServiceNow (Premium): The smarter way to workflow.
Slack: Slack is a channel-based messaging platform. With Slack, people can work together more effectively, connect all their software tools and services, and find the information they need to do their best work, all within a secure, enterprise-grade environment.
Microsoft Teams: Microsoft Teams has communities, events, chats, channels, meetings, storage, tasks, and calendars in one place.