
Run Job Now with Databricks API on New Video Created from Plainly API

Pipedream makes it easy to connect APIs for Databricks, Plainly and 3,000+ other apps remarkably fast.

Trigger workflow on: New Video Created from the Plainly API
Next, do this: Run Job Now with the Databricks API

Trusted by 1,000,000+ developers from startups to Fortune 500 companies



Getting Started

This integration creates a workflow with a Plainly trigger and Databricks action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Video Created trigger
    1. Connect your Plainly account
    2. Configure timer
    3. Select a Brand ID
    4. Select an Article ID
  3. Configure the Run Job Now action
    1. Connect your Databricks account
    2. Select a Job
    3. Optional: Configure JAR Params
    4. Optional: Configure Notebook Params
    5. Optional: Configure Python Params
    6. Optional: Configure Spark Submit Params
    7. Optional: Configure Idempotency Token
  4. Deploy the workflow
  5. Send a test event to validate your setup (see the code-step sketch after these steps)
  6. Turn on the trigger
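
Once deployed, any step after the trigger can read the emitted event. The sketch below is a minimal Pipedream Node.js code step using the standard steps.trigger.event and $.export APIs; the event itself is the Plainly video object the trigger emits.

export default defineComponent({
  async run({ steps, $ }) {
    // The Plainly trigger's payload is exposed to downstream steps
    // as steps.trigger.event; here it is the emitted video object.
    const video = steps.trigger.event;
    $.export("$summary", `Received video ${video.id}`);
    return video;
  },
});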

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit new event when a video is created or uploaded in Plainly.
Version: 0.0.1
Key: plainly-new-video-created

Trigger Code

import common from "../common/base.mjs";

export default {
  ...common,
  key: "plainly-new-video-created",
  name: "New Video Created",
  description: "Emit new event when a video is created or uploaded in Plainly.",
  version: "0.0.1",
  type: "source",
  dedupe: "unique",
  props: {
    ...common.props,
    brandId: {
      propDefinition: [
        common.props.plainly,
        "brandId",
      ],
    },
    articleId: {
      propDefinition: [
        common.props.plainly,
        "articleId",
        (c) => ({
          brandId: c.brandId,
        }),
      ],
    },
  },
  methods: {
    ...common.methods,
    // Tell the shared polling base which Plainly API call to poll.
    getResourceFn() {
      return this.plainly.listVideos;
    },
    // Arguments for that call, built from the configured props.
    getArgs() {
      return {
        brandId: this.brandId,
        articleId: this.articleId,
      };
    },
    // The video list is fetched in a single request, without pagination.
    usePagination() {
      return false;
    },
    // Metadata Pipedream uses to dedupe events (by id) and label them.
    generateMeta(item) {
      return {
        id: item.id,
        summary: `New Video Created with ID: ${item.id}`,
        ts: Date.parse(item.createdDate),
      };
    },
  },
};
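
The source spreads a shared polling base from ../common/base.mjs, which supplies the plainly, db, and timer props and the polling run loop. That module isn't shown on this page, so the following is a hypothetical reconstruction of the pattern, inferred from the methods the source overrides (getResourceFn, getArgs, usePagination, generateMeta); the real shared module in Pipedream's components repo may differ.

import plainly from "../../plainly.app.mjs";
import { DEFAULT_POLLING_SOURCE_TIMER_INTERVAL } from "@pipedream/platform";

export default {
  props: {
    plainly,
    db: "$.service.db",
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: DEFAULT_POLLING_SOURCE_TIMER_INTERVAL,
      },
    },
  },
  async run() {
    // On each timer tick, fetch the current items and emit one event per
    // item; Pipedream drops duplicates by meta.id since dedupe is "unique".
    const resourceFn = this.getResourceFn();
    const items = await resourceFn(this.getArgs());
    for (const item of items) {
      this.$emit(item, this.generateMeta(item));
    }
  },
};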

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| Plainly | plainly | app | This component uses the Plainly app. |
| N/A | db | $.service.db | This component uses $.service.db to maintain state between executions. |
| N/A | timer | $.interface.timer | |
| Brand ID | brandId | string | Select a value from the drop down menu. |
| Article ID | articleId | string | Select a value from the drop down menu. |

Trigger Authentication

Plainly uses API keys for authentication. When you connect your Plainly account, Pipedream securely stores the keys so you can easily authenticate to Plainly APIs in both code and no-code steps.
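
For custom code steps, the stored key is available on the connected app prop. The sketch below shows the general shape; the endpoint URL, the basic-auth scheme (API key as username, empty password), and the api_key field name are assumptions about Plainly's API rather than details confirmed on this page, so check Plainly's API docs before using it.

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    // Connects the same Plainly account used by the trigger above.
    plainly: {
      type: "app",
      app: "plainly",
    },
  },
  async run({ $ }) {
    // Assumed endpoint and auth scheme; adjust per Plainly's documentation.
    return await axios($, {
      url: "https://api.plainlyvideos.com/api/v2/projects",
      auth: {
        username: this.plainly.$auth.api_key,
        password: "",
      },
    });
  },
});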

About Plainly

Automate video creation

Action

Description: Run a job now and return the id of the triggered run. [See the documentation](https://docs.databricks.com/en/workflows/jobs/jobs-2.0-api.html#runs-list)
Version: 0.0.6
Key: databricks-run-job-now

Databricks Overview

The Databricks API allows you to interact programmatically with Databricks services, enabling you to manage clusters, jobs, notebooks, and other resources within Databricks environments. Through Pipedream, you can leverage these APIs to create powerful automations and integrate with other apps for enhanced data processing, transformation, and analytics workflows. This unlocks possibilities like automating cluster management, dynamically running jobs based on external triggers, and orchestrating complex data pipelines with ease.
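
The action below wraps the Jobs API run-now endpoint. For orientation, here is a plain Node.js sketch of the equivalent direct REST call, based on the Jobs API 2.0 documentation linked above; the host and token environment variables are placeholders you would supply yourself.

// POST /api/2.0/jobs/run-now launches a run and returns its run_id.
// DATABRICKS_HOST and DATABRICKS_TOKEN are placeholder env vars.
const response = await fetch(
  `https://${process.env.DATABRICKS_HOST}/api/2.0/jobs/run-now`,
  {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.DATABRICKS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      job_id: 123,
      notebook_params: {
        name: "john doe",
      },
    }),
  },
);
const { run_id } = await response.json();
console.log(`Triggered run ${run_id}`);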

Action Code

import databricks from "../../databricks.app.mjs";

export default {
  key: "databricks-run-job-now",
  name: "Run Job Now",
  description: "Run a job now and return the id of the triggered run. [See the documentation](https://docs.databricks.com/en/workflows/jobs/jobs-2.0-api.html#runs-list)",
  version: "0.0.6",
  annotations: {
    destructiveHint: false,
    openWorldHint: true,
    readOnlyHint: false,
  },
  type: "action",
  props: {
    databricks,
    jobId: {
      propDefinition: [
        databricks,
        "jobId",
      ],
    },
    jarParams: {
      type: "string[]",
      label: "JAR Params",
      description: "A list of parameters for jobs with JAR tasks, e.g. \"jar_params\": [\"john doe\", \"35\"]. The parameters will be used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it will default to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (i.e. {\"jar_params\":[\"john doe\",\"35\"]}) cannot exceed 10,000 bytes.",
      optional: true,
    },
    notebookParams: {
      type: "object",
      label: "Notebook Params",
      description: "A map from keys to values for jobs with notebook task, e.g. \"notebook_params\": {\"name\": \"john doe\", \"age\":  \"35\"}. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job’s base parameters. You cannot specify notebook_params in conjunction with jar_params. The JSON representation of this field (i.e. {\"notebook_params\":{\"name\":\"john doe\",\"age\":\"35\"}}) cannot exceed 10,000 bytes.",
      optional: true,
    },
    pythonParams: {
      type: "string[]",
      label: "Python Params",
      description: "A list of parameters for jobs with Python tasks, e.g. \"python_params\": [\"john doe\", \"35\"]. The parameters will be passed to Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (i.e. {\"python_params\":[\"john doe\",\"35\"]}) cannot exceed 10,000 bytes.",
      optional: true,
    },
    sparkSubmitParams: {
      type: "string[]",
      label: "Spark Submit Params",
      description: "A list of parameters for jobs with spark submit task, e.g. \"spark_submit_params\": [\"--class\", \"org.apache.spark.examples.SparkPi\"]. The parameters will be passed to spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field cannot exceed 10,000 bytes.",
      optional: true,
    },
    idempotencyToken: {
      type: "string",
      label: "Idempotency Token",
      description: "An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html).",
      optional: true,
    },
  },
  async run({ $ }) {
    const response = await this.databricks.runJobNow({
      data: {
        job_id: this.jobId,
        jar_params: this.jarParams,
        notebook_params: this.notebookParams,
        python_params: this.pythonParams,
        spark_submit_params: this.sparkSubmitParams,
        idempotency_token: this.idempotencyToken,
      },
      $,
    });

    if (response) {
      $.export("$summary", `Successfully initiated job with ID ${this.jobId}.`);
    }

    return response;
  },
};
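
The idempotencyToken prop is what makes retries safe: if a request times out after Databricks has already launched the run, retrying with the same token returns the existing run's ID instead of starting a duplicate. A sketch of that pattern follows; runJobNow here is an illustrative stand-in for any wrapper around the run-now call, not a function this component exports.

import { randomUUID } from "node:crypto";

// Illustrative retry loop around a hypothetical runJobNow wrapper.
async function runJobWithRetry(runJobNow, jobId, maxAttempts = 3) {
  // Reuse one token across attempts so Databricks launches at most one run.
  const idempotencyToken = randomUUID();
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await runJobNow({ jobId, idempotencyToken });
    } catch (err) {
      if (attempt === maxAttempts) throw err;
    }
  }
}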

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| Databricks | databricks | app | This component uses the Databricks app. |
| Job | jobId | string | Select a value from the drop down menu. |
| JAR Params | jarParams | string[] | A list of parameters for jobs with JAR tasks, e.g. "jar_params": ["john doe", "35"]. The parameters will be used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it will default to an empty list. jar_params cannot be specified in conjunction with notebook_params. The JSON representation of this field (i.e. {"jar_params":["john doe","35"]}) cannot exceed 10,000 bytes. |
| Notebook Params | notebookParams | object | A map from keys to values for jobs with notebook task, e.g. "notebook_params": {"name": "john doe", "age": "35"}. The map is passed to the notebook and is accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job’s base parameters. You cannot specify notebook_params in conjunction with jar_params. The JSON representation of this field (i.e. {"notebook_params":{"name":"john doe","age":"35"}}) cannot exceed 10,000 bytes. |
| Python Params | pythonParams | string[] | A list of parameters for jobs with Python tasks, e.g. "python_params": ["john doe", "35"]. The parameters will be passed to Python file as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field (i.e. {"python_params":["john doe","35"]}) cannot exceed 10,000 bytes. |
| Spark Submit Params | sparkSubmitParams | string[] | A list of parameters for jobs with spark submit task, e.g. "spark_submit_params": ["--class", "org.apache.spark.examples.SparkPi"]. The parameters will be passed to spark-submit script as command-line parameters. If specified upon run-now, it would overwrite the parameters specified in job setting. The JSON representation of this field cannot exceed 10,000 bytes. |
| Idempotency Token | idempotencyToken | string | An optional token to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the existing run instead. If a run with the provided token is deleted, an error is returned. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one run is launched with that idempotency token. This token must have at most 64 characters. For more information, see [How to ensure idempotency for jobs](https://kb.databricks.com/jobs/jobs-idempotency.html). |
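
To make the mutual-exclusion rule concrete, here are the two request shapes side by side, using the example values from the descriptions above; the job IDs are placeholders.

// Notebook task: notebook_params only (cannot be combined with jar_params).
const notebookRun = {
  job_id: 123,
  notebook_params: {
    name: "john doe",
    age: "35",
  },
};

// JAR task: jar_params only (cannot be combined with notebook_params).
const jarRun = {
  job_id: 456,
  jar_params: ["john doe", "35"],
};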

Action Authentication

Databricks uses API keys for authentication. When you connect your Databricks account, Pipedream securely stores the keys so you can easily authenticate to Databricks APIs in both code and no-code steps.

About Databricks

Databricks is the lakehouse company, helping data teams solve the world’s toughest problems.

More Ways to Connect Databricks + Plainly

Get Run Output with Databricks API on New Render Completed from Plainly API
List Runs with Databricks API on New Render Completed from Plainly API
Run Job Now with Databricks API on New Render Completed from Plainly API
Get Run Output with Databricks API on New Render Failed from Plainly API
List Runs with Databricks API on New Render Failed from Plainly API
New Render Completed from the Plainly API
Emit new event when a video render job finishes successfully.

New Render Failed from the Plainly API
Emit new event when a video render fails.

New Video Created from the Plainly API
Emit new event when a video is created or uploaded in Plainly.

Create Render Job with the Plainly API
Creates a render job for a video template. See the documentation.

Get Render Status with the Plainly API
Retrieves the current status of a render job in Plainly. See the documentation.

List Templates with the Plainly API
Fetches a list of available video templates in a project in the user's Plainly account. See the documentation.

Cancel All Runs with the Databricks API
Cancel all active runs for a job. The runs are canceled asynchronously, so it doesn't prevent new runs from being started. See the documentation.

Cancel Run with the Databricks API
Cancel a job run. The run is canceled asynchronously, so it may still be running when this request completes. See the documentation.
