
Upload Media Asset with Cloudinary API on New Row from Snowflake API

Pipedream makes it easy to connect APIs for Cloudinary, Snowflake and 2,400+ other apps remarkably fast.

Trigger workflow on
New Row from the Snowflake API
Next, do this
Upload Media Asset with the Cloudinary API
No credit card required

Trusted by 1,000,000+ developers from startups to Fortune 500 companies

Adyen logo
Appcues logo
Bandwidth logo
Checkr logo
ChartMogul logo
Dataminr logo
Gopuff logo
Gorgias logo
LinkedIn logo
Logitech logo
Replicated logo
Rudderstack logo
SAS logo
Scale AI logo
Webflow logo
Warner Bros. logo


Getting Started

This integration creates a workflow with a Snowflake trigger and Cloudinary action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Row trigger
    1. Connect your Snowflake account
    2. Configure timer
    3. Select a Database
    4. Select a Schema
    5. Select a Table Name
    6. Select a Unique Key
    7. Optional- Configure Emit individual events
  3. Configure the Upload Media Asset action
    1. Connect your Cloudinary account
    2. Configure File
    3. Optional- Configure Public Id
    4. Optional- Configure Folder
    5. Optional- Configure Use Filename
    6. Optional- Configure Unique Filename
    7. Optional- Select a Resource Type
    8. Optional- Select a Type
    9. Optional- Configure Access Control
    10. Optional- Select an Access Mode
    11. Optional- Configure Discard Original Filename
    12. Optional- Configure Overwrite
    13. Optional- Configure Tags
    14. Optional- Configure Context
    15. Optional- Configure Colors
    16. Optional- Configure Faces
    17. Optional- Configure Quality Analysis
    18. Optional- Configure Accessibility Analysis
    19. Optional- Configure Cinemagraph Analysis
    20. Optional- Configure Image Metadata
    21. Optional- Configure pHash
    22. Optional- Configure Responsive Breakpoints
    23. Optional- Configure Auto Tagging
    24. Optional- Configure Categorization
    25. Optional- Configure Detection
    26. Optional- Configure OCR
    27. Optional- Configure Eager
    28. Optional- Configure Eager Async
    29. Optional- Configure Eager Notification URL
    30. Optional- Configure Transformation
    31. Optional- Configure Format
    32. Optional- Configure Custom Coordinates
    33. Optional- Configure Face Coordinates
    34. Optional- Configure Background Removal
    35. Optional- Configure Raw Convert
    36. Optional- Configure Allowed Formats
    37. Optional- Configure Async
    38. Optional- Configure Backup
    39. Optional- Configure Eval
    40. Optional- Configure Headers
    41. Optional- Configure Invalidate
    42. Optional- Configure Moderation
    43. Optional- Configure Notification URL
    44. Optional- Configure Proxy
    45. Optional- Configure Return Deleted Token
  4. Deploy the workflow
  5. Send a test event to validate your setup
  6. Turn on the trigger

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit new event when a row is added to a table
Version: 0.2.2
Key: snowflake-new-row

Snowflake Overview

Snowflake offers a cloud database and related tools to help developers create robust, secure, and scalable data warehouses. See Snowflake's Key Concepts & Architecture.

Getting Started

1. Create a user, role and warehouse in Snowflake

Snowflake recommends you create a new user, role, and warehouse when you integrate a third-party tool like Pipedream. This way, you can control permissions via the user / role, and separate Pipedream compute and costs with the warehouse. You can do this directly in the Snowflake UI.

We recommend you create a read-only account if you only need to query Snowflake. If you need to insert data into Snowflake, add permissions on the appropriate objects after you create your user.

2. Enter those details in Pipedream

Visit https://pipedream.com/accounts. Click the button to Connect an App. Enter the required Snowflake account data.

You'll only need to connect your account once in Pipedream. You can connect this account to multiple workflows to run queries against Snowflake, insert data, and more.

3. Build your first workflow

Visit https://pipedream.com/new to build your first workflow. Pipedream workflows let you connect Snowflake with 2,400+ other apps. You can trigger workflows on Snowflake queries and send the results to Slack, Google Sheets, or any app that exposes an API. Or you can accept data from another app, transform it with Python, Node.js, Go or Bash code, and insert it into Snowflake.
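For example, here is a minimal sketch of a Pipedream Node.js code step that runs a query against your connected Snowflake account using the snowflake-sdk package. The $auth field names used below (account, username, password, warehouse, database, schema) are assumptions about how the connected account exposes its credentials, so adjust them to match your connection settings.

import snowflake from "snowflake-sdk";

export default defineComponent({
  props: {
    // Reuse the Snowflake account you connected at https://pipedream.com/accounts
    snowflake: {
      type: "app",
      app: "snowflake",
    },
  },
  async run({ $ }) {
    const auth = this.snowflake.$auth;
    // Open a connection with the connected account's credentials
    const connection = snowflake.createConnection({
      account: auth.account,
      username: auth.username,
      password: auth.password,
      warehouse: auth.warehouse,
      database: auth.database,
      schema: auth.schema,
    });
    await new Promise((resolve, reject) => {
      connection.connect((err) => (err ? reject(err) : resolve()));
    });
    // Run a simple query and return the rows to the next step
    const rows = await new Promise((resolve, reject) => {
      connection.execute({
        sqlText: "SELECT CURRENT_TIMESTAMP() AS NOW",
        complete: (err, _stmt, rows) => (err ? reject(err) : resolve(rows)),
      });
    });
    $.export("$summary", `Returned ${rows.length} row(s)`);
    return rows;
  },
});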

Learn more at Pipedream University.

Trigger Code

import common from "../common-table-scan.mjs";

export default {
  ...common,
  type: "source",
  key: "snowflake-new-row",
  name: "New Row",
  description: "Emit new event when a row is added to a table",
  version: "0.2.2",
  methods: {
    ...common.methods,
    async getStatement(lastResultId) {
      const sqlText = `
        SELECT *
        FROM IDENTIFIER(:1)
        WHERE ${this.uniqueKey} > :2
        ORDER BY ${this.uniqueKey} ASC
      `;
      const binds = [
        this.tableName,
        lastResultId,
      ];
      return {
        sqlText,
        binds,
      };
    },
  },
};
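The statement above relies on Snowflake bind variables: IDENTIFIER(:1) is bound to the table name and :2 to the last unique key value seen, while the unique key column itself is interpolated into the WHERE and ORDER BY clauses. The shared common-table-scan.mjs module (not shown here) is responsible for executing the statement and emitting events; a rough, hypothetical sketch of that execution step, assuming a snowflake-sdk connection, might look like this:

// Hypothetical sketch only; the real common-table-scan.mjs in Pipedream's
// component repo handles connection setup, deduplication, and event emission.
async function collectNewRows(connection, statement) {
  // statement is the { sqlText, binds } object returned by getStatement()
  const rows = await new Promise((resolve, reject) => {
    connection.execute({
      ...statement,
      complete: (err, _stmt, rows) => (err ? reject(err) : resolve(rows)),
    });
  });
  // Each row becomes its own emitted event, or one batch event,
  // depending on the Emit individual events setting
  return rows;
}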

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
Label | Prop | Type | Description
Snowflake | snowflake | app | This component uses the Snowflake app.
N/A | db | $.service.db | This component uses $.service.db to maintain state between executions.
N/A | timer | $.interface.timer | Watch for changes on this schedule
Database | database | string | Select a value from the drop down menu.
Schema | schema | string | Select a value from the drop down menu.
Table Name | tableName | string | Select a value from the drop down menu.
Unique Key | uniqueKey | string | Select a value from the drop down menu.
Emit individual events | emitIndividualEvents | boolean | Defaults to true, triggering workflows on each record in the result set. Set to false to emit records in batch (advanced).

Trigger Authentication

Snowflake uses API keys for authentication. When you connect your Snowflake account, Pipedream securely stores the keys so you can easily authenticate to Snowflake APIs in both code and no-code steps.

Snowflake recommends you create a new user, role, and warehouse when you integrate a third-party tool like Pipedream. This way, you can control permissions via the user / role, and separate Pipedream compute and costs with the warehouse. You can do this directly in the Snowflake UI.

We recommend you create a read-only account if you only need to query Snowflake. If you need to insert data into Snowflake, add permissions on the appropriate objects after you create your user.

About Snowflake

A data warehouse built for the cloud

Action

Description: Uploads media assets in the cloud such as images or videos, and allows configuration options to be set on the upload. [See the documentation](https://cloudinary.com/documentation/image_upload_api_reference#upload_method)
Version: 0.5.3
Key: cloudinary-upload-media-asset

Cloudinary Overview

The Cloudinary API empowers developers to manage media assets in the cloud with ease. It allows for uploading, storing, optimizing, and delivering images and videos with automated transformations to ensure the content is tailored for any device or platform. This API's versatility is key for automating workflows that require dynamic media handling, such as resizing images on-the-fly, converting video formats, or even extracting metadata for asset management.
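Outside of this pre-built action, the same upload can be performed directly with Cloudinary's Node.js SDK. The sketch below is illustrative only: it assumes you supply your own credentials through environment variables you define (CLOUDINARY_CLOUD_NAME, CLOUDINARY_API_KEY, CLOUDINARY_API_SECRET), and the options shown mirror a few of the parameters documented for this action.

import { v2 as cloudinary } from "cloudinary";

// Credentials come from your Cloudinary console; here they are read from
// environment variables you would set yourself.
cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});

// Upload a remote image, storing it in a folder and tagging it
const result = await cloudinary.uploader.upload("https://example.com/sample.jpg", {
  folder: "snowflake-rows",
  tags: ["pipedream", "example"],
  use_filename: true,
  unique_filename: false,
});

console.log(result.public_id, result.secure_url);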

Action Code

import cloudinary from "../../cloudinary.app.mjs";

export default {
  key: "cloudinary-upload-media-asset",
  name: "Upload Media Asset",
  description: "Uploads media assets in the cloud such as images or videos, and allows configuration options to be set on the upload. [See the documentation](https://cloudinary.com/documentation/image_upload_api_reference#upload_method)",
  version: "0.5.3",
  type: "action",
  props: {
    cloudinary,
    file: {
      type: "string",
      label: "File",
      description: "The file to upload. It can be:\n* a local file path\n* the actual data (byte array buffer).\nFor example, this could be an IO input stream of the data (e.g., File.open(file, \"rb\")).\n* the Data URI (Base64 encoded), max ~60 MB (62,910,000 chars)\n* the remote FTP, HTTP or HTTPS URL address of an existing file\n* a private storage bucket (S3 or Google Storage) URL of a **whitelisted** bucket\nFor details and examples, see: [file source options](https://cloudinary.com/documentation/upload_images#file_source_options).",
    },
    publicId: {
      type: "string",
      label: "Public Id",
      description: "The identifier that is used for accessing the uploaded asset. The Public ID may contain a full path including folders separated by a slash (`/`).\nIf not specified, then the Public ID of the asset will either be comprised of random characters or will use the original file's filename, depending whether `use_filename` was set to true.\n\n**Note**: The Public ID value for images and videos should not include a file extension. Include the file extension for `raw` files only.",
      optional: true,
    },
    folder: {
      type: "string",
      label: "Folder",
      description: "An optional folder name where the uploaded asset will be stored. The public ID contains the full path of the uploaded asset, including the folder name.",
      optional: true,
    },
    useFilename: {
      type: "boolean",
      label: "Use Filename",
      description: "Whether to use the original file name of the uploaded asset. Relevant only if the `public_id` parameter isn't set.\nWhen false and the `public_id` parameter is also not defined, the Public ID will be comprised of random characters.\n\nWhen true and the `public_id` parameter is not defined, the uploaded file's original filename becomes the Public ID. Random characters are appended to the filename value to ensure Public ID uniqueness if `unique_filename` is true.\n\nDefault: `false`.",
      optional: true,
    },
    uniqueFilename: {
      type: "boolean",
      label: "Unique Filename",
      description: "When set to false, does not add random characters at the end of the filename that guarantee its uniqueness. In this case, if the `overwrite` parameter is also false, the upload returns an error. This parameter is relevant only if `use_filename` is also set to true. Default: `true`.",
      optional: true,
    },
    resourceType: {
      propDefinition: [
        cloudinary,
        "uploadResourceType",
      ],
    },
    type: {
      propDefinition: [
        cloudinary,
        "uploadDeliveryType",
      ],
    },
    accessControl: {
      type: "boolean",
      label: "Access Control",
      description: "An array of access types for the asset. The asset is accessible as long as one of the access types is valid.\nPossible values for each access type:\n\n- `token` requires either [Token-based authentication](https://cloudinary.com/documentation/control_access_to_media#token_based_authentication_premium_feature) or [Cookie-based authentication](https://cloudinary.com/documentation/control_access_to_media#cookie_based_authentication_premium_feature) for accessing the asset.\nFor example: `access_type: \"token\"`\n- `anonymous` allows public access to the asset. The anonymous access type can optionally include `start` and/or `end` dates (in ISO 8601 format) that define when the asset is publically available. Note that you can only include a single 'anonymous' access type. For example:\n`access_type: \"anonymous\", start: \"2017-12-15T12:00Z\", end: \"2018-01-20T12:00Z\"`",
      optional: true,
    },
    accessMode: {
      propDefinition: [
        cloudinary,
        "accessMode",
      ],
    },
    discardOriginalFilename: {
      type: "boolean",
      label: "Discard Original Filename",
      description: "Whether to discard the name of the original uploaded file. Relevant when delivering assets as attachments (setting the `flag` transformation parameter to `attachment`). Default: `false`.",
      optional: true,
    },
    overwrite: {
      type: "boolean",
      label: "Overwrite",
      description: "Whether to overwrite existing assets with the same public ID. When set to false, return immediately if an asset with the same Public ID was found. Default: `true` (when using unsigned upload, the default is false and cannot be changed to true).\n**Important**: Depending on the settings for your account, overwriting an asset may clear the tags, contextual, and structured metadata values for that asset. If you prefer these values to always be preserved on overwrite (unless other values are specifically set when uploading the new version), you can [submit a request](https://support.cloudinary.com/hc/en-us/requests/new) to change this behavior for your account.",
      optional: true,
    },
    tags: {
      type: "any",
      label: "Tags",
      description: "An array of tag names to assign to the uploaded asset for later group reference.",
      optional: true,
    },
    context: {
      type: "object",
      label: "Context",
      description: "A map of the key-value pairs of general textual context metadata to attach to an uploaded asset. The context values of uploaded files can be retrieved using the Admin API. For example: `alt=My image?caption=Profile image` (the `=` and `?` characters can be supported as values when escaped with a prepended backslash (`\\`)). Note that key values are limited to 1024 characters and an asset can have a maximum of 1000 context key-value pairs.",
      optional: true,
    },
    colors: {
      type: "boolean",
      label: "Colors",
      description: "Whether to retrieve predominant colors & color histogram of the uploaded image.\n**Note:** If all returned colors are opaque, then 6-digit RGB hex values are returned. If one or more colors contain an alpha channel, then 8-digit RGBA hex quadruplet values are returned.\nDefault: `false`. Relevant for images only.",
      optional: true,
    },
    faces: {
      type: "boolean",
      label: "Faces",
      description: "Whether to return the coordinates of faces contained in an uploaded image (automatically detected or manually defined). Each face is specified by the X & Y coordinates of the top left corner and the width & height of the face. The coordinates for each face are returned as an array (using the SDKs), and individual faces are separated with a pipe (`?`). For example: `10,20,150,130?213,345,82,61`.\nDefault: `false`. Relevant for images only.",
      optional: true,
    },
    qualityAnalysis: {
      type: "boolean",
      label: "Quality Analysis",
      description: "Whether to return a quality analysis value for the image between 0 and 1, where 0 means the image is blurry and out of focus and 1 means the image is sharp and in focus. Default: `false`. Relevant for images only.\nPaid customers can [request to take part](https://support.cloudinary.com/hc/en-us/requests/new) in the extended quality analysis Beta trial. When activated, this parameter returns quality scores for various other factors in addition to `focus`, such as `jpeg_quality`, `noise`, `exposure`, `lighting` and `resolution`, together with an overall weighted `quality_score`. The `quality_score`, `color_quality_score` and `pixel_quality_score` fields can be used in the Search API.",
      optional: true,
    },
    accessibilityAnalysis: {
      type: "boolean",
      label: "Accessibility Analysis",
      description: "Currently available only to paid customers [requesting to take part](https://support.cloudinary.com/hc/en-us/requests/new) in the [accessibility analysis](https://cloudinary.com/documentation/analysis_on_upload#accessibility_analysis) Beta trial. Set to `true` to return accessibility analysis values for the image and to enable the `colorblind_accessibility_score` field to be used in the Search API.\nDefault: `false`. Relevant for images only.",
      optional: true,
    },
    cinemagraphAnalysis: {
      type: "boolean",
      label: "Cinemagraph Analysis",
      description: "Whether to return a cinemagraph analysis value for the media asset between 0 and 1, where 0 means the asset is **not** a cinemagraph and 1 means the asset **is** a cinemagraph. Default: `false`. Relevant for animated images and video only. A static image will return 0.",
      optional: true,
    },
    imageMetadata: {
      type: "string",
      label: "Image Metadata",
      description: "Whether to return IPTC, XMP, and detailed Exif metadata of the uploaded asset in the response.\nDefault: `false`. Supported for images, video, and audio.\nReturned metadata for images includes: `PixelsPerUnitX`, `PixelsPerUnitY`, `PixelUnits`, `Colorspace`, and `DPI`.\nReturned metadata for audio and video includes: `audio_codec`, `audio_bit_rate`, `audio_frequency`, `channels`, `channel_layout`.\nAdditional metadata for video includes: `pix_format`, `codec`, `level`, `profile`, `video_bit_rate`, `dar`.",
      optional: true,
    },
    phash: {
      type: "boolean",
      label: "pHash",
      description: "Whether to return the perceptual hash (pHash) on the uploaded image. The pHash acts as a fingerprint that allows checking image similarity.\nDefault: `false`. Relevant for images only.",
      optional: true,
    },
    responsiveBreakpoints: {
      type: "object",
      label: "Responsive Breakpoints",
      description: "Requests that Cloudinary automatically find the best breakpoints. The parameter value is an array of breakpoint request settings, where each request setting can include the following parameters:\n* `create_derived`(Boolean - Required) If true, create and keep the derived images of the selected breakpoints during the API call. If false, images * generated during the analysis process are thrown away.\n* `format` (String - Optional) Sets the file extension of the derived resources to the format indicated (as opposed to changing the format as part of a transformation - which would be included as part of the transformation component (e.g., f_jpg)).\n* `transformation` (String - Optional) The base transformation to first apply to the image before finding the best breakpoints. The API accepts a string representation of a chained transformation (same as the regular transformation parameter of the upload API).\n* `max_width` (Integer - Optional) The maximum width needed for this image. If specifying a width bigger than the original image, the width of the original image is used instead. Default: `1000`.\n* `min_width` (Integer - Optional) The minimum width needed for this image. Default: `50`.\n* `bytes_step` (Integer - Optional) The minimum number of bytes between two consecutive breakpoints (images). Default: `20000`.\n* `max_images` (Integer - Optional) The maximum number of breakpoints to find, between 3 and 200. This means that there might be size differences bigger than the given bytes_step value between consecutive images. Default: `20`.\nThe return response will include an array of the selected breakpoints for each breakpoint request, where the following information is given for each breakpoint: `transformation`, `width`, `height`, `bytes`, `url` and `secure_url`.\nRelevant for images only.",
      optional: true,
    },
    autoTagging: {
      type: "integer",
      label: "Auto Tagging",
      description: "Whether to assign tags to an asset according to detected scene categories with a confidence score higher than the given value (between 0.0 and 1.0). See the [Google Automatic Video Tagging](https://cloudinary.com/documentation/google_automatic_video_tagging_addon), [Google Auto Tagging](https://cloudinary.com/documentation/google_auto_tagging_addon), [Imagga Auto Tagging](https://cloudinary.com/documentation/imagga_auto_tagging_addon), [Amazon Rekognition Auto Tagging](https://cloudinary.com/documentation/aws_rekognition_auto_tagging_addon), and [Amazon Rekognition Celebrity Detection](https://cloudinary.com/documentation/aws_rekognition_celebrity_and_face_detection_addon) add-ons for more details.",
      optional: true,
    },
    categorization: {
      type: "string",
      label: "Categorization",
      description: "A comma-separated list of the categorization add-ons to run on the asset. Set to `google_tagging`, `google_video_tagging`, `imagga_tagging` and/or `aws_rek_tagging` to automatically classify the scenes of the uploaded asset. See the [Google Automatic Video Tagging](https://cloudinary.com/documentation/google_automatic_video_tagging_addon), [Google Auto Tagging](https://cloudinary.com/documentation/google_auto_tagging_addon), [Imagga Auto Tagging](https://cloudinary.com/documentation/imagga_auto_tagging_addon), and [Amazon Rekognition Auto Tagging](https://cloudinary.com/documentation/aws_rekognition_auto_tagging_addon) add-ons for more details.",
      optional: true,
    },
    detection: {
      type: "string",
      label: "Detection",
      description: "Set to `adv_face` or `aws_rek_face` to extract an extensive list of face attributes from an image using the [Advanced Facial Attribute Detection](https://cloudinary.com/documentation/advanced_facial_attributes_detection_addon) or [Amazon Rekognition Celebrity Detection](https://cloudinary.com/documentation/aws_rekognition_celebrity_and_face_detection_addon) add-ons.\nRelevant for images only.",
      optional: true,
    },
    ocr: {
      type: "string",
      label: "OCR",
      description: "Set to `adv_ocr` to extract all text elements in an image as well as the bounding box coordinates of each detected element using the [OCR text detection and extraction add-on](https://cloudinary.com/documentation/ocr_text_detection_and_extraction_addon). Relevant for images only.",
      optional: true,
    },
    eager: {
      type: "any",
      label: "Eager",
      description: "An array of transformation representations. This generates derived resources in advance, instead of lazily creating each of the derived resources when first accessed by your site's visitors.",
      optional: true,
    },
    eagerAsync: {
      type: "boolean",
      label: "Eager Async",
      description: "Whether to generate the eager transformations asynchronously in the background after the upload request is completed rather than online as part of the upload call. Default: `false`",
      optional: true,
    },
    eagerNotificationUrl: {
      type: "string",
      label: "Eager Notification URL",
      description: "An HTTP or HTTPS URL to send a notification to (a webhook) when the generation of eager transformations is completed.",
      optional: true,
    },
    transformation: {
      type: "string",
      label: "Transformation",
      description: "An incoming transformation to run on the uploaded asset before saving it in the cloud. T his parameter is given as a hash of transformation parameters (or an array of hashes for chained transformations).",
      optional: true,
    },
    format: {
      type: "string",
      label: "Format",
      description: "An optional format to convert the uploaded asset to before saving in the cloud. For example: `jpg`.",
      optional: true,
    },
    customCoordinates: {
      type: "any",
      label: "Custom Coordinates",
      description: "Sets the coordinates of a single region contained in an uploaded image that is subsequently used for cropping uploaded images using the `custom` gravity mode. The region is specified by the X & Y coordinates of the top left corner and the width & height of the region, as an array. For example: `85,120,220,310.`\nRelevant for images only.",
      optional: true,
    },
    faceCoordinates: {
      type: "any",
      label: "Face Coordinates",
      description: "Sets the coordinates of faces contained in an uploaded image and overrides the automatically detected faces. Each face is specified by the X & Y coordinates of the top left corner and the width & height of the face. The coordinates for each face are given as an array.\nRelevant for images only.",
      optional: true,
    },
    backgroundRemoval: {
      type: "string",
      label: "Background Removal",
      description: "Automatically remove the background of an image using an add-on.\nSet to `cloudinary_ai` to use the deep-learning based [Cloudinary AI Background Removal](https://cloudinary.com/documentation/cloudinary_ai_background_removal_addon) add-on.\nSet to `pixelz` to use the human-powered [Pizelz Remove-The-Background Editing](https://cloudinary.com/documentation/remove_the_background_image_editing_addon) add-on service.\nRelevant for images only.",
      optional: true,
    },
    rawConvert: {
      type: "string",
      label: "Raw Convert",
      description: "Asynchronously generates a related file based on the uploaded file.\n* Set to `aspose` to automatically create a PDF or other image format from a `raw` Office document using the [Aspose Document Conversion](https://cloudinary.com/documentation/aspose_document_conversion_addon) add-on.\n* Set to `google_speech` to instruct the [Google AI Video Transcription](https://cloudinary.com/documentation/google_ai_video_transcription_addon) add-on to generate an automatic transcript `raw` file from an uploaded video.\n* Set to `extract_text` to extract all the text from a PDF file and store it in a raw file. The public ID of the generated `raw` file will be in the format: **[pdf_public_id].extract_text.json.**\nSee also: [Converting raw files](https://cloudinary.com/documentation/upload_images#converting_raw_files).",
      optional: true,
    },
    allowedFormats: {
      type: "any",
      label: "Allowed Formats",
      description: "An array of file formats that are allowed for uploading. Files of other types will be rejected. The formats can be any combination of image types, video formats or raw file extensions. For example: `mp4,ogv,jpg,png,pdf`. Default: any supported format for images and videos, and any kind of raw file (i.e. no restrictions by default).",
      optional: true,
    },
    async: {
      type: "boolean",
      label: "Async",
      description: "Whether to perform the request in the background (asynchronously). Default: `false`.",
      optional: true,
    },
    backup: {
      type: "boolean",
      label: "Backup",
      description: "Tell Cloudinary whether to [back up](https://cloudinary.com/documentation/backups_and_version_management) the uploaded asset. Overrides the default backup settings of your account.",
      optional: true,
    },
    eval: {
      type: "string",
      label: "Eval",
      description: "Allows you to modify upload parameters by specifying custom logic with JavaScript. This can be useful for conditionally adding tags, context, metadata or eager transformations depending on specific criteria of the uploaded file. For more details see [Evaluating and modifying upload parameters](https://cloudinary.com/documentation/analysis_on_upload#evaluating_and_modifying_upload_parameters).",
      optional: true,
    },
    headers: {
      type: "string",
      label: "Headers",
      description: "An HTTP header or a list of headers lines for adding as response HTTP headers when delivering the asset to your users. Supported headers: `Link`, `Authorization`, `X-Robots-Tag`. For example: `X-Robots-Tag: noindex`.",
      optional: true,
    },
    invalidate: {
      type: "boolean",
      label: "Invalidate",
      description: "Whether to invalidate CDN cached copies of a previously uploaded asset (and all transformed versions that share the same Public ID). Default: `false`.\nIt usually takes between a few seconds and a few minutes for the invalidation to fully propagate through the CDN. There are also a number of other [important considerations](https://cloudinary.com/documentation/managing_assets#invalidating_cached_media_assets_on_the_cdn) when using the invalidate functionality.",
      optional: true,
    },
    moderation: {
      type: "string",
      label: "Moderation",
      description: "**For all asset types**: Set to `manual` to add the uploaded asset to a queue of pending assets that can be moderated using the Admin API or the [Cloudinary Management Console](https://cloudinary.com/console/media_library), or set to `metascan` to automatically moderate the uploaded asset using the [MetaDefender Anti-Malware Protection](https://cloudinary.com/documentation/metadefender_anti_malware_protection_addon) add-on.\n**For images only**: Set to `webpurify` or `aws_rek` to automatically moderate the uploaded image using the [WebPurify Image Moderation](https://cloudinary.com/documentation/webpurify_image_moderation_addon) add-on or the [Amazon Rekognition AI Moderation](https://cloudinary.com/documentation/aws_rekognition_ai_moderation_addon) add-on respectively.",
      optional: true,
    },
    notificationUrl: {
      type: "string",
      label: "Notification URL",
      description: "An HTTP or HTTPS URL to receive the upload response (a webhook) when the upload or any requested asynchronous action is completed. If not specified, the response is sent to the global **Notification URL** (if defined) in the **Upload** settings of your account console.",
      optional: true,
    },
    proxy: {
      type: "string",
      label: "Proxy",
      description: "Tells Cloudinary to upload assets from remote URLs through the given proxy. Format: `https://hostname:port.`",
      optional: true,
    },
    returnDeleteToken: {
      type: "boolean",
      label: "Return Deleted Token",
      description: "Whether to return a deletion token in the upload response. The token can be used to delete the uploaded asset within 10 minutes using an unauthenticated API request. Default: `false`.",
      optional: true,
    },
  },
  async run({ $ }) {
    const options = {
      public_id: this.publicId,
      folder: this.folder,
      use_filename: this.useFilename,
      unique_filename: this.uniqueFilename,
      resource_type: this.resourceType,
      type: this.type,
      access_control: this.accessControl,
      access_mode: this.accessMode,
      discard_original_filename: this.discardOriginalFilename,
      overwrite: this.overwrite,
      tags: this.tags,
      context: this.context,
      colors: this.colors,
      faces: this.faces,
      quality_analysis: this.qualityAnalysis,
      accessibility_analysis: this.accessibilityAnalysis,
      cinemagraph_analysis: this.cinemagraphAnalysis,
      image_metadata: this.imageMetadata,
      phash: this.phash,
      responsive_breakpoints: this.responsiveBreakpoints,
      auto_tagging: this.autoTagging,
      categorization: this.categorization,
      detection: this.detection,
      ocr: this.ocr,
      eager: this.eager,
      eager_async: this.eagerAsync,
      eager_notification_url: this.eagerNotificationUrl,
      transformation: this.transformation,
      format: this.format,
      custom_coordinates: this.customCoordinates,
      face_coordinates: this.faceCoordinates,
      background_removal: this.backgroundRemoval,
      raw_convert: this.rawConvert,
      allowed_formats: this.allowedFormats,
      async: this.async,
      backup: this.backup,
      eval: this.eval,
      headers: this.headers,
      invalidate: this.invalidate,
      moderation: this.moderation,
      notification_url: this.notificationUrl,
      proxy: this.proxy,
      return_delete_token: this.returnDeleteToken,
    };

    try {
      const response = await this.cloudinary.uploadMedia(this.file, options);
      if (response) {
        $.export("$summary", "Successfully uploaded media asset");
      }
      return response;
    } catch (e) {
      throw new Error(`${e.name} - ${e.http_code} - ${e.message}`);
    }
  },
};
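The cloudinary.app.mjs file imported above is not reproduced on this page, so the exact uploadMedia implementation is unknown; conceptually it wraps the SDK's upload call with the connected account's credentials. A hypothetical sketch, for orientation only:

// Hypothetical shape of the app method used by the action; the real
// cloudinary.app.mjs in Pipedream's component repo may differ.
import { v2 as cloudinary } from "cloudinary";

export function uploadMedia(auth, file, options = {}) {
  cloudinary.config({
    cloud_name: auth.cloud_name,
    api_key: auth.api_key,
    api_secret: auth.api_secret,
  });
  // Returns the Cloudinary upload response (public_id, secure_url, etc.)
  return cloudinary.uploader.upload(file, options);
}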

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

Label | Prop | Type | Description
Cloudinary | cloudinary | app | This component uses the Cloudinary app.
File | file | string

The file to upload. It can be:

  • a local file path
  • the actual data (byte array buffer).
    For example, this could be an IO input stream of the data (e.g., File.open(file, "rb")).
  • the Data URI (Base64 encoded), max ~60 MB (62,910,000 chars)
  • the remote FTP, HTTP or HTTPS URL address of an existing file
  • a private storage bucket (S3 or Google Storage) URL of a whitelisted bucket
    For details and examples, see: file source options.
Public Id | publicId | string

The identifier that is used for accessing the uploaded asset. The Public ID may contain a full path including folders separated by a slash (/).
If not specified, then the Public ID of the asset will either be comprised of random characters or will use the original file's filename, depending whether use_filename was set to true.

Note: The Public ID value for images and videos should not include a file extension. Include the file extension for raw files only.

Folder | folder | string

An optional folder name where the uploaded asset will be stored. The public ID contains the full path of the uploaded asset, including the folder name.

Use Filename | useFilename | boolean

Whether to use the original file name of the uploaded asset. Relevant only if the public_id parameter isn't set.
When false and the public_id parameter is also not defined, the Public ID will be comprised of random characters.

When true and the public_id parameter is not defined, the uploaded file's original filename becomes the Public ID. Random characters are appended to the filename value to ensure Public ID uniqueness if unique_filename is true.

Default: false.

Unique Filename | uniqueFilename | boolean

When set to false, does not add random characters at the end of the filename that guarantee its uniqueness. In this case, if the overwrite parameter is also false, the upload returns an error. This parameter is relevant only if use_filename is also set to true. Default: true.

Resource Type | resourceType | string | Select a value from the drop down menu: image, raw, video, auto
Type | type | string | Select a value from the drop down menu: upload, private, authenticated
Access Control | accessControl | boolean

An array of access types for the asset. The asset is accessible as long as one of the access types is valid.
Possible values for each access type:

  • token requires either Token-based authentication or Cookie-based authentication for accessing the asset.
    For example: access_type: "token"
  • anonymous allows public access to the asset. The anonymous access type can optionally include start and/or end dates (in ISO 8601 format) that define when the asset is publicly available. Note that you can only include a single 'anonymous' access type. For example:
    access_type: "anonymous", start: "2017-12-15T12:00Z", end: "2018-01-20T12:00Z"
Access Mode | accessMode | string | Select a value from the drop down menu: public, authenticated
Discard Original Filename | discardOriginalFilename | boolean

Whether to discard the name of the original uploaded file. Relevant when delivering assets as attachments (setting the flag transformation parameter to attachment). Default: false.

Overwrite | overwrite | boolean

Whether to overwrite existing assets with the same public ID. When set to false, return immediately if an asset with the same Public ID was found. Default: true (when using unsigned upload, the default is false and cannot be changed to true).
Important: Depending on the settings for your account, overwriting an asset may clear the tags, contextual, and structured metadata values for that asset. If you prefer these values to always be preserved on overwrite (unless other values are specifically set when uploading the new version), you can submit a request to change this behavior for your account.

Tags | tags | any

An array of tag names to assign to the uploaded asset for later group reference.

Context | context | object

A map of the key-value pairs of general textual context metadata to attach to an uploaded asset. The context values of uploaded files can be retrieved using the Admin API. For example: alt=My image|caption=Profile image (the = and | characters can be supported as values when escaped with a prepended backslash (\)). Note that key values are limited to 1024 characters and an asset can have a maximum of 1000 context key-value pairs.

Colors | colors | boolean

Whether to retrieve predominant colors & color histogram of the uploaded image.
Note: If all returned colors are opaque, then 6-digit RGB hex values are returned. If one or more colors contain an alpha channel, then 8-digit RGBA hex quadruplet values are returned.
Default: false. Relevant for images only.

Faces | faces | boolean

Whether to return the coordinates of faces contained in an uploaded image (automatically detected or manually defined). Each face is specified by the X & Y coordinates of the top left corner and the width & height of the face. The coordinates for each face are returned as an array (using the SDKs), and individual faces are separated with a pipe (|). For example: 10,20,150,130|213,345,82,61.
Default: false. Relevant for images only.

Quality Analysis | qualityAnalysis | boolean

Whether to return a quality analysis value for the image between 0 and 1, where 0 means the image is blurry and out of focus and 1 means the image is sharp and in focus. Default: false. Relevant for images only.
Paid customers can request to take part in the extended quality analysis Beta trial. When activated, this parameter returns quality scores for various other factors in addition to focus, such as jpeg_quality, noise, exposure, lighting and resolution, together with an overall weighted quality_score. The quality_score, color_quality_score and pixel_quality_score fields can be used in the Search API.

Accessibility Analysis | accessibilityAnalysis | boolean

Currently available only to paid customers requesting to take part in the accessibility analysis Beta trial. Set to true to return accessibility analysis values for the image and to enable the colorblind_accessibility_score field to be used in the Search API.
Default: false. Relevant for images only.

Cinemagraph Analysis | cinemagraphAnalysis | boolean

Whether to return a cinemagraph analysis value for the media asset between 0 and 1, where 0 means the asset is not a cinemagraph and 1 means the asset is a cinemagraph. Default: false. Relevant for animated images and video only. A static image will return 0.

Image Metadata | imageMetadata | string

Whether to return IPTC, XMP, and detailed Exif metadata of the uploaded asset in the response.
Default: false. Supported for images, video, and audio.
Returned metadata for images includes: PixelsPerUnitX, PixelsPerUnitY, PixelUnits, Colorspace, and DPI.
Returned metadata for audio and video includes: audio_codec, audio_bit_rate, audio_frequency, channels, channel_layout.
Additional metadata for video includes: pix_format, codec, level, profile, video_bit_rate, dar.

pHash | phash | boolean

Whether to return the perceptual hash (pHash) on the uploaded image. The pHash acts as a fingerprint that allows checking image similarity.
Default: false. Relevant for images only.

Responsive Breakpoints | responsiveBreakpoints | object

Requests that Cloudinary automatically find the best breakpoints. The parameter value is an array of breakpoint request settings, where each request setting can include the following parameters:

  • create_derived (Boolean - Required) If true, create and keep the derived images of the selected breakpoints during the API call. If false, images generated during the analysis process are thrown away.
  • format (String - Optional) Sets the file extension of the derived resources to the format indicated (as opposed to changing the format as part of a transformation - which would be included as part of the transformation component (e.g., f_jpg)).
  • transformation (String - Optional) The base transformation to first apply to the image before finding the best breakpoints. The API accepts a string representation of a chained transformation (same as the regular transformation parameter of the upload API).
  • max_width (Integer - Optional) The maximum width needed for this image. If specifying a width bigger than the original image, the width of the original image is used instead. Default: 1000.
  • min_width (Integer - Optional) The minimum width needed for this image. Default: 50.
  • bytes_step (Integer - Optional) The minimum number of bytes between two consecutive breakpoints (images). Default: 20000.
  • max_images (Integer - Optional) The maximum number of breakpoints to find, between 3 and 200. This means that there might be size differences bigger than the given bytes_step value between consecutive images. Default: 20.
    The return response will include an array of the selected breakpoints for each breakpoint request, where the following information is given for each breakpoint: transformation, width, height, bytes, url and secure_url.
    Relevant for images only.
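A hypothetical Responsive Breakpoints value built from the fields listed above, with a single breakpoint request setting, could look like:

// Hypothetical responsive_breakpoints value; field names follow the list above
[
  {
    "create_derived": true,
    "min_width": 200,
    "max_width": 1000,
    "bytes_step": 20000,
    "max_images": 20,
    "transformation": "c_fill,ar_16:9,g_auto"
  }
]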
Auto Tagging | autoTagging | integer

Whether to assign tags to an asset according to detected scene categories with a confidence score higher than the given value (between 0.0 and 1.0). See the Google Automatic Video Tagging, Google Auto Tagging, Imagga Auto Tagging, Amazon Rekognition Auto Tagging, and Amazon Rekognition Celebrity Detection add-ons for more details.

Categorization | categorization | string

A comma-separated list of the categorization add-ons to run on the asset. Set to google_tagging, google_video_tagging, imagga_tagging and/or aws_rek_tagging to automatically classify the scenes of the uploaded asset. See the Google Automatic Video Tagging, Google Auto Tagging, Imagga Auto Tagging, and Amazon Rekognition Auto Tagging add-ons for more details.

Detection | detection | string

Set to adv_face or aws_rek_face to extract an extensive list of face attributes from an image using the Advanced Facial Attribute Detection or Amazon Rekognition Celebrity Detection add-ons.
Relevant for images only.

OCR | ocr | string

Set to adv_ocr to extract all text elements in an image as well as the bounding box coordinates of each detected element using the OCR text detection and extraction add-on. Relevant for images only.

Eager | eager | any

An array of transformation representations. This generates derived resources in advance, instead of lazily creating each of the derived resources when first accessed by your site's visitors.

Eager Async | eagerAsync | boolean

Whether to generate the eager transformations asynchronously in the background after the upload request is completed rather than online as part of the upload call. Default: false

Eager Notification URL | eagerNotificationUrl | string

An HTTP or HTTPS URL to send a notification to (a webhook) when the generation of eager transformations is completed.

Transformation | transformation | string

An incoming transformation to run on the uploaded asset before saving it in the cloud. This parameter is given as a hash of transformation parameters (or an array of hashes for chained transformations).
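For example, an assumed incoming transformation that limits the size and lets Cloudinary pick the compression quality might be expressed as:

// Hypothetical incoming transformation value
{ "width": 1600, "height": 1600, "crop": "limit", "quality": "auto" }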

Format | format | string

An optional format to convert the uploaded asset to before saving in the cloud. For example: jpg.

Custom Coordinates | customCoordinates | any

Sets the coordinates of a single region contained in an uploaded image that is subsequently used for cropping uploaded images using the custom gravity mode. The region is specified by the X & Y coordinates of the top left corner and the width & height of the region, as an array. For example: 85,120,220,310.
Relevant for images only.

Face Coordinates | faceCoordinates | any

Sets the coordinates of faces contained in an uploaded image and overrides the automatically detected faces. Each face is specified by the X & Y coordinates of the top left corner and the width & height of the face. The coordinates for each face are given as an array.
Relevant for images only.

Background Removal | backgroundRemoval | string

Automatically remove the background of an image using an add-on.
Set to cloudinary_ai to use the deep-learning based Cloudinary AI Background Removal add-on.
Set to pixelz to use the human-powered Pixelz Remove-The-Background Editing add-on service.
Relevant for images only.

Raw Convert | rawConvert | string

Asynchronously generates a related file based on the uploaded file.

  • Set to aspose to automatically create a PDF or other image format from a raw Office document using the Aspose Document Conversion add-on.
  • Set to google_speech to instruct the Google AI Video Transcription add-on to generate an automatic transcript raw file from an uploaded video.
  • Set to extract_text to extract all the text from a PDF file and store it in a raw file. The public ID of the generated raw file will be in the format: [pdf_public_id].extract_text.json.
    See also: Converting raw files.
Allowed Formats | allowedFormats | any

An array of file formats that are allowed for uploading. Files of other types will be rejected. The formats can be any combination of image types, video formats or raw file extensions. For example: mp4,ogv,jpg,png,pdf. Default: any supported format for images and videos, and any kind of raw file (i.e. no restrictions by default).

Async | async | boolean

Whether to perform the request in the background (asynchronously). Default: false.

Backup | backup | boolean

Tell Cloudinary whether to back up the uploaded asset. Overrides the default backup settings of your account.

Eval | eval | string

Allows you to modify upload parameters by specifying custom logic with JavaScript. This can be useful for conditionally adding tags, context, metadata or eager transformations depending on specific criteria of the uploaded file. For more details see Evaluating and modifying upload parameters.

Headers | headers | string

An HTTP header or a list of headers lines for adding as response HTTP headers when delivering the asset to your users. Supported headers: Link, Authorization, X-Robots-Tag. For example: X-Robots-Tag: noindex.

Invalidate | invalidate | boolean

Whether to invalidate CDN cached copies of a previously uploaded asset (and all transformed versions that share the same Public ID). Default: false.
It usually takes between a few seconds and a few minutes for the invalidation to fully propagate through the CDN. There are also a number of other important considerations when using the invalidate functionality.

Moderation | moderation | string

For all asset types: Set to manual to add the uploaded asset to a queue of pending assets that can be moderated using the Admin API or the Cloudinary Management Console, or set to metascan to automatically moderate the uploaded asset using the MetaDefender Anti-Malware Protection add-on.
For images only: Set to webpurify or aws_rek to automatically moderate the uploaded image using the WebPurify Image Moderation add-on or the Amazon Rekognition AI Moderation add-on respectively.

Notification URL | notificationUrl | string

An HTTP or HTTPS URL to receive the upload response (a webhook) when the upload or any requested asynchronous action is completed. If not specified, the response is sent to the global Notification URL (if defined) in the Upload settings of your account console.

Proxy | proxy | string

Tells Cloudinary to upload assets from remote URLs through the given proxy. Format: https://hostname:port.

Return Deleted Token | returnDeleteToken | boolean

Whether to return a deletion token in the upload response. The token can be used to delete the uploaded asset within 10 minutes using an unauthenticated API request. Default: false.

Action Authentication

Cloudinary uses API keys for authentication. When you connect your Cloudinary account, Pipedream securely stores the keys so you can easily authenticate to Cloudinary APIs in both code and no-code steps.

Enter the cloud name, API key and API secret to connect your Cloudinary account. The API credentials are listed on the Programmable Media page of your Cloudinary console.
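In a custom Node.js code step you can use the same connected account directly. The $auth field names below (cloud_name, api_key, api_secret) are assumptions about how the stored credentials are exposed; the step simply configures the SDK and fetches account usage as a connectivity check.

import { v2 as cloudinary } from "cloudinary";

export default defineComponent({
  props: {
    cloudinary: {
      type: "app",
      app: "cloudinary",
    },
  },
  async run({ $ }) {
    // Configure the SDK with the connected account's stored credentials
    cloudinary.config({
      cloud_name: this.cloudinary.$auth.cloud_name,
      api_key: this.cloudinary.$auth.api_key,
      api_secret: this.cloudinary.$auth.api_secret,
    });
    // Basic Admin API call to confirm the credentials work
    const usage = await cloudinary.api.usage();
    $.export("$summary", "Connected to Cloudinary");
    return usage;
  },
});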


About Cloudinary

Create, manage and deliver digital experiences

More Ways to Connect Cloudinary + Snowflake

Get Account Usage Details with Cloudinary API on Query Results from Snowflake API
Snowflake + Cloudinary
 
Try it
Get Account Usage Details with Cloudinary API on New Row from Snowflake API
Snowflake + Cloudinary
 
Try it
Image Transformation with Cloudinary API on Query Results from Snowflake API
Snowflake + Cloudinary
 
Try it
Image Transformation with Cloudinary API on New Row from Snowflake API
Snowflake + Cloudinary
 
Try it
Resource Transformation with Cloudinary API on Query Results from Snowflake API
Snowflake + Cloudinary
 
Try it
New Row from the Snowflake API

Emit new event when a row is added to a table

 
Try it
New Query Results from the Snowflake API

Run a SQL query on a schedule, triggering a workflow for each row of results

 
Try it
Failed Task in Schema from the Snowflake API

Emit new events when a task fails in a database schema

 
Try it
New Database from the Snowflake API

Emit new event when a database is created

 
Try it
New Deleted Role from the Snowflake API

Emit new event when a role is deleted

 
Try it
Execute SQL Query with the Snowflake API

Execute a custom Snowflake query. See our docs to learn more about working with SQL in Pipedream.

 
Try it
Insert Multiple Rows with the Snowflake API

Insert multiple rows into a table

 
Try it
Insert Single Row with the Snowflake API

Insert a row into a table

 
Try it
Query SQL Database with the Snowflake API

Execute a SQL Query. See our docs to learn more about working with SQL in Pipedream.

 
Try it
Get Account Usage Details with the Cloudinary API

Enables you to get a report on the status of your Cloudinary account usage details, including storage, credits, bandwidth, requests, number of resources, and add-on usage. See the documentation

 
Try it
