
Create Transcription (Whisper) with OpenAI (ChatGPT) API on New Output Created Event from Phantombuster API

Pipedream makes it easy to connect APIs for OpenAI (ChatGPT), Phantombuster and 2,000+ other apps remarkably fast.

Trigger workflow on
New Output Created Event from the Phantombuster API
Next, do this
Create Transcription (Whisper) with the OpenAI (ChatGPT) API

Trusted by 800,000+ developers from startups to Fortune 500 companies



Getting Started

This integration creates a workflow with a Phantombuster trigger and OpenAI (ChatGPT) action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Output Created Event trigger
    1. Connect your Phantombuster account
    2. Configure the timer
    3. Select an Agent ID
  3. Configure the Create Transcription (Whisper) action
    1. Connect your OpenAI (ChatGPT) account
    2. Select an Audio Upload Type
    3. Optionally, select a Language
  4. Deploy the workflow
  5. Send a test event to validate your setup
  6. Turn on the trigger

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with quickstarts for trigger and action development, and then review the component API reference.
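Components follow a small declarative shape: a keyed object with metadata, props, and a `run` method. As a rough sketch (the key, name, and props here are illustrative placeholders, not a real registry component):

```javascript
// Minimal sketch of a Pipedream action component, mirroring the structure of the
// real components shown below. Names and props are illustrative only.
const myAction = {
  key: "my-app-my-action",
  name: "My Action",
  version: "0.0.1",
  type: "action",
  props: {
    message: {
      type: "string",
      label: "Message",
    },
  },
  async run({ $ }) {
    // $.export attaches metadata (like $summary) to the step's results
    $.export("$summary", `Processed: ${this.message}`);
    return this.message;
  },
};

export default myAction;
```

At runtime, Pipedream resolves configured props onto the component instance, so `this.message` holds the user-supplied value when `run` executes.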

Trigger

Description:Emit new events when new outputs are created. [See the docs here](https://hub.phantombuster.com/reference/get_agents-fetch-output-1)
Version:0.0.1
Key:phantombuster-new-output

Phantombuster Overview

Phantombuster is a powerful API that enables users to build efficient web
automation solutions. It provides a pool of services and tools for interacting
with multiple websites quickly, easily, and securely.

With Phantombuster, users can create custom automated solutions to perform
various tasks such as data extraction, lead generation, marketing automation,
and web scraping. Here are a few of the things users can build using
Phantombuster:

  • Data extraction - Phantombuster's API allows users to quickly and securely
    extract data from multiple sites and APIs, allowing users to access large
    amounts of data at once and extract only what they need.
  • Lead generation - Phantombuster's API connects to multiple social networks
    and websites, allowing users to quickly and accurately gather leads or
    potential contacts and store them into a database.
  • Automated marketing campaigns - Phantombuster's API enables users to create
    and launch automated marketing campaigns, automating the entire process and
    saving time and money.
  • Web scraping - Phantombuster's API allows users to scrape entire webpages or
    just the parts they need, enabling streamlined data collection.
  • Robot monitoring - Phantombuster's API enables users to monitor robots,
    allowing them to keep an eye on their tasks and ensuring their bots are
    running as efficiently as possible.
  • Data analytics - Phantombuster's API allows users to easily analyze and
    visualize their gathered data, allowing them to quickly make decisions based
    on their collected data.

Trigger Code

import app from "../../phantombuster.app.mjs";
import { DEFAULT_POLLING_SOURCE_TIMER_INTERVAL } from "@pipedream/platform";

export default {
  key: "phantombuster-new-output",
  name: "New Output Created Event",
  description: "Emit new events when new outputs are created. [See the docs here](https://hub.phantombuster.com/reference/get_agents-fetch-output-1)",
  version: "0.0.1",
  type: "source",
  props: {
    app,
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: DEFAULT_POLLING_SOURCE_TIMER_INTERVAL,
      },
    },
    db: "$.service.db",
    agentId: {
      propDefinition: [
        app,
        "agentId",
      ],
    },
  },
  methods: {
    setLastUpdated(lastUpdated) {
      this.db.set("lastUpdated", lastUpdated);
    },
    getLastUpdated() {
      return this.db.get("lastUpdated") || 0;
    },
    setLastContainerId(lastContainerId) {
      this.db.set("lastContainerId", lastContainerId);
    },
    getLastContainerId() {
      return this.db.get("lastContainerId") || 0;
    },
  },
  async run() {
    const resp = await this.app.getOutput({ // an agent always has one most recent output
      params: {
        id: this.agentId,
      },
    });
    if (this.getLastContainerId() != resp.containerId ||
      (resp.mostRecentEndedAt && this.getLastUpdated() < resp.mostRecentEndedAt)) {
      this.$emit(
        resp,
        {
          id: resp.mostRecentEndedAt || Date.now(),
          summary: resp.output,
          ts: resp.mostRecentEndedAt || Date.now(),
        },
      );
      this.setLastUpdated(resp.mostRecentEndedAt);
      this.setLastContainerId(resp.containerId);
    }
  },
};
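The emit guard in `run()` only fires when the container id changes or a newer `mostRecentEndedAt` timestamp appears. That logic can be isolated and exercised with a plain-object stand-in for `$.service.db` (an assumption that mirrors its get/set semantics):

```javascript
// In-memory stand-in for $.service.db, plus the dedupe predicate from run() above
function makeDb() {
  const store = {};
  return {
    get: (key) => store[key],
    set: (key, value) => { store[key] = value; },
  };
}

function shouldEmit(db, resp) {
  const lastContainerId = db.get("lastContainerId") || 0;
  const lastUpdated = db.get("lastUpdated") || 0;
  // Emit on a new container, or the same container with a more recent end timestamp
  return lastContainerId != resp.containerId ||
    Boolean(resp.mostRecentEndedAt && lastUpdated < resp.mostRecentEndedAt);
}
```

On the first poll nothing is stored, so the guard passes; afterwards, an unchanged output is skipped until the agent runs again.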

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| Phantombuster | app | app | This component uses the Phantombuster app. |
| N/A | timer | $.interface.timer | |
| N/A | db | $.service.db | This component uses $.service.db to maintain state between executions. |
| Agent ID | agentId | string | Select a value from the drop down menu. |

Trigger Authentication

Phantombuster uses API keys for authentication. When you connect your Phantombuster account, Pipedream securely stores the keys so you can easily authenticate to Phantombuster APIs in both code and no-code steps.

Your API key resides on your Phantombuster Org settings page, which is accessible from the navbar menu under "Org settings". Note that for security reasons, your key is only shown once, at creation. Copy it to a safe place before refreshing or leaving the page.

For more info, refer to Phantombuster's API documentation.

About Phantombuster

Code-free automations and data extraction. Chain actions and data extraction on the web to generate business leads, marketing audiences and overall growth. Phantombuster gives you the tools and know-how to grow your business faster.

Action

Description:Transcribes audio into the input language. [See docs here](https://platform.openai.com/docs/api-reference/audio/create).
Version:0.1.8
Key:openai-create-transcription

OpenAI (ChatGPT) Overview

The OpenAI API is a powerful tool that provides access to a range of
high-powered machine learning models. With the OpenAI API, developers can
build products, services, and tools that understand and generate human
language.

Using the OpenAI API, developers can create language-driven applications such
as:

  • Natural language understanding and sentiment analysis
  • Text-based search
  • Question answering
  • Dialogue systems and conversation agents
  • Intelligent text completion
  • Text summarization
  • Text classification
  • Machine translation
  • Language generation
  • Multi-factor authentication
  • Anomaly detection
  • Text analysis

Action Code

import ffmpegInstaller from "@ffmpeg-installer/ffmpeg";
import { ConfigurationError } from "@pipedream/platform";
import axios from "axios";
import Bottleneck from "bottleneck";
import { exec } from "child_process";
import FormData from "form-data";
import fs from "fs";
import {
  extname,
  join,
} from "path";
import stream from "stream";
import { promisify } from "util";
import openai from "../../openai.app.mjs";
import common from "../common/common.mjs";
import constants from "../common/constants.mjs";
import lang from "../common/lang.mjs";

const COMMON_AUDIO_FORMATS_TEXT = "Your audio file must be in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.";
const CHUNK_SIZE_MB = 20;

const execAsync = promisify(exec);
const pipelineAsync = promisify(stream.pipeline);

export default {
  name: "Create Transcription (Whisper)",
  version: "0.1.8",
  key: "openai-create-transcription",
  description: "Transcribes audio into the input language. [See docs here](https://platform.openai.com/docs/api-reference/audio/create).",
  type: "action",
  props: {
    openai,
    uploadType: {
      label: "Audio Upload Type",
      description: "Are you uploading an audio file from [your workflow's `/tmp` directory](https://pipedream.com/docs/code/nodejs/working-with-files/#the-tmp-directory), or providing a URL to the file?",
      type: "string",
      options: [
        "File",
        "URL",
      ],
      reloadProps: true,
    },
    language: {
      label: "Language",
      description: "**Optional**. The language of the input audio. Supplying the input language will improve accuracy and latency.",
      type: "string",
      optional: true,
      options: lang.LANGUAGES.map((l) => ({
        label: l.label,
        value: l.value,
      })),
    },
  },
  async additionalProps() {
    const props = {};
    switch (this.uploadType) {
    case "File":
      props.path = {
        type: "string",
        label: "File Path",
        description: `A path to your audio file to transcribe, e.g. \`/tmp/audio.mp3\`. ${COMMON_AUDIO_FORMATS_TEXT} Add the appropriate extension (mp3, mp4, etc.) on your filename — OpenAI uses the extension to determine the file type. [See the Pipedream docs on saving files to \`/tmp\`](https://pipedream.com/docs/code/nodejs/working-with-files/#writing-a-file-to-tmp)`,
      };
      break;
    case "URL":
      props.url = {
        type: "string",
        label: "URL",
        description: `A public URL to the audio file to transcribe. This URL must point directly to the audio file, not a webpage that links to the audio file. ${COMMON_AUDIO_FORMATS_TEXT}`,
      };
      break;
    default:
      throw new ConfigurationError("Invalid upload type specified. Please provide 'File' or 'URL'.");
    }
    // These optional props are defined last because the file or URL prop must display above them
    // TODO: Will be fixed when we render optional props correctly when used with additionalProps
    props.prompt = {
      label: "Prompt",
      description: "**Optional** text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text/prompting) should match the audio language.",
      type: "string",
      optional: true,
    };
    props.responseFormat = {
      label: "Response Format",
      description: "**Optional**. The format of the response. The default is `json`.",
      type: "string",
      default: "json",
      optional: true,
      options: constants.TRANSCRIPTION_FORMATS,
    };
    props.temperature = common.props.temperature;

    return props;
  },
  methods: {
    createForm({
      file, outputDir,
    }) {
      const form = new FormData();
      form.append("model", "whisper-1");
      if (this.prompt) form.append("prompt", this.prompt);
      if (this.temperature) form.append("temperature", this.temperature);
      if (this.language) form.append("language", this.language);
      if (this.responseFormat) form.append("response_format", this.responseFormat);
      const readStream = fs.createReadStream(join(outputDir, file));
      form.append("file", readStream);
      return form;
    },
    async splitLargeChunks(files, outputDir) {
      for (const file of files) {
        if (fs.statSync(`${outputDir}/${file}`).size / (1024 * 1024) > CHUNK_SIZE_MB) {
          await this.chunkFile({
            file: `${outputDir}/${file}`,
            outputDir,
            index: file.slice(6, 9),
          });
          await execAsync(`rm -f "${outputDir}/${file}"`);
        }
      }
    },
    async chunkFileAndTranscribe({
      file, $,
    }) {
      const outputDir = join("/tmp", "chunks");
      await execAsync(`mkdir -p "${outputDir}"`);
      await execAsync(`rm -f "${outputDir}/*"`);

      await this.chunkFile({
        file,
        outputDir,
      });

      let files = await fs.promises.readdir(outputDir);
      // ffmpeg will sometimes return chunks larger than the allowed size,
      // so we need to identify large chunks and break them down further
      await this.splitLargeChunks(files, outputDir);
      files = await fs.promises.readdir(outputDir);

      return this.transcribeFiles({
        files,
        outputDir,
        $,
      });
    },
    async chunkFile({
      file, outputDir, index,
    }) {
      const ffmpegPath = ffmpegInstaller.path;
      const ext = extname(file);

      const fileSizeInMB = fs.statSync(file).size / (1024 * 1024);
      // We're limited to 26MB per request. Because of how ffmpeg splits files,
      // we need to be conservative in the number of chunks we create
      const conservativeChunkSizeMB = CHUNK_SIZE_MB;
      const numberOfChunks = !index
        ? Math.ceil(fileSizeInMB / conservativeChunkSizeMB)
        : 2;

      if (numberOfChunks === 1) {
        await execAsync(`cp "${file}" "${outputDir}/chunk-000${ext}"`);
        return;
      }

      const { stdout } = await execAsync(`${ffmpegPath} -i "${file}" 2>&1 | grep "Duration"`);
      const duration = stdout.match(/\d{2}:\d{2}:\d{2}\.\d{2}/s)[0];
      const [
        hours,
        minutes,
        seconds,
      ] = duration.split(":").map(parseFloat);

      const totalSeconds = (hours * 60 * 60) + (minutes * 60) + seconds;
      const segmentTime = Math.ceil(totalSeconds / numberOfChunks);

      const command = `${ffmpegPath} -i "${file}" -f segment -segment_time ${segmentTime} -c copy "${outputDir}/chunk-${index
        ? `${index}-`
        : ""}%03d${ext}"`;
      await execAsync(command);
    },
    transcribeFiles({
      files, outputDir, $,
    }) {
      const limiter = new Bottleneck({
        maxConcurrent: 1,
        minTime: 1000 / 59,
      });

      return Promise.all(files.map((file) => {
        return limiter.schedule(() => this.transcribe({
          file,
          outputDir,
          $,
        }));
      }));
    },
    transcribe({
      file, outputDir, $,
    }) {
      const form = this.createForm({
        file,
        outputDir,
      });
      return this.openai.createTranscription({
        $,
        form,
      });
    },
    getFullText(transcriptions = []) {
      return transcriptions.map((t) => t.text || t).join(" ");
    },
  },
  async run({ $ }) {
    const {
      url,
      path,
    } = this;

    if (!url && !path) {
      throw new ConfigurationError("Must specify either File URL or File Path");
    }

    let file;

    if (path) {
      if (!fs.existsSync(path)) {
        throw new ConfigurationError(`${path} does not exist`);
      }

      file = path;
    } else if (url) {
      const ext = extname(url);

      const response = await axios({
        method: "GET",
        url,
        responseType: "stream",
        timeout: 250000,
      });

      const bufferStream = new stream.PassThrough();
      response.data.pipe(bufferStream);

      const downloadPath = join("/tmp", `audio${ext}`);
      const writeStream = fs.createWriteStream(downloadPath);

      await pipelineAsync(bufferStream, writeStream);

      file = downloadPath;
    }

    const transcriptions = await this.chunkFileAndTranscribe({
      file,
      $,
    });

    if (transcriptions.length) {
      $.export("$summary", "Successfully created transcription");
    }

    return {
      transcription: this.getFullText(transcriptions),
      transcriptions,
    };
  },
};
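The chunking arithmetic in `chunkFile` is worth seeing in isolation: the file size determines the chunk count, and the ffmpeg-reported duration determines each segment's length. A sketch using the same constants as the action:

```javascript
// Same arithmetic as chunkFile: ~20MB chunks to stay safely under OpenAI's
// ~26MB per-request limit, since ffmpeg segment sizes are approximate
const CHUNK_SIZE_MB = 20;

function numberOfChunks(fileSizeInMB) {
  return Math.ceil(fileSizeInMB / CHUNK_SIZE_MB);
}

// durationStr is ffmpeg's "HH:MM:SS.ss" format, e.g. "00:10:00.00"
function segmentSeconds(durationStr, chunks) {
  const [hours, minutes, seconds] = durationStr.split(":").map(parseFloat);
  const totalSeconds = hours * 3600 + minutes * 60 + seconds;
  return Math.ceil(totalSeconds / chunks);
}
```

For example, a 45MB, ten-minute file splits into three segments of roughly 200 seconds each; a 10MB file stays as a single chunk and is simply copied.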

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| OpenAI (ChatGPT) | openai | app | This component uses the OpenAI (ChatGPT) app. |
| Audio Upload Type | uploadType | string | Select a value from the drop down menu: `File`, `URL` |
| Language | language | string | Select a language from the drop down menu. Options cover the ISO 639-1 languages (e.g. English `en`, Spanish `es`, French `fr`, Chinese `zh`), from Afar (`aa`) through Zulu (`zu`). |

Action Authentication

OpenAI (ChatGPT) uses API keys for authentication. When you connect your OpenAI (ChatGPT) account, Pipedream securely stores the keys so you can easily authenticate to OpenAI (ChatGPT) APIs in both code and no-code steps.
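Behind the app's `createTranscription` call, the action makes a multipart POST to OpenAI's `/v1/audio/transcriptions` endpoint with the key as a Bearer token. A hedged sketch of that request (assumes Node 18+ globals `fetch`, `FormData`, and `Blob`; error handling and chunking omitted):

```javascript
import fs from "fs";

// Build the multipart body the Whisper endpoint expects; optional fields mirror
// the action's props (language, prompt)
function buildTranscriptionForm(path, { language, prompt } = {}) {
  const form = new FormData();
  form.append("model", "whisper-1");
  if (language) form.append("language", language);
  if (prompt) form.append("prompt", prompt);
  // The filename extension matters: OpenAI uses it to detect the audio format
  form.append("file", new Blob([fs.readFileSync(path)]), "audio.mp3");
  return form;
}

async function transcribe(path, apiKey, opts) {
  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: buildTranscriptionForm(path, opts),
  });
  return res.json();
}
```

The `fetch` call sets the multipart `Content-Type` boundary automatically from the `FormData` body, so only the Authorization header needs to be supplied.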

About OpenAI (ChatGPT)

OpenAI is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity. They are the makers of popular models like ChatGPT, DALL-E, and Whisper.

More Ways to Connect OpenAI (ChatGPT) + Phantombuster

  • Launch Phantom with Phantombuster API on New Fine Tuning Job Created from OpenAI (ChatGPT) API
  • Launch Phantom with Phantombuster API on New File Created from OpenAI (ChatGPT) API
  • Launch Phantom with Phantombuster API on New Run State Changed from OpenAI (ChatGPT) API
  • Create Image with OpenAI API on New Output Created Event from Phantombuster API
  • Send Prompt with OpenAI API on New Output Created Event from Phantombuster API
Related triggers and actions:

  • New Output Created Event from the Phantombuster API - Emit new events when new outputs are created. See the docs here
  • New File Created from the OpenAI (ChatGPT) API - Emit new event when a new file is created in OpenAI. See the documentation
  • New Fine Tuning Job Created from the OpenAI (ChatGPT) API - Emit new event when a new fine-tuning job is created in OpenAI. See the documentation
  • New Run State Changed from the OpenAI (ChatGPT) API - Emit new event every time a run changes its status. See the documentation
  • Launch Phantom with the Phantombuster API - Adds an agent to the launch queue. See the docs
  • Chat with the OpenAI (ChatGPT) API - The Chat API, using the gpt-3.5-turbo or gpt-4 model. See docs here
  • Summarize Text with the OpenAI (ChatGPT) API - Summarizes text using the Chat API
  • Classify Items into Categories with the OpenAI (ChatGPT) API - Classify items into specific categories using the Chat API
  • Translate Text (Whisper) with the OpenAI (ChatGPT) API - Translate text from one language to another using the Chat API
