
Create Transcription with OpenAI (ChatGPT) API on New Form Answer Update from Google Forms API

Pipedream makes it easy to connect APIs for OpenAI (ChatGPT), Google Forms and 1,600+ other apps remarkably fast.

Trigger workflow on
New Form Answer Update from the Google Forms API
Next, do this
Create Transcription with the OpenAI (ChatGPT) API

Trusted by 750,000+ developers from startups to Fortune 500 companies



Getting Started

This integration creates a workflow with a Google Forms trigger and OpenAI (ChatGPT) action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Form Answer Update trigger
    1. Connect your Google Forms account
    2. Configure timer
    3. Configure Form ID
  3. Configure the Create Transcription action
    1. Connect your OpenAI (ChatGPT) account
    2. Select an Audio Upload Type
    3. (Optional) Select a Language
  4. Deploy the workflow
  5. Send a test event to validate your setup
  6. Turn on the trigger

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit a new event when an answer is sent or updated. [See the documentation](https://developers.google.com/forms/api/reference/rest/v1/forms.responses/list)
Version: 0.0.1
Key: google_forms-new-form-answer-update

Trigger Code

import base from "../common/base.mjs";

export default {
  ...base,
  key: "google_forms-new-form-answer-update",
  name: "New Form Answer Update",
  description: "Emit a new event when an answer is sent or updated. [See the documentation](https://developers.google.com/forms/api/reference/rest/v1/forms.responses/list)",
  version: "0.0.1",
  dedupe: "last",
  type: "source",
  methods: {
    ...base.methods,
    generateMeta(response) {
      return {
        id: new Date(response.lastSubmittedTime).getTime(),
        summary: "New Answer Update",
      };
    },
  },
  async run({ $ }) {
    const lastSubmittedTime = this.getLastSubmittedTime();
    const { responses } = await this.googleForms.listFormResponses({
      formId: this.formId,
      $,
      params: {
        filter: lastSubmittedTime && `timestamp >= ${lastSubmittedTime}`,
      },
    });
    if (!responses || !Array.isArray(responses)) {
      return;
    }
    const responseSorted = this.sortResponses(responses);
    this.setLastSubmittedTime(
      responseSorted.length && responseSorted[0].lastSubmittedTime,
    );
    this.emitResponses(responseSorted.reverse());
  },
};
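The trigger's polling strategy — filter out responses older than the last seen `lastSubmittedTime`, sort newest-first, persist the newest timestamp, then emit oldest-first — can be sketched in isolation. The helpers below are hypothetical stand-ins (the real component delegates state and emission to `base.mjs` and `$.service.db`):

```javascript
// Build the Forms API filter string; the API accepts
// `timestamp >= <RFC 3339 time>` on forms.responses.list.
function buildFilter(lastSubmittedTime) {
  return lastSubmittedTime ? `timestamp >= ${lastSubmittedTime}` : undefined;
}

// One polling pass over a page of responses. Returns the events to emit
// (oldest-first) and the new high-water-mark timestamp to persist.
function pollOnce(responses, lastSubmittedTime) {
  const fresh = (responses || []).filter((r) =>
    !lastSubmittedTime ||
    new Date(r.lastSubmittedTime) >= new Date(lastSubmittedTime));
  // Sort newest-first so the newest timestamp is easy to read off…
  const sorted = [...fresh].sort((a, b) =>
    new Date(b.lastSubmittedTime) - new Date(a.lastSubmittedTime));
  const newLastSubmittedTime = sorted.length
    ? sorted[0].lastSubmittedTime
    : lastSubmittedTime;
  // …then reverse so events are emitted in chronological order.
  return {
    toEmit: sorted.reverse(),
    newLastSubmittedTime,
  };
}
```

Because the filter uses `>=`, the most recently seen response can reappear on the next poll; the component's `dedupe: "last"` setting suppresses that duplicate.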

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| Google Forms | `googleForms` | `app` | This component uses the Google Forms app. |
| timer | `timer` | `$.interface.timer` |  |
| N/A | `db` | `$.service.db` | This component uses `$.service.db` to maintain state between executions. |
| Form ID | `formId` | `string` | Identifier of a Google Form |
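The form ID is the long token in the form's URL. A small helper (hypothetical, not part of the component) can pull it out of links shaped like `https://docs.google.com/forms/d/<FORM_ID>/edit` or the published `/forms/d/e/<FORM_ID>/viewform` variant:

```javascript
// Extract a Google Form ID from a docs.google.com form URL.
// Assumes the common URL shapes /forms/d/<id>/... and /forms/d/e/<id>/...
function extractFormId(url) {
  const match = url.match(/\/forms\/d\/(?:e\/)?([A-Za-z0-9_-]+)/);
  return match ? match[1] : null;
}
```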

Trigger Authentication

Google Forms uses OAuth authentication. When you connect your Google Forms account, Pipedream will open a popup window where you can sign into Google Forms and grant Pipedream permission to connect to your account. Pipedream securely stores and automatically refreshes the OAuth tokens so you can easily authenticate any Google Forms API.

Pipedream requests the following authorization scopes when you connect your account:

  • email
  • profile
  • https://www.googleapis.com/auth/forms.body
  • https://www.googleapis.com/auth/forms.body.readonly
  • https://www.googleapis.com/auth/forms.responses.readonly

About Google Forms

Get insights quickly, with Google Forms. Easily create and share online forms and surveys, and analyze responses in real-time.

Action

Description: Transcribes audio into the input language. [See docs here](https://platform.openai.com/docs/api-reference/audio/create).
Version: 0.1.6
Key: openai-create-transcription

OpenAI (ChatGPT) Overview

The OpenAI API provides access to a range of powerful machine learning models.
With it, developers can build products, services, and tools that deliver
natural, human-like AI experiences, and create applications that extend the
power of human language.

Using the OpenAI API, developers can create language-driven applications such
as:

  • Natural language understanding and sentiment analysis
  • Text-based search
  • Question answering
  • Dialogue systems and conversation agents
  • Intelligent text completion
  • Text summarization
  • Text classification
  • Machine translation
  • Language generation
  • Multi-factor authentication
  • Anomaly detection
  • Text analysis

Action Code

import ffmpegInstaller from "@ffmpeg-installer/ffmpeg";
import { ConfigurationError } from "@pipedream/platform";
import axios from "axios";
import Bottleneck from "bottleneck";
import { exec } from "child_process";
import FormData from "form-data";
import fs from "fs";
import {
  extname,
  join,
} from "path";
import stream from "stream";
import { promisify } from "util";
import openai from "../../openai.app.mjs";
import common from "../common/common.mjs";
import constants from "../common/constants.mjs";
import lang from "../common/lang.mjs";

const COMMON_AUDIO_FORMATS_TEXT = "Your audio file must be in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.";
const CHUNK_SIZE_MB = 20;

const execAsync = promisify(exec);
const pipelineAsync = promisify(stream.pipeline);

export default {
  name: "Create Transcription",
  version: "0.1.6",
  key: "openai-create-transcription",
  description: "Transcribes audio into the input language. [See docs here](https://platform.openai.com/docs/api-reference/audio/create).",
  type: "action",
  props: {
    openai,
    uploadType: {
      label: "Audio Upload Type",
      description: "Are you uploading an audio file from [your workflow's `/tmp` directory](https://pipedream.com/docs/code/nodejs/working-with-files/#the-tmp-directory), or providing a URL to the file?",
      type: "string",
      options: [
        "File",
        "URL",
      ],
      reloadProps: true,
    },
    language: {
      label: "Language",
      description: "**Optional**. The language of the input audio. Supplying the input language will improve accuracy and latency.",
      type: "string",
      optional: true,
      options: lang.LANGUAGES.map((l) => ({
        label: l.label,
        value: l.value,
      })),
    },
  },
  async additionalProps() {
    const props = {};
    switch (this.uploadType) {
    case "File":
      props.path = {
        type: "string",
        label: "File Path",
        description: `A path to your audio file to transcribe, e.g. \`/tmp/audio.mp3\`. ${COMMON_AUDIO_FORMATS_TEXT} Add the appropriate extension (mp3, mp4, etc.) on your filename — OpenAI uses the extension to determine the file type. [See the Pipedream docs on saving files to \`/tmp\`](https://pipedream.com/docs/code/nodejs/working-with-files/#writing-a-file-to-tmp)`,
      };
      break;
    case "URL":
      props.url = {
        type: "string",
        label: "URL",
        description: `A public URL to the audio file to transcribe. This URL must point directly to the audio file, not a webpage that links to the audio file. ${COMMON_AUDIO_FORMATS_TEXT}`,
      };
      break;
    default:
      throw new ConfigurationError("Invalid upload type specified. Please provide 'File' or 'URL'.");
    }
    // Because we need to display the file or URL above, and not below, these optional props
    // TODO: Will be fixed when we render optional props correctly when used with additionalProps
    props.prompt = {
      label: "Prompt",
      description: "**Optional** text to guide the model's style or continue a previous audio segment. The [prompt](https://platform.openai.com/docs/guides/speech-to-text/prompting) should match the audio language.",
      type: "string",
      optional: true,
    };
    props.responseFormat = {
      label: "Response Format",
      description: "**Optional**. The format of the response. The default is `json`.",
      type: "string",
      default: "json",
      optional: true,
      options: constants.TRANSCRIPTION_FORMATS,
    };
    props.temperature = common.props.temperature;

    return props;
  },
  methods: {
    createForm({
      file, outputDir,
    }) {
      const form = new FormData();
      form.append("model", "whisper-1");
      if (this.prompt) form.append("prompt", this.prompt);
      if (this.temperature) form.append("temperature", this.temperature);
      if (this.language) form.append("language", this.language);
      if (this.responseFormat) form.append("response_format", this.responseFormat);
      const readStream = fs.createReadStream(join(outputDir, file));
      form.append("file", readStream);
      return form;
    },
    async splitLargeChunks(files, outputDir) {
      for (const file of files) {
        if (fs.statSync(`${outputDir}/${file}`).size / (1024 * 1024) > CHUNK_SIZE_MB) {
          await this.chunkFile({
            file: `${outputDir}/${file}`,
            outputDir,
            index: file.slice(6, 9),
          });
          await execAsync(`rm -f "${outputDir}/${file}"`);
        }
      }
    },
    async chunkFileAndTranscribe({
      file, $,
    }) {
      const outputDir = join("/tmp", "chunks");
      await execAsync(`mkdir -p "${outputDir}"`);
      // Keep the glob outside the quotes so the shell actually expands it
      await execAsync(`rm -f "${outputDir}"/*`);

      await this.chunkFile({
        file,
        outputDir,
      });

      let files = await fs.promises.readdir(outputDir);
      // ffmpeg will sometimes return chunks larger than the allowed size,
      // so we need to identify large chunks and break them down further
      await this.splitLargeChunks(files, outputDir);
      files = await fs.promises.readdir(outputDir);

      return this.transcribeFiles({
        files,
        outputDir,
        $,
      });
    },
    async chunkFile({
      file, outputDir, index,
    }) {
      const ffmpegPath = ffmpegInstaller.path;
      const ext = extname(file);

      const fileSizeInMB = fs.statSync(file).size / (1024 * 1024);
      // OpenAI limits audio uploads to 25MB per request. Because of how ffmpeg
      // splits files, we need to be conservative in the number of chunks we create
      const conservativeChunkSizeMB = CHUNK_SIZE_MB;
      const numberOfChunks = !index
        ? Math.ceil(fileSizeInMB / conservativeChunkSizeMB)
        : 2;

      if (numberOfChunks === 1) {
        await execAsync(`cp "${file}" "${outputDir}/chunk-000${ext}"`);
        return;
      }

      const { stdout } = await execAsync(`${ffmpegPath} -i "${file}" 2>&1 | grep "Duration"`);
      const duration = stdout.match(/\d{2}:\d{2}:\d{2}\.\d{2}/s)[0];
      const [
        hours,
        minutes,
        seconds,
      ] = duration.split(":").map(parseFloat);

      const totalSeconds = (hours * 60 * 60) + (minutes * 60) + seconds;
      const segmentTime = Math.ceil(totalSeconds / numberOfChunks);

      const command = `${ffmpegPath} -i "${file}" -f segment -segment_time ${segmentTime} -c copy "${outputDir}/chunk-${index
        ? `${index}-`
        : ""}%03d${ext}"`;
      await execAsync(command);
    },
    transcribeFiles({
      files, outputDir, $,
    }) {
      const limiter = new Bottleneck({
        maxConcurrent: 1,
        minTime: 1000 / 59,
      });

      return Promise.all(files.map((file) => {
        return limiter.schedule(() => this.transcribe({
          file,
          outputDir,
          $,
        }));
      }));
    },
    transcribe({
      file, outputDir, $,
    }) {
      const form = this.createForm({
        file,
        outputDir,
      });
      return this.openai.createTranscription({
        $,
        form,
      });
    },
    getFullText(transcriptions = []) {
      return transcriptions.map((t) => t.text || t).join(" ");
    },
  },
  async run({ $ }) {
    const {
      url,
      path,
    } = this;

    if (!url && !path) {
      throw new ConfigurationError("Must specify either File URL or File Path");
    }

    let file;

    if (path) {
      if (!fs.existsSync(path)) {
        throw new ConfigurationError(`${path} does not exist`);
      }

      file = path;
    } else if (url) {
      const ext = extname(url);

      const response = await axios({
        method: "GET",
        url,
        responseType: "stream",
        timeout: 250000,
      });

      const bufferStream = new stream.PassThrough();
      response.data.pipe(bufferStream);

      const downloadPath = join("/tmp", `audio${ext}`);
      const writeStream = fs.createWriteStream(downloadPath);

      await pipelineAsync(bufferStream, writeStream);

      file = downloadPath;
    }

    const transcriptions = await this.chunkFileAndTranscribe({
      file,
      $,
    });

    if (transcriptions.length) {
      $.export("$summary", "Successfully created transcription");
    }

    return {
      transcription: this.getFullText(transcriptions),
      transcriptions,
    };
  },
};
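The chunking arithmetic above — derive a chunk count from the file size, then a per-segment duration from the total length — can be checked standalone. This sketch mirrors `chunkFile` under the same 20 MB target; the function names are illustrative, not part of the component:

```javascript
const CHUNK_SIZE_MB = 20;

// How many segments ffmpeg should produce for a file of this size.
function chunkCount(fileSizeInMB) {
  return Math.ceil(fileSizeInMB / CHUNK_SIZE_MB);
}

// ffmpeg's -segment_time argument: total duration split evenly across chunks.
// durationString is ffmpeg's "HH:MM:SS.ss" format, as parsed in chunkFile.
function segmentTime(durationString, numberOfChunks) {
  const [hours, minutes, seconds] = durationString.split(":").map(parseFloat);
  const totalSeconds = hours * 3600 + minutes * 60 + seconds;
  return Math.ceil(totalSeconds / numberOfChunks);
}
```

Because `-c copy` segments on keyframe boundaries, chunk sizes are only approximate, which is why the action re-checks each chunk's size and splits oversized ones again.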

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| OpenAI (ChatGPT) | `openai` | `app` | This component uses the OpenAI (ChatGPT) app. |
| Audio Upload Type | `uploadType` | `string` | Select a value from the drop down menu: `File`, `URL` |
| Language | `language` | `string` | Select a language from the drop down menu. Options are ISO 639-1 codes with labels, e.g. `en` (English), `es` (Spanish), `fr` (French), `de` (German), `ja` (Japanese), `zh` (Chinese); the full list mirrors `lang.mjs` in the component source. |

Action Authentication

OpenAI (ChatGPT) uses API keys for authentication. When you connect your OpenAI (ChatGPT) account, Pipedream securely stores the keys so you can easily authenticate to OpenAI (ChatGPT) APIs in both code and no-code steps.
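Under the hood, the action posts multipart form data to OpenAI's `/v1/audio/transcriptions` endpoint with the API key in a bearer `Authorization` header. A minimal sketch of the request shape (field names follow the public API reference; the key shown is a placeholder, and nothing is sent over the network):

```javascript
// Describe the transcription request the action ultimately makes.
// Returns a plain object rather than performing any I/O.
function buildTranscriptionRequest(apiKey, { language, prompt } = {}) {
  const fields = { model: "whisper-1" };
  if (language) fields.language = language; // e.g. "en"
  if (prompt) fields.prompt = prompt;
  return {
    method: "POST",
    url: "https://api.openai.com/v1/audio/transcriptions",
    headers: { Authorization: `Bearer ${apiKey}` },
    // The audio itself is attached as the multipart "file" field.
    fields,
  };
}
```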

About OpenAI (ChatGPT)

OpenAI is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity. They are the makers of popular models like ChatGPT, DALL-E, and Whisper.

More Ways to Connect OpenAI (ChatGPT) + Google Forms

  • Create Form with Google Forms API on New File Created from OpenAI (ChatGPT) API
  • Create Text Question with Google Forms API on New File Created from OpenAI (ChatGPT) API
  • Get Form with Google Forms API on New File Created from OpenAI (ChatGPT) API
  • List Form Responses with Google Forms API on New File Created from OpenAI (ChatGPT) API
  • Update Form Title with Google Forms API on New File Created from OpenAI (ChatGPT) API

Related triggers:

  • New Form Answer from the Google Forms API: Emit a new event when the form is answered.
  • New Form Answer Update from the Google Forms API: Emit a new event when an answer is sent or updated.
  • New File Created from the OpenAI (ChatGPT) API: Emit a new event when a new file is created in OpenAI.
  • New Fine Tuning Job Created from the OpenAI (ChatGPT) API: Emit a new event when a new fine-tuning job is created in OpenAI.

Related actions:

  • Create Form with the Google Forms API: Creates a new Google Form.
  • Create Text Question with the Google Forms API: Creates a new text question in a Google Form.
  • Get Form with the Google Forms API: Get information about a Google Form.
  • Get Form Response with the Google Forms API: Get a response from a form.
  • List Form Responses with the Google Forms API: List a form's responses.
