Typeform + Vapi integrations

Update Assistant Settings with Vapi API on New Submission from Typeform API

Pipedream makes it easy to connect APIs for Vapi, Typeform, and 2,400+ other apps remarkably fast.

Trigger workflow on: New Submission from the Typeform API
Next, do this: Update Assistant Settings with the Vapi API

Getting Started

This integration creates a workflow with a Typeform trigger and Vapi action. When you configure and deploy the workflow, it will run on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Submission trigger
    1. Connect your Typeform account
    2. Select a Form
  3. Configure the Update Assistant Settings action
    1. Connect your Vapi account
    2. Select an Assistant ID
    3. Optional: Configure Transcriber
    4. Optional: Configure Model
    5. Optional: Configure Voice
    6. Optional: Configure First Message
    7. Optional: Select a First Message Mode
    8. Optional: Configure HIPAA Enabled
    9. Optional: Select one or more Client Messages
    10. Optional: Select one or more Server Messages
    11. Optional: Configure Silence Timeout Seconds
    12. Optional: Configure Max Duration Seconds
    13. Optional: Select a Background Sound
    14. Optional: Configure Background Denoising Enabled
    15. Optional: Configure Model Output in Messages Enabled
    16. Optional: Configure Transport Configurations
    17. Optional: Configure Credentials
    18. Optional: Configure Name
    19. Optional: Configure Voicemail Detection
    20. Optional: Configure Voicemail Message
    21. Optional: Configure End Call Message
    22. Optional: Configure End Call Phrases
    23. Optional: Configure Metadata
    24. Optional: Configure Analysis Plan
    25. Optional: Configure Artifact Plan
    26. Optional: Configure Message Plan
    27. Optional: Configure Start Speaking Plan
    28. Optional: Configure Stop Speaking Plan
    29. Optional: Configure Monitor Plan
    30. Optional: Configure Credential IDs
    31. Optional: Configure Server
  4. Deploy the workflow
  5. Send a test event to validate your setup
  6. Turn on the trigger

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit new submission
Version: 0.0.8
Key: typeform-new-submission

Typeform Overview

The Typeform API furnishes you with the means to create dynamic forms and collect user responses in real-time. By leveraging this API within Pipedream's serverless platform, you can automate workflows to process this data, integrate seamlessly with other services, and react to form submissions instantaneously. This empowers you to craft tailored responses, synchronize with databases, trigger email campaigns, or even manage event registrations without manual intervention.
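
Outside of this pre-built trigger, the same submission data is also reachable by polling Typeform's Responses API directly. A minimal Node.js sketch under that assumption; TYPEFORM_TOKEN and FORM_ID are placeholders for your own personal access token and form ID:

// Fetch the five most recent submissions for a form.
// Requires Node 18+ for the built-in fetch API (run as an ES module).
const formId = process.env.FORM_ID;       // placeholder: your form ID
const token = process.env.TYPEFORM_TOKEN; // placeholder: personal access token

const res = await fetch(
  `https://api.typeform.com/forms/${formId}/responses?page_size=5`,
  { headers: { Authorization: `Bearer ${token}` } },
);
const { items } = await res.json();
for (const item of items) {
  console.log(item.submitted_at, item.answers);
}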

Trigger Code

import { createHmac } from "crypto";
import sampleEmit from "./test-event.mjs";
import { uuid } from "uuidv4";
import common from "../common/common.mjs";
import constants from "../../constants.mjs";
import utils from "../common/utils.mjs";

const { typeform } = common.props;
const { parseIsoDate } = utils;

export default {
  ...common,
  key: "typeform-new-submission",
  name: "New Submission",
  version: "0.0.8",
  type: "source",
  description: "Emit new submission",
  props: {
    ...common.props,
    http: {
      type: "$.interface.http",
      customResponse: true,
    },
    db: "$.service.db",
    formId: {
      propDefinition: [
        typeform,
        "formId",
      ],
    },
  },
  methods: {
    ...common.methods,
    generateSecret() {
      // Shared secret registered with the webhook and used to verify payload signatures
      return "" + Math.random();
    },
  },
  hooks: {
    ...common.hooks,
    async activate() {
      const secret = this.generateSecret();
      this._setSecret(secret);

      let tag = this._getTag();
      if (!tag) {
        tag = uuid();
        this._setTag(tag);
      }

      return await this.typeform.createHook({
        endpoint: this.http.endpoint,
        formId: this.formId,
        tag,
        secret,
      });
    },
    async deactivate() {
      const tag = this._getTag();

      return await this.typeform.deleteHook({
        formId: this.formId,
        tag,
      });
    },
  },
  async run(event) {
    const {
      body,
      headers,
    } = event;

    const { [constants.TYPEFORM_SIGNATURE]: typeformSignature } = headers;

    // Recompute the HMAC with the shared secret from activate() and
    // compare it to the signature Typeform sent with the request
    if (typeformSignature) {
      const secret = this._getSecret();

      const hmac =
        createHmac(constants.ALGORITHM, secret)
          .update(body)
          .digest(constants.ENCODING);

      const signature = `${constants.ALGORITHM}=${hmac}`;

      if (typeformSignature !== signature) {
        throw new Error("signature mismatch");
      }
    }

    let formResponseString = "";
    const data = Object.assign({}, body.form_response);
    data.form_response_parsed = {};

    // Answers arrive in the same order as the form's field definitions,
    // so pair them by index and flatten each answer to a plain value
    for (let i = 0; i < body.form_response.answers.length; i++) {
      const field = body.form_response.definition.fields[i];
      const answer = body.form_response.answers[i];

      let parsedAnswer;
      let value = answer[answer.type];

      if (value.label) {
        parsedAnswer = value.label;

      } else if (value.labels) {
        parsedAnswer = value.labels.join();

      } else if (value.choice) {
        parsedAnswer = value.choice;

      } else if (value.choices) {
        parsedAnswer = value.choices.join();

      } else {
        parsedAnswer = value;
      }

      data.form_response_parsed[field.title] = parsedAnswer;
      formResponseString += `### ${field.title}\n${parsedAnswer}\n`;
    }

    data.form_response_string = formResponseString;
    data.raw_webhook_event = body;

    if (data.landed_at) {
      data.landed_at = parseIsoDate(data.landed_at);
    }

    if (data.submitted_at) {
      data.submitted_at = parseIsoDate(data.submitted_at);
    }

    data.form_title = body.form_response.definition.title;
    delete data.answers;
    delete data.definition;

    this.$emit(data, {
      summary: JSON.stringify(data),
      id: data.token,
    });

    this.http.respond({
      status: 200,
    });
  },
  sampleEmit,
};
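
For reference, the run() method above emits an event shaped roughly like the following (field titles and values are invented for illustration; note that the raw answers and definition keys are deleted and replaced by the parsed fields):

const sampleEvent = {
  form_id: "abc123",
  token: "a3a12ec67a1365927098a606107fac15", // also used as the dedupe id
  landed_at: "2024-01-01T00:00:00Z",         // normalized by parseIsoDate
  submitted_at: "2024-01-01T00:01:30Z",
  form_title: "Customer Intake",
  form_response_parsed: {
    "What is your name?": "Ada Lovelace",
    "Which products interest you?": "Voice,Analytics", // labels joined with commas
  },
  form_response_string:
    "### What is your name?\nAda Lovelace\n" +
    "### Which products interest you?\nVoice,Analytics\n",
  raw_webhook_event: { /* the full, unmodified webhook body */ },
};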

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
Each prop is listed as Label (prop name, type): description.

Typeform (typeform, app): This component uses the Typeform app.
N/A (http, $.interface.http): This component uses $.interface.http to generate a unique URL when the component is first instantiated. Each request to the URL will trigger the run() method of the component.
N/A (db, $.service.db): This component uses $.service.db to maintain state between executions.
Form (formId, string): Select a value from the drop down menu.

Trigger Authentication

Typeform uses OAuth authentication. When you connect your Typeform account, Pipedream will open a popup window where you can sign into Typeform and grant Pipedream permission to connect to your account. Pipedream securely stores and automatically refreshes the OAuth tokens so you can easily authenticate any Typeform API.

Pipedream requests the following authorization scopes when you connect your account:

offline, accounts:read, forms:write, forms:read, images:write, images:read, themes:write, themes:read, responses:read, responses:write, webhooks:read, webhooks:write, workspaces:read, workspaces:write
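
With the connected account, any Typeform endpoint can also be called from a Pipedream Node.js code step. A minimal sketch; the $auth.oauth_access_token field follows Pipedream's usual convention for OAuth apps and is an assumption here:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    typeform: {
      type: "app",
      app: "typeform",
    },
  },
  async run({ $ }) {
    // List the forms visible to the connected account
    return axios($, {
      url: "https://api.typeform.com/forms",
      headers: {
        Authorization: `Bearer ${this.typeform.$auth.oauth_access_token}`,
      },
    });
  },
});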

About Typeform

Typeform lets you build no-code forms, quizzes, and surveys, and get more responses.

Action

Description: Updates the configuration settings for a specific assistant. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update)
Version: 0.0.1
Key: vapi-update-assistant-settings

Vapi Overview

The Vapi API delivers voice automation capabilities, letting you build powerful voice response systems. With Vapi, you can automate calls, send voice messages, and create dynamic interactions through speech recognition and text-to-speech. Pipedream's serverless platform allows you to integrate Vapi's API with numerous other services to automate workflows, react to events, and orchestrate complex voice-enabled processes.
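
Under the hood, updating an assistant is a single documented REST call (PATCH /assistant/{id}, per the Vapi docs linked above). A minimal sketch of the equivalent direct request; VAPI_API_KEY and ASSISTANT_ID are placeholders, and the body fields shown are just examples:

// Requires Node 18+ for the built-in fetch API (run as an ES module).
const res = await fetch(
  `https://api.vapi.ai/assistant/${process.env.ASSISTANT_ID}`,
  {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    // Only the fields you include are updated
    body: JSON.stringify({
      firstMessage: "Thanks for your submission! How can I help?",
      silenceTimeoutSeconds: 30,
    }),
  },
);
console.log(await res.json());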

Action Code

import {
  BACKGROUND_SOUND,
  CLIENT_MESSAGE_OPTIONS,
  FIRST_MESSAGE_MODE_OPTIONS,
  SERVER_MESSAGE_OPTIONS,
} from "../../common/constants.mjs";
import {
  clearObj,
  parseObject,
} from "../../common/utils.mjs";
import vapi from "../../vapi.app.mjs";

export default {
  key: "vapi-update-assistant-settings",
  name: "Update Assistant Settings",
  description: "Updates the configuration settings for a specific assistant. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update)",
  version: "0.0.1",
  type: "action",
  props: {
    vapi,
    assistantId: {
      propDefinition: [
        vapi,
        "assistantId",
      ],
    },
    transcriber: {
      type: "object",
      label: "Transcriber",
      description: "A formatted JSON object for the assistant's transcriber. **Example: { \"provider\": \"talkscriber\", \"language\": \"en\", \"model\": \"whisper\" }**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    model: {
      type: "object",
      label: "Model",
      description: "A formatted JSON object for the assistant's LLM. **Example: {\"provider\": \"xai\", \"model\": \"grok-beta\", \"emotionRecognitionEnabled\": true, \"knowledgeBase\": {\"server\": {\"url\": \"url\", \"timeoutSeconds\": 20}}, \"knowledgeBaseId\": \"model\", \"maxTokens\": 1.1, \"messages\": [{\"role\": \"assistant\"}], \"numFastTurns\": 1.1, \"temperature\": 1.1, \"toolIds\": [\"model\"], \"tools\": [{\"type\": \"transferCall\", \"async\": false}]}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    voice: {
      type: "object",
      label: "Voice",
      description: "A formatted JSON object for the assistant's voice. **Example: {\"provider\":\"tavus\",\"voiceId\":\"r52da2535a\",\"callbackUrl\":\"voice\",\"chunkPlan\":{\"enabled\":true,\"minCharacters\":30,\"punctuationBoundaries\":[\"。\",\",\",\".\",\"!\",\"?\",\";\",\"،\",\",\",\"।\",\"॥\",\"|\",\"||\",\",\",\":\"],\"formatPlan\":{\"enabled\":true,\"numberToDigitsCutoff\":2025}},\"conversationName\":\"voice\",\"conversationalContext\":\"voice\",\"customGreeting\":\"voice\",\"fallbackPlan\":{\"voices\":[{\"provider\":\"tavus\",\"voiceId\":\"r52da2535a\"}]},\"personaId\":\"voice\",\"properties\":{\"maxCallDuration\":1.1,\"participantLeftTimeout\":1.1,\"participantAbsentTimeout\":1.1,\"enableRecording\":true,\"enableTranscription\":true,\"applyGreenscreen\":true,\"language\":\"language\",\"recordingS3BucketName\":\"recordingS3BucketName\",\"recordingS3BucketRegion\":\"recordingS3BucketRegion\",\"awsAssumeRoleArn\":\"awsAssumeRoleArn\"}}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    firstMessage: {
      type: "string",
      label: "First Message",
      description: "The first message the assistant will say or a URL to an audio file. If unspecified, assistant will wait for user to speak and use the model to respond once they speak.",
      optional: true,
    },
    firstMessageMode: {
      type: "string",
      label: "First Message Mode",
      description: "Mode for the first message",
      optional: true,
      options: FIRST_MESSAGE_MODE_OPTIONS,
    },
    hipaaEnabled: {
      type: "boolean",
      label: "HIPAA Enabled",
      description: "When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server.",
      optional: true,
    },
    clientMessages: {
      type: "string[]",
      label: "Client Messages",
      description: "These are the messages that will be sent to your Client SDKs",
      options: CLIENT_MESSAGE_OPTIONS,
      optional: true,
    },
    serverMessages: {
      type: "string[]",
      label: "Server Messages",
      description: "These are the messages that will be sent to your Server URL",
      options: SERVER_MESSAGE_OPTIONS,
      optional: true,
    },
    silenceTimeoutSeconds: {
      type: "integer",
      label: "Silence Timeout Seconds",
      description: "How many seconds of silence to wait before ending the call.",
      optional: true,
      default: 30,
      min: 10,
      max: 3600,
    },
    maxDurationSeconds: {
      type: "integer",
      label: "Max Duration Seconds",
      description: "This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.",
      optional: true,
      default: 600,
      min: 10,
      max: 43200,
    },
    backgroundSound: {
      type: "string",
      label: "Background Sound",
      description: "This is the background sound in the call. Default for phone calls is 'office' and default for web calls is 'off'.",
      optional: true,
      options: BACKGROUND_SOUND,
    },
    backgroundDenoisingEnabled: {
      type: "boolean",
      label: "Background Denoising Enabled",
      description: "This enables filtering of noise and background speech while the user is talking. Default false while in beta.",
      optional: true,
    },
    modelOutputInMessagesEnabled: {
      type: "boolean",
      label: "Model Output in Messages Enabled",
      description: "This determines whether the model's output is used in conversation history rather than the transcription of assistant's speech. Default false while in beta.",
      optional: true,
    },
    transportConfigurations: {
      type: "string[]",
      label: "Transport Configurations",
      description: "These are the configurations to be passed to the transport providers of assistant's calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used. **Example: [{\"provider\":\"twilio\",\"timeout\":60,\"record\":false,\"recordingChannels\":\"mono\"}]**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    credentials: {
      type: "string[]",
      label: "Credentials",
      description: "These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can supplement an additional credentials using this. Dynamic credentials override existing credentials. **Example: [{\"provider\":\"xai\",\"apiKey\":\"credentials\",\"name\":\"credentials\"}]**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    name: {
      type: "string",
      label: "Name",
      description: "Name of the assistant. This is required when you want to transfer between assistants in a call.",
      optional: true,
    },
    voicemailDetection: {
      type: "object",
      label: "Voicemail Detection",
      description: "These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using the model.tools=[VoicemailTool]. This uses Twilio's built-in detection while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not. **Example: {\"provider\":\"twilio\",\"voicemailDetectionTypes\":[\"machine_end_beep\",\"machine_end_silence\"],\"enabled\":true,\"machineDetectionTimeout\":1.1,\"machineDetectionSpeechThreshold\":1.1,\"machineDetectionSpeechEndThreshold\":1.1,\"machineDetectionSilenceTimeout\":1.1}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    voicemailMessage: {
      type: "string",
      label: "Voicemail Message",
      description: "This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up",
      optional: true,
    },
    endCallMessage: {
      type: "string",
      label: "End Call Message",
      description: "This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything",
      optional: true,
    },
    endCallPhrases: {
      type: "string[]",
      label: "End Call Phrases",
      description: "A list containing phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.",
      optional: true,
    },
    metadata: {
      type: "object",
      label: "Metadata",
      description: "This is for metadata you want to store on the assistant.",
      optional: true,
    },
    analysisPlan: {
      type: "object",
      label: "Analysis Plan",
      description: "This is the plan for analysis of assistant's calls. Stored in `call.analysis`. **Example: {\"summaryPlan\":{\"messages\":[{\"key\":\"value\"}],\"enabled\":true,\"timeoutSeconds\":1.1},\"structuredDataPlan\":{\"messages\":[{\"key\":\"value\"}],\"enabled\":true,\"schema\":{\"type\":\"string\"},\"timeoutSeconds\":1.1},\"successEvaluationPlan\":{\"rubric\":\"NumericScale\",\"messages\":[{\"key\":\"value\"}],\"enabled\":true,\"timeoutSeconds\":1.1}}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    artifactPlan: {
      type: "object",
      label: "Artifact Plan",
      description: "This is the plan for artifacts generated during assistant's calls. Stored in call.artifact. **Note:** `recordingEnabled` is currently at the root level. It will be moved to `artifactPlan` in the future, but will remain backwards compatible. **Example: {\"recordingEnabled\":true,\"videoRecordingEnabled\":false,\"transcriptPlan\":{\"enabled\":true,\"assistantName\":\"assistantName\",\"userName\":\"userName\"},\"recordingPath\":\"recordingPath\"}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    messagePlan: {
      type: "object",
      label: "Message Plan",
      description: "This is the plan for static predefined messages that can be spoken by the assistant during the call, like idleMessages. **Note:** `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible. **Example: {\"idleMessages\":[\"idleMessages\"],\"idleMessageMaxSpokenCount\":1.1,\"idleTimeoutSeconds\":1.1}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    startSpeakingPlan: {
      type: "object",
      label: "Start Speaking Plan",
      description: "This is the plan for when the assistant should start talking. **Example: {\"waitSeconds\":0.4,\"smartEndpointingEnabled\":false,\"customEndpointingRules\":[{\"type\":\"both\",\"assistantRegex\":\"customEndpointingRules\",\"customerRegex\":\"customEndpointingRules\",\"timeoutSeconds\":1.1}],\"transcriptionEndpointingPlan\":{\"onPunctuationSeconds\":0.1,\"onNoPunctuationSeconds\":1.5,\"onNumberSeconds\":0.5}}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    stopSpeakingPlan: {
      type: "object",
      label: "Stop Speaking Plan",
      description: "This is the plan for when assistant should stop talking on customer interruption. **Example: {\"numWords\":0,\"voiceSeconds\":0.2,\"backoffSeconds\":1}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    monitorPlan: {
      type: "object",
      label: "Monitor Plan",
      description: "This is the plan for real-time monitoring of the assistant's calls. **Note:** `serverMessages`, `clientMessages`, `serverUrl` and `serverUrlSecret` are currently at the root level but will be moved to `monitorPlan` in the future. Will remain backwards compatible. **Example: {\"listenEnabled\":false,\"controlEnabled\":false}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
    credentialIds: {
      type: "string[]",
      label: "Credential IDs",
      description: "These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.",
      optional: true,
    },
    server: {
      type: "object",
      label: "Server",
      description: "This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. **Example: {\"url\":\"url\",\"timeoutSeconds\":20,\"secret\":\"secret\",\"headers\":{\"key\":\"value\"}}**. [See the documentation](https://docs.vapi.ai/api-reference/assistants/update) for further details",
      optional: true,
    },
  },
  async run({ $ }) {
    // Props that may arrive as JSON strings are pulled out for parsing;
    // everything left in `data` (plain scalars) is sent through as-is
    const {
      vapi,
      assistantId,
      transcriber,
      model,
      voice,
      clientMessages,
      serverMessages,
      transportConfigurations,
      credentials,
      voicemailDetection,
      endCallPhrases,
      metadata,
      analysisPlan,
      artifactPlan,
      messagePlan,
      startSpeakingPlan,
      stopSpeakingPlan,
      monitorPlan,
      credentialIds,
      server,
      ...data
    } = this;

    const response = await vapi.updateAssistant({
      $,
      assistantId,
      data: clearObj({
        ...data,
        transcriber: parseObject(transcriber),
        model: parseObject(model),
        voice: parseObject(voice),
        clientMessages: parseObject(clientMessages),
        serverMessages: parseObject(serverMessages),
        transportConfigurations: parseObject(transportConfigurations),
        credentials: parseObject(credentials),
        voicemailDetection: parseObject(voicemailDetection),
        endCallPhrases: parseObject(endCallPhrases),
        metadata: parseObject(metadata),
        analysisPlan: parseObject(analysisPlan),
        artifactPlan: parseObject(artifactPlan),
        messagePlan: parseObject(messagePlan),
        startSpeakingPlan: parseObject(startSpeakingPlan),
        stopSpeakingPlan: parseObject(stopSpeakingPlan),
        monitorPlan: parseObject(monitorPlan),
        credentialIds: parseObject(credentialIds),
        server: parseObject(server),
      }),
    });
    $.export("$summary", `Updated assistant ${this.assistantId} successfully`);
    return response;
  },
};
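
The clearObj and parseObject helpers imported from common/utils.mjs aren't shown on this page. A plausible sketch of what they do, inferred from how they're used above (the real implementations live in Pipedream's components repo and may differ):

// parseObject: props typed `object`/`string[]` can arrive from the UI as
// JSON strings; normalize them back into real objects/arrays.
function parseObject(value) {
  if (value === undefined) return undefined;
  if (typeof value === "string") {
    try {
      return JSON.parse(value);
    } catch {
      return value; // leave plain, non-JSON strings untouched
    }
  }
  if (Array.isArray(value)) {
    return value.map(parseObject);
  }
  return value;
}

// clearObj: strip unset entries so the PATCH body only carries the
// settings the user actually configured.
function clearObj(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(([, v]) => v !== undefined && v !== ""),
  );
}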

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

Each prop is listed as Label (prop name, type): description.

Vapi (vapi, app): This component uses the Vapi app.

Assistant ID (assistantId, string): Select a value from the drop down menu.

Transcriber (transcriber, object): A formatted JSON object for the assistant's transcriber. Example: { "provider": "talkscriber", "language": "en", "model": "whisper" }. See the documentation for further details.

Model (model, object): A formatted JSON object for the assistant's LLM. Example: {"provider": "xai", "model": "grok-beta", "emotionRecognitionEnabled": true, "knowledgeBase": {"server": {"url": "url", "timeoutSeconds": 20}}, "knowledgeBaseId": "model", "maxTokens": 1.1, "messages": [{"role": "assistant"}], "numFastTurns": 1.1, "temperature": 1.1, "toolIds": ["model"], "tools": [{"type": "transferCall", "async": false}]}. See the documentation for further details.

Voice (voice, object): A formatted JSON object for the assistant's voice. Example: {"provider":"tavus","voiceId":"r52da2535a","callbackUrl":"voice","chunkPlan":{"enabled":true,"minCharacters":30,"punctuationBoundaries":["。",",",".","!","?",";","،",",","।","॥","|","||",",",":"],"formatPlan":{"enabled":true,"numberToDigitsCutoff":2025}},"conversationName":"voice","conversationalContext":"voice","customGreeting":"voice","fallbackPlan":{"voices":[{"provider":"tavus","voiceId":"r52da2535a"}]},"personaId":"voice","properties":{"maxCallDuration":1.1,"participantLeftTimeout":1.1,"participantAbsentTimeout":1.1,"enableRecording":true,"enableTranscription":true,"applyGreenscreen":true,"language":"language","recordingS3BucketName":"recordingS3BucketName","recordingS3BucketRegion":"recordingS3BucketRegion","awsAssumeRoleArn":"awsAssumeRoleArn"}}. See the documentation for further details.

First Message (firstMessage, string): The first message the assistant will say, or a URL to an audio file. If unspecified, the assistant will wait for the user to speak and use the model to respond once they do.

First Message Mode (firstMessageMode, string): Select a value from the drop down menu: Assistant Speaks First (assistant-speaks-first), Assistant Waits for User (assistant-waits-for-user), or Assistant Speaks First with Model Generated Message (assistant-speaks-first-with-model-generated-message).

HIPAA Enabled (hipaaEnabled, boolean): When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server.

Client Messages (clientMessages, string[]): Select one or more values from the drop down menu: conversation-update, function-call, function-call-result, hang, language-changed, metadata, model-output, speech-update, status-update, transcript, tool-calls, tool-calls-result, transfer-update, user-interrupted, voice-input.

Server Messages (serverMessages, string[]): Select one or more values from the drop down menu: conversation-update, end-of-call-report, function-call, hang, language-changed, language-change-detected, model-output, phone-call-control, speech-update, status-update, transcript, transcript[transcriptType="final"], tool-calls, transfer-destination-request, transfer-update, user-interrupted, voice-input.

Silence Timeout Seconds (silenceTimeoutSeconds, integer): How many seconds of silence to wait before ending the call.

Max Duration Seconds (maxDurationSeconds, integer): The maximum number of seconds that the call will last. When the call reaches this duration, it will be ended.

Background Sound (backgroundSound, string): Select a value from the drop down menu: Office (office) or Off (off).

Background Denoising Enabled (backgroundDenoisingEnabled, boolean): This enables filtering of noise and background speech while the user is talking. Default false while in beta.

Model Output in Messages Enabled (modelOutputInMessagesEnabled, boolean): This determines whether the model's output is used in conversation history rather than the transcription of the assistant's speech. Default false while in beta.

Transport Configurations (transportConfigurations, string[]): These are the configurations to be passed to the transport providers of the assistant's calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call's transport provider is used. Example: [{"provider":"twilio","timeout":60,"record":false,"recordingChannels":"mono"}]. See the documentation for further details.

Credentials (credentials, string[]): These are dynamic credentials that will be used for the assistant's calls. By default, all the credentials are available for use in the call, but you can supply additional credentials using this. Dynamic credentials override existing credentials. Example: [{"provider":"xai","apiKey":"credentials","name":"credentials"}]. See the documentation for further details.

Name (name, string): Name of the assistant. This is required when you want to transfer between assistants in a call.

Voicemail Detection (voicemailDetection, object): These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using model.tools=[VoicemailTool]. This setting uses Twilio's built-in detection, while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio's built-in detection is enabled while VoicemailTool is not. Example: {"provider":"twilio","voicemailDetectionTypes":["machine_end_beep","machine_end_silence"],"enabled":true,"machineDetectionTimeout":1.1,"machineDetectionSpeechThreshold":1.1,"machineDetectionSpeechEndThreshold":1.1,"machineDetectionSilenceTimeout":1.1}. See the documentation for further details.

Voicemail Message (voicemailMessage, string): This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.

End Call Message (endCallMessage, string): This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.

End Call Phrases (endCallPhrases, string[]): A list of phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.

Metadata (metadata, object): This is for metadata you want to store on the assistant.

Analysis Plan (analysisPlan, object): This is the plan for analysis of the assistant's calls. Stored in call.analysis. Example: {"summaryPlan":{"messages":[{"key":"value"}],"enabled":true,"timeoutSeconds":1.1},"structuredDataPlan":{"messages":[{"key":"value"}],"enabled":true,"schema":{"type":"string"},"timeoutSeconds":1.1},"successEvaluationPlan":{"rubric":"NumericScale","messages":[{"key":"value"}],"enabled":true,"timeoutSeconds":1.1}}. See the documentation for further details.

Artifact Plan (artifactPlan, object): This is the plan for artifacts generated during the assistant's calls. Stored in call.artifact. Note: recordingEnabled is currently at the root level; it will be moved to artifactPlan in the future, but will remain backwards compatible. Example: {"recordingEnabled":true,"videoRecordingEnabled":false,"transcriptPlan":{"enabled":true,"assistantName":"assistantName","userName":"userName"},"recordingPath":"recordingPath"}. See the documentation for further details.

Message Plan (messagePlan, object): This is the plan for static predefined messages that can be spoken by the assistant during the call, like idleMessages. Note: firstMessage, voicemailMessage, and endCallMessage are currently at the root level; they will be moved to messagePlan in the future, but will remain backwards compatible. Example: {"idleMessages":["idleMessages"],"idleMessageMaxSpokenCount":1.1,"idleTimeoutSeconds":1.1}. See the documentation for further details.

Start Speaking Plan (startSpeakingPlan, object): This is the plan for when the assistant should start talking. Example: {"waitSeconds":0.4,"smartEndpointingEnabled":false,"customEndpointingRules":[{"type":"both","assistantRegex":"customEndpointingRules","customerRegex":"customEndpointingRules","timeoutSeconds":1.1}],"transcriptionEndpointingPlan":{"onPunctuationSeconds":0.1,"onNoPunctuationSeconds":1.5,"onNumberSeconds":0.5}}. See the documentation for further details.

Stop Speaking Plan (stopSpeakingPlan, object): This is the plan for when the assistant should stop talking on customer interruption. Example: {"numWords":0,"voiceSeconds":0.2,"backoffSeconds":1}. See the documentation for further details.

Monitor Plan (monitorPlan, object): This is the plan for real-time monitoring of the assistant's calls. Note: serverMessages, clientMessages, serverUrl, and serverUrlSecret are currently at the root level but will be moved to monitorPlan in the future; this will remain backwards compatible. Example: {"listenEnabled":false,"controlEnabled":false}. See the documentation for further details.

Credential IDs (credentialIds, string[]): These are the credentials that will be used for the assistant's calls. By default, all the credentials are available for use in the call, but you can provide a subset using this.

Server (server, object): This is where Vapi will send webhooks. You can find all available webhooks along with their shape in the ServerMessage schema. Example: {"url":"url","timeoutSeconds":20,"secret":"secret","headers":{"key":"value"}}. See the documentation for further details.

Action Authentication

Vapi uses API keys for authentication. When you connect your Vapi account, Pipedream securely stores the keys so you can easily authenticate to Vapi APIs in both code and no-code steps.
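
The stored key works in code steps too. A minimal sketch mirroring the Typeform example earlier; the $auth.api_key field name is an assumption, so check the connected account's fields in the Pipedream UI:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    vapi: {
      type: "app",
      app: "vapi",
    },
  },
  async run({ $ }) {
    // List the assistants in the connected Vapi account
    return axios($, {
      url: "https://api.vapi.ai/assistant",
      headers: {
        Authorization: `Bearer ${this.vapi.$auth.api_key}`,
      },
    });
  },
});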

About Vapi

Vapi is the platform to build, test and deploy voicebots in minutes rather than months.

More Ways to Connect Vapi + Typeform

Create Call with Vapi API on New Submission from Typeform API
Upload File with Vapi API on New Submission from Typeform API
Create a Form with Typeform API on New Conversation Started from Vapi API
Create an Image with Typeform API on New Conversation Started from Vapi API
Delete Form with Typeform API on New Conversation Started from Vapi API

Related triggers and actions:

New Submission from the Typeform API: Emit new submission
New Conversation Started from the Vapi API: Emit new event when a voicebot starts a conversation.
Create a Form with the Typeform API: Creates a form with its corresponding fields. See the docs here.
Create an Image with the Typeform API: Adds an image to your Typeform account. See the docs here.
Delete an Image with the Typeform API: Deletes an image from your Typeform account. See the docs here.
Delete Form with the Typeform API: Select a form to be deleted. See the docs here.
Duplicate a Form with the Typeform API: Duplicates an existing form in your Typeform account and adds "(copy)" to the end of the title. See the docs here.
