Research lab exploring new frontiers of Voice AI. Deploying tools for prime long-form synthetic speech, voice cloning and automatic dubbing.
Write custom Node.js code and use any of the 400k+ npm packages available. Refer to the Pipedream Node docs to learn more.
Download one or more history items to your workflow's tmp directory. If one history item ID is provided, a single audio file is returned. If more than one history item ID is provided, the history items are packed into a .zip file. See the documentation
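As a rough sketch, a Node.js code step could call the bulk download endpoint directly. The POST /v1/history/download path, the historyItemIds prop, and the output filename below are assumptions drawn from the ElevenLabs API reference, not part of this page:

import { axios } from "@pipedream/platform"
import fs from "fs"

export default defineComponent({
  props: {
    elevenlabs: { type: "app", app: "elevenlabs" },
    // Hypothetical prop: one or more history item IDs to download
    historyItemIds: { type: "string[]", label: "History Item IDs" },
  },
  async run({ steps, $ }) {
    // One ID yields a single audio file; multiple IDs yield a .zip archive
    const data = await axios($, {
      method: "POST",
      url: "https://api.elevenlabs.io/v1/history/download",
      headers: { "xi-api-key": `${this.elevenlabs.$auth.api_key}` },
      data: { history_item_ids: this.historyItemIds },
      responseType: "arraybuffer",
    })
    const ext = this.historyItemIds.length > 1 ? "zip" : "mp3"
    const filePath = `/tmp/history-items.${ext}`
    fs.writeFileSync(filePath, Buffer.from(data))
    return filePath
  },
})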
Returns the audio of a history item and converts it to a file. See the documentation
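A single item can be fetched the same way. This sketch assumes the GET /v1/history/{history_item_id}/audio endpoint and a hypothetical historyItemId prop, and simply writes the audio to /tmp:

import { axios } from "@pipedream/platform"
import fs from "fs"

export default defineComponent({
  props: {
    elevenlabs: { type: "app", app: "elevenlabs" },
    // Hypothetical prop: the history item whose audio should be saved
    historyItemId: { type: "string", label: "History Item ID" },
  },
  async run({ steps, $ }) {
    // Fetch the raw audio and convert it to a file under /tmp
    const data = await axios($, {
      url: `https://api.elevenlabs.io/v1/history/${this.historyItemId}/audio`,
      headers: { "xi-api-key": `${this.elevenlabs.$auth.api_key}` },
      responseType: "arraybuffer",
    })
    const filePath = `/tmp/${this.historyItemId}.mp3`
    fs.writeFileSync(filePath, Buffer.from(data))
    return filePath
  },
})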
The ElevenLabs API offers text-to-speech capabilities with realistic voice synthesis. Integrating this API on Pipedream allows you to build automated workflows that convert text content into spoken audio files. You can trigger these conversions from various events, process the text data, send it to the ElevenLabs API, and handle the audio output—all within a serverless environment.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    elevenlabs: {
      type: "app",
      app: "elevenlabs",
    },
  },
  async run({ steps, $ }) {
    // Call the ElevenLabs /v1/user endpoint using the connected account's API key
    return await axios($, {
      url: `https://api.elevenlabs.io/v1/user`,
      headers: {
        "Accept": `application/json`,
        "xi-api-key": `${this.elevenlabs.$auth.api_key}`,
      },
    })
  },
})
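The text-to-speech call itself follows the same pattern. The sketch below assumes the POST /v1/text-to-speech/{voice_id} endpoint and treats voiceId and text as placeholder props; check the current ElevenLabs API reference for the exact parameters:

import { axios } from "@pipedream/platform"
import fs from "fs"

export default defineComponent({
  props: {
    elevenlabs: { type: "app", app: "elevenlabs" },
    // Placeholder props: supply a real voice ID and the text to synthesize
    voiceId: { type: "string", label: "Voice ID" },
    text: { type: "string", label: "Text" },
  },
  async run({ steps, $ }) {
    // Request synthesized speech and save the MP3 to /tmp for later steps
    const data = await axios($, {
      method: "POST",
      url: `https://api.elevenlabs.io/v1/text-to-speech/${this.voiceId}`,
      headers: {
        "xi-api-key": `${this.elevenlabs.$auth.api_key}`,
        "Content-Type": "application/json",
      },
      data: { text: this.text },
      responseType: "arraybuffer",
    })
    const filePath = "/tmp/speech.mp3"
    fs.writeFileSync(filePath, Buffer.from(data))
    return filePath
  },
})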
Develop, run, and deploy your Node.js code in Pipedream workflows, using it between no-code steps, with connected accounts, or to integrate Data Stores and File Stores.
This includes installing npm packages within your code, without having to manage a package.json file or run npm install.
Installing a package is just an import: the ElevenLabs example above imports the axios package from @pipedream/platform, performs the API request, and shares the response with subsequent workflow steps. The snippet below shows the basic scaffold of a Pipedream Node.js code step, which returns the trigger event so later steps can use it:
// To use previous step data, pass the `steps` object to the run() function
export default defineComponent({
  async run({ steps, $ }) {
    // Return data to use it in future steps
    return steps.trigger.event
  },
})