What can you build with ElevenLabs and Google Cloud Vision?
Performs feature detection on a local or remote image file. See the documentation
Detects logos within a local or remote image file. See the documentation
Download one or more history items to your workflow's tmp
directory. If a single history item ID is provided, a single audio file is returned; if multiple history item IDs are provided, the history items are packed into a .zip file. See the documentation
The ElevenLabs API offers text-to-speech capabilities with realistic voice synthesis. Integrating this API on Pipedream allows you to build automated workflows that convert text content into spoken audio files. You can trigger these conversions from various events, process the text data, send it to the ElevenLabs API, and handle the audio output—all within a serverless environment.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    elevenlabs: {
      type: "app",
      app: "elevenlabs",
    },
  },
  async run({ steps, $ }) {
    // Verify the connection by fetching the authenticated user's details.
    return await axios($, {
      url: "https://api.elevenlabs.io/v1/user",
      headers: {
        "Accept": "application/json",
        "xi-api-key": `${this.elevenlabs.$auth.api_key}`,
      },
    })
  },
})
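The component above only fetches account details to confirm the API key works. An actual text-to-speech conversion goes through the `POST /v1/text-to-speech/{voice_id}` endpoint. As a sketch, the helper below builds the axios request configuration for that call; the voice ID and model name are illustrative placeholders, not values from your account:

```javascript
// Build the axios request config for an ElevenLabs text-to-speech call.
// voiceId and modelId are placeholders -- substitute real values from your account.
function buildTtsRequest(apiKey, voiceId, text, modelId = "eleven_multilingual_v2") {
  return {
    method: "POST",
    url: `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    headers: {
      "xi-api-key": apiKey,
      "Content-Type": "application/json",
    },
    data: { text, model_id: modelId },
    // The endpoint returns audio bytes, so ask axios for the raw buffer.
    responseType: "arraybuffer",
  };
}

// Inside the component's run() you would pass this config straight to axios, e.g.:
// return await axios($, buildTtsRequest(this.elevenlabs.$auth.api_key, voiceId, "Hello world"));
```

Keeping the request construction in a plain function makes it easy to reuse across steps (for example, once per history item) while the component handles auth and triggering.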
The Google Cloud Vision API allows you to analyze images in the cloud, harnessing Google's machine learning technology. You can detect and classify multiple objects, detect faces and landmarks, recognize handwriting, and extract image attributes. Combined with Pipedream's serverless platform, this makes it easy to create automated workflows that process images, trigger actions, and integrate seamlessly with other services.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    google_cloud_vision_api: {
      type: "app",
      app: "google_cloud_vision_api",
    },
  },
  async run({ steps, $ }) {
    // Verify the OAuth connection by fetching the authenticated user's profile.
    return await axios($, {
      url: "https://www.googleapis.com/oauth2/v1/userinfo",
      headers: {
        Authorization: `Bearer ${this.google_cloud_vision_api.$auth.oauth_access_token}`,
      },
    })
  },
})
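Like the ElevenLabs snippet, this component only checks credentials. Real image analysis goes through the Vision API's `images:annotate` endpoint. The helper below sketches a logo-detection request for a remote image; the image URL is a placeholder, and the feature type can be swapped for any Vision feature enum such as `LABEL_DETECTION`:

```javascript
// Build the request config for a Google Cloud Vision images:annotate call
// on a remote image. featureType follows the Vision API feature enum,
// e.g. "LOGO_DETECTION", "LABEL_DETECTION", or "DOCUMENT_TEXT_DETECTION".
function buildAnnotateRequest(imageUri, featureType = "LOGO_DETECTION", maxResults = 10) {
  return {
    method: "POST",
    url: "https://vision.googleapis.com/v1/images:annotate",
    data: {
      requests: [
        {
          image: { source: { imageUri } },
          features: [{ type: featureType, maxResults }],
        },
      ],
    },
  };
}

// In the component's run(), merge in the OAuth header and send it, e.g.:
// return await axios($, {
//   ...buildAnnotateRequest("https://example.com/photo.png"),
//   headers: { Authorization: `Bearer ${this.google_cloud_vision_api.$auth.oauth_access_token}` },
// });
```

Because `requests` is an array, a single call can batch several images or request multiple feature types per image, which keeps a workflow within quota when processing many files.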