What can you build with Google Cloud Vision and LiveKit?
Emit new events for LiveKit room activities received via webhook. See the documentation
Performs feature detection on a local or remote image file. See the documentation
Creates a new ingress from a URL in LiveKit. See the documentation
Detects logos within a local or remote image file. See the documentation
Detects text in a local or remote image file. See the documentation
The Google Cloud Vision API allows you to analyze images in the cloud, harnessing Google's machine learning technology. You can detect and classify multiple objects, detect faces and landmarks, recognize handwriting, and extract image attributes. Combined with Pipedream's serverless platform, it becomes straightforward to build automated workflows that process images, trigger actions, and integrate seamlessly with other services.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    google_cloud_vision_api: {
      type: "app",
      app: "google_cloud_vision_api",
    },
  },
  async run({ steps, $ }) {
    // Verify the connected Google account by fetching its OAuth user info
    return await axios($, {
      url: `https://www.googleapis.com/oauth2/v1/userinfo`,
      headers: {
        Authorization: `Bearer ${this.google_cloud_vision_api.$auth.oauth_access_token}`,
      },
    })
  },
})
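As a sketch of the image-analysis actions listed above (feature, logo, and text detection), the component below sends an annotation request to the Cloud Vision images:annotate REST endpoint using the same OAuth token. The imageUrl prop and the chosen feature types are illustrative assumptions; adjust them to the detection you need.

import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    google_cloud_vision_api: {
      type: "app",
      app: "google_cloud_vision_api",
    },
    // Hypothetical prop: a publicly reachable image URL to analyze
    imageUrl: {
      type: "string",
      label: "Image URL",
    },
  },
  async run({ steps, $ }) {
    // Ask Cloud Vision for text and logo annotations on the remote image
    return await axios($, {
      method: "POST",
      url: "https://vision.googleapis.com/v1/images:annotate",
      headers: {
        Authorization: `Bearer ${this.google_cloud_vision_api.$auth.oauth_access_token}`,
      },
      data: {
        requests: [
          {
            image: { source: { imageUri: this.imageUrl } },
            features: [
              { type: "TEXT_DETECTION" },
              { type: "LOGO_DETECTION" },
            ],
          },
        ],
      },
    })
  },
})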
import { RoomServiceClient } from 'livekit-server-sdk';

export default defineComponent({
  props: {
    livekit: {
      type: "app",
      app: "livekit",
    },
  },
  async run({ steps, $ }) {
    // Authenticate against the LiveKit project and list its active rooms
    const svc = new RoomServiceClient(
      this.livekit.$auth.project_url,
      this.livekit.$auth.api_key,
      this.livekit.$auth.secret_key);
    return await svc.listRooms();
  },
})
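For the "create a new ingress from a URL" action, a minimal sketch using the livekit-server-sdk IngressClient is shown below. The ingressUrl and roomName props are illustrative assumptions, and the exact CreateIngressOptions fields may vary by SDK version.

import { IngressClient, IngressInput } from 'livekit-server-sdk';

export default defineComponent({
  props: {
    livekit: {
      type: "app",
      app: "livekit",
    },
    // Hypothetical props: the media URL to pull from and the target room
    ingressUrl: {
      type: "string",
      label: "Source URL",
    },
    roomName: {
      type: "string",
      label: "Room Name",
    },
  },
  async run({ steps, $ }) {
    const ingress = new IngressClient(
      this.livekit.$auth.project_url,
      this.livekit.$auth.api_key,
      this.livekit.$auth.secret_key);
    // Pull the media at the given URL into the room as a new ingress participant
    return await ingress.createIngress(IngressInput.URL_INPUT, {
      name: "url-ingress",
      roomName: this.roomName,
      participantIdentity: "url-ingress-bot",
      url: this.ingressUrl,
    });
  },
})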