What can you build with LiveKit and Hugging Face?
Emit a new event for LiveKit room activities, delivered via webhook. See the documentation
Create a new ingress from a URL in LiveKit. See the documentation
Want a know-it-all bot that can answer any question? This action lets you ask a question and get an answer from a trained model. See the docs
This action reads an image input and outputs the likelihood of each class, letting you classify images into categories. See the docs
Sketches of how each of these might look in code follow below.
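To consume the room-activity webhook, livekit-server-sdk ships a WebhookReceiver that verifies the signed Authorization header before decoding the event. A minimal sketch, assuming an HTTP trigger that exposes the raw request body and its headers; the steps.trigger.event paths below are assumptions about that trigger's shape, not part of this page's examples:

import { WebhookReceiver } from "livekit-server-sdk";

export default defineComponent({
  props: {
    livekit: {
      type: "app",
      app: "livekit",
    },
  },
  async run({ steps, $ }) {
    const receiver = new WebhookReceiver(
      this.livekit.$auth.api_key,
      this.livekit.$auth.secret_key,
    );
    // receive() checks the signed Authorization header against the raw
    // body, then returns the decoded webhook event
    const event = await receiver.receive(
      steps.trigger.event.body, // assumed: raw request body string
      steps.trigger.event.headers.authorization, // assumed header path
    );
    // event.event names the activity, e.g. "room_started" or "participant_joined"
    return event;
  },
})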
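Creating an ingress from a URL goes through the IngressClient in the same SDK. A sketch under the same credentials as the room example below; the ingress name, room name, identity, and media URL are all placeholder values:

import { IngressClient, IngressInput } from "livekit-server-sdk";

export default defineComponent({
  props: {
    livekit: {
      type: "app",
      app: "livekit",
    },
  },
  async run({ steps, $ }) {
    const ingressClient = new IngressClient(
      this.livekit.$auth.project_url,
      this.livekit.$auth.api_key,
      this.livekit.$auth.secret_key,
    );
    // URL_INPUT pulls media from an external URL into the room
    return await ingressClient.createIngress(IngressInput.URL_INPUT, {
      name: "my-ingress", // placeholder
      roomName: "my-room", // placeholder
      participantIdentity: "ingress-bot", // placeholder
      url: "https://example.com/stream.m3u8", // placeholder media URL
    });
  },
})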
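For the question-answering action, the underlying call is a POST to the Hugging Face Inference API with a question and a context passage. A sketch against deepset/roberta-base-squad2; the model choice and the inputs are illustrative, not necessarily what the packaged action uses:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Extractive QA takes a question plus the context to search for the answer
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
      data: {
        inputs: {
          question: "What is LiveKit used for?", // placeholder question
          context: "LiveKit is an open-source platform for building real-time audio and video applications.", // placeholder context
        },
      },
    });
  },
})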
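Image classification works the same way, except the request body is the raw image bytes and the response is a list of label/score pairs, i.e. the likelihood of each class. A sketch assuming a hypothetical image at /tmp/photo.jpg and the google/vit-base-patch16-224 model, again an illustrative choice:

import { axios } from "@pipedream/platform";
import fs from "fs";

export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Hypothetical local file; in a real workflow the image would come
    // from a previous step
    const image = fs.readFileSync("/tmp/photo.jpg");
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/google/vit-base-patch16-224",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
        "Content-Type": "application/octet-stream",
      },
      data: image, // raw bytes, not JSON
    });
  },
})

The component below, from this page's own example, connects to a LiveKit account and lists the rooms active in the project: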
import { RoomServiceClient } from "livekit-server-sdk";

export default defineComponent({
  props: {
    livekit: {
      type: "app",
      app: "livekit",
    },
  },
  async run({ steps, $ }) {
    // RoomServiceClient authenticates against your LiveKit project
    // using the connected account's credentials
    const svc = new RoomServiceClient(
      this.livekit.$auth.project_url,
      this.livekit.$auth.api_key,
      this.livekit.$auth.secret_key,
    );
    // List the rooms currently active in the project
    return await svc.listRooms();
  },
})
The Hugging Face API provides access to a vast range of machine learning models, primarily for natural language processing (NLP) tasks like text classification, translation, summarization, and question answering. It lets you leverage pre-trained models and fine-tune them on your data. Using the API within Pipedream, you can automate workflows that involve language processing, integrate AI insights into your apps, or respond to events with AI-generated content.
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Verify the connected account by calling the whoami endpoint
    return await axios($, {
      url: `https://huggingface.co/api/whoami-v2`,
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
    });
  },
})
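Beyond the account check above, the same authenticated pattern reaches any model hosted on the Inference API. For instance, a summarization sketch against facebook/bart-large-cnn; the model, input text, and max_length value are illustrative assumptions:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/facebook/bart-large-cnn",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
      data: {
        inputs: "LiveKit is an open-source platform for building real-time audio and video applications on top of WebRTC.", // placeholder text to summarize
        parameters: { max_length: 60 }, // cap the summary length
      },
    });
  },
})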