with fal.ai and Hugging Face?
Adds a request to the queue for asynchronous processing, optionally specifying a webhook URL for receiving status updates. See the documentation
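For example, queueing a request from a Pipedream component with the @fal-ai/client library might look like the sketch below; the fal-ai/flux/dev endpoint, the prompt, and the webhook URL are placeholders, not values taken from this page.
import { fal } from "@fal-ai/client"
export default defineComponent({
  props: {
    fal_ai: {
      type: "app",
      app: "fal_ai",
    },
  },
  async run({ steps, $ }) {
    // Authenticate the fal client with the connected account's API key
    fal.config({
      credentials: this.fal_ai.$auth.api_key,
    });
    // Submit the job to the queue instead of waiting for it to finish;
    // fal.ai will POST status updates to the webhook URL (placeholder)
    const { request_id } = await fal.queue.submit("fal-ai/flux/dev", {
      input: { prompt: "A calm mountain lake at sunrise" },
      webhookUrl: "https://example.com/fal-webhook",
    });
    return request_id;
  },
})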
Want to have a nice know-it-all bot that can answer any question? This action allows you to ask a question and get an answer from a trained model. See the docs
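A minimal sketch of that pattern with the Hugging Face Inference API is shown below; the deepset/roberta-base-squad2 model and the question and context strings are illustrative assumptions, and the API responds with an answer plus a confidence score.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Send a question and its supporting context to a question-answering
    // model; the model ID below is just one example
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
      data: {
        inputs: {
          question: "What does Pipedream connect?",
          context: "Pipedream connects APIs so you can build event-driven workflows.",
        },
      },
    })
  },
})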
Cancels a request in the queue. This allows you to stop a long-running task if it's no longer needed. See the documentation
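Cancelling might look roughly like the sketch below, assuming the request ID was saved when the job was queued and that the client's queue.cancel helper is available; the fal-ai/flux/dev endpoint and the request_id prop are placeholders.
import { fal } from "@fal-ai/client"
export default defineComponent({
  props: {
    fal_ai: {
      type: "app",
      app: "fal_ai",
    },
    request_id: {
      type: "string",
      label: "Request ID", // the ID returned when the request was submitted
    },
  },
  async run({ steps, $ }) {
    fal.config({
      credentials: this.fal_ai.$auth.api_key,
    });
    // Ask the queue to stop the in-flight request identified by request_id
    return await fal.queue.cancel("fal-ai/flux/dev", {
      requestId: this.request_id,
    });
  },
})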
This task reads an image input and outputs the likelihood of each class, allowing you to classify images into categories. See the docs
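One way to do that from a Pipedream component is sketched below; the google/vit-base-patch16-224 model and the image_url prop are assumptions, and the API returns a list of label/score pairs.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
    image_url: {
      type: "string",
      label: "Image URL", // publicly reachable image to classify (placeholder)
    },
  },
  async run({ steps, $ }) {
    // Download the image, then send the raw bytes to an image-classification
    // model; the response lists candidate labels with their scores
    const image = await axios($, {
      url: this.image_url,
      responseType: "arraybuffer",
    });
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/google/vit-base-patch16-224",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
        "Content-Type": "application/octet-stream",
      },
      data: image,
    })
  },
})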
Gets the response of a completed request in the queue. This retrieves the results of your asynchronous task. See the documentation
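Fetching the output might look like the sketch below, again assuming you kept the request ID from the submit step; the fal-ai/flux/dev endpoint and the request_id prop are placeholders.
import { fal } from "@fal-ai/client"
export default defineComponent({
  props: {
    fal_ai: {
      type: "app",
      app: "fal_ai",
    },
    request_id: {
      type: "string",
      label: "Request ID", // the ID returned when the request was submitted
    },
  },
  async run({ steps, $ }) {
    fal.config({
      credentials: this.fal_ai.$auth.api_key,
    });
    // Retrieve the output of a request that has already completed in the queue
    const result = await fal.queue.result("fal-ai/flux/dev", {
      requestId: this.request_id,
    });
    return result;
  },
})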
import { fal } from "@fal-ai/client"
export default defineComponent({
  props: {
    fal_ai: {
      type: "app",
      app: "fal_ai",
    },
  },
  async run({ steps, $ }) {
    // Authenticate the fal client with the connected account's API key
    fal.config({
      credentials: this.fal_ai.$auth.api_key,
    });
    // Subscribe to a model endpoint: the request is queued, and the call
    // resolves with the generated output once the job completes
    const result = await fal.subscribe("fal-ai/lora", {
      input: {
        model_name: "stabilityai/stable-diffusion-xl-base-1.0",
        prompt:
          "Photo of a rhino dressed suit and tie sitting at a table in a bar with a bar stools, award winning photography, Elke vogelsang",
      },
      logs: true,
    });
    return result;
  },
})
The Hugging Face API provides access to a vast range of machine learning models, primarily for natural language processing (NLP) tasks like text classification, translation, summarization, and question answering. It lets you leverage pre-trained models and fine-tune them on your data. Using the API within Pipedream, you can automate workflows that involve language processing, integrate AI insights into your apps, or respond to events with AI-generated content.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Call the whoami-v2 endpoint to confirm the connected account's token
    return await axios($, {
      url: `https://huggingface.co/api/whoami-v2`,
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
    })
  },
})
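The whoami-v2 call above only verifies the connected token; a task-specific request follows the same pattern against the Inference API. The sketch below runs summarization, where facebook/bart-large-cnn and the input text are just illustrative choices.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    hugging_face: {
      type: "app",
      app: "hugging_face",
    },
  },
  async run({ steps, $ }) {
    // Summarize a block of text with a hosted summarization model;
    // the model ID below is one common choice, not the only option
    return await axios($, {
      method: "POST",
      url: "https://api-inference.huggingface.co/models/facebook/bart-large-cnn",
      headers: {
        Authorization: `Bearer ${this.hugging_face.$auth.access_token}`,
      },
      data: {
        inputs: "Pipedream is an integration platform that lets developers connect APIs and build event-driven workflows, adding custom code where they need it.",
      },
    })
  },
})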