OpenAI is an AI research and deployment company with the mission to ensure that artificial general intelligence benefits all of humanity. It is the maker of popular models and products such as ChatGPT, DALL-E, and Whisper.
Emit new event any time the media status of an item changes to completed in Sonix.
Emit new event when a new batch is completed in OpenAI. See the documentation
Emit new event when a new file is created in OpenAI (see the example request after this list). See the documentation
Emit new event when a new fine-tuning job is created in OpenAI. See the documentation
Emit new event every time a run changes its status. See the documentation
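For example, the new-file trigger corresponds to data you can also query yourself: OpenAI's files endpoint (GET /v1/files) lists the files uploaded to your account. The component below is a minimal sketch of that request using Pipedream's axios helper and your connected OpenAI account (the same auth pattern shown further down this page).
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    },
  },
  async run({ steps, $ }) {
    // List files uploaded to your OpenAI account
    return await axios($, {
      url: `https://api.openai.com/v1/files`,
      headers: {
        Authorization: `Bearer ${this.openai.$auth.api_key}`,
      },
    })
  },
})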
The Chat API, using the gpt-3.5-turbo or gpt-4 model (see the example request after this list). See the documentation
Creates a new translation for a selected media file. See the documentation
Summarizes text using the Chat API. See the documentation
Gets the text transcript of a selected media file. See the documentation
Classifies items into specific categories using the Chat API. See the documentation
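Several of these actions (chat, summarize, classify) use OpenAI's Chat API. As a rough sketch of what such a request looks like when made directly, the component below posts a single user message to the Chat Completions endpoint with Pipedream's axios helper; the model and prompt are placeholders, not the exact payloads the prebuilt actions send.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    },
  },
  async run({ steps, $ }) {
    // Send one user message to the Chat Completions endpoint.
    // Swap the model for gpt-4 and adjust the prompt as needed.
    return await axios($, {
      method: "POST",
      url: `https://api.openai.com/v1/chat/completions`,
      headers: {
        Authorization: `Bearer ${this.openai.$auth.api_key}`,
      },
      data: {
        model: "gpt-3.5-turbo",
        messages: [
          { role: "user", content: "Summarize the following text: ..." },
        ],
      },
    })
  },
})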
OpenAI provides a suite of powerful AI models through its API, enabling developers to integrate advanced natural language processing and generative capabilities into their applications.
Use Python or Node.js code to make fully authenticated API requests with your OpenAI account:
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    }
  },
  async run({ steps, $ }) {
    // List the models available to your OpenAI account
    return await axios($, {
      url: `https://api.openai.com/v1/models`,
      headers: {
        Authorization: `Bearer ${this.openai.$auth.api_key}`,
      },
    })
  },
})
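The same component structure works for any other OpenAI endpoint: change the url, add method and data for POST requests (as in the Chat Completions sketch above), and keep the Authorization header built from this.openai.$auth.api_key.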
The Sonix API enables automated transcription of audio and video files into text, offering functions like uploading media, managing files, and retrieving transcripts. Leveraging Pipedream’s capabilities, you can integrate the Sonix API with various services to streamline media processing workflows, making transcription tasks more efficient. By automating interactions with Sonix, you can trigger actions based on the transcription status, analyze content, and connect transcribed text with other apps for further processing or analysis.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    sonix: {
      type: "app",
      app: "sonix",
    }
  },
  async run({ steps, $ }) {
    // List the media files in your Sonix account
    return await axios($, {
      url: `https://api.sonix.ai/v1/media`,
      headers: {
        Authorization: `Bearer ${this.sonix.$auth.api_key}`,
      },
    })
  },
})
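Once a media item finishes processing, you can pull its transcript with a similar request. The sketch below assumes Sonix's transcript endpoint at GET /v1/media/{id}/transcript and introduces a hypothetical mediaId prop; check the Sonix documentation for the exact path and available output formats.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    sonix: {
      type: "app",
      app: "sonix",
    },
    // Hypothetical prop: the ID of a media item that has finished processing
    mediaId: {
      type: "string",
      label: "Media ID",
    },
  },
  async run({ steps, $ }) {
    // Retrieve the text transcript for the given media item.
    // The /transcript path is an assumption based on Sonix's API docs;
    // verify the exact path and format options in the documentation.
    return await axios($, {
      url: `https://api.sonix.ai/v1/media/${this.mediaId}/transcript`,
      headers: {
        Authorization: `Bearer ${this.sonix.$auth.api_key}`,
      },
    })
  },
})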