I’m here because YouTuber Thomas Frank recommended it for an AI workflow involving ChatGPT.
It works…sometimes.
I’m testing on the free account, and I’m not happy to see a third of my daily credit allotment used up on timeouts. The workflow fails even though I’ve raised the timeout to 4 minutes for the OpenAI step.
And when it fails, I still burn credits, and I can’t tell whether the failure is on the OpenAI API’s side or on Pipedream’s.
In short, I don’t think it’s fair to use up credits on runs that time out. That said, what would actually solve this: raising the timeout limit, upgrading to a paid OpenAI account, or something else?
I’m facing the same problem. According to the documentation, “Pipedream charges one credit per 30 seconds of compute time at 256 megabytes of memory (the default) per workflow execution.”
A free account has 100 credits per day. I’ve set the timeout to 300 seconds (the max), but I still hit timeouts. The Whisper transcription step works fine (even though it takes long); the OpenAI Chat step is the one that times out. This only happens with large audio files (80 MB), not with smaller ones (2 MB). It isn’t my daily credit limit, because small audio files still process, and the Daily Usage report shows I’m far below 100. According to Thomas Frank’s blog, audio files up to 100 MB should work (he collaborated with the Pipedream team on this).
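If I read that pricing line correctly, the credit math alone explains the burn. A quick back-of-the-envelope sketch of my own (not Pipedream code; the proportional memory multiplier is my assumption, though it matches the 20 credits others report at 512 MB):

// My own back-of-the-envelope sketch, not Pipedream code.
// Per the pricing line quoted above: one credit per 30s at 256 MB;
// scaling proportionally with memory is my assumption.
function creditsUsed(seconds, memoryMB = 256) {
  return Math.ceil(seconds / 30) * (memoryMB / 256)
}

console.log(creditsUsed(300))      // 10 — a run that hits the 300s timeout at 256 MB
console.log(creditsUsed(300, 512)) // 20 — the same timeout at 512 MB
console.log(creditsUsed(45))       // 2  — a short, successful run

So every timed-out run at the 300-second cap costs 10 credits: ten failed attempts and the daily 100 is gone.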
TIMEOUT: TIMEOUT
    at Timeout._onTimeout (/var/task/lambda_handler.js:798:23)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7)
The step causing the timeout is this code block from Thomas Frank, which summarizes the transcription with the OpenAI Chat API in Node.js. It only happens for larger transcriptions. (The TIMEOUT appears to be Pipedream’s own execution limit firing in lambda_handler.js rather than an error returned by OpenAI.)
import { Configuration, OpenAIApi } from "openai"
import { encode, decode } from "gpt-3-encoder"

export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    }
  },
  async run({ steps, $ }) {
    // Import the transcript from the previous step
    const transcript = steps.create_transcription.$return_value.transcription

    // Set the max number of input tokens per chunk
    const maxTokens = 2000

    // Initialize OpenAI
    const openAIkey = this.openai.$auth.api_key
    const configuration = new Configuration({
      apiKey: openAIkey,
    })
    const openai = new OpenAIApi(configuration)

    // Split the transcript into shorter strings if needed, based on the GPT token limit
    function splitTranscript(encodedTranscript, maxTokens) {
      const stringsArray = []
      let currentIndex = 0

      while (currentIndex < encodedTranscript.length) {
        let endIndex = Math.min(currentIndex + maxTokens, encodedTranscript.length)

        // Find the next period, so chunks end on a sentence boundary
        while (endIndex < encodedTranscript.length && decode([encodedTranscript[endIndex]]) !== ".") {
          endIndex++
        }

        // Include the period in the current string
        if (endIndex < encodedTranscript.length) {
          endIndex++
        }

        // Add the current chunk to the stringsArray
        const chunk = encodedTranscript.slice(currentIndex, endIndex)
        stringsArray.push(decode(chunk))

        currentIndex = endIndex
      }

      return stringsArray
    }

    const encoded = encode(transcript)
    const stringsArray = splitTranscript(encoded, maxTokens)
    const result = await sendToChat(stringsArray)
    return result

    // Function to send transcript string(s) to the Chat API, one chunk at a time
    async function sendToChat(stringsArray) {
      const resultsArray = []

      for (let arr of stringsArray) {
        // Define the prompt
        const prompt = `Analyze the transcript provided below, then provide the following:
        Key "title" - add a title.
        Key "summary" - create a summary.
        Key "main_points" - add an array of the main points. Limit each item to 100 words, and limit the list to 10 items.
        Key "action_items" - add an array of action items. Limit each item to 100 words, and limit the list to 5 items.
        Key "follow_up" - add an array of follow-up questions. Limit each item to 100 words, and limit the list to 5 items.
        Key "stories" - add an array of any stories, examples, or cited works found in the transcript. Limit each item to 200 words, and limit the list to 5 items.
        Key "arguments" - add an array of potential arguments against the transcript. Limit each item to 100 words, and limit the list to 5 items.
        Key "related_topics" - add an array of topics related to the transcript. Limit each item to 100 words, and limit the list to 5 items.
        Key "sentiment" - add a sentiment analysis.

        Ensure that the final element of any array within the JSON object is not followed by a comma.

        Transcript:

        ${arr}`

        // Retry each chunk up to 3 times on transient 500 errors
        let retries = 3
        while (retries > 0) {
          try {
            const completion = await openai.createChatCompletion({
              model: "gpt-3.5-turbo",
              messages: [
                { role: "user", content: prompt },
                { role: "system", content: `You are an assistant that only speaks JSON. Do not write normal text.

                Example formatting:

                {
                  "title": "Notion Buttons",
                  "summary": "A collection of buttons for Notion",
                  "action_items": [
                    "item 1",
                    "item 2",
                    "item 3"
                  ],
                  "follow_up": [
                    "item 1",
                    "item 2",
                    "item 3"
                  ],
                  "arguments": [
                    "item 1",
                    "item 2",
                    "item 3"
                  ],
                  "related_topics": [
                    "item 1",
                    "item 2",
                    "item 3"
                  ],
                  "sentiment": "positive"
                }` },
              ],
              temperature: 0.2,
            })

            resultsArray.push(completion)
            break
          } catch (error) {
            if (error.response && error.response.status === 500) {
              retries--
              if (retries === 0) {
                throw new Error("Failed to get a response from OpenAI Chat API after 3 attempts.")
              }
              console.log("OpenAI Chat API returned a 500 error. Retrying...")
            } else {
              throw error
            }
          }
        }
      }

      return resultsArray
    }
  },
})
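To get a feel for why only long audio fails, I did a rough chunk-count estimate (my own sketch, not part of the workflow; the words-per-minute figure is a rough assumption):

import { encode } from "gpt-3-encoder"

// My own rough check, not part of the workflow: how many Chat requests
// does a transcript produce at maxTokens = 2000?
function estimateChunks(transcript, maxTokens = 2000) {
  return Math.ceil(encode(transcript).length / maxTokens)
}

// A 54-minute talk is very roughly 8,000+ words, i.e. on the order of
// 10,000 tokens, so at 2,000 tokens per chunk that's ~5 or more
// sequential Chat requests in a single run.
console.log(estimateChunks("<paste transcript here>"))

At tens of seconds per request, a sequential loop over five or six chunks can exceed 300 seconds on its own.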
Using the GPT-4 API or raising memory to 512 MB didn’t solve the issue. The problem is most likely the length of the audio: I have a 54-minute file. Shorter audio of the same type/bitrate, say under 30 minutes, works fine (I haven’t pinned down the exact length where it starts failing).
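Since sendToChat awaits each chunk one at a time, total runtime grows linearly with audio length. One thing I’d try (an untested variation of my own, not Thomas Frank’s code) is firing the chunk requests concurrently with Promise.all, so wall-clock time is roughly the slowest single request rather than the sum of all of them:

import { Configuration, OpenAIApi } from "openai"

// My own untested variation on the sendToChat step above, not part of
// the original workflow. The prompt is shortened here for the example;
// in practice you'd reuse the full prompt construction from the step above.
async function sendToChatConcurrent(stringsArray, apiKey) {
  const openai = new OpenAIApi(new Configuration({ apiKey }))

  const requests = stringsArray.map((arr) =>
    openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: `Summarize this transcript chunk as JSON:\n\n${arr}` }],
      temperature: 0.2,
    })
  )

  // All chunks are in flight at once. Beware OpenAI rate limits (429s)
  // if a long transcript produces many chunks; the 500-retry logic from
  // the original loop would need to move inside the mapped function.
  return Promise.all(requests)
}

That wouldn’t fix per-request latency, but it might keep a 54-minute transcript inside the 300-second window.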
I am having the exact same problem with timeouts. I have increased the timeout to 750 seconds, which is the max on my account, and increased the memory to 512 MB (consuming 20 credits). Is there any way to get around this? Is this actually due to ChatGPT? Is there some way to have it improved on the GPT side?