This topic was automatically generated from Slack. You can find the original thread here.
Can anyone help me with this error: Error - Request failed with status code 400
```json
{"error":{"message":"This model's maximum context length is 4097 tokens. However, your messages resulted in 4208 tokens. Please reduce the length of the messages.","type":"invalid_request_error","param":"messages","code":"context_length_exceeded"}}
```
ok… I built a workflow: Drive → ChatGPT Whisper → summarize transcript with ChatGPT → Node → Notion. Is there any way I can shorten the text within the workflow?
Sure! Here's a Node.js code step that shortens a string prop to 4,000 words and returns the result:

```javascript
export default defineComponent({
  props: {
    inputText: {
      type: "string",
      label: "Input Text",
    },
  },
  async run({ steps, $ }) {
    const words = this.inputText.split(" ");
    const shortenedText = words.slice(0, 4000).join(" ");
    return shortenedText;
  },
});
```

This code step takes an input prop called inputText, splits it into words, and then slices the array to keep only the first 4,000 words. Finally, it joins the words back together and returns the shortened text.
Just a note: OpenAI's "tokens" are NOT the same as words. Depending on the complexity, rarity, and length of a word, some words use more tokens than others. As a general rule of thumb, an average word is about 1.3 tokens.
So you may need to decrease the word count even further than 4,000.
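To make that concrete, here's a back-of-the-envelope sketch of how to budget words against a token limit using the ~1.3 tokens-per-word heuristic. The numbers are estimates, not exact counts; for precise counting you'd use a real tokenizer (e.g. OpenAI's tiktoken library), and the function name here is just illustrative:

```javascript
// Rough token budgeting using the ~1.3 tokens-per-word rule of thumb.
// 4097 tokens / 1.3 tokens-per-word ≈ 3,151 words for the WHOLE request,
// including your prompt text, not just the transcript.
const MAX_TOKENS = 4097;
const TOKENS_PER_WORD = 1.3;
const maxWords = Math.floor(MAX_TOKENS / TOKENS_PER_WORD); // ≈ 3151

// Truncate a string to an estimated token budget (hypothetical helper)
function truncateToTokenBudget(text, budgetWords = maxWords) {
  const words = text.split(/\s+/);
  return words.slice(0, budgetWords).join(" ");
}
```

In practice you'd also want to subtract the length of your summarization prompt and leave headroom for the model's reply, since those count against the same context window.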
Also, just a tip: if you have long transcripts, you can write a Node.js step (ask Pi to help you, like Marco did) to first break them down into chunks, then use another Node.js step to call the GPT API to summarize the chunks sequentially. You can then combine the summaries in a later step, as in the sketch below.
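Here's a minimal sketch of that chunk-then-summarize approach as a single Pipedream Node.js code step. It assumes Node 18+ (for the global fetch), an OPENAI_API_KEY environment variable, and the standard OpenAI chat completions endpoint; the chunk size, model name, and prompt wording are illustrative choices, not requirements:

```javascript
export default defineComponent({
  props: {
    transcript: {
      type: "string",
      label: "Transcript",
    },
  },
  async run({ steps, $ }) {
    // ~2,000 words per chunk keeps each request comfortably under the
    // 4,097-token limit, leaving room for the prompt and the model's reply
    const words = this.transcript.split(/\s+/);
    const chunkSize = 2000;
    const chunks = [];
    for (let i = 0; i < words.length; i += chunkSize) {
      chunks.push(words.slice(i, i + chunkSize).join(" "));
    }

    // Summarize each chunk sequentially so no single request
    // exceeds the model's context window
    const summaries = [];
    for (const chunk of chunks) {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-3.5-turbo",
          messages: [
            {
              role: "user",
              content: `Summarize this transcript excerpt:\n\n${chunk}`,
            },
          ],
        }),
      });
      const data = await res.json();
      summaries.push(data.choices[0].message.content);
    }

    // Combine the per-chunk summaries; a later step can summarize
    // this combined text again if it is still too long
    return summaries.join("\n\n");
  },
});
```

If the combined summaries are still over the limit, you can run the same step once more on the combined text to produce a final, shorter summary before sending it to Notion.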