How to Fix Error 400 Related to Exceeding Maximum Context Length in a Model?

This topic was automatically generated from Slack. You can find the original thread here.

Can anyone help me with this error: Error - Request failed with status code 400
{"error":{"message":"This model's maximum context length is 4097 tokens. However, your messages resulted in 4208 tokens. Please reduce the length of the messages.","type":"invalid_request_error","param":"messages","code":"context_length_exceeded"}}

From the error message, I assume this has to do with ChatGPT.

Tokens are basically words. This error just means that the text submitted to ChatGPT was too long.

ok… I built a workflow: Drive - ChatGPT Whisper - summarize transcript with ChatGPT - Node - Notion. Is there any way I can shorten the text within the workflow?

You could use a Node.js step to do that. I don’t think there’s any other way. :thinking_face:

could you maybe explain how to do that?

Could you please write a Node.js code step to shorten a string prop to 4000 words, and return the result?

Sure! Here's a Node.js code step that shortens a string prop to 4000 words and returns the result:

```javascript
export default defineComponent({
  props: {
    inputText: {
      type: "string",
      label: "Input Text",
    },
  },
  async run({ steps, $ }) {
    const words = this.inputText.split(" ");
    const shortenedText = words.slice(0, 4000).join(" ");
    return shortenedText;
  },
});
```

This code step takes an input prop called `inputText`, splits it into words, and then slices the array to keep only the first 4000 words. Finally, it joins the words back together and returns the shortened text.

You can copy paste the code above into a Node.js code step, and it should do the shortening for you! :ok_hand:

Do I just add it to the code I already have, or do I make another step in the workflow?

What Pi provided is a standalone step, but you could also use the code directly within one of your own code steps.

Just this part should be enough if you already have your own code:

```javascript
const shortenedText = inputText.split(" ").slice(0, 4000).join(" ");
```

Note that obviously, shortening the text can alter the resulting summary.

If the original text is 10,000 words, only the first 4000 words will be summarized.

Ok :grinning: Thank you so much for your help

just a note, OpenAI's "tokens" are NOT the same as words. Depending on the complexity, rarity, and length of a word, some words use more tokens than others. As a general rule of thumb, an average word is about ~1.3 tokens.

You may need to decrease the word count even more.
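As a rough illustration of that rule of thumb, here's a hedged sketch for estimating a safe word budget from a token limit. The 1.3 tokens-per-word ratio, the reserved-token figure, and the function names are assumptions for illustration, not OpenAI's actual tokenizer:

```javascript
// Rule-of-thumb estimate only -- this is NOT OpenAI's real tokenizer.
// Assumes an average English word costs ~1.3 tokens.
const TOKENS_PER_WORD = 1.3;

// Reserve some tokens for the prompt text and the model's reply
// (500 is an arbitrary illustrative buffer).
function safeWordCount(tokenLimit, reservedTokens = 500) {
  return Math.floor((tokenLimit - reservedTokens) / TOKENS_PER_WORD);
}

// Truncate a string to the estimated word budget for a given token limit.
function truncateToTokenBudget(text, tokenLimit) {
  return text.split(" ").slice(0, safeWordCount(tokenLimit)).join(" ");
}

// For a 4097-token model, this keeps roughly the first 2,766 words.
```

For exact counts, you'd want a real tokenizer (OpenAI's tiktoken) rather than a word-based estimate, but this approximation is usually enough to stay under the limit.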

Also, just a tip – if you have long transcripts, you can write a Node.js step (ask Pi to help you like Marco did) to first break them down into chunks, then use another Node.js step to call the GPT API to summarize the chunks sequentially. You can then combine the summaries in a later step.
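The chunking part of that tip could be sketched like this. The chunk size and the `summarizeChunk` call are illustrative assumptions, not a specific Pipedream or OpenAI API:

```javascript
// Split a long transcript into fixed-size word chunks so each chunk
// fits within the model's context window. 2500 words per chunk is an
// illustrative default, not a recommended value.
function chunkTranscript(text, wordsPerChunk = 2500) {
  const words = text.trim().split(/\s+/);
  const chunks = [];
  for (let i = 0; i < words.length; i += wordsPerChunk) {
    chunks.push(words.slice(i, i + wordsPerChunk).join(" "));
  }
  return chunks;
}

// In a later step, each chunk would be summarized sequentially, e.g.:
//   const summaries = [];
//   for (const chunk of chunkTranscript(transcript)) {
//     summaries.push(await summarizeChunk(chunk)); // hypothetical GPT call
//   }
//   return summaries.join("\n\n");
```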

Now you’ve got the expert advice! :point_up_2: :smile: