This topic was automatically generated from Slack. You can find the original thread here.
Hi there - having slight trouble with a Create Embeddings step with OpenAI. I’m using an input that is significantly below 8192 tokens, but I keep getting this error:
“Configuration error
Element #0 is more than 8192 tokens in length. Each input must not exceed 8192 tokens in length.” Per the OpenAI tokenizer, I’m seeing 1,607 tokens and 9,034 characters. Is the Pipedream validation keying off the character count rather than the token count, since OpenAI upped ada’s max token limit?
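To illustrate the suspected mismatch: if a validator compares the *character* count against the 8,192 *token* limit, an input like this one (1,607 tokens but 9,034 characters) would be rejected even though it is well within the model’s actual limit. This is a hypothetical sketch of that behavior, not Pipedream’s actual validation code:

```python
# Hypothetical illustration of the suspected bug: a validator that treats
# characters as tokens rejects inputs that are actually within the limit.
MAX_TOKENS = 8192

def char_based_check(text: str) -> bool:
    # Suspected (incorrect) validation: compares character count to the token limit.
    return len(text) <= MAX_TOKENS

def token_based_check(token_count: int) -> bool:
    # Correct validation: compares the actual token count (e.g. as reported
    # by OpenAI's tokenizer) to the limit.
    return token_count <= MAX_TOKENS

# Numbers from this thread: 1,607 tokens but 9,034 characters.
text = "x" * 9034
print(char_based_check(text))   # False -> input rejected on character count
print(token_based_check(1607))  # True  -> input is actually within the limit
```

A quick local sanity check with OpenAI’s `tiktoken` library (using the `cl100k_base` encoding that `text-embedding-ada-002` uses) would confirm the true token count before the step runs.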