Why do "Response Format" and "JSON Mode" cause an error when used with the gpt-4-vision-preview model in the "Chat with OpenAI (ChatGPT) API" action?

This topic was automatically generated from Slack. You can find the original thread here.

Thanks for the great service you provide.

I am using the gpt-4-vision-preview model in the “Chat with OpenAI (ChatGPT) API” action. This action was updated from version 0.1.3 to 0.1.4/0.1.5 to add support for “Response Format”.

However, the “Response Format” and “JSON Mode” options this adds are not supported by all models. At the moment, only gpt-4-1106-preview and gpt-3.5-turbo-1106 support them:
https://platform.openai.com/docs/guides/text-generation/json-mode

To prevent these errors and improve model performance, when calling gpt-4-1106-preview or gpt-3.5-turbo-1106, you can set response_format to { "type": "json_object" } to enable JSON mode.

via:
new `response_format` field not permitted with `gpt-4-vision-preview` · Issue #469 · openai/openai-node · GitHub
Using response_format in chat completion throws error - #5 by OliAI - API - OpenAI Developer Forum
python - OpenAI API: How do I enable JSON mode using the gpt-4-vision-preview model? - Stack Overflow
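As a sketch of the workaround those threads point to, the `response_format` field can be included only for models known to support JSON mode (the model list below is taken from the docs quoted above and may have grown since; the helper name is illustrative, not part of the Pipedream component):

```python
# Models that accept response_format, per the OpenAI JSON-mode docs above.
# This set is an assumption based on the docs at the time of the thread.
JSON_MODE_MODELS = {"gpt-4-1106-preview", "gpt-3.5-turbo-1106"}

def build_chat_request(model, messages, json_mode=False):
    """Build a Chat Completions payload, omitting response_format for
    models (like gpt-4-vision-preview) that reject the extra field."""
    payload = {"model": model, "messages": messages}
    if json_mode and model in JSON_MODE_MODELS:
        payload["response_format"] = {"type": "json_object"}
    return payload

# gpt-4-vision-preview: response_format is dropped, avoiding the validation error
vision_req = build_chat_request("gpt-4-vision-preview", [], json_mode=True)
print("response_format" in vision_req)  # False

# gpt-4-1106-preview: JSON mode is enabled as requested
json_req = build_chat_request("gpt-4-1106-preview", [], json_mode=True)
print(json_req["response_format"])  # {'type': 'json_object'}
```

The fix requested of the component would be essentially this guard: only forward `response_format` to the API when the selected model supports it.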
Therefore, when using the gpt-4-vision-preview model with the Chat action, the following error occurs:

1 validation error for Request
body -> response_format
  extra fields not permitted (type=value_error.extra)

I am not yet able to write code well enough to submit a pull request myself, so could you please fix this problem?

Hi, thanks for raising this! I can reproduce your issue!

I’ve created a new issue here to fix it and added it to the Pipedream prioritized backlog! A Pipedream component dev will take a look soon!

Thank you for your support!

Posted thread to Discourse: Why does the "Chat with OpenAI (ChatGPT) API" encounter a validation error when using "Response Format" and "JSON Mode" with the gpt-4-vision-preview model?

I updated it over the weekend and confirmed it works fine! Thank you for your support.