Problem calling new OpenAI GPT 3.5 turbo model

Has anyone run into a problem calling OpenAI GPT 3.5 turbo with the pre-defined step for sending a prompt?

It looks like the API has changed a bit from the v1/completions endpoint that was used before the new release (today).

Hi @mrodgers.junk, we’re looking into this.


I used the code below to call the new endpoint, and it works.

However, the response I'm getting doesn't show the role and content inside the message from the OpenAI response; instead, the nested message is printed as [Object] in the log below. How can we fix it? I'd appreciate a quick turnaround on this one, please!

import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    }
  },
  async run({steps, $}) {
    const data = JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{"role": "user", "content": "Hello"}],
    });

    const response = await axios($, {
      url: `https://api.openai.com/v1/chat/completions`,
      method: "POST",
      data: data,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.openai.$auth.api_key}`,
      },
    });

    console.log(response);

    // @pipedream/platform's axios wrapper resolves to the response body
    // directly (as the log above shows), so return the response itself
    // rather than response.data.
    return response;
  },
});

Response I get is as below:

{
  id: 'chatcmpl-6pwpkrXQvPW2SuskIu2qL******',
  object: 'chat.completion',
  created: 1677838904,
  model: 'gpt-3.5-turbo-0301',
  usage: { prompt_tokens: 8, completion_tokens: 11, total_tokens: 19 },
  choices: [ { message: [Object], finish_reason: 'stop', index: 0 } ]
}

Whereas the last line should be:
"choices": [{"message": {"role": "assistant", "content": "Hello! How may I assist you today?"}, "finish_reason": "stop", "index": 0}]

The Python code step below works in PD, if that's of use to you. Good luck! :grinning:

I haven't been able to make the Node.js code work yet. :face_with_raised_eyebrow:

import requests

def handler(pd: "pipedream"):
    token = pd.inputs["openai"]["$auth"]["api_key"]
    authorization = f'Bearer {token}'
    headers = {"Authorization": authorization, "Content-Type": "application/json"}
    data = {"model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": "Hello"}]}

    r = requests.post('https://api.openai.com/v1/chat/completions', headers=headers, json=data)
    # Export the data for use in future steps
    return r.json()

Thanks @finkist, I'll try it today. I was also letting PD know that their pre-defined OpenAI prompt step may need some tuning; I suspect they'll see why from your code sample. Thanks again!

You’re welcome.

The pre-defined OpenAI prompt to ChatGPT goes to a different endpoint, /v1/chat/completions instead of /v1/completions. Moreover, the request format is different as well (a messages array rather than a prompt string). I'm sure the PD chaps know that and will soon come out with an updated step. :slight_smile:
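For anyone comparing the two, a rough sketch of the payload difference (the completions model name is just an illustrative example):

```javascript
// Legacy /v1/completions body: a single prompt string.
const completionsBody = {
  model: "text-davinci-003", // example completions-style model
  prompt: "Hello",
};

// New /v1/chat/completions body: an array of { role, content } messages.
const chatCompletionsBody = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello" }],
};

console.log(JSON.stringify(chatCompletionsBody));
```

The chat format also lets you prepend a "system" message and prior turns, which the old prompt-string format couldn't express directly.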

@mrodgers.junk @finkist Take a look at our new Chat (and other) actions here: Integrate ChatGPT with 1,000 other apps


Brilliant! Love that you guys extended it to embeddings as well. Great job.