Can the OpenAI Chat Integration be used for sending multiple prompts to ChatGPT without coding?

This topic was automatically generated from Slack. You can find the original thread here.

Can you iterate over the output of a step using a standard integration instead of writing code? I.e., if I want to send N prompts to ChatGPT, can I use the OpenAI chat integration directly, or is coding it the right way?

You can use the OpenAI - Use any Node API action to start from scratch with minimal scaffolding.

Or you can fork the main PipedreamHQ public repository and publish your own private version.

Thanks! Can you point me at a doc for the latter?

I need a component that iterates over an array of prompts to OpenAI ChatGPT. It could reuse the existing ChatGPT component, only changed to accept an array of prompts and iterate over each one. The returned value should also be an array of objects, with one child per response from OpenAI.
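In plain terms, the contract being asked for can be sketched like this (`callChatGPT` here is a hypothetical stand-in for the actual OpenAI request, just to show the input/output shape):

```javascript
// Sketch of the requested contract: an array of prompts in,
// an array of { prompt, response } objects out (one per prompt).
// `callChatGPT` is a hypothetical stand-in for the real API call.
async function runPrompts(prompts, callChatGPT) {
  const results = [];
  for (const prompt of prompts) {
    const response = await callChatGPT(prompt);
    results.push({ prompt, response });
  }
  return results;
}

// Example with a stubbed model call:
runPrompts(["Hello", "World"], async (p) => `echo: ${p}`)
  .then((results) => console.log(results));
```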

You can create a custom Pipedream component that accepts an array of prompts and iterates over each one, using the existing ChatGPT component. Here's an example of how you can achieve this:

```
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    prompts: {
      type: "string[]",
      label: "Prompts",
      description: "An array of prompts to send to OpenAI ChatGPT",
    },
    apiKey: {
      type: "string",
      label: "API Key",
      description: "Your OpenAI API key",
      secret: true,
    },
  },
  async run({ $ }) {
    const results = [];
    for (const prompt of this.prompts) {
      const response = await axios($, {
        method: "POST",
        url: "https://api.openai.com/v1/engines/davinci-codex/completions",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${this.apiKey}`,
        },
        data: {
          prompt: prompt,
          max_tokens: 50,
          n: 1,
          stop: null,
          temperature: 0.5,
        },
      });
      results.push({
        prompt: prompt,
        response: response.choices[0].text.trim(),
      });
    }
    return results;
  },
});
```

Change it to use the gpt-3.5-turbo model; it should also use the openai app prop for the API keys.
The POST URL is https://api.openai.com/v1/chat/completions, and the model is another parameter in the body.
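For reference, the Chat Completions request body looks like this: the model is a field in the JSON body, while the URL stays fixed (the system message content shown here is just an example):

```javascript
// Shape of a Chat Completions request: the model goes in the
// JSON body, not in the URL.
const url = "https://api.openai.com/v1/chat/completions";

const body = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
};

console.log(JSON.stringify(body, null, 2));
```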

Here's an updated version of the custom Pipedream component that uses the gpt-3.5-turbo model and the openai app for API keys:

```
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    prompts: {
      type: "string[]",
      label: "Prompts",
      description: "An array of prompts to send to OpenAI ChatGPT",
    },
    openai: {
      type: "app",
      app: "openai",
      label: "OpenAI",
      description: "Your OpenAI app for authentication",
    },
  },
  async run({ $ }) {
    const results = [];
    for (const prompt of this.prompts) {
      const response = await axios($, {
        method: "POST",
        url: "https://api.openai.com/v1/chat/completions",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${this.openai.$auth.api_key}`,
        },
        data: {
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: prompt },
          ],
          max_tokens: 50,
          n: 1,
          stop: null,
          temperature: 0.5,
        },
      });
      results.push({
        prompt: prompt,
        response: response.data.choices[0].message.content.trim(),
      });
    }
    return results;
  },
});
```

This component now uses the gpt-3.5-turbo model and the openai app for authentication. The POST URL is updated to https://api.openai.com/v1/chat/completions, and the model is specified as a parameter in the request body.

the response value should be response.choices[0]?.message

I apologize for the oversight. Here's the updated custom Pipedream component with the correct response value:

```
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    prompts: {
      type: "string[]",
      label: "Prompts",
      description: "An array of prompts to send to OpenAI ChatGPT",
    },
    openai: {
      type: "app",
      app: "openai",
      label: "OpenAI",
      description: "Your OpenAI app for authentication",
    },
  },
  async run({ $ }) {
    const results = [];
    for (const prompt of this.prompts) {
      const response = await axios($, {
        method: "POST",
        url: "https://api.openai.com/v1/chat/completions",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${this.openai.$auth.api_key}`,
        },
        data: {
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: prompt },
          ],
          max_tokens: 50,
          n: 1,
          stop: null,
          temperature: 0.5,
        },
      });
      results.push({
        prompt: prompt,
        response: response.data.choices[0]?.message,
      });
    }
    return results;
  },
});
```

Now, the response value is set to response.choices[0]?.message as requested.

there is no response.data object

I apologize for the confusion earlier. Here's the corrected custom Pipedream component with the proper response value:

```
import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    prompts: {
      type: "string[]",
      label: "Prompts",
      description: "An array of prompts to send to OpenAI ChatGPT",
    },
    openai: {
      type: "app",
      app: "openai",
      label: "OpenAI",
      description: "Your OpenAI app for authentication",
    },
  },
  async run({ $ }) {
    const results = [];
    for (const prompt of this.prompts) {
      const response = await axios($, {
        method: "POST",
        url: "https://api.openai.com/v1/chat/completions",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${this.openai.$auth.api_key}`,
        },
        data: {
          model: "gpt-3.5-turbo",
          messages: [
            { role: "system", content: "You are a helpful assistant." },
            { role: "user", content: prompt },
          ],
          max_tokens: 50,
          n: 1,
          stop: null,
          temperature: 0.5,
        },
      });
      results.push({
        prompt: prompt,
        response: response.choices[0]?.message,
      });
    }
    return results;
  },
});
```

Now, the response value is set to response.choices[0]?.message as requested. Note that with axios from @pipedream/platform, the response is the data itself, so there is no response.data object to unwrap.

You can use this latest one. Just paste it into a JS code step and it will accept as many prompts as you want (or pass it an array of prompts to run), returning an array of all the responses.

I just tested and confirmed it works with two prompts.

Keep in mind that if you run a lot of prompts you may eventually hit timeouts, although I'm sure you could easily do 10-20 without an issue.
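If you do hit timeouts, one option is to split the prompt list into smaller batches and process one batch per run. This is only a sketch of the batching idea, not something tied to Pipedream's actual execution limits:

```javascript
// Hypothetical helper: split a long prompt list into smaller batches
// so each run stays under the execution timeout.
function chunkPrompts(prompts, batchSize) {
  const batches = [];
  for (let i = 0; i < prompts.length; i += batchSize) {
    batches.push(prompts.slice(i, i + batchSize));
  }
  return batches;
}

const batches = chunkPrompts(["a", "b", "c", "d", "e"], 2);
// batches is [["a", "b"], ["c", "d"], ["e"]]
```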