Speech To Notion

Hi all… Please can someone help me?
I have been using the speech-to-Notion script that Thomas Frank posted.

It has been working fine, but one transcript I have come across is causing the following error, and I'm not sure why:

Error

SyntaxError: The requested module 'openai' does not provide an export named 'Configuration'

DETAILS

Error: SyntaxError: The requested module 'openai' does not provide an export named 'Configuration'
    at handleError (/var/task/common.js:38:40)
    at Runner.runUserCode (file:///var/task/lambda_handler.mjs:866:9)
    at async Runner.run (file:///var/task/lambda_handler.mjs:697:5)
    at async Runtime.handler (file:///var/task/lambda_handler.mjs:914:22)

Also, is it possible to change the Chat encoder to v4?

Thanks in advance
Paul

Hi @paulhilton74,

Hi Paul, I’m sorry to hear that you’re experiencing issues with the script. It seems like there’s an issue with the import statement for the ‘openai’ module. To help you further, I’ll need more information about the code you’re using. Can you please share the relevant code snippet that’s causing the issue?

Regarding your question about changing the Chat encoder to v4, I’ll need more context on which component or action you’re using in your workflow. Please provide more details so I can assist you better.

If you need further assistance or want to talk to a human, please visit https://pipedream.com/support for more support options.
I’m a bot powered by Pipedream and GPT-4. I’m still learning, so please double check my answers and code! Learn how to build your own.

Thanks for the support… here is the code. It seems to work sometimes when the recording is really small… I have also done it with 45 mins and it's been OK…
import { Configuration, OpenAIApi } from "openai"
import { encode, decode } from "gpt-3-encoder"

export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    }
  },
  async run({steps, $}) {

    // Import the transcript from the previous step
    const transcript = steps.create_transcription.$return_value.transcription

    // Set the max number of input tokens
    const maxTokens = 2000

    // Initialize OpenAI
    const openAIkey = this.openai.$auth.api_key
    const configuration = new Configuration({
      apiKey: openAIkey,
    });
    const openai = new OpenAIApi(configuration);

    // Split the transcript into shorter strings if needed, based on GPT token limit
    function splitTranscript(encodedTranscript, maxTokens) {
      const stringsArray = []
      let currentIndex = 0

      while (currentIndex < encodedTranscript.length) {
        let endIndex = Math.min(currentIndex + maxTokens, encodedTranscript.length)

        // Find the next period
        while (endIndex < encodedTranscript.length && decode([encodedTranscript[endIndex]]) !== ".") {
          endIndex++
        }

        // Include the period in the current string
        if (endIndex < encodedTranscript.length) {
          endIndex++
        }

        // Add the current chunk to the stringsArray
        const chunk = encodedTranscript.slice(currentIndex, endIndex)
        stringsArray.push(decode(chunk))

        currentIndex = endIndex
      }

      return stringsArray
    }

    const encoded = encode(transcript)

    const stringsArray = splitTranscript(encoded, maxTokens)
    const result = await sendToChat(stringsArray)
    return result

    // Function to send transcript string(s) to Chat API
    async function sendToChat (stringsArray) {

      const resultsArray = []

      for (let arr of stringsArray) {

        // Define the prompt
        const prompt = `Analyze the transcript provided below, then provide the following:

Key "title:" - add a title.
Key "summary" - create a summary.
Key "main_points" - add an array of the main points. Limit each item to 100 words, and limit the list to 10 items.
Key "action_items:" - add an array of action items. Limit each item to 100 words, and limit the list to 5 items.
Key "follow_up:" - add an array of follow-up questions. Limit each item to 100 words, and limit the list to 5 items.
Key "stories:" - add an array of any stories, examples, or cited works found in the transcript. Limit each item to 200 words, and limit the list to 5 items.
Key "arguments:" - add an array of potential arguments against the transcript. Limit each item to 100 words, and limit the list to 5 items.
Key "related_topics:" - add an array of topics related to the transcript. Limit each item to 100 words, and limit the list to 5 items.
Key "sentiment" - add a sentiment analysis

Ensure that the final element of any array within the JSON object is not followed by a comma.

Transcript:

${arr}`

        let retries = 3
        while (retries > 0) {
          try {
            const completion = await openai.createChatCompletion({
              model: "gpt-3.5-turbo",
              messages: [
                {role: "user", content: prompt},
                {role: "system", content: `You are an assistant that only speaks JSON. Do not write normal text.

Example formatting:

{
  "title": "Notion Buttons",
  "summary": "A collection of buttons for Notion",
  "action_items": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "follow_up": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "arguments": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "related_topics": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "sentiment": "positive"
}
`}
              ],
              temperature: 0.2
            });

            resultsArray.push(completion)
            break
          } catch (error) {
            if (error.response && error.response.status === 500) {
              retries--
              if (retries == 0) {
                throw new Error("Failed to get a response from OpenAI Chat API after 3 attempts.")
              }
              console.log("OpenAI Chat API returned a 500 error. Retrying...")
            } else {
              throw error
            }
          }
        }

      }

      return resultsArray
    }

  },
})

Hi,

OpenAI just released a new version for their npm package with a breaking change, and whenever we deploy a workflow we use the most recently published package. Would you mind going through your Node.js code steps and suffixing the package version like this:

import { Configuration, OpenAIApi } from "openai@3.3.0";

More info here: v3 to v4 Migration Guide

Thanks

Andrew, thanks for the fast reply…
OH MY GOD, you are a genius… How do I know when to change this again? Where will it show me?

The only error I had was a warning:

  • CIRCULAR_RETURN_VALUE: Return value contains [Circular] reference(s) that were filtered out.

I changed the script as per your message… but it worked!
THANK YOU

Awesome, good to hear! This was a tricky one; many users reported similar errors, which helped us figure out that this was the issue.

To save having to post again… is it published anywhere so I know when to change it again?

P

OpenAI npm v4 changes (see Manual migration → Initialization → Old):
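For reference, a sketch of the initialization change between the two major versions (paraphrased from the migration guide; double-check the calls against the version you actually install):

```javascript
// v3 (what this workflow's code was written against):
//   import { Configuration, OpenAIApi } from "openai@3.3.0";
//   const configuration = new Configuration({ apiKey: openAIkey });
//   const openai = new OpenAIApi(configuration);
//   const completion = await openai.createChatCompletion({ model, messages });
//   const text = completion.data.choices[0].message.content;

// v4 (if you later migrate instead of pinning):
//   import OpenAI from "openai";
//   const openai = new OpenAI({ apiKey: openAIkey });
//   const completion = await openai.chat.completions.create({ model, messages });
//   const text = completion.choices[0].message.content;
```

Note that in v4 the response is no longer wrapped in an axios `data` property, so the error handling on `error.response.status` would also need updating.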

Pinning package versions:
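A minimal sketch of what pinning looks like in a Pipedream Node.js code step, based on the fix above (the `gpt-3-encoder` version here is an illustrative assumption, not something specified in this thread):

```javascript
// Pin an npm package by suffixing the version in the import specifier.
// Without a pin, Pipedream installs the most recently published version
// on every deploy -- which is how the v4 breaking change slipped in.
import { Configuration, OpenAIApi } from "openai@3.3.0"
import { encode, decode } from "gpt-3-encoder@1.1.4"
```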

Hi Andrew… I did what you mentioned and it all worked, but today I have got the following error… Do I need to change the code again?

Error

Request failed with status code 400

DETAILS

    at null.createError (/tmp/__pdg__/dist/code/b15a28eb8a7abc3f5d73d3ada5a0f45392a3cd6937e2bfd198a65ebc606ad603/node_modules/.pnpm/axios@0.26.1/node_modules/axios/lib/core/createError.js:16:15)
    at null.settle (/tmp/__pdg__/dist/code/b15a28eb8a7abc3f5d73d3ada5a0f45392a3cd6937e2bfd198a65ebc606ad603/node_modules/.pnpm/axios@0.26.1/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/tmp/__pdg__/dist/code/b15a28eb8a7abc3f5d73d3ada5a0f45392a3cd6937e2bfd198a65ebc606ad603/node_modules/.pnpm/axios@0.26.1/node_modules/axios/lib/adapters/http.js:322:11)
    at IncomingMessage.emit (node:events:525:35)
    at null.endReadableNT (node:internal/streams/readable:1359:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

It seems to happen most when the recording is over 30 mins… Can anyone please help?

The script I'm using is:

import { Configuration, OpenAIApi } from "openai@3.3.0";
import { encode, decode } from "gpt-3-encoder"

export default defineComponent({
  props: {
    openai: {
      type: "app",
      app: "openai",
    }
  },
  async run({steps, $}) {

    // Import the transcript from the previous step
    const transcript = steps.create_transcription.$return_value.transcription

    // Set the max number of input tokens
    const maxTokens = 2000

    // Initialize OpenAI
    const openAIkey = this.openai.$auth.api_key
    const configuration = new Configuration({
      apiKey: openAIkey,
    });
    const openai = new OpenAIApi(configuration);

    // Split the transcript into shorter strings if needed, based on GPT token limit
    function splitTranscript(encodedTranscript, maxTokens) {
      const stringsArray = []
      let currentIndex = 0

      while (currentIndex < encodedTranscript.length) {
        let endIndex = Math.min(currentIndex + maxTokens, encodedTranscript.length)

        // Find the next period
        while (endIndex < encodedTranscript.length && decode([encodedTranscript[endIndex]]) !== ".") {
          endIndex++
        }

        // Include the period in the current string
        if (endIndex < encodedTranscript.length) {
          endIndex++
        }

        // Add the current chunk to the stringsArray
        const chunk = encodedTranscript.slice(currentIndex, endIndex)
        stringsArray.push(decode(chunk))

        currentIndex = endIndex
      }

      return stringsArray
    }

    const encoded = encode(transcript)

    const stringsArray = splitTranscript(encoded, maxTokens)
    const result = await sendToChat(stringsArray)
    return result

    // Function to send transcript string(s) to Chat API
    async function sendToChat (stringsArray) {

      const resultsArray = []

      for (let arr of stringsArray) {

        // Define the prompt
        const prompt = `Analyze the transcript provided below, then provide the following:

Key "title:" - add a title.
Key "summary" - create a summary.
Key "main_points" - add an array of the main points. Limit each item to 100 words, and limit the list to 10 items.
Key "action_items:" - add an array of action items. Limit each item to 100 words, and limit the list to 5 items.
Key "follow_up:" - add an array of follow-up questions. Limit each item to 100 words, and limit the list to 5 items.
Key "stories:" - add an array of any stories, examples, or cited works found in the transcript. Limit each item to 200 words, and limit the list to 5 items.
Key "arguments:" - add an array of potential arguments against the transcript. Limit each item to 100 words, and limit the list to 5 items.
Key "related_topics:" - add an array of topics related to the transcript. Limit each item to 100 words, and limit the list to 5 items.
Key "sentiment" - add a sentiment analysis

Ensure that the final element of any array within the JSON object is not followed by a comma.

Transcript:

${arr}`

        let retries = 3
        while (retries > 0) {
          try {
            const completion = await openai.createChatCompletion({
              model: "gpt-3.5-turbo",
              messages: [
                {role: "user", content: prompt},
                {role: "system", content: `You are an assistant that only speaks JSON. Do not write normal text.

Example formatting:

{
  "title": "Notion Buttons",
  "summary": "A collection of buttons for Notion",
  "action_items": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "follow_up": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "arguments": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "related_topics": [
    "item 1",
    "item 2",
    "item 3"
  ],
  "sentiment": "positive"
}
`}
              ],
              temperature: 0.2
            });

            resultsArray.push(completion)
            break
          } catch (error) {
            if (error.response && error.response.status === 500) {
              retries--
              if (retries == 0) {
                throw new Error("Failed to get a response from OpenAI Chat API after 3 attempts.")
              }
              console.log("OpenAI Chat API returned a 500 error. Retrying...")
            } else {
              throw error
            }
          }
        }

      }

      return resultsArray
    }

  },
})
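A note on the 400 errors with long recordings: each 2,000-token chunk is sent together with a sizeable prompt, and gpt-3.5-turbo's 4,096-token context window must also leave room for the completion, so oversized requests come back as HTTP 400 (a likely cause, though not confirmed in this thread). The splitTranscript logic can be exercised in isolation to inspect chunk sizes; the encode/decode stubs below are a one-token-per-character stand-in for the real gpt-3-encoder, an assumption for testing only:

```javascript
// Stand-in tokenizer: one "token" per character (real GPT tokens span
// multiple characters, so real chunks are much longer in text).
const encode = (text) => Array.from(text);
const decode = (tokens) => tokens.join("");

// Same chunking logic as the workflow: cut near maxTokens, then extend
// to the next period so chunks end on sentence boundaries.
function splitTranscript(encodedTranscript, maxTokens) {
  const stringsArray = [];
  let currentIndex = 0;

  while (currentIndex < encodedTranscript.length) {
    let endIndex = Math.min(currentIndex + maxTokens, encodedTranscript.length);

    // Scan forward to the next period
    while (endIndex < encodedTranscript.length && decode([encodedTranscript[endIndex]]) !== ".") {
      endIndex++;
    }
    if (endIndex < encodedTranscript.length) {
      endIndex++; // include the period itself
    }

    stringsArray.push(decode(encodedTranscript.slice(currentIndex, endIndex)));
    currentIndex = endIndex;
  }

  return stringsArray;
}

const chunks = splitTranscript(encode("First sentence. Second sentence. Third."), 10);
console.log(chunks); // each chunk ends at a sentence boundary
```

With the real gpt-3-encoder, the same function operates on arrays of GPT token IDs instead of characters; logging `encode(chunk).length` for each chunk before calling the API would show whether any request exceeds the model's context window.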

Hi, Thomas Frank released an updated version of the workflow here: How to Take Perfect Notes with Your Voice Using ChatGPT and Notion