Has Anyone Encountered the "ENOSPC: no space left on device, write" Error Before?

This topic was automatically generated from Slack. You can find the original thread here.

Anyone seen this error before?

Error
ENOSPC: no space left on device, write

Are you writing data to /tmp, or maybe using a package that would persist data there?

I’ve downloaded a file to /tmp, and then I’m having OpenAI transcribe the audio.

How large is the original file? Are you also splitting the file into parts to send to OpenAI? Just trying to get a sense of the size.

It’s pretty big. It’s an hour-long Google Meet recording.

The code is:

export default defineComponent({
  async run({ steps, $ }) {
    // Build the /tmp path for the downloaded recording from the
    // trigger event’s file extension
    const results = {
      "tmpPath": `/tmp/recording.${steps.trigger.event.fullFileExtension}`,
      // Note: this holds the file extension, not an actual MIME type
      "mimetype": `${steps.trigger.event.fullFileExtension}`
    }
    return results
  },
})

Any thoughts on how to split it up effectively?
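One way to split it, sketched below, is to segment the recording into fixed-length chunks with ffmpeg and upload each piece separately. This is a sketch, not something from this thread: it assumes the ffmpeg binary is available (for example via the @ffmpeg-installer/ffmpeg npm package), and the paths and chunk length are illustrative.

import ffmpeg from "@ffmpeg-installer/ffmpeg";
import { execFileSync } from "child_process";

export default defineComponent({
  async run({ steps, $ }) {
    // Split /tmp/recording.mp4 into ~10-minute chunks without
    // re-encoding, writing /tmp/chunk_000.mp4, /tmp/chunk_001.mp4, ...
    execFileSync(ffmpeg.path, [
      "-i", "/tmp/recording.mp4",
      "-f", "segment",
      "-segment_time", "600", // seconds per chunk
      "-c", "copy",           // copy streams, no re-encode
      "/tmp/chunk_%03d.mp4",
    ]);
  },
})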

Do you know if the original file exceeds 2GB in size?

In Drive it’s listed as 1,014.7 MB.

Got it, thanks. Are you cleaning up the data you store in /tmp after you process one recording?

I’ve just been testing it, actually. So no. I haven’t.

In fact, I don’t know how to do that.

That may be it. You only have 2GB of available space in /tmp, and /tmp can be shared across workflow runs, since multiple executions of the same workflow run on a single “worker” (the machine where the data is actually stored).
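If you want to see how much of that space is currently in use on your worker, a quick check (a sketch, assuming the standard Linux environment Pipedream workflows run in) is to shell out to df:

import { execSync } from "child_process";

export default defineComponent({
  async run({ steps, $ }) {
    // Print disk usage for /tmp on the current worker
    const usage = execSync("df -h /tmp").toString();
    console.log(usage);
    return usage;
  },
})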

Take a look at these docs for more info on deleting files. You can do that at the end of your workflow, once you’ve processed all the data, so that you have space for the next run.
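As a minimal sketch (assuming the recording path used earlier in this thread), a final cleanup step could look like:

import fs from "fs";

export default defineComponent({
  async run({ steps, $ }) {
    // Delete the downloaded recording so the next run starts with
    // free space in /tmp
    const tmpPath = `/tmp/recording.${steps.trigger.event.fullFileExtension}`;
    await fs.promises.unlink(tmpPath);
    return { deleted: tmpPath };
  },
})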

Since this file is only 1GB, processing it this way should hopefully work.

Fantastic. Thanks man. I really appreciate it!

No problem! Let us know if that doesn’t end up working.

Is there any way to delete what’s in /tmp right now, without putting the script into the workflow? (Which I’ll still be doing.)

The easiest way right now would be to list the files currently in /tmp and then delete each of your recordings. You’ll notice some other files in there (internal Pipedream files), but if you delete just the recordings, nearly all of the 2GB should be available.

If you wait a few minutes, it’s also likely the worker you’re on now will be spun down and a new one will pop up in its place, but there’s no easy way to say “give me a completely fresh worker” on demand, so listing and deleting is probably the fastest way.
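For example (a sketch; the filename filter is an assumption, so adjust it to match the files you actually download):

import fs from "fs";
import path from "path";

export default defineComponent({
  async run({ steps, $ }) {
    // List everything in /tmp, then delete only the recording files
    const files = await fs.promises.readdir("/tmp");
    const recordings = files.filter((f) => f.startsWith("recording."));
    for (const f of recordings) {
      await fs.promises.unlink(path.join("/tmp", f));
    }
    return { deleted: recordings };
  },
})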

Okay cool. Thanks

Oh, I’ve also been getting this error whenever I “Test Workflow”

Pipedream Internal Error
Please retry your event or reach out to Pipedream support at https://pipedream.com/support/

But if I test each step individually it seems to work.

In your OpenAI step, try adding a “@3.3.0” at the end of the OpenAI import, like this:

import { Configuration, OpenAIApi } from "openai@3.3.0";

Then try to run the workflow again and let me know if that works.
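Pinning the version makes every step resolve the same openai package. In context, the top of the step would look something like this (a sketch; the auth wiring assumes an OpenAI account connected to the step):

import { Configuration, OpenAIApi } from "openai@3.3.0";

export default defineComponent({
  props: {
    openai: { type: "app", app: "openai" },
  },
  async run({ steps, $ }) {
    const configuration = new Configuration({
      apiKey: this.openai.$auth.api_key,
    });
    const openai = new OpenAIApi(configuration);
    // ...transcription call goes here, e.g. openai.createTranscription(...)
  },
})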