This topic was automatically generated from Slack. You can find the original thread here.
David McKay : Is there a filepath on the disk that is persisted across steps for a given execution?
SERW2039 : There is a /tmp folder where data is written during workflow execution. Data is removed after the workflow completes.
SERW2039 : https://docs.pipedream.com/workflows/steps/code/nodejs/working-with-files/#the-tmp-directory
Dylan Sather (Pipedream) : That’s correct. /tmp is also not guaranteed to be cleared across workflow invocations, since we reuse the same execution environment across runs if we can, and do not clear /tmp between runs.
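For illustration, here is a minimal sketch of that reuse behavior in a Node.js code step, using a hypothetical marker file; if the execution environment was reused, the file from the previous run will still be in /tmp:
```
const fs = require('fs');

// Hypothetical marker file; it survives in /tmp only if the environment was reused
const marker = '/tmp/last-run.txt';
const previousRun = fs.existsSync(marker)
  ? fs.readFileSync(marker, 'utf8')
  : null;
fs.writeFileSync(marker, new Date().toISOString());
return { previousRun };
```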
David McKay : I’m trying to download a file from Bannerbear and upload it to YouTube using this approach, but Pipedream says I’m breaking some limits?
David McKay : ```
const fs = require('fs');
const fetch = require('node-fetch');

// Download the Bannerbear image and save it to /tmp
const response = await fetch(event.body.image_url_png);
const buffer = await response.buffer();
fs.writeFileSync('/tmp/image.png', buffer);
```
David McKay : ```
const { google } = require('googleapis');
const fs = require('fs');

const youtubeAuth = new google.auth.OAuth2();
youtubeAuth.setCredentials({ access_token: auths.youtube_data_api.oauth_access_token });
const service = google.youtube({ version: 'v3', auth: youtubeAuth });

// Read the downloaded image back from /tmp and set it as the video thumbnail
const image = fs.readFileSync('/tmp/image.png');
return await service.thumbnails.set({
  videoId: event.body.metadata,
  data: image,
});
```
David McKay : This seems OK to me, but I’m sure I’m doing something silly
David McKay : Seems to be the `service.thumbnails.set` call that’s failing / causing the error
David McKay : ```
// Stream the file from /tmp instead of passing the raw buffer
return await service.thumbnails.set({
  videoId: event.body.metadata,
  media: {
    body: fs.createReadStream('/tmp/image.png'),
  },
});
```
David McKay : This works! I’ll publish these as actions for others
David McKay : Thank you both for your help
Dylan Sather (Pipedream) : Nice!
David McKay : I need to blog all this automation. I’ll aim for next week
Raymond Camden : Dylan, you mentioned /tmp is not cleaned, but is access to it single threaded?
Dylan Sather (Pipedream) : If more than one invocation is triggered at the same time, we’ll actually spin up a new worker / execution environment to handle that request. This new worker will have an empty /tmp on its first run. If it writes a file to /tmp, and your workflow continues processing two invocations in parallel, each invocation will run on one of the two workers, and the contents of /tmp may now differ between them.
So technically yes, only a single invocation will be accessing a given worker’s /tmp at any one time. But because of the above, I’d recommend not relying on files remaining in /tmp, and perhaps clearing it at the start of runs if you rely on listing files, for example.
Raymond Camden : So it’s OK to use /tmp and you have single-threaded access to it, but you should not assume previous values exist. What I’m trying to be sure of: in a scenario where step 1 of a workflow writes to /tmp and step 10 reads it, is it safe to assume no other invocation wrote to your /tmp?
Dylan Sather (Pipedream) : That is correct, a single worker will only process one invocation at a time, so between steps you can rely on /tmp not being accessed by another invocation
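For anyone finding this later, Dylan’s suggestion of clearing /tmp at the start of a run could look something like this minimal sketch (it only removes top-level files and assumes nothing in the current step still needs them):
```
const fs = require('fs');
const path = require('path');

// Remove leftover top-level files from any previous invocation on this worker
for (const name of fs.readdirSync('/tmp')) {
  const file = path.join('/tmp', name);
  if (fs.statSync(file).isFile()) {
    fs.unlinkSync(file);
  }
}
```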