How to Call the DreamStudio API in Pipedream?

I apologize for the confusion. It seems the error handling in the code needs to be improved. Let's modify the code to catch any errors during the API request and log the error details:

```javascript
import { axios } from "@pipedream/platform";
import fs from "fs";

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 1024,
      height: 1024,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A painting of a cat",
          weight: 1,
        },
        {
          text: "blurry, bad",
          weight: -1,
        },
      ],
    };

    try {
      const response = await axios($, {
        url: path,
        headers,
        method: "POST",
        data: body,
      });

      console.log("Response status:", response.status);
      console.log("Response data:", response.data);

      if (response.status !== 200) {
        throw new Error(`Non-200 response: ${response.status}`);
      }

      response.data.artifacts.forEach((image, index) => {
        fs.writeFileSync(
          `/tmp/txt2img_${image.seed}.png`,
          Buffer.from(image.base64, "base64")
        );
      });

      return { message: "Images saved to /tmp directory" };
    } catch (error) {
      console.error("Error details:", error);
      throw new Error(`Request failed: ${error.message}`);
    }
  },
});
```

This information is redundant. It seems like the log basically just has the same information as the standard Pipedream error log:

Error
Request failed: Non-200 response: undefined
DETAILS

    at Object.run (file:///tmp/__pdg__/dist/code/7178af64b265af000e6affac94d8b58154c9a65446ff17686e072b8c205d7301/component.mjs:63:13)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at null.executeComponent (/var/task/launch_worker.js:229:22)
    at MessagePort.messageHandler (/var/task/launch_worker.js:726:28)

8/26/2023, 4:53:31 PM
Response status: undefined
8/26/2023, 4:53:31 PM
Response data: undefined
8/26/2023, 4:53:31 PM
Error details: at Object.run (file:///tmp/pdg/dist/code/7178af64b265af000e6affac94d8b58154c9a65446ff17686e072b8c205d7301/component.mjs:50:15) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at null.executeComponent (/var/task/launch_worker.js:229:22) at MessagePort.messageHandler (/var/task/launch_worker.js:726:28)

Remember, Dylan said “axios returns the response directly, there’s no data / status property”.

Let’s think this through in a step-by-step manner first before trying to change the code again.

I apologize for the confusion earlier. You're right, the response from @pipedream/platform axios does not have a data or status property. Let's modify the code to handle the response correctly:

```javascript
import { axios } from "@pipedream/platform";
import fs from "fs";

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 1024,
      height: 1024,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A painting of a cat",
          weight: 1,
        },
        {
          text: "blurry, bad",
          weight: -1,
        },
      ],
    };

    try {
      const response = await axios($, {
        url: path,
        headers,
        method: "POST",
        data: body,
      });

      console.log("Response:", response);

      if (!response || response.error) {
        throw new Error(`Request failed: ${response.error}`);
      }

      response.artifacts.forEach((image, index) => {
        fs.writeFileSync(
          `/tmp/txt2img_${image.seed}.png`,
          Buffer.from(image.base64, "base64")
        );
      });

      return { message: "Images saved to /tmp directory" };
    } catch (error) {
      console.error("Error details:", error);
      throw new Error(`Request failed: ${error.message}`);
    }
  },
});
```

appreciate you being patient. I added this thread to our “improve Pi” list. Try just this code:

import { axios } from "@pipedream/platform";

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 1024,
      height: 1024,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A painting of a cat",
          weight: 1,
        },
        {
          text: "blurry, bad",
          weight: -1,
        },
      ],
    };

    return await axios($, {
      url: path,
      headers,
      method: "POST",
      data: body,
    });
  },
});

Thanks, you're the real MVP. The Node step seems to have made the API call successfully, but I'm not sure how to get the returned .png. The DreamStudio documentation examples are all in Python.

The export from Node:

steps.node_1{1}
$return_value{1}
artifacts[1]
0{3}
Base64: (and a GIGANTIC string)

No URL

gotcha, let’s see if we can get Pi to handle that

I have fixed the errors and seemingly generated an image in the /tmp/ directory. Here is the code I used:

import { axios } from "@pipedream/platform";
import fs from 'fs';

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 1024,
      height: 1024,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A painting of a cat",
          weight: 1,
        },
        {
          text: "blurry, bad",
          weight: -1,
        },
      ],
    };

    const response = await axios($, {
      url: path,
      headers,
      method: "POST",
      data: body,
      responseType: 'arraybuffer'
    });

    // Generate a unique filename with a timestamp
    const timestamp = Date.now();
    const filename = `/tmp/image_${timestamp}.png`;

    // Save the image
    fs.writeFileSync(filename, response);
  },
});

Can you please tell me how to get pull the image into another step? The Node JS step doesn’t have anything in the Export tab

Great job generating the image and saving it to the /tmp/ directory! To make the image available for other steps, you can return the filename from your current step. Modify your code like this:

import { axios } from "@pipedream/platform";
import fs from 'fs';

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "application/json",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 1024,
      height: 1024,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A painting of a cat",
          weight: 1,
        },
        {
          text: "blurry, bad",
          weight: -1,
        },
      ],
    };

    const response = await axios($, {
      url: path,
      headers,
      method: "POST",
      data: body,
      responseType: 'arraybuffer'
    });

    // Generate a unique filename with a timestamp
    const timestamp = Date.now();
    const filename = `/tmp/image_${timestamp}.png`;

    // Save the image
    fs.writeFileSync(filename, response.data);

    // Return the filename
    return { filename };
  },
});

Now, the filename will be available in the next step as steps.<step_name>.filename, where <step_name> is the name of the step that runs this code. You can use this filename to read the image file or perform any other operations in the next steps.

This is the typical way we pass file references between steps. If you return the base64-encoded data directly, it'll be too large for your browser to handle, so inspecting these events will be slow. It's best to pass file references and then read / upload the file in the next step. See more on that here: Working with the filesystem in Node.js
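The pattern described here can be sketched in plain Node.js, outside of a Pipedream component. The file path and payload below are made-up stand-ins for illustration, not values from this thread:

```javascript
import fs from "fs";

// "Step A": write the large payload to /tmp and keep only the small file path.
// The path and payload here are illustrative stand-ins.
const filename = "/tmp/example_payload.bin";
fs.writeFileSync(filename, Buffer.alloc(1024)); // pretend this is image data
const stepAExport = { filename }; // a tiny export, safe to pass between steps

// "Step B": read the file back using the path from the previous step's export.
const imageData = fs.readFileSync(stepAExport.filename);
console.log(imageData.length); // 1024
```

In a real workflow, "Step A" would `return { filename }` and "Step B" would read the path from `steps.<step_name>.$return_value.filename`.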

Thanks again Dylan. Also, gotta love code interpreter. Fed it the documentation above and got this code:

import { axios } from "@pipedream/platform";
import fs from 'fs';

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

const headers = {
  Accept: "application/json",
  Authorization: `Bearer ${this.apiKey}`,
};

const body = {
  steps: 40,
  width: 1024,
  height: 1024,
  seed: 0,
  cfg_scale: 5,
  samples: 1,
  text_prompts: [
    {
      text: "A painting of a cat",
      weight: 1,
    },
    {
      text: "blurry, bad",
      weight: -1,
    },
  ],
};

const response = await axios($, {
  url: path,
  headers,
  method: "POST",
  data: body,
  responseType: 'arraybuffer'
});

// Generate a unique filename with a timestamp
const timestamp = Date.now();
const filename = `/tmp/image_${timestamp}.png`;

// Save the image
fs.writeFileSync(filename, response);

// Return the filename for use in other steps
return {
  filename
};

},
});

And it seemed to work successfully… Now I need to figure out how to pass the file to my Google Drive to double check everything :stuck_out_tongue:

Awesome, yeah, check out the Google Drive Upload File action and you'll see a File Path argument where you can reference the file you saved in the previous step
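Since the code step above returns `{ filename }`, the File Path field can reference that export. Assuming the code step is named `node_1` (adjust to your actual step name), the value would look like:

```
{{steps.node_1.$return_value.filename}}
```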

damn:

Error
ENOENT: no such file or directory, open '/tmp/image_1693085247834.png'

The code didn’t actually generate the file it seems… Just spit out a file name for me.

I actually have to run right now but since you mentioned this response format earlier:

steps.node_1{1}
$return_value{1}
artifacts[1]
0{3}
Base64: (and a GIGANTIC string)

make sure you’re saving the base64 data to the file and not the raw response itself. That may be part of the issue. The best way to troubleshoot this is also to use $.export("name", value) for any variable whose value you want to review. That will export the data below the step but not exit the code like return, so it’s like doing a console.log but with formatted data you can inspect.
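To make this concrete, here is a minimal sketch of pulling the base64 string out of a response shaped like the export above and decoding it before writing. The `response` object below is a stand-in with a fake payload, not real API output:

```javascript
import fs from "fs";

// Stand-in for the JSON response: $return_value.artifacts[0].base64
// holds the image as a base64 string (fake bytes here, not a real PNG).
const response = {
  artifacts: [
    { seed: 12345, base64: Buffer.from("fake image bytes").toString("base64") },
  ],
};

// Decode the base64 string -- not the raw response object -- before writing.
for (const artifact of response.artifacts) {
  const filename = `/tmp/txt2img_${artifact.seed}.png`;
  fs.writeFileSync(filename, Buffer.from(artifact.base64, "base64"));
}
```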

No worries. Thanks again for all of your help. Really appreciate you! Cheers

no worries, good luck!

I managed to get everything to work. I also realized that the default payload from the API was JSON, whereas I needed a PNG. So I changed the Accept header to "image/png", which successfully generated and saved an image.

Here is my code:

import { axios } from "@pipedream/platform";
import fs from 'fs';

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
  },
  async run({ steps, $ }) {
    const path = "https://api.stability.ai/v1/generation/stable-diffusion-xl-1024-v1-0/text-to-image";

    const headers = {
      Accept: "image/png",
      Authorization: `Bearer ${this.apiKey}`,
    };

    const body = {
      steps: 40,
      width: 832,
      height: 1216,
      seed: 0,
      cfg_scale: 5,
      samples: 1,
      text_prompts: [
        {
          text: "A cat",
          weight: 1,
        },
        {
          text: "Blurry, low quality",
          weight: -1,
        },
      ],
    };

    const response = await axios($, {
      url: path,
      headers,
      method: "POST",
      data: body,
      responseType: 'arraybuffer'
    });

    // Convert the ArrayBuffer to base64
    const base64Data = Buffer.from(response).toString('base64');

    // Debug the base64 string
    $.export("base64Data", base64Data);

    // Generate a unique filename with a timestamp
    const timestamp = Date.now();
    const filename = `/tmp/image_${timestamp}.png`;

    // Write the base64 data to the image file
    fs.writeFileSync(filename, Buffer.from(base64Data, 'base64'));

    // Return the filename for use in other steps
    return {
      filename
    };
  },
});

Can you please tell me how I can take an output from a previous step in the workflow and use it to replace the prompt ("A cat") within the Node.js code? I tried copying the variable path from the previous step, but the Node step treated it as plain text and didn't include the actual value of the variable.

text_prompts: [
  {
    text: "A cat",
    weight: 1,
  },

I'm glad you got everything working! To use the output from a previous step in the workflow, you can access it using the steps object. For example, if the previous step is named previous_step and it exports a value named output, you can reference it like this: steps.previous_step.output. Here's how you can modify the text_prompts to use the output from the previous step:

text_prompts: [
  {
    text: steps.previous_step.output,
    weight: 1,
  },
  {
    text: "Blurry, low quality",
    weight: -1,
  },
],

Make sure to replace previous_step with the actual name of the step you want to reference.

everything works wonderfully. I am now trying to add an upscale step (using another API from DreamStudio).

Here is the code:

import { axios } from "@pipedream/platform";
import fs from 'fs';

export default defineComponent({
  props: {
    apiKey: {
      type: "string",
      label: "API Key",
      secret: true,
    },
    inputFilename: {
      type: "string",
      label: "Input File Name",
      default: "",
    },
  },
  async run({ steps, $ }) {
    // Use the filename from the previous step if not provided
    const actualInputFilename = this.inputFilename || steps.DreamStudio_Create_Cover_Image.$return_value.filename;

    // Read the previously generated image file
    const imageData = fs.readFileSync(actualInputFilename);

    const path = "https://api.stability.ai/v1/generation/esrgan-v1-x2plus/image-to-image/upscale";

    const headers = {
      Accept: "image/png",
      Authorization: `Bearer ${this.apiKey}`,
      "Content-Type": "application/octet-stream",
    };

    // Perform the upscale API call
    const response = await axios($, {
      url: path,
      headers,
      method: "POST",
      data: imageData,
      responseType: 'arraybuffer'
    });

    // Generate a unique filename for the upscaled image
    const timestamp = Date.now();
    const filename = `upscaled_image_${timestamp}.png`;

    // Save the upscaled image to the /tmp/ directory
    fs.writeFileSync(`/tmp/$(unknown)`, response);

    // Return the filename for use in other steps
    return {
      filename: `/tmp/$(unknown)`
    };
  },
});

I am getting a Function Payload Limit Exceeded error.

I believe this has to do with the size of the file being generated by upscaler API.

Is there any way to address this?

I'm glad to hear that everything is working well so far. The "Function Payload Limit Exceeded" error occurs when the total size of console.log() statements, step exports, and the original event data sent to the workflow exceeds the combined size limit of 6MB. In your case, it's likely due to the size of the upscaled image. To address this issue, you can write the upscaled image to the /tmp directory in one step and read it into another step. This avoids using step exports for large data and should keep you under the payload limit. You're already writing the upscaled image to the /tmp directory, so you just need to read it in the next step. Here's an example of how to read the image file in the next step:

import fs from 'fs';

export default defineComponent({
  props: {
    inputFilename: {
      type: "string",
      label: "Input File Name",
      default: "",
    },
  },
  async run({ steps, $ }) {
    // Use the filename from the previous step if not provided
    const actualInputFilename = this.inputFilename || steps.previous_step.filename;

    // Read the upscaled image file
    const imageData = fs.readFileSync(actualInputFilename);

    // Process the imageData as needed in this step
  },
});

Make sure to replace previous_step with the actual name of the step where you saved the upscaled image to /tmp.

Do I need to change my previous code? The code in the previous step is the one getting the payload-exceeded error, so I'm not sure how adding another step after it will fix this.