R2 gives you the freedom to create the multi-cloud architectures you desire with S3-compatible object storage.
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
The Cloudflare R2 API lets you interact with Cloudflare's object storage service, providing a cost-effective way to store large amounts of data with no egress fees. On Pipedream, you can harness this API to build automated workflows that store, retrieve, and manage data within your R2 buckets. By combining Cloudflare R2 with Pipedream's capabilities, you can create serverless workflows that trigger on various events, process data in flight, and integrate with the 800+ apps available on the platform.
import { S3, ListBucketsCommand } from "@aws-sdk/client-s3";

export default defineComponent({
  props: {
    cloudflare_r2: {
      type: "app",
      app: "cloudflare_r2",
    },
  },
  async run({ steps, $ }) {
    // R2 exposes an S3-compatible endpoint scoped to your Cloudflare account ID
    const s3Client = new S3({
      forcePathStyle: false,
      endpoint: `https://${this.cloudflare_r2.$auth.account_id}.r2.cloudflarestorage.com`,
      region: "auto",
      credentials: {
        accessKeyId: this.cloudflare_r2.$auth.access_key_id,
        secretAccessKey: this.cloudflare_r2.$auth.access_key_secret,
      },
    });
    // List all buckets in the account and return them for use in later steps
    const data = await s3Client.send(new ListBucketsCommand({}));
    return data.Buckets;
  },
});
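Because R2 is S3-compatible, the same pattern works from a Python step with boto3. Below is a minimal sketch of storing and retrieving an object; the bucket name, object key, and credential environment variable names are placeholders, not part of the Pipedream or R2 APIs:

import os
import boto3

def handler(pd: "pipedream"):
    # R2 speaks the S3 API, so boto3 works against the account-scoped endpoint.
    # The environment variable names and bucket/key below are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url=f"https://{os.environ['R2_ACCOUNT_ID']}.r2.cloudflarestorage.com",
        aws_access_key_id=os.environ["R2_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["R2_ACCESS_KEY_SECRET"],
        region_name="auto",
    )
    # Store an object, then read it back
    s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"Hello from Pipedream")
    obj = s3.get_object(Bucket="my-bucket", Key="hello.txt")
    return {"body": obj["Body"].read().decode()}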
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps and connected accounts, use Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
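For example, importing a package is enough for Pipedream to install it from PyPI before the step runs. A minimal sketch, assuming the requests package and a public API endpoint:

import requests

def handler(pd: "pipedream"):
    # The `requests` import above is detected and installed from PyPI
    # automatically; no requirements.txt or pip commands are needed
    r = requests.get("https://api.github.com")
    return r.json()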
Below is an example of using Python to access data from the workflow trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}