The Google Cloud Platform, including BigQuery
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
Inserts rows into a BigQuery table. See the docs and an example here.
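As a rough sketch, a similar insert could be done in a Pipedream Python code step with the google-cloud-bigquery client. The dataset, table, and rows below are placeholders, and reading the service account key from pd.inputs (via the key_json field) mirrors the Node.js auth example further down this page:
# The import name differs from the PyPI name, so tell Pipedream which package to install
# pipedream add-package google-cloud-bigquery
import json
from google.cloud import bigquery
from google.oauth2 import service_account

def handler(pd: "pipedream"):
    # Build credentials from the connected Google Cloud account's service account key
    key = json.loads(pd.inputs["google_cloud"]["$auth"]["key_json"])
    creds = service_account.Credentials.from_service_account_info(key)
    client = bigquery.Client(project=key["project_id"], credentials=creds)

    # Placeholder dataset, table, and rows -- replace with your own
    table_id = f"{key['project_id']}.my_dataset.my_table"
    rows = [{"name": "Alice", "score": 90}, {"name": "Bob", "score": 85}]

    # insert_rows_json streams the rows into the table and returns any per-row errors
    errors = client.insert_rows_json(table_id, rows)
    return {"inserted": len(rows), "errors": errors}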
Creates a scheduled query in Google Cloud. See the documentation.
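Under the hood, a scheduled query is a BigQuery Data Transfer Service config that uses the scheduled_query data source. Here is a minimal sketch with the google-cloud-bigquery-datatransfer client; the display name, dataset, query, schedule, and us location are placeholders:
# pipedream add-package google-cloud-bigquery-datatransfer
import json
from google.cloud import bigquery_datatransfer
from google.oauth2 import service_account

def handler(pd: "pipedream"):
    key = json.loads(pd.inputs["google_cloud"]["$auth"]["key_json"])
    creds = service_account.Credentials.from_service_account_info(key)
    client = bigquery_datatransfer.DataTransferServiceClient(credentials=creds)

    # A scheduled query is a transfer config with the "scheduled_query" data source
    transfer_config = bigquery_datatransfer.TransferConfig(
        display_name="Daily rollup",                       # placeholder name
        data_source_id="scheduled_query",
        destination_dataset_id="my_dataset",               # placeholder dataset
        schedule="every 24 hours",                         # placeholder schedule
        params={
            "query": "SELECT CURRENT_DATE() AS run_date",  # placeholder query
            "destination_table_name_template": "daily_rollup",
            "write_disposition": "WRITE_TRUNCATE",
        },
    )
    created = client.create_transfer_config(
        parent=f"projects/{key['project_id']}/locations/us",  # placeholder location
        transfer_config=transfer_config,
    )
    return {"name": created.name, "schedule": created.schedule}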
Gets Google Cloud Storage bucket metadata. See the docs.
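A minimal sketch of fetching bucket metadata in a Python code step with the google-cloud-storage client; the bucket name is a placeholder:
# pipedream add-package google-cloud-storage
import json
from google.cloud import storage
from google.oauth2 import service_account

def handler(pd: "pipedream"):
    key = json.loads(pd.inputs["google_cloud"]["$auth"]["key_json"])
    creds = service_account.Credentials.from_service_account_info(key)
    client = storage.Client(project=key["project_id"], credentials=creds)

    # Fetch the bucket and return a few of its metadata fields
    bucket = client.get_bucket("my-example-bucket")  # placeholder bucket name
    return {
        "name": bucket.name,
        "location": bucket.location,
        "storage_class": bucket.storage_class,
        "created": str(bucket.time_created),
    }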
The Google Cloud API opens a world of possibilities for enhancing cloud operations and automating tasks. It empowers you to manage, scale, and fine-tune various services within the Google Cloud Platform (GCP) programmatically. With Pipedream, you can harness this power to create intricate workflows, trigger cloud functions based on events from other apps, manage resources, and analyze data, all in a serverless environment. The ability to interconnect GCP services with numerous other apps enriches automation, making it easier to synchronize data, streamline development workflows, and deploy applications efficiently.
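The Node.js example below connects a Google Cloud account in a Pipedream code step and verifies the service account credentials stored on the account: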
module.exports = defineComponent({
  props: {
    google_cloud: {
      type: "app",
      app: "google_cloud",
    },
  },
  async run({ steps, $ }) {
    // Required workaround to get the @google-cloud/storage package
    // working correctly on Pipedream
    require("@dylburger/umask")()

    const { Storage } = require("@google-cloud/storage")

    // Parse the service account key stored on the connected Google Cloud account
    const key = JSON.parse(this.google_cloud.$auth.key_json)

    // Instantiate a Storage client with the service account credentials
    const storage = new Storage({
      projectId: key.project_id,
      credentials: {
        client_email: key.client_email,
        private_key: key.private_key,
      },
    })

    // Confirm the credentials are valid by fetching them from the auth client
    await storage.authClient.getCredentials()

    return {
      status: "success",
      authenticated: true,
      projectId: key.project_id,
      serviceAccount: key.client_email,
    }
  },
})
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps and connected accounts, use Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
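For example, a code step can import a PyPI package directly; a minimal sketch using the requests package (the URL is a placeholder):
import requests  # Pipedream installs the PyPI package automatically when the step is deployed

def handler(pd: "pipedream"):
    # Call an external API and return the parsed JSON so later steps can use it
    resp = requests.get("https://example.com/api/status")  # placeholder URL
    resp.raise_for_status()
    return resp.json()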
Below is an example of using Python to access data from the trigger of the workflow, and sharing it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}