Limitless enterprise infrastructure for live data queries, analytics, insights, and retention. No data schema requirements. Index-free. Blazing fast. The technology of choice for engineering, DevOps, IT, and security teams to unlock the power of data.
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
The DataSet API enables you to manage and analyze large datasets. With Pipedream's integration, you can automate data ingestion, transformation, and analysis, create serverless workflows that respond to data events, and connect with other apps to enrich and utilize your datasets. Use Pipedream's serverless platform to trigger workflows, perform actions based on DataSet events, and orchestrate complex data processes with minimal setup.
```javascript
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    dataset: {
      type: "app",
      app: "dataset",
    },
  },
  async run({ steps, $ }) {
    // DataSet (Scalyr) expects the API token in the JSON request body
    const data = {
      "token": `${this.dataset.$auth.access_key}`,
    }
    return await axios($, {
      method: "post",
      url: `https://app.scalyr.com/api/listUsers`,
      headers: {
        "Content-Type": `application/json`,
      },
      data,
    })
  },
})
```
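For reference, the same `listUsers` request can be sketched from a Python code step. The helper below is hypothetical (not part of any SDK); it only assembles the URL, headers, and JSON body — note that, as in the component above, DataSet expects the API token in the body rather than a header — and you would send the result with an HTTP client of your choice.

```python
import json

# Endpoint used by the component above
API_URL = "https://app.scalyr.com/api/listUsers"

def build_list_users_request(token):
    """Hypothetical helper: return (url, headers, body) for a listUsers POST."""
    headers = {"Content-Type": "application/json"}
    # The token travels in the JSON body, not in an Authorization header
    body = json.dumps({"token": token})
    return API_URL, headers, body

url, headers, body = build_list_users_request("example-access-key")
```

In a workflow step you would substitute `pd.inputs["dataset"]["$auth"]["access_key"]` (or the equivalent connected-account reference) for the placeholder token.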
Develop, run, and deploy your Python code in Pipedream workflows. Pass data seamlessly between no-code steps, use connected accounts, integrate Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
```python
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}
```
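To illustrate how a later step consumes that return value, here is a small stand-alone sketch. `SimpleNamespace` stands in for Pipedream's real `pd` object, and the step name `"code"` is a placeholder for whatever your earlier step is actually named; in a real workflow the platform builds `pd.steps` for you.

```python
from types import SimpleNamespace

# Stand-in for Pipedream's `pd` object; in a real workflow the platform
# provides this and populates `steps` with each prior step's exports.
pd = SimpleNamespace(steps={"code": {"foo": {"test": True}}})

def handler(pd):
    # A downstream step reads the dict returned by the earlier "code" step
    return pd.steps["code"]["foo"]["test"]

result = handler(pd)  # True
```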