Easy-to-use, no-code web scraping and data extraction software.
Emit new event when an automation run finishes. See the documentation
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
Triggers a pre-built automation by providing the scraper ID. See the documentation (a sketch of this call appears after the example below).
The Browserhub API offers automation and control over browser sessions, enabling users to create, manipulate, and extract data from web pages programmatically. Integrating Browserhub with Pipedream lets you automate web interaction workflows, monitor websites for changes, scrape data, and test web applications. Pipedream's serverless platform handles the orchestration of API calls and the data flowing between steps, so you can use Browserhub's capabilities in a scalable, efficient way.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    browserhub: {
      type: "app",
      app: "browserhub",
    },
  },
  async run({ steps, $ }) {
    // Verify the connection by calling Browserhub's status endpoint,
    // authenticating with the connected account's API key
    return await axios($, {
      url: `https://api.browserhub.io/v1/status`,
      headers: {
        Authorization: `Bearer ${this.browserhub.$auth.api_key}`,
      },
    })
  },
})
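The action described earlier triggers a pre-built automation by scraper ID. As a rough sketch only: the Python step below assumes a hypothetical POST /v1/scrapers/{scraper_id}/run endpoint (consult Browserhub's API docs for the real path and payload) and the same Bearer-token auth shown in the component above, with the Browserhub account connected to the step so its key is available via pd.inputs.

import requests

def handler(pd: "pipedream"):
    # API key from the connected Browserhub account
    # (assumes an app prop named "browserhub")
    api_key = pd.inputs["browserhub"]["$auth"]["api_key"]

    # Hypothetical endpoint; check Browserhub's API docs for the real one
    scraper_id = "your-scraper-id"
    resp = requests.post(
        f"https://api.browserhub.io/v1/scrapers/{scraper_id}/run",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()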
Develop, run, and deploy your Python code in Pipedream workflows. Combine it seamlessly with no-code steps and connected accounts, read and write Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
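For instance, importing a package at the top of a step is enough: Pipedream detects the import and installs the PyPI package before the step runs. A minimal sketch using the requests package:

import requests  # installed automatically from the import statement

def handler(pd: "pipedream"):
    # Call an external API with the auto-installed package
    r = requests.get("https://example.com")
    return {"status": r.status_code}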
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])

    # Return data for use in future steps
    return {"foo": {"test": True}}