Making web data extraction easy and accessible for everyone.
Emit new event when a page scraping job has completed.
Creates a scraping job (scrapes a sitemap); a sketch of this call appears after this list.
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
Creates a sitemap for the selected website.
Retrieves a list of scraping jobs for a sitemap.
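For instance, the scraping-job action boils down to a single API call that you can also make yourself from a Pipedream Python step. Below is a minimal sketch, assuming WebScraper.IO's Cloud API endpoint POST /api/v1/scraping-job and its sitemap_id, driver, page_load_delay, and request_interval fields (verify these against the current API docs), with a placeholder sitemap ID and a webscraper_io account connected to the step:

import requests

def handler(pd: "pipedream"):
    # Assumed endpoint and body fields, based on WebScraper.IO's Cloud API;
    # confirm both against the current API documentation.
    resp = requests.post(
        "https://api.webscraper.io/api/v1/scraping-job",
        # Assumes a webscraper_io account connected to this step
        params={"api_token": pd.inputs["webscraper_io"]["$auth"]["api_key"]},
        json={
            "sitemap_id": 123,        # placeholder: ID of an existing sitemap
            "driver": "fast",         # "fast" (plain HTTP) or "fulljs" (browser)
            "page_load_delay": 2000,  # milliseconds
            "request_interval": 2000, # milliseconds
        },
    )
    resp.raise_for_status()
    return resp.json()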
The WebScraper.IO API allows you to programmatically perform web scraping tasks, extracting structured data from websites. With the API, you can automate the gathering of web content for analysis, monitoring, and integration with other data sources. In Pipedream, you can leverage this API to build workflows that process, analyze, and act on the data you scrape without writing code for backend infrastructure.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    webscraper_io: {
      type: "app",
      app: "webscraper_io",
    },
  },
  async run({ steps, $ }) {
    // List the sitemaps in your WebScraper.IO account.
    // The API authenticates via an api_token query parameter.
    return await axios($, {
      url: "https://api.webscraper.io/api/v1/sitemaps",
      params: {
        api_token: this.webscraper_io.$auth.api_key,
      },
    })
  },
})
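Note that WebScraper.IO authenticates with an api_token query parameter rather than an Authorization header, and whatever the run method returns is exported by the step for use later in the workflow.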
Develop, run, and deploy your Python code in Pipedream workflows: integrate seamlessly with no-code steps and connected accounts, use Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
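As a minimal sketch of that, the Node example above can be reproduced in a Python step simply by importing requests; Pipedream installs the package automatically from the import. The connected-account access via pd.inputs assumes a webscraper_io account has been attached to the step:

import requests  # installed automatically from the import; no requirements.txt needed

def handler(pd: "pipedream"):
    # Same sitemaps request as the Node example above
    resp = requests.get(
        "https://api.webscraper.io/api/v1/sitemaps",
        params={"api_token": pd.inputs["webscraper_io"]["$auth"]["api_key"]},
    )
    resp.raise_for_status()
    return resp.json()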
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}