The easiest way to scrape websites via API. ScrapingAnt uses the latest Chrome browser and rotates proxies to automate your data mining tasks.
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
The ScrapingAnt API allows you to scrape web pages without getting blocked. It can handle JavaScript rendering, cookies, sessions, and can even interact with web pages as if a real person were browsing. Using Pipedream, you can integrate ScrapingAnt with countless other apps to automate data extraction and feed this data into various business processes, analytics tools, or databases.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    scrapingant: {
      type: "app",
      app: "scrapingant",
    },
  },
  async run({ steps, $ }) {
    return await axios($, {
      url: `https://api.scrapingant.com/v2/general`,
      headers: {
        "x-api-key": `${this.scrapingant.$auth.api_token}`,
      },
      params: {
        url: `pipedream.com`,
      },
    })
  },
})
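The same request can be made from a Python code step. The sketch below is an assumption, not Pipedream-generated code: it uses the `requests` package, and the token path `pd.inputs["scrapingant"]["$auth"]["api_token"]` mirrors the Node.js component's `$auth` access for a connected account. The helper that composes the request is split out so the URL, header, and query-string handling are easy to see.

```python
import requests

API_URL = "https://api.scrapingant.com/v2/general"

def build_scrape_request(api_token: str, target_url: str) -> requests.PreparedRequest:
    """Compose the same GET request the Node.js component sends:
    the API key goes in the x-api-key header, the page to scrape
    in the `url` query parameter."""
    req = requests.Request(
        "GET",
        API_URL,
        headers={"x-api-key": api_token},
        params={"url": target_url},
    )
    return req.prepare()

def handler(pd: "pipedream"):
    # Token path for a connected ScrapingAnt account (assumed to mirror $auth)
    prepared = build_scrape_request(
        pd.inputs["scrapingant"]["$auth"]["api_token"], "pipedream.com"
    )
    with requests.Session() as session:
        resp = session.send(prepared)
        resp.raise_for_status()
        return resp.text
```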
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps and connected accounts, use Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code, without having to manage a requirements.txt file or run pip.
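As a quick illustration of the auto-install behavior: importing a PyPI package at the top of a Python step is enough for Pipedream to install it. The step below is a minimal sketch that only reports the installed version, so it does not depend on any network call at run time.

```python
# Importing a PyPI package in a Pipedream Python step triggers automatic
# installation -- no requirements.txt and no manual pip invocation.
import requests

def handler(pd: "pipedream"):
    # Any PyPI package works the same way once imported.
    return {"requests_version": requests.__version__}
```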
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}
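The returned dictionary becomes the step's export, which later Python steps read through `pd.steps` under the earlier step's name. A minimal sketch of a downstream step, assuming (hypothetically) that the step above is named `code`:

```python
def handler(pd: "pipedream"):
    # "code" is a hypothetical step name -- use the actual name shown in
    # your workflow builder for the step that returned the data.
    value = pd.steps["code"]["foo"]["test"]
    return {"was_true": value}
```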