What can you build with Scrapeless and Python?
Crawl any website at scale and say goodbye to blocks. See the documentation
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
Retrieve the result of a completed scraping job. See the documentation
Endpoints for fresh, structured data from 100+ popular sites. See the documentation
Submit a new web scraping job with a specified target URL and extraction rules (see the sketch below for the full submit-and-retrieve flow). See the documentation
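Together, the submit and retrieve actions suggest a submit-then-poll pattern. Below is a minimal sketch of that pattern from a Pipedream Python step. The /api/v1/scrape/submit and /api/v1/scrape/result/{job_id} paths, the request payload, and the "id" and "status" response fields are illustrative assumptions, not documented Scrapeless routes; only the api.scrapeless.com base URL and the x-api-token header (shown in the component example below) are confirmed on this page.

import time
import requests  # installed automatically by Pipedream when imported

def handler(pd: "pipedream"):
    # Authenticate with the connected Scrapeless account.
    headers = {"x-api-token": pd.inputs["scrapeless"]["$auth"]["api_key"]}

    # Submit a scraping job with a target URL and extraction rules.
    # NOTE: this path and payload are hypothetical placeholders.
    job = requests.post(
        "https://api.scrapeless.com/api/v1/scrape/submit",
        json={"url": "https://example.com", "rules": {"title": "h1"}},
        headers=headers,
        timeout=30,
    ).json()

    # Poll until the job completes, then return the structured result.
    # NOTE: the path, "id", and "status" fields are assumed response shapes.
    while True:
        result = requests.get(
            f"https://api.scrapeless.com/api/v1/scrape/result/{job['id']}",
            headers=headers,
            timeout=30,
        ).json()
        if result.get("status") == "completed":
            return result
        time.sleep(2)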
Scrapeless is a platform for powerful, compliant web data extraction. With tools like the Universal Scraping API, it makes it easy to access and gather data from complex sites, so you can focus on insights while Scrapeless handles the technical hurdles.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    scrapeless: {
      type: "app",
      app: "scrapeless",
    }
  },
  async run({steps, $}) {
    // Test the connection with a simple authenticated request to the /me endpoint
    return await axios($, {
      url: `https://api.scrapeless.com/api/v1/me`,
      headers: {
        "x-api-token": `${this.scrapeless.$auth.api_key}`,
      },
    })
  },
})
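The equivalent test request can be made from a Python step. This is a minimal sketch assuming the Scrapeless account is connected to the step under the name scrapeless:

import requests

def handler(pd: "pipedream"):
    # Same authenticated /me test call as the Node.js component above.
    resp = requests.get(
        "https://api.scrapeless.com/api/v1/me",
        headers={"x-api-token": pd.inputs["scrapeless"]["$auth"]["api_key"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()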
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps, use connected accounts, work with Data Stores, and manipulate files within a workflow.
This includes installing PyPI packages within your code without having to manage a requirements.txt file or run pip.
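For example, a bare import is enough: Pipedream detects the import and installs the package before the step runs. A minimal sketch using pandas as the assumed package:

import pandas  # detected and installed by Pipedream; no requirements.txt or pip needed

def handler(pd: "pipedream"):
    # Use the package as usual once the step runs.
    df = pandas.DataFrame({"site": ["example.com"], "pages": [42]})
    return df.to_dict("records")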
Below is an example of using Python to access data from the workflow trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}
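A downstream Python step can then read that return value through pd.steps, keyed by the name of the step that returned it. In this sketch the earlier step is assumed to be named "code"; adjust to your step's actual name:

def handler(pd: "pipedream"):
    # "code" is the assumed name of the step above that returned {"foo": {"test": True}}.
    print(pd.steps["code"]["foo"]["test"])  # True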