with Webcrawler API and Python?
Write Python and use any of the 350k+ PyPI packages available. Refer to the Pipedream Python docs to learn more.
import { WebcrawlerClient } from "webcrawlerapi-js";

export default defineComponent({
  props: {
    webcrawler_api: {
      type: "app",
      app: "webcrawler_api",
    },
  },
  async run({ steps, $ }) {
    const client = new WebcrawlerClient(this.webcrawler_api.$auth.api_key);
    return await client.crawl({
      "items_limit": 10,
      "url": "https://books.toscrape.com/",
      "scrape_type": "markdown",
    });
  },
})
Develop, run, and deploy your Python code in Pipedream workflows. Integrate seamlessly with no-code steps and connected accounts, read and write Data Stores, and manipulate files within a workflow. This includes installing PyPI packages from within your code, without having to manage a requirements.txt file or run pip.
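In practice, this means a package is available as soon as you import it. The sketch below imports `requests` at the top of a step with no install step; the target URL is just an illustrative choice.

```python
import requests  # Pipedream detects the import and installs the package automatically


def handler(pd: "pipedream"):
    # Fetch a page using the auto-installed PyPI package
    r = requests.get("https://books.toscrape.com/", timeout=30)
    return {"status": r.status_code, "bytes": len(r.content)}
```

No requirements.txt or pip invocation is needed; the import line alone is enough for Pipedream to provision the dependency.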
Below is an example of using Python to access data from the workflow's trigger and share it with subsequent workflow steps:
def handler(pd: "pipedream"):
    # Reference data from previous steps
    print(pd.steps["trigger"]["context"]["id"])
    # Return data for use in future steps
    return {"foo": {"test": True}}
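A later step can then read the returned dictionary through `pd.steps`, keyed by the earlier step's name. The step name `code` below is an assumption for illustration; substitute the actual name shown in your workflow.

```python
def handler(pd: "pipedream"):
    # Read the value exported by the earlier step, assumed here to be named "code"
    value = pd.steps["code"]["foo"]["test"]
    print(value)
    # Pass it along for any steps that follow this one
    return {"previous_value": value}
```

Anything a step returns is serialized and exposed this way, so the same lookup works from no-code actions and Node steps as well.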