Scrape-It.Cloud is an API for scraping data from any website and returning it as structured JSON, without the need to manage proxies.
Initiate the scraping process for a specific endpoint. See the Scrape-It.Cloud documentation for details.
Write custom Node.js code and use any of the 400k+ npm packages available. Refer to the Pipedream Node docs to learn more.
The Scrape-It.Cloud API allows you to automate the extraction of data from websites. It can parse, scrape, and retrieve content without the need for manual intervention. With this API on Pipedream, you can build workflows that trigger on various events and use the scraped data for numerous applications like data analysis, lead generation, and content aggregation.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    scrape_it_cloud: {
      type: "app",
      app: "scrape_it_cloud",
    },
  },
  async run({ steps, $ }) {
    // The target URL to scrape
    const data = {
      "url": `https://pipedream.com`,
    }
    // POST the request to the Scrape-It.Cloud scrape endpoint,
    // authenticating with the connected account's API key
    return await axios($, {
      method: "post",
      url: `https://api.scrape-it.cloud/scrape`,
      headers: {
        "Content-Type": `application/json`,
        "x-api-key": `${this.scrape_it_cloud.$auth.api_key}`,
      },
      data,
    })
  },
})
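Subsequent steps can read the scraped response through the `steps` object. The sketch below is a minimal follow-up code step, assuming the scraping step above is named `scrape_page` (a hypothetical name; substitute your step's actual name):

export default defineComponent({
  async run({ steps, $ }) {
    // "scrape_page" is a hypothetical step name; use your scraping step's actual name
    const scraped = steps.scrape_page.$return_value
    // The response shape is defined by the Scrape-It.Cloud API; see its docs
    return scraped
  },
})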
Develop, run, and deploy your Node.js code in Pipedream workflows, using it between no-code steps, with connected accounts, or integrated with Data Stores and File Stores. This includes installing npm packages within your code without having to manage a package.json file or run npm install.
Below is an example of installing the axios package in a Pipedream Node.js code step. Pipedream imports the axios package, performs the API request, and shares the response with subsequent workflow steps:
// Pipedream installs the axios package automatically; simply import it
import axios from "axios"

export default defineComponent({
  async run({ steps, $ }) {
    // Perform the API request (this public endpoint is just an example)
    const { data } = await axios.get("https://pokeapi.co/api/v2/pokemon/charizard")
    // Return the response to share it with subsequent workflow steps
    return data
  },
})
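A Data Store (mentioned above) can be attached to a code step as a prop, which is useful for persisting scraped results between runs. The following is a minimal sketch, assuming a Data Store prop named `db` and an arbitrary example key:

export default defineComponent({
  props: {
    // Attach a Pipedream Data Store; it exposes async get/set methods
    db: { type: "data_store" },
  },
  async run({ steps, $ }) {
    // "last_event" is an arbitrary key chosen for this example
    await this.db.set("last_event", steps.trigger.event)
    return await this.db.get("last_event")
  },
})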