Proxy. Crawl. Scale. All-In-One data crawling and scraping platform for business developers.
Emit new event when a new row is added or an existing row is modified in a table. See the docs here
Emit new event when new rows are returned from a custom query. See the docs here
Emit new event when a new table is added to a database. See the docs here
The Crawlbase API provides powerful tools for web scraping and data extraction from any webpage. It handles large-scale data collection, bypasses bot protection and CAPTCHAs, and returns structured data. Within Pipedream, you can leverage Crawlbase to automate the harvesting of web data, integrate scraped content with other services, and process it for analysis, reporting, or triggering other workflows.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    crawlbase: {
      type: "app",
      app: "crawlbase",
    },
  },
  async run({ steps, $ }) {
    return await axios($, {
      url: `https://api.crawlbase.com/account`,
      params: {
        token: `${this.crawlbase.$auth.api_token}`,
        product: `crawling-api`,
      },
    })
  },
})
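Beyond the account endpoint above, the Crawling API fetches arbitrary pages by passing the target address as a `url` query parameter alongside your token. A minimal sketch of how that request URL is assembled (the token value and extra parameters here are placeholders, not real credentials):

```javascript
// Sketch: building a Crawlbase Crawling API request URL.
// URLSearchParams handles percent-encoding of the target URL,
// so query strings inside it survive intact.
function buildCrawlingApiUrl(token, targetUrl, extraParams = {}) {
  const params = new URLSearchParams({ token, url: targetUrl, ...extraParams });
  return `https://api.crawlbase.com/?${params.toString()}`;
}

// Placeholder token; in a Pipedream step you would use
// this.crawlbase.$auth.api_token instead.
const requestUrl = buildCrawlingApiUrl("MY_TOKEN", "https://example.com/page?q=1");
console.log(requestUrl);
```

The resulting URL can then be passed to `axios($, { url: requestUrl })` inside a component's `run` method, the same way the account request above is made.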
The MySQL application on Pipedream enables direct interaction with your MySQL databases, allowing you to perform CRUD operations—create, read, update, delete—on your data with ease. You can leverage these capabilities to automate data synchronization, report generation, and event-based triggers that kick off workflows in other apps. With Pipedream's serverless platform, you can connect MySQL to hundreds of other services without managing infrastructure, crafting complex code, or handling authentication.
import mysql from '@pipedream/mysql';
export default defineComponent({
  props: {
    mysql,
  },
  async run({ steps, $ }) {
    // Component source code:
    // https://github.com/PipedreamHQ/pipedream/tree/master/components/mysql
    const queryObj = {
      sql: "SELECT NOW()",
      values: [], // Ignored since the query contains no placeholders
    };
    return await this.mysql.executeQuery(queryObj);
  },
});
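When the query does take input, `values` fills the `?` placeholders in `sql` in order, which keeps user-supplied data out of the SQL string itself. A sketch of such a query object (the `users` table and its columns are assumptions for illustration):

```javascript
// Sketch: a parameterized query object for this.mysql.executeQuery().
// Each ? in `sql` is replaced by the corresponding entry in `values`,
// so inputs are escaped by the driver rather than concatenated into SQL.
const queryObj = {
  sql: "SELECT id, email FROM users WHERE created_at >= ? AND status = ?",
  values: ["2024-01-01", "active"], // placeholder inputs, e.g. from a prior step
};
```

Inside a component's `run` method this object would be passed to `this.mysql.executeQuery(queryObj)` exactly as in the `SELECT NOW()` example above.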