Proxy. Crawl. Scale. All-In-One data crawling and scraping platform for business developers.
Emit new event for every new row added in a table. See documentation here.
Emit new event for every insert, update, or delete operation in a table. This source requires user configuration using the Supabase website. More information in the README. Also see documentation here.
The Crawlbase API provides powerful tools for web scraping and data extraction from any webpage. It handles large-scale data collection, bypasses bot protection and CAPTCHAs, and returns structured data. Within Pipedream, you can leverage Crawlbase to automate the harvesting of web data, integrate scraped content with other services, and process it for analysis, reporting, or triggering other workflows.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    crawlbase: {
      type: "app",
      app: "crawlbase",
    },
  },
  async run({ steps, $ }) {
    // Query the Crawlbase account endpoint to verify the connection
    return await axios($, {
      url: "https://api.crawlbase.com/account",
      params: {
        token: this.crawlbase.$auth.api_token,
        product: "crawling-api",
      },
    })
  },
})
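The component above only checks the account endpoint; the Crawling API itself fetches arbitrary pages by passing the token and the target URL as query parameters. A minimal sketch of assembling such a request URL (any option names beyond token and url here are illustrative, not an exhaustive list of Crawlbase parameters):

```javascript
// Sketch: build a Crawlbase Crawling API request URL.
// The token and target URL go in the query string; URLSearchParams
// handles the percent-encoding of the target URL.
function buildCrawlingApiUrl(token, targetUrl, options = {}) {
  const params = new URLSearchParams({ token, url: targetUrl, ...options });
  return `https://api.crawlbase.com/?${params.toString()}`;
}
```

Inside a Pipedream step, the resulting URL would be passed to axios($, { url }) just as in the account example above.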
Supabase is a real-time backend-as-a-service that provides developers with a suite of tools to quickly build and scale their applications. It offers database storage, authentication, instant APIs, and real-time subscriptions. With the Supabase API, you can perform CRUD operations on your database, manage users, and listen to database changes in real time. When integrated with Pipedream, you can automate workflows that react to these database events, synchronize data across multiple services, or streamline user management processes.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    supabase: {
      type: "app",
      app: "supabase",
    },
  },
  async run({ steps, $ }) {
    // List the resources exposed by the project's PostgREST API root
    return await axios($, {
      url: `https://${this.supabase.$auth.subdomain}.supabase.co/rest/v1/`,
      headers: {
        Authorization: `Bearer ${this.supabase.$auth.service_key}`,
        apikey: this.supabase.$auth.service_key,
      },
    })
  },
})
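The base URL in the component above points at Supabase's PostgREST layer; appending a table name and query parameters performs reads and filtered queries. A sketch of assembling such a request (the table name and filter values are illustrative; the column=eq.value filter syntax is PostgREST's):

```javascript
// Sketch: assemble a Supabase PostgREST request for a table query.
// Reuses the rest/v1/<table> path and the apikey/Authorization header
// pattern from the component above; table and filters are illustrative.
function buildSupabaseQuery(subdomain, serviceKey, table, filters = {}) {
  const params = new URLSearchParams(filters);
  return {
    url: `https://${subdomain}.supabase.co/rest/v1/${table}?${params.toString()}`,
    headers: {
      Authorization: `Bearer ${serviceKey}`,
      apikey: serviceKey,
    },
  };
}
```

The returned object can be spread directly into the axios($, { ... }) call shown in the component.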