with RSS and Crawlbase?
Retrieve multiple RSS feeds and return a merged array of items sorted by date.
The RSS app allows users to automatically fetch and parse updates from web feeds. This functionality is pivotal for staying abreast of content changes or updates from websites, blogs, and news outlets that offer RSS feeds. With Pipedream, you can harness the RSS API to trigger workflows that enable a broad range of automations, like content aggregation, monitoring for specific keywords, notifications, and data synchronization across platforms.
import Parser from "rss-parser";

export default defineComponent({
  props: {
    rss: {
      type: "app",
      app: "rss",
    },
  },
  async run({ steps, $ }) {
    // Retrieve items from a sample feed
    const parser = new Parser();
    // Replace with your feed URL
    const url = "https://pipedream.com/community/latest.rss";
    const feed = await parser.parseURL(url);
    const { title, items } = feed;
    if (!items.length) {
      return $.flow.exit("No new stories");
    }
    $.export("title", title);
    return items;
  },
});
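The example above parses a single feed. The merge-and-sort step described earlier (combining multiple feeds into one array ordered by date) can be sketched as a plain helper; `mergeFeedItems` is a hypothetical name, and the sketch assumes each item carries an `isoDate` string, as `rss-parser` emits:

```javascript
// Merge the item arrays from several parsed feeds and sort newest-first.
// Assumes each item has an `isoDate` field (rss-parser sets this on items
// that include a publication date).
function mergeFeedItems(feeds) {
  return feeds
    .flatMap((feed) => feed.items)
    .sort((a, b) => new Date(b.isoDate) - new Date(a.isoDate));
}
```

In a workflow, you would call `parser.parseURL` for each feed URL, collect the results, and pass them through a helper like this before returning.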
The Crawlbase API provides powerful tools for web scraping and data extraction from any webpage. It handles large-scale data collection tasks, bypassing bot protection and CAPTCHAs, and returning structured data. Within Pipedream, you can leverage Crawlbase to automate the harvesting of web data, integrate scraped content with other services, and process it for analysis, reporting, or triggering other workflows.
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    crawlbase: {
      type: "app",
      app: "crawlbase",
    },
  },
  async run({ steps, $ }) {
    // Check account status via the Crawlbase account endpoint
    return await axios($, {
      url: "https://api.crawlbase.com/account",
      params: {
        token: this.crawlbase.$auth.api_token,
        product: "crawling-api",
      },
    })
  },
})
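The component above only checks account status. To fetch an actual page through the Crawling API, requests go to `https://api.crawlbase.com/` with `token` and `url` query parameters. A minimal sketch of building such a request URL, with `crawlbaseRequestUrl` as a hypothetical helper name:

```javascript
// Build a Crawlbase Crawling API request URL for a target page.
// The target URL is percent-encoded automatically by URLSearchParams.
function crawlbaseRequestUrl(token, targetUrl) {
  const params = new URLSearchParams({ token, url: targetUrl });
  return `https://api.crawlbase.com/?${params.toString()}`;
}
```

In a Pipedream step, you could pass a URL built this way to `axios($, { url })`, or equivalently supply `token` and `url` via the `params` option as in the account example.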