What can you automate with Fauna and Firecrawl?
Emits a new event each time a document is added to or removed from a specific collection, with the details of the document.
Crawls a given URL and returns the contents of sub-pages. See the documentation
Performs an arbitrary authorized GraphQL query. See the documentation
Extracts structured data from one or multiple URLs. See the documentation
Obtains the status and data from a previous crawl operation. See the documentation, and the request sketch after this list.
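Each of these triggers and actions corresponds to an underlying API call that you can also make yourself from a Pipedream code step. As a concrete illustration, the crawl-status action can be reproduced with a single authenticated GET request. This is a minimal sketch, not the action's actual implementation: the /v0/crawl/status path is assumed from the v0 crawl endpoint shown later on this page, and the jobId prop is a hypothetical placeholder for the ID returned when a crawl is started.

import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    firecrawl: {
      type: "app",
      app: "firecrawl",
    },
    // Hypothetical prop: the job ID returned when the crawl was started
    jobId: {
      type: "string",
      label: "Crawl Job ID",
    },
  },
  async run({ steps, $ }) {
    // Fetch the status and any collected data for a previously started crawl.
    // The /v0/crawl/status path is an assumption based on the v0 crawl endpoint.
    return await axios($, {
      url: `https://api.firecrawl.dev/v0/crawl/status/${this.jobId}`,
      headers: {
        Authorization: `Bearer ${this.firecrawl.$auth.api_key}`,
      },
    })
  },
})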
Fauna offers a serverless database for modern applications, with scalable, secure, and flexible data management exposed entirely through its API. With Pipedream, you can use Fauna in serverless workflows that react to triggers, read and write documents, and connect with other services and APIs to automate complex tasks.
module.exports = defineComponent({
  props: {
    faunadb: {
      type: "app",
      app: "faunadb",
    },
  },
  async run({ steps, $ }) {
    const faunadb = require("faunadb")
    const q = faunadb.query

    // Authenticate with the secret for the database you want to query
    const client = new faunadb.Client({ secret: this.faunadb.$auth.secret })

    // List the collections in the database tied to your secret key
    const collectionsPaginator = await client.paginate(q.Collections())

    const collections = []
    await collectionsPaginator.each((page) => {
      for (const collection of page) {
        collections.push(collection.id)
      }
    })

    // Return the collection names so they're available to downstream steps
    return collections
  },
})
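Note that the Fauna step uses the driver's paginate helper, so it collects every collection even when the list spans multiple pages. The Firecrawl example below follows the same pattern: the app prop exposes your Firecrawl API key, and the step sends an authenticated POST request to the v0 crawl endpoint to start crawling a URL.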
import { axios } from "@pipedream/platform"
export default defineComponent({
  props: {
    firecrawl: {
      type: "app",
      app: "firecrawl",
    },
  },
  async run({ steps, $ }) {
    // Start a crawl job for the given URL; the response can be polled later
    // via the crawl-status endpoint
    const data = {
      url: "https://pipedream.com",
    }
    return await axios($, {
      method: "post",
      url: "https://api.firecrawl.dev/v0/crawl",
      headers: {
        Authorization: `Bearer ${this.firecrawl.$auth.api_key}`,
      },
      data,
    })
  },
})
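To connect the two services in one workflow, a later step can write the crawl output into Fauna. The sketch below is only one way to do this under stated assumptions: it assumes a collection named crawl_results already exists in your database, and steps.firecrawl_crawl.$return_value is a hypothetical reference to the previous step's output that you should rename to match your workflow.

module.exports = defineComponent({
  props: {
    faunadb: {
      type: "app",
      app: "faunadb",
    },
  },
  async run({ steps, $ }) {
    const faunadb = require("faunadb")
    const q = faunadb.query
    const client = new faunadb.Client({ secret: this.faunadb.$auth.secret })

    // Hypothetical reference to the crawl step's return value; rename to
    // match the actual step in your workflow
    const crawl = steps.firecrawl_crawl.$return_value

    // Store the crawl response as a new document in an existing collection
    // named "crawl_results" (create it first, or pick your own name)
    return await client.query(
      q.Create(q.Collection("crawl_results"), {
        data: {
          requestedUrl: "https://pipedream.com",
          response: crawl,
        },
      })
    )
  },
})

From there, the Fauna trigger described above can emit an event for each document added to that collection, letting a downstream workflow react to new crawl results.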