What can you do with Docparser and Scrapeless?
Emit new event every time a document is processed and parsed data is available. See the documentation
Fetches a document from a provided URL and imports it to Docparser for parsing. See the documentation
Crawl any website at scale and say goodbye to blocks. See the documentation
Retrieve the result of a completed scraping job. See the documentation
Uploads a document to Docparser; parsing starts immediately after the document is received. See the documentation
Endpoints for fresh, structured data from 100+ popular sites. See the documentation
Docparser is a tool for extracting data from documents such as PDFs, Word documents, and images. With the Docparser API, you can automate data capture and transform documents into actionable information without manual entry. It shines in scenarios where structured information needs to be pulled from files that typically require manual data entry, such as invoices, forms, and reports.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    docparser: {
      type: "app",
      app: "docparser",
    },
  },
  async run({ steps, $ }) {
    // Docparser uses HTTP Basic auth: the API key is sent as the username
    // and the password is left empty. The ping endpoint simply verifies
    // that the credentials are valid.
    return await axios($, {
      url: `https://api.docparser.com/v1/ping`,
      auth: {
        username: `${this.docparser.$auth.api_key}`,
        password: ``,
      },
    })
  },
})
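The ping call above only confirms that your credentials work. As a minimal sketch of the "fetch document from URL" action listed earlier, the component below posts a document URL to Docparser for parsing. The parserId and documentUrl props are hypothetical names added here for illustration, and the v1/document/fetch/{parser_id} endpoint with its url parameter is an assumption based on Docparser's public API documentation, so check the docs for the exact route and payload.

import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    docparser: {
      type: "app",
      app: "docparser",
    },
    // Hypothetical props for illustration: the Docparser parser that should
    // receive the document, and the public URL of the document to import.
    parserId: {
      type: "string",
      label: "Parser ID",
    },
    documentUrl: {
      type: "string",
      label: "Document URL",
    },
  },
  async run({ steps, $ }) {
    // Assumed endpoint: POST v1/document/fetch/{parser_id} with a `url`
    // field asks Docparser to download the file and start parsing it.
    return await axios($, {
      method: "POST",
      url: `https://api.docparser.com/v1/document/fetch/${this.parserId}`,
      auth: {
        username: `${this.docparser.$auth.api_key}`,
        password: ``,
      },
      data: {
        url: this.documentUrl,
      },
    })
  },
})

Once parsing finishes, the "Emit new event every time a document is processed" trigger above is the usual way to pick up the parsed data in a workflow.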
Scrapeless is your go-to platform for powerful, compliant web data extraction. With tools like the Universal Scraping API, Scrapeless makes it easy to access and gather data from complex sites. Focus on insights while Scrapeless handles the technical hurdles: data extraction made simple.
import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    scrapeless: {
      type: "app",
      app: "scrapeless",
    },
  },
  async run({ steps, $ }) {
    // Scrapeless authenticates with an API token passed in the x-api-token
    // request header. The `me` endpoint returns details about your account,
    // which makes it a convenient connectivity check.
    return await axios($, {
      url: `https://api.scrapeless.com/api/v1/me`,
      headers: {
        "x-api-token": `${this.scrapeless.$auth.api_key}`,
      },
    })
  },
})
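Beyond the account check above, a typical Scrapeless flow is to submit a scraping job and later retrieve its output, which is what the "Retrieve the result of a completed scraping job" action does. The sketch below assumes a scraper/result/{taskId} endpoint that accepts the same x-api-token header; the taskId prop and the endpoint path are assumptions for illustration only, so consult the Scrapeless API reference for the exact routes and response shape.

import { axios } from "@pipedream/platform"

export default defineComponent({
  props: {
    scrapeless: {
      type: "app",
      app: "scrapeless",
    },
    // Hypothetical prop: the ID returned when a scraping job was submitted.
    taskId: {
      type: "string",
      label: "Task ID",
    },
  },
  async run({ steps, $ }) {
    // Assumed endpoint: fetch the result of a completed scraping job,
    // authenticated with the same x-api-token header as the `me` example.
    return await axios($, {
      url: `https://api.scrapeless.com/api/v1/scraper/result/${this.taskId}`,
      headers: {
        "x-api-token": `${this.scrapeless.$auth.api_key}`,
      },
    })
  },
})

In a Pipedream workflow, the built-in Scrapeless actions listed at the top of this page wrap these calls for you, so a code step like this is only needed when you want custom request handling.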