
Scrape Page with Scrapfly API on New Package Version from npm API

Pipedream makes it easy to connect APIs for Scrapfly, npm and 2,400+ other apps remarkably fast.

Trigger workflow on
New Package Version from the npm API
Next, do this
Scrape Page with the Scrapfly API

Trusted by 1,000,000+ developers from startups to Fortune 500 companies



Getting Started

This integration creates a workflow with an npm trigger and a Scrapfly action. When you configure and deploy the workflow, it runs on Pipedream's servers 24x7 for free.

  1. Select this integration
  2. Configure the New Package Version trigger
    1. Connect your npm account
    2. Configure timer
    3. Configure Package
  3. Configure the Scrape Page action
    1. Connect your Scrapfly account
    2. Configure URL
    3. (Optional) Configure Headers
    4. (Optional) Configure Language
    5. (Optional) Configure Operating System
    6. (Optional) Configure Timeout
    7. (Optional) Select a Format
    8. (Optional) Configure Retry
    9. (Optional) Configure Proxified Response
    10. (Optional) Configure Debug
    11. (Optional) Configure Correlation ID
    12. (Optional) Configure Tags
    13. (Optional) Configure DNS
    14. (Optional) Configure SSL
    15. (Optional) Select a Proxy Pool
  4. Deploy the workflow
  5. Send a test event to validate your setup
  6. Turn on the trigger
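Once deployed, the workflow simply feeds each trigger event into the action's configuration. The sketch below shows one way a new-version event could map to Scrape Page parameters; `buildScrapeParams`, the derived npmjs.com URL, and the chosen options are illustrative assumptions, not part of the integration itself:

```javascript
// Sketch: map an npm "new package version" event (registry metadata shape)
// to Scrapfly Scrape Page parameters. The target URL here is the package's
// npmjs.com page for the new version; helper name and URL are illustrative.
function buildScrapeParams(event) {
  const latest = event["dist-tags"].latest;
  return {
    url: `https://www.npmjs.com/package/${event.name}/v/${latest}`,
    format: "markdown", // one of the Scrape Page "Format" options
    retry: true,        // improve reliability with retries on failure
  };
}

const sample = {
  name: "@pipedream/platform",
  "dist-tags": { latest: "3.0.3" },
};
console.log(buildScrapeParams(sample).url);
// → https://www.npmjs.com/package/@pipedream/platform/v/3.0.3
```

In the Pipedream UI you would express the same mapping with prop references like `{{steps.trigger.event["dist-tags"].latest}}` rather than code.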

Details

This integration uses pre-built, source-available components from Pipedream's GitHub repo. These components are developed by Pipedream and the community, and verified and maintained by Pipedream.

To contribute an update to an existing component or create a new component, create a PR on GitHub. If you're new to Pipedream component development, you can start with the quickstarts for trigger and action development, and then review the component API reference.

Trigger

Description: Emit new event when a new version of an npm package is published. [See the documentation](https://github.com/npm/registry/blob/main/docs/responses/package-metadata.md)
Version: 0.0.1
Key: npm-new-package-version

Trigger Code

import { DEFAULT_POLLING_SOURCE_TIMER_INTERVAL } from "@pipedream/platform";
import app from "../../npm.app.mjs";

export default {
  key: "npm-new-package-version",
  name: "New Package Version",
  description: "Emit new event when a new version of an npm package is published. [See the documentation](https://github.com/npm/registry/blob/main/docs/responses/package-metadata.md)",
  version: "0.0.1",
  type: "source",
  dedupe: "unique",
  props: {
    app,
    db: "$.service.db",
    timer: {
      type: "$.interface.timer",
      default: {
        intervalSeconds: DEFAULT_POLLING_SOURCE_TIMER_INTERVAL,
      },
    },
    packageName: {
      type: "string",
      label: "Package",
      description: "Enter an npm package name. Leave blank for all",
      default: "@pipedream/platform",
    },
  },
  async run() {
    const {
      app,
      packageName,
    } = this;

    const response = await app.getPackageMetadata({
      debug: true,
      packageName,
    });

    const { "dist-tags": { latest: latestVersion } } = response;

    this.$emit(response, {
      id: latestVersion,
      summary: `New Package Version ${latestVersion}`,
      ts: Date.parse(response.modified),
    });
  },
};
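The trigger polls the public npm registry metadata endpoint (`https://registry.npmjs.org/<package>`) and uses `dist-tags.latest` as the dedupe key. A standalone sketch of the extraction step, assuming the documented package-metadata response shape (`extractLatest` is an illustrative helper, not part of the component):

```javascript
// Sketch: pull the latest version and its publish timestamp out of an
// npm registry metadata document. `time` maps versions to ISO timestamps;
// `modified` is the document-level fallback the component uses for `ts`.
function extractLatest(metadata) {
  const latest = metadata["dist-tags"].latest;
  return {
    version: latest,
    publishedAt: metadata.time?.[latest] ?? metadata.modified,
  };
}

const metadata = {
  "dist-tags": { latest: "1.2.0" },
  time: { "1.2.0": "2024-05-01T12:00:00.000Z" },
  modified: "2024-05-01T12:00:00.000Z",
};
console.log(extractLatest(metadata));
// → { version: '1.2.0', publishedAt: '2024-05-01T12:00:00.000Z' }
```

Because `dedupe: "unique"` keys events on this version string, re-polling the same metadata emits nothing new until `dist-tags.latest` changes.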

Trigger Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI and CLI.
| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| npm | app | app | This component uses the npm app. |
| N/A | db | $.service.db | This component uses `$.service.db` to maintain state between executions. |
| N/A | timer | $.interface.timer | |
| Package | packageName | string | Enter an npm package name. Leave blank for all. |

Trigger Authentication

The npm API does not require authentication.

About npm

Node package manager

Action

Description: Extract data from a specified web page. [See the documentation](https://scrapfly.io/docs/scrape-api/getting-started)
Version: 0.0.1
Key: scrapfly-scrape-page

Action Code

import { ConfigurationError } from "@pipedream/platform";
import {
  FORMAT_OPTIONS,
  PROXY_COUNTRY_OPTIONS,
  PROXY_POOL_OPTIONS,
} from "../../common/constants.mjs";
import { parseObject } from "../../common/utils.mjs";
import scrapfly from "../../scrapfly.app.mjs";

export default {
  key: "scrapfly-scrape-page",
  name: "Scrape Page",
  description: "Extract data from a specified web page. [See the documentation](https://scrapfly.io/docs/scrape-api/getting-started)",
  version: "0.0.1",
  type: "action",
  props: {
    scrapfly,
    url: {
      propDefinition: [
        scrapfly,
        "url",
      ],
    },
    headers: {
      type: "object",
      label: "Headers",
      description: "Pass custom headers to the request.",
      optional: true,
    },
    lang: {
      type: "string",
      label: "Language",
      description: "Select page language. By default it uses the language of the selected proxy location. Behind the scenes, it configures the `Accept-Language` HTTP header. If the website supports the language, the content will be in that language. **Note: you cannot set the `Accept-Language` header manually.** [See the documentation](https://scrapfly.io/docs/scrape-api/getting-started#spec)",
      optional: true,
    },
    os: {
      type: "string",
      label: "Operating System",
      description: "Operating System, if not selected it's random. **Note: you cannot set os parameter and `User-Agent` header at the same time.** [See the documentation](https://scrapfly.io/docs/scrape-api/getting-started#spec)",
      optional: true,
    },
    timeout: {
      type: "integer",
      label: "Timeout",
      description: "Timeout in milliseconds. It represents the maximum time allowed for Scrapfly to perform the scrape. Since `timeout` is not trivial to understand see our [extended documentation on timeouts](https://scrapfly.io/docs/scrape-api/understand-timeout)",
      optional: true,
    },
    format: {
      type: "string",
      label: "Format",
      description: "Format of the response.",
      options: FORMAT_OPTIONS,
      optional: true,
    },
    retry: {
      type: "boolean",
      label: "Retry",
      description: "Improve reliability with retries on failure.",
      optional: true,
    },
    proxifiedResponse: {
      type: "boolean",
      label: "Proxified Response",
      description: "Return the content of the page directly.",
      optional: true,
    },
    debug: {
      type: "boolean",
      label: "Debug",
      description: "Store the API result and take a screenshot if rendering js is enabled.",
      optional: true,
    },
    correlationId: {
      type: "string",
      label: "Correlation ID",
      description: "Helper ID for correlating a group of scrapes.",
      optional: true,
    },
    tags: {
      type: "string[]",
      label: "Tags",
      description: "Add tags to your scrapes to group them.",
      optional: true,
    },
    dns: {
      type: "boolean",
      label: "DNS",
      description: "Query and retrieve target DNS information.",
      optional: true,
    },
    ssl: {
      type: "boolean",
      label: "SSL",
      description: "SSL option.",
      optional: true,
    },
    proxyPool: {
      type: "string",
      label: "Proxy Pool",
      description: "Select the proxy pool to use.",
      optional: true,
      options: PROXY_POOL_OPTIONS,
      reloadProps: true,
    },
  },
  async additionalProps() {
    const props = {};
    props.country = {
      type: "string",
      label: "Country",
      description: "Proxy country location. If not set it chooses a random location available. A reference to a country must be ISO 3166 alpha-2 (2 letters). The available countries are defined by the proxy pool you use. [See the documentation](https://scrapfly.io/docs/scrape-api/getting-started#spec)",
      optional: true,
      options: PROXY_COUNTRY_OPTIONS[this.proxyPool],
    };
    return props;
  },
  async run({ $ }) {
    try {
      let headers = [];
      if (this.headers) {
        // Parse once, then read values from the parsed object (the raw
        // prop may be a JSON string, in which case indexing it directly
        // would yield undefined values).
        const parsedHeaders = parseObject(this.headers);
        headers = Object.keys(parsedHeaders)
          .reduce((acc, key) => {
            acc.push(`headers[${key}]=${encodeURIComponent(parsedHeaders[key])}`);
            return acc;
          }, []);
      }
      const params = {
        url: this.url,
        proxy_pool: this.proxyPool,
        country: this.country,
        lang: this.lang,
        os: this.os,
        timeout: this.timeout,
        format: this.format,
        retry: this.retry,
        proxified_response: this.proxifiedResponse,
        debug: this.debug,
        correlation_id: this.correlationId,
        tags: parseObject(this.tags),
        dns: this.dns,
        ssl: this.ssl,
        headers,
      };

      const response = await this.scrapfly.extractWebPageContent({
        $,
        params,
      });

      $.export("$summary", `Successfully scraped content from ${this.url}`);
      return response;
    } catch ({ response: { data: { message } } }) {
      throw new ConfigurationError(message);
    }
  },
};
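The action serializes custom headers into `headers[<name>]=<url-encoded value>` query-string fragments before sending them to the Scrape API. A standalone sketch of that encoding step (the helper name is illustrative):

```javascript
// Sketch: encode custom request headers the way the action does, as
// `headers[Name]=<value>` fragments with the value URL-encoded.
function encodeHeaderParams(headers) {
  return Object.entries(headers).map(
    ([key, value]) => `headers[${key}]=${encodeURIComponent(value)}`,
  );
}

console.log(encodeHeaderParams({
  "X-Client": "pipedream",
  "Accept": "text/html, application/xhtml+xml",
}));
// → [ 'headers[X-Client]=pipedream',
//     'headers[Accept]=text%2Fhtml%2C%20application%2Fxhtml%2Bxml' ]
```

Encoding only the value (not the bracketed key) keeps header names readable in the query string while protecting reserved characters such as `/`, `,`, and `+` in the values.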

Action Configuration

This component may be configured based on the props defined in the component code. Pipedream automatically prompts for input values in the UI.

| Label | Prop | Type | Description |
| --- | --- | --- | --- |
| Scrapfly | scrapfly | app | This component uses the Scrapfly app. |
| URL | url | string | This URL is used to transform any relative URLs in the document into absolute URLs automatically. It can be either the base URL or the exact URL of the document. Must be URL-encoded. |
| Headers | headers | object | Pass custom headers to the request. |
| Language | lang | string | Select page language. By default it uses the language of the selected proxy location. Behind the scenes, it configures the `Accept-Language` HTTP header. If the website supports the language, the content will be in that language. Note: you cannot set the `Accept-Language` header manually. See the documentation. |
| Operating System | os | string | Operating system; if not selected, it's random. Note: you cannot set the os parameter and the `User-Agent` header at the same time. See the documentation. |
| Timeout | timeout | integer | Timeout in milliseconds. It represents the maximum time allowed for Scrapfly to perform the scrape. Since `timeout` is not trivial to understand, see the extended documentation on timeouts. |
| Format | format | string | Format of the response. Select a value from the drop-down menu: `raw`, `text`, `markdown`, `markdown:no_links,no_images`, `LLM`, `clean_html`, `json`. |
| Retry | retry | boolean | Improve reliability with retries on failure. |
| Proxified Response | proxifiedResponse | boolean | Return the content of the page directly. |
| Debug | debug | boolean | Store the API result and take a screenshot if JS rendering is enabled. |
| Correlation ID | correlationId | string | Helper ID for correlating a group of scrapes. |
| Tags | tags | string[] | Add tags to your scrapes to group them. |
| DNS | dns | boolean | Query and retrieve target DNS information. |
| SSL | ssl | boolean | SSL option. |
| Proxy Pool | proxyPool | string | Select a value from the drop-down menu: `public_datacenter_pool`, `public_residential_pool`. |

Action Authentication

Scrapfly uses API keys for authentication. When you connect your Scrapfly account, Pipedream securely stores the keys so you can easily authenticate to Scrapfly APIs in both code and no-code steps.

About Scrapfly

Scrapfly is a web scraping API for developers.

More Ways to Connect Scrapfly + npm

- AI Data Extraction with Scrapfly API on New Download Counts from npm API
- AI Data Extraction with Scrapfly API on New Package Version from npm API
- Retrieve Scrapfly Account Info with Scrapfly API on New Download Counts from npm API
- Retrieve Scrapfly Account Info with Scrapfly API on New Package Version from npm API
- Scrape Page with Scrapfly API on New Download Counts from npm API

Triggers:

- New Download Counts from the npm API: Emit new event with the latest count of downloads for an npm package. See the documentation.
- New Package Version from the npm API: Emit new event when a new version of an npm package is published. See the documentation.

Actions:

- AI Data Extraction with the Scrapfly API: Automate content extraction from any text-based source using AI, LLM, and custom parsing. See the documentation.
- Retrieve Scrapfly Account Info with the Scrapfly API: Retrieve current subscription and account usage details from Scrapfly. See the documentation.
- Scrape Page with the Scrapfly API: Extract data from a specified web page. See the documentation.
