
dataflowkit vs spidr

dataflowkit — BSD-3-Clause license, 655 stars, created Feb 09 2017, last commit 2024-08-30 (11 days ago)
spidr — MIT license, 800 stars, created Jul 25 2009, ~1.8 thousand downloads/month, latest version 0.7.1 (7 months ago)

Dataflow kit ("DFK") is a Web Scraping framework for Gophers. It extracts data from web pages, following the specified CSS Selectors. You can use it in many ways for data mining, data processing or archiving.

A web-scraping pipeline consists of three general components:

  • Downloading an HTML web page (Fetch Service);
  • Parsing the HTML page and retrieving the data we're interested in (Parse Service);
  • Encoding the parsed data to CSV, MS Excel, JSON, JSON Lines or XML format.

For fetching, dataflowkit provides two types of page fetchers:

  • Base fetcher uses Go's standard HTTP client to fetch pages as-is. It is faster than the Chrome fetcher, but it cannot render dynamic, JavaScript-driven web pages.
  • Chrome fetcher is intended for rendering dynamic, JavaScript-based content. It sends requests to Chrome running in headless mode.

The fetcher is selected with the fetcherType field in the scraping configuration (see the example below).

For parsing, dataflowkit extracts data from the downloaded web page following the rules listed in a JSON configuration file. Extracted data is returned in CSV, MS Excel, JSON or XML format.

Some dataflowkit features:

  • Scraping of JavaScript-generated pages;
  • Data extraction from paginated websites;
  • Processing of infinite-scroll pages;
  • Scraping of websites behind a login form;
  • Cookie and session handling;
  • Following links and processing detail pages;
  • Managing delays between requests per domain;
  • Following robots.txt directives;
  • Saving intermediate data in Diskv or MongoDB; the storage interface is flexible enough to add more storage types easily;
  • Encoding results to CSV, MS Excel, JSON (Lines) or XML formats;
  • Dataflow kit is fast: it takes about 4-6 seconds to fetch and then parse 50 pages;
  • Dataflow kit can handle fairly large volumes of data: our tests show that parsing approximately 4 million pages takes about 7 hours.

Spidr is a Ruby gem that provides a simple and flexible way to spider and scrape websites. It allows you to easily navigate through a website, following links and scraping data as you go. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it provides a simple and intuitive API for defining and running web scraping operations.

One of the main features of Spidr is its ability to spider a website, following all the links on a page and visiting all the pages of a website. This allows you to easily and quickly scrape large amounts of data from a website, without having to manually specify which pages to visit.

In addition to its spidering capabilities, Spidr also provides a variety of other features that can simplify the web scraping process. It can automatically filter which links to follow and which pages to visit, it can handle cookies and authentication, and it can automatically store the scraped data in a database or file. It also provides built-in support for parallelism and queueing to speed up the scraping process.
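
For example, link filtering and per-page callbacks are set on the spider agent inside the block. A minimal sketch using methods from Spidr's documented agent API (the URL and the asset-filter pattern here are only illustrative):

require 'spidr'

# Stay on a single host and skip obvious non-HTML assets while spidering.
Spidr.site('https://example.com/') do |spider|
  spider.ignore_links_like(/\.(pdf|zip|jpe?g|png)$/i)

  # Called for every successfully fetched HTML page.
  spider.every_html_page do |page|
    title = page.doc.at('title')&.text
    puts "#{page.url}  #{title}"
  end

  # Called for every URL that could not be fetched.
  spider.every_failed_url do |url|
    warn "failed to fetch #{url}"
  end
end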

Example Use


Dataflowkit uses a JSON configuration like the following:
{
  "name": "collection",
  "request": {
      "url": "https://example.com"
  },
  "fields": [
      {
          "name": "Title",
          "selector": ".product-container a",
          "extractor": {
              "types": [
                  "text",
                  "href"
              ],
              "filters": [
                  "trim",
                  "lowerCase"
              ],
              "params": {
                  "includeIfEmpty": false
              }
          }
      },
      {
          "name": "Image",
          "selector": "#product-container img",
          "extractor": {
              "types": [
                  "alt",
                  "src",
                  "width",
                  "height"
              ],
              "filters": [
                  "trim",
                  "upperCase"
              ]
          }
      },
      {
          "name": "Buyinfo",
          "selector": ".buy-info",
          "extractor": {
              "types": [
                  "text"
              ],
              "params": {
                  "includeIfEmpty": false
              }
          }
      }
  ],
  "paginator": {
      "selector": ".next",
      "attr": "href",
      "maxPages": 3
  },
  "format": "json",
  "fetcherType": "chrome",
  "paginateResults": false
}
which is then ingested through a CLI command.
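
In practice the configuration above is saved to a file and POSTed to the running parse service. A minimal sketch, assuming the fetch and parse daemons are already running and that the parse service listens on its documented default port (the port, endpoint path and payload.json file name should be treated as assumptions):

curl -XPOST 127.0.0.1:8001/parse --data-binary "@payload.json"

Spidr, by contrast, is driven directly from Ruby code: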
require 'spidr'
require 'nokogiri' # loaded by spidr as a dependency; required explicitly for clarity

Spidr.start_at("http://example.com") do |spider|
  spider.every_page do |page|
    puts "Visiting: #{page.url}"

    # Extract data from the page using Nokogiri
    doc = Nokogiri::HTML(page.body)
    title = doc.css("title").text
    puts "Title: #{title}"
  end
end
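
Note that Spidr already exposes each page's parsed Nokogiri document via page.doc, so re-parsing page.body as above is optional and is shown here only to make the Nokogiri step explicit.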
