dataflowkit vs Wombat

dataflowkit: BSD-3-Clause · 663 stars · Feb 09 2017 · last updated 2024-12-03 (6 days ago)
Wombat: MIT · 1,315 stars · Dec 27 2011 · ~1.6 thousand downloads/month · latest release 3.0.0 (2 years ago)

Dataflow kit ("DFK") is a web-scraping framework for Gophers. It extracts data from web pages by following the specified CSS selectors, and can be used for data mining, data processing, or archiving.

A web-scraping pipeline consists of three general components (sketched in Go after the list):

  • Downloading an HTML web page (Fetch Service);
  • Parsing the HTML page and retrieving the data we're interested in (Parse Service);
  • Encoding the parsed data to CSV, MS Excel, JSON, JSON Lines, or XML format.
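As a rough illustration of those three stages, a minimal Go sketch might fetch with the standard net/http client, parse with goquery CSS selectors, and encode with encoding/json. This is only a conceptual sketch, not DFK's actual API; the URL and selector below are placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// 1. Fetch: download the HTML page (the Fetch Service's role).
	resp, err := http.Get("https://example.com") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// 2. Parse: pull out the data we care about via CSS selectors (the Parse Service's role).
	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		panic(err)
	}
	var titles []string
	doc.Find("h1, h2").Each(func(_ int, s *goquery.Selection) { // placeholder selector
		titles = append(titles, s.Text())
	})

	// 3. Encode: serialize the extracted data (JSON here; CSV or XML work the same way).
	out, _ := json.Marshal(map[string][]string{"titles": titles})
	fmt.Println(string(out))
}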

For fetching, dataflowkit provides two types of page fetchers (a headless-rendering sketch follows the list):

  • Base fetcher uses the standard Go HTTP client to fetch pages as is. It is faster than the Chrome fetcher, but it cannot render dynamic, JavaScript-driven web pages.
  • Chrome fetcher is intended for rendering dynamic, JavaScript-based content. It sends requests to Chrome running in headless mode.
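To make the distinction concrete, the snippet below drives headless Chrome from Go with the chromedp library. It is only an illustration of what "rendering before scraping" means; chromedp and the URL are assumptions made for the example, not necessarily what DFK's Chrome fetcher uses internally.

package main

import (
	"context"
	"fmt"

	"github.com/chromedp/chromedp"
)

func main() {
	// Start a headless Chrome session.
	ctx, cancel := chromedp.NewContext(context.Background())
	defer cancel()

	// Read the DOM only after JavaScript has executed, which a plain
	// net/http GET (the base fetcher's approach) cannot do.
	var html string
	err := chromedp.Run(ctx,
		chromedp.Navigate("https://example.com"), // placeholder URL
		chromedp.OuterHTML("html", &html),
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(html), "bytes of rendered HTML")
}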

For parsing, dataflowkit extracts data from the downloaded web page following the rules listed in a JSON configuration file. Extracted data is returned in CSV, MS Excel, JSON, or XML format.

Some dataflowkit features:

  • Scraping of JavaScript-generated pages;
  • Data extraction from paginated websites;
  • Processing of infinitely scrolled pages;
  • Scraping of websites behind a login form;
  • Cookies and sessions handling;
  • Following links and processing detail pages;
  • Managing delays between requests per domain (see the sketch after this list);
  • Following robots.txt directives;
  • Saving intermediate data in Diskv or MongoDB; the storage interface is flexible enough to add more storage types easily;
  • Encoding results to CSV, MS Excel, JSON (Lines), or XML format;
  • Dataflow kit is fast: it takes about 4-6 seconds to fetch and then parse 50 pages;
  • Dataflow kit can handle quite large volumes of data: our tests show that parsing approximately 4 million pages takes about 7 hours.
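The per-domain delay feature, for example, boils down to remembering when a host was last requested and sleeping until the configured interval has elapsed. The sketch below shows that idea with the Go standard library only; it is not DFK's actual implementation, and the type and names are made up for illustration.

package main

import (
	"fmt"
	"sync"
	"time"
)

// domainThrottle enforces a minimum delay between requests to the same host.
// (Hypothetical helper for illustration; not part of dataflowkit.)
type domainThrottle struct {
	mu    sync.Mutex
	last  map[string]time.Time
	delay time.Duration
}

// Wait blocks until at least `delay` has passed since the previous request to host.
func (t *domainThrottle) Wait(host string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if prev, seen := t.last[host]; seen {
		if sleep := t.delay - time.Since(prev); sleep > 0 {
			time.Sleep(sleep) // sleeping under the lock keeps the sketch simple but serializes all callers
		}
	}
	t.last[host] = time.Now()
}

func main() {
	throttle := &domainThrottle{last: map[string]time.Time{}, delay: 500 * time.Millisecond}
	for i := 0; i < 3; i++ {
		throttle.Wait("example.com") // placeholder host
		fmt.Println("request", i, "at", time.Now().Format("15:04:05.000"))
	}
}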

Wombat is a Ruby gem that makes it easy to scrape websites and extract structured data from HTML pages. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it provides a simple and intuitive API for defining and running web scraping operations.

One of the main features of Wombat is its ability to extract structured data from HTML pages using a simple, CSS-like syntax. It allows you to define a set of rules for extracting data from a page, and then automatically applies those rules to the page's HTML to extract the desired data. This makes it easy to extract data from even complex and dynamic pages, without having to write a lot of custom code.

In addition to its data extraction capabilities, Wombat provides a variety of other features that simplify the web-scraping process. It can automatically follow links and scrape multiple pages, handle pagination and AJAX requests, and deal with cookies and authentication. It also provides built-in support for parallelism and queueing to speed up the scraping process.

Example Use


Dataflowkit uses a JSON configuration like this:
{
  "name": "collection",
  "request": {
      "url": "https://example.com"
  },
  "fields": [
      {
          "name": "Title",
          "selector": ".product-container a",
          "extractor": {
              "types": [
                  "text",
                  "href"
              ],
              "filters": [
                  "trim",
                  "lowerCase"
              ],
              "params": {
                  "includeIfEmpty": false
              }
          }
      },
      {
          "name": "Image",
          "selector": "#product-container img",
          "extractor": {
              "types": [
                  "alt",
                  "src",
                  "width",
                  "height"
              ],
              "filters": [
                  "trim",
                  "upperCase"
              ]
          }
      },
      {
          "name": "Buyinfo",
          "selector": ".buy-info",
          "extractor": {
              "types": [
                  "text"
              ],
              "params": {
                  "includeIfEmpty": false
              }
          }
      }
  ],
  "paginator": {
      "selector": ".next",
      "attr": "href",
      "maxPages": 3
  },
  "format": "json",
  "fetcherType": "chrome",
  "paginateResults": false
}
which is then ingested through a CLI command. Wombat, by contrast, defines its scraping rules directly in Ruby:
require 'wombat'

Wombat.crawl do
  base_url "https://www.github.com"
  path "/"

  headline xpath: "//h1"
  subheading css: "p.alt-lead"

  what_is({ css: ".one-fourth h4" }, :list)

  links do
    explore xpath: '/html/body/header/div/div/nav[1]/a[4]' do |e|
      e.gsub(/Explore/, "Love")
    end

    features css: '.nav-item-opensource'
    business css: '.nav-item-business'
  end
end
Running this crawl will result in:
{
  "headline"=>"How people build software",
  "subheading"=>"Millions of developers use GitHub to build personal projects, support their businesses, and work together on open source technologies.",
  "what_is"=>[
    "For everything you build",
    "A better way to work",
    "Millions of projects",
    "One platform, from start to finish"
  ],
  "links"=>{
    "explore"=>"Love",
    "features"=>"Open source",
    "business"=>"Business"
  }
}
