
ferret vs spidr

ferret: Apache-2.0 · 5,716 stars · 58.1 thousand downloads/month · latest release v0.18.0 (1 year, 8 months ago)
spidr: MIT · 809 stars · 1.9 thousand downloads/month · latest release 0.7.1 (10 months ago)

Ferret is a web scraping system that aims to simplify data extraction from the web for UI testing, machine learning, analytics, and more. Ferret lets users focus on the data: it abstracts away the technical details and complexity of the underlying technologies behind its own declarative language. It is highly portable, extensible, and fast.

Features

  • Declarative language
  • Support of both static and dynamic web pages
  • Embeddable
  • Extensible

Ferret can also be used from Python through pyfer.

Spidr is a Ruby gem that provides a simple and flexible way to spider and scrape websites. It lets you navigate through a website, following links and scraping data as you go. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and exposes an intuitive API for defining and running web scraping operations.

One of the main features of Spidr is its ability to spider a website, following the links on each page and visiting every page of the site. This lets you quickly scrape large amounts of data without having to manually specify which pages to visit.
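For example, a crawl that simply collects every URL Spidr discovers on one host takes only a few lines. The sketch below assumes a placeholder start URL and uses Spidr.site, which keeps the crawl on the starting URL's host, together with the every_url callback:

require 'spidr'

# Collect every URL discovered while spidering a single host.
urls = []

Spidr.site("http://example.com/") do |spider|
  spider.every_url do |url|
    urls << url
  end
end

puts "Found #{urls.length} URLs"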

In addition to its spidering capabilities, Spidr provides a number of features that simplify the scraping process. It can filter which links to follow and which pages to visit, it can handle cookies and HTTP authentication, and the data you extract is easy to write out to a file or database with ordinary Ruby code. It also maintains a queue of pending URLs and a history of visited ones, so large crawls stay organized and pages are not fetched twice.
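As a rough sketch of how these pieces combine (the start URL, the ignore patterns, and the pages.csv output file are placeholders), the crawl below skips uninteresting links with ignore_links_like and writes each HTML page's title to a CSV:

require 'spidr'
require 'csv'

CSV.open("pages.csv", "w") do |csv|
  csv << ["url", "title"]

  Spidr.site("http://example.com/") do |spider|
    # Filter rules: never follow links matching these patterns.
    spider.ignore_links_like(/logout/)
    spider.ignore_links_like(/\.pdf\z/i)

    spider.every_page do |page|
      # Only HTML pages carry a parsed Nokogiri document.
      next unless page.html? && page.doc

      csv << [page.url.to_s, page.doc.at("title")&.text]
    end
  end
end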

Example Use


// Example scraper for Google in Ferret:
LET google = DOCUMENT("https://www.google.com/", {
    driver: "cdp",
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36"
})

HOVER(google, 'input[name="q"]')
WAIT(RAND(100))
INPUT(google, 'input[name="q"]', @criteria, 30)
WAIT(RAND(100))
CLICK(google, 'input[name="btnK"]')

WAITFOR EVENT "navigation" IN google

WAIT_ELEMENT(google, "#res")

LET results = ELEMENTS(google, X("//*[text() = 'Search Results']/following-sibling::*/*"))

FOR el IN results
    RETURN {
        title: INNER_TEXT(el, 'h3')?,
        description: INNER_TEXT(el, X("//em/parent::*")),
        url: ELEMENT(el, 'a')?.attributes.href
    }

# Example spider in Spidr:
require 'spidr'
require 'nokogiri'

Spidr.start_at("http://example.com") do |spider|
  spider.every_page do |page|
    puts "Visiting: #{page.url}"

    # Extract data from the page using Nokogiri
    doc = Nokogiri::HTML(page.body)
    title = doc.css("title").text
    puts "Title: #{title}"
  end
end
