Ferret vs Gerapy
Ferret is a web scraping system that aims to simplify data extraction from the web for UI testing, machine learning, analytics and more. Ferret lets users focus on the data: it abstracts away the technical details and complexity of the underlying technologies behind its own declarative query language. It is portable, extensible, and fast.
Features
- Declarative language
- Support for both static and dynamic web pages
- Embeddable
- Extensible
Ferret itself is implemented in Go; it can also be driven from Python through the pyfer wrapper.
Gerapy is a distributed crawler management framework based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django and Vue.js.
It is built on top of the Scrapy framework and provides a simple, easy-to-use interface for managing web scraping tasks. Gerapy also supports scheduling and distributed crawling (see the sketch below) and includes a built-in web-based dashboard for monitoring and managing those tasks. Additionally, it is designed to be highly extensible, allowing users to create custom plugins and integrations.
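Under the hood, that scheduling layer talks to Scrapyd nodes through the Scrapyd-API client listed above. A minimal sketch of the kind of call it makes, assuming a Scrapyd daemon running on localhost:6800 and a deployed project and spider whose names ("quotes_project", "quotes") are purely illustrative:

from scrapyd_api import ScrapydAPI

# Point the client at a Scrapyd node; 6800 is Scrapyd's default port.
# The address, project name, and spider name here are assumptions for the example.
scrapyd = ScrapydAPI("http://localhost:6800")

# Schedule a run of the "quotes" spider from the deployed "quotes_project" project.
job_id = scrapyd.schedule("quotes_project", "quotes")

# Query the node for pending/running/finished jobs, much as Gerapy's dashboard does.
jobs = scrapyd.list_jobs("quotes_project")
print(job_id, jobs)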
Overall, Gerapy is a useful tool for those looking to automate web scraping tasks and extract data from websites.
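The scraping logic that Gerapy packages and deploys is ordinary Scrapy code. For example, a Gerapy-managed project might contain a minimal spider along these lines (the target site and CSS selectors are illustrative, not part of Gerapy itself):

import scrapy


class QuotesSpider(scrapy.Spider):
    # Gerapy packages spiders like this into a Scrapy project and deploys
    # them to Scrapyd nodes for execution.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination links until there are none left.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)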
Example Use
// Example scraper for Google in Ferret (FQL).
// The page is loaded with the "cdp" driver, i.e. a Chrome instance driven over the Chrome DevTools Protocol.
LET google = DOCUMENT("https://www.google.com/", {
    driver: "cdp",
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36"
})

// Type the search term into the query box;
// @criteria is a query parameter supplied at run time.
HOVER(google, 'input[name="q"]')
WAIT(RAND(100))
INPUT(google, 'input[name="q"]', @criteria, 30)
WAIT(RAND(100))

// Submit the search and wait for the results page to render.
CLICK(google, 'input[name="btnK"]')
WAITFOR EVENT "navigation" IN google
WAIT_ELEMENT(google, "#res")

// Return the title, description and URL of each search result.
LET results = ELEMENTS(google, X("//*[text() = 'Search Results']/following-sibling::*/*"))

FOR el IN results
    RETURN {
        title: INNER_TEXT(el, 'h3')?,
        description: INNER_TEXT(el, X("//em/parent::*")),
        url: ELEMENT(el, 'a')?.attributes.href
    }