
ferret vs ayakashi

ferret:   Apache-2.0 license, 5,716 GitHub stars, ~58.1 thousand downloads/month, latest release v0.18.0 (1 year, 8 months ago)
ayakashi: AGPL-3.0-only license, 212 GitHub stars, 79 downloads/month, latest release 1.0.0-beta8.4 (1 year, 5 months ago)

Ferret is a web scraping system that aims to simplify data extraction from the web for UI testing, machine learning, analytics and more. ferret lets users focus on the data by abstracting away the technical details and complexity of the underlying technologies behind its own declarative language. It is extremely portable, extensible, and fast.

Features

  • Declarative language
  • Support of both static and dynamic web pages
  • Embeddable
  • Extensible

Ferret itself is written in Go; it can also be used from Python through pyfer.

Ayakashi is a web scraping library for Node.js that allows developers to easily extract structured data from websites. It drives headless Chrome and provides a simple and intuitive API for defining and querying the structure of a website.

Features:

  • Powerful querying and data models
    Ayakashi's way of finding things in the page and using them is done with props and domQL. Directly inspired by the relational database world (and SQL), domQL makes DOM access easy and readable no matter how obscure the page's structure is. Props are the way to package domQL expressions as re-usable structures which can then be passed around to actions or used as models for data extraction (see the sketch after this list).
  • High level builtin actions
    Ready-made actions so you can focus on what matters. Easily handle infinite scrolling, single-page navigation, events and more. Plus, you can always build your own actions, either from scratch or by composing other actions.
  • Preload code on pages
    Need to include a bunch of code, a library you made or a 3rd party module and make it available on a page? Preloaders have you covered.
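
The sketch below shows how the first two points fit together in a typical Ayakashi scraper file (the kind generated and run by the ayakashi CLI). The URL, prop names and CSS selectors are hypothetical placeholders, and exact option names may vary between versions; only the general pattern of defining props with select/where and feeding them to actions and extract is taken from the points above.

// scrapers/products.js -- a sketch of an Ayakashi scraper file
// (URL, prop names and class/id names below are hypothetical)
module.exports = async function(ayakashi, input, params) {
    await ayakashi.goTo("https://example.com/products");

    // a prop: a named, re-usable domQL expression
    ayakashi
        .select("productLinks")
        .where({class: {eq: "product-link"}});

    // another prop, for a "load more" button
    ayakashi
        .selectOne("loadMoreButton")
        .where({id: {eq: "load-more"}});

    // builtin actions operate on props
    await ayakashi.click("loadMoreButton");

    // pull the matches of the product prop
    const products = await ayakashi.extract("productLinks");

    // whatever the scraper returns is handed to the next step of the pipeline
    return {products};
};

Preloaders (the third point) are declared in the project configuration and loaded into each page, so the code they expose is available to props and actions when the scraper runs.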

Example Use


// Example scraper for Google in Ferret:
LET google = DOCUMENT("https://www.google.com/", {
    driver: "cdp",
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36"
})

HOVER(google, 'input[name="q"]')
WAIT(RAND(100))
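// @criteria is a query parameter provided by the caller at run time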
INPUT(google, 'input[name="q"]', @criteria, 30)
WAIT(RAND(100))
CLICK(google, 'input[name="btnK"]')

WAITFOR EVENT "navigation" IN google

WAIT_ELEMENT(google, "#res")

LET results = ELEMENTS(google, X("//*[text() = 'Search Results']/following-sibling::*/*"))

FOR el IN results
    RETURN {
        title: INNER_TEXT(el, 'h3')?,
        description: INNER_TEXT(el, X("//em/parent::*")),
        url: ELEMENT(el, 'a')?.attributes.href
    }
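
// Example scraper for a product page in Ayakashi: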
const ayakashi = require("ayakashi");
const myAyakashi = ayakashi.init();

// navigate the browser
await myAyakashi.goTo("https://example.com/product");

// parsing HTML
// first by defining a selector
myAyakashi
    .select("productList")
    .where({class: {eq: "product-item"}});

// then executing the selector on the current page:
const productList = await myAyakashi.extract("productList");
console.log(productList);
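
Building on the snippet above, additional props can be defined for individual fields and extracted the same way. The class names used here (product-title, product-price) are hypothetical placeholders, and extract() is assumed to return the text of each match, as in the example above.

// more props for individual fields (hypothetical class names)
myAyakashi
    .select("productTitles")
    .where({class: {eq: "product-title"}});
myAyakashi
    .select("productPrices")
    .where({class: {eq: "product-price"}});

// extract each prop separately
const titles = await myAyakashi.extract("productTitles");
const prices = await myAyakashi.extract("productPrices");
console.log(titles, prices);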
