
ayakashi vs ScrapydWeb

ayakashi     License: AGPL-3.0-only | Stars: 212 | Downloads: 79/month | Created: Apr 18 2019 | Latest release: 1.0.0-beta8.4 (1 year, 5 months ago)
ScrapydWeb   License: GPL-3.0 | Stars: 3,176 | Downloads: 1.6 thousand/month | Created: Sep 30 2018 | Latest release: 1.5.1 (2 months ago)

Ayakashi is a web scraping library for Node.js that allows developers to easily extract structured data from websites. It is built on top of the popular "puppeteer" library and provides a simple and intuitive API for defining and querying the structure of a website.

Features:

  • Powerful querying and data models
    Ayakashi finds things on the page and works with them through props and domQL. Directly inspired by the relational database world (and SQL), domQL makes DOM access easy and readable no matter how obscure the page's structure is. Props package domQL expressions as re-usable structures which can then be passed to actions or used as models for data extraction.
  • High-level builtin actions
    Ready-made actions let you focus on what matters. Easily handle infinite scrolling, single-page navigation, events and more. You can always build your own actions, either from scratch or by composing other actions (see the sketch after this list).
  • Preload code on pages
    Need to include a bunch of code, a library you made, or a third-party module and make it available on a page? Preloaders have you covered.
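
To make the prop and action concepts concrete, here is a minimal sketch in the style of the full example further down. The select/where/extract calls follow the API shown there; infiniteScrollIn stands in for the library's infinite-scrolling helper, and its exact name and signature should be treated as assumptions.

// inside a scraper function, where the ayakashi instance is provided
// define a prop for the scrollable container and one for its items
ayakashi
    .select("feed")
    .where({id: {eq: "feed"}});
ayakashi
    .select("feedItems")
    .where({class: {eq: "feed-item"}});

// builtin action: keep scrolling the container until no new content appears
// (the action name is an assumption; check the ayakashi docs)
await ayakashi.infiniteScrollIn("feed");

// extract the items the scrolling revealed
const items = await ayakashi.extract("feedItems");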

ScrapydWeb is a web-based management tool for the Scrapyd service. It is built using the Python Flask framework and allows you to easily manage and monitor your Scrapy spider projects through a web interface.

ScrapydWeb lets you view the status of running spiders, inspect the logs of completed runs, schedule new spider runs (including multiple spiders simultaneously), and manage spider settings and configurations, all through a user-friendly web interface.

You can install the package via pip (pip install scrapydweb) and then start it by running the scrapydweb command in your terminal, as shown below.
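
Both steps, exactly as the text above describes:

pip install scrapydweb
scrapydweb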

It will start a web server that you can access through your browser at http://localhost:5000/ (6800 is the default port of Scrapyd itself, not ScrapydWeb). You will need to have Scrapyd running in order to use ScrapydWeb: Scrapyd is the service that actually runs Scrapy spiders, and it lets you schedule spider runs over HTTP and execute spiders on remote machines.
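
ScrapydWeb finds the Scrapyd servers it should manage through a settings file it generates in the working directory on first run. A minimal sketch, assuming the SCRAPYD_SERVERS option from the project's README; the exact file name varies by version:

# scrapydweb_settings_vN.py (generated on first run; N varies by version)
# point ScrapydWeb at one or more Scrapyd servers as 'host:port' strings
SCRAPYD_SERVERS = [
    '127.0.0.1:6800',
    # 'username:password@remote-host:6800',  # remote server with auth (assumed format)
]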

Example Use


// an ayakashi scraper is an async function that receives the ayakashi
// instance; scrapers are run through the ayakashi CLI
module.exports = async function(ayakashi, input, params) {
    // navigate the browser
    await ayakashi.goTo("https://example.com/product");

    // parse the HTML: first define a prop with a domQL expression
    ayakashi
        .select("productList")
        .where({class: {eq: "product-item"}});

    // then run the extraction against the current page
    const productList = await ayakashi.extract("productList");
    console.log(productList);

    // returned data can be piped to the next step of the project
    return productList;
};
