Ayakashi vs Scrapyd
Ayakashi is a web scraping framework for Node.js that lets developers extract structured data from websites. It drives a headless Chrome browser and provides a simple, intuitive API for defining and querying the structure of a page.
Features:
- Powerful querying and data models
  Ayakashi's way of finding things in the page and using them is done with props and domQL. Directly inspired by the relational database world (and SQL), domQL makes DOM access easy and readable no matter how obscure the page's structure is. Props are the way to package domQL expressions as re-usable structures which can then be passed around to actions or used as models for data extraction (see the sketch after this list).
- High-level built-in actions
  Ready-made actions so you can focus on what matters. Easily handle infinite scrolling, single-page navigation, events and more. Plus, you can always build your own actions, either from scratch or by composing other actions.
- Preload code on pages
  Need to include a bunch of code, a library you made or a 3rd-party module and make it available on a page? Preloaders have you covered.
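To make props and domQL more concrete, here is a small sketch modeled on the prop API used in the example further below; the prop name and the class values are made up for illustration:

// a re-usable prop matching elements carrying either of two classes,
// expressed with domQL's "or" operator and "eq" matcher
ayakashi
    .select("articleLinks")
    .where({
        or: [
            {class: {eq: "article-link"}},
            {class: {eq: "featured-link"}}
        ]
    });
// the prop can later be passed to actions or used as a model for extraction,
// e.g. await ayakashi.extract("articleLinks")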
Scrapyd is a service for running Scrapy spiders. It is built in Python and is meant to be used in a server-client architecture: the scrapyd server runs on a (possibly remote) machine, and clients schedule and control spider runs on it through an HTTP JSON API. With Scrapyd you can schedule spider runs on demand or on a regular basis, cancel runs, view the status of running spiders, inspect the logs of completed runs, and manage spider settings and configurations.
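As a sketch of what talking to that API looks like from code, here is a minimal Node.js example (Node 18+ for the built-in fetch; the project and spider names are placeholders, while schedule.json and listjobs.json are scrapyd's documented endpoints):

// schedule a run of "somespider" and then list the project's jobs
async function scheduleAndList() {
    const scheduled = await fetch("http://localhost:6800/schedule.json", {
        method: "POST",
        body: new URLSearchParams({project: "myproject", spider: "somespider"})
    }).then((res) => res.json());
    console.log(scheduled.jobid); // scrapyd responds with {"status": "ok", "jobid": "..."}

    const jobs = await fetch("http://localhost:6800/listjobs.json?project=myproject")
        .then((res) => res.json());
    console.log(jobs.pending, jobs.running, jobs.finished);
}
scheduleAndList();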
You can install the package via pip by running `pip install scrapyd` and then start the service by running the `scrapyd` command in your terminal.
By default it will start a web server on port 6800, but you can specify a different port through the `http_port` setting in scrapyd's configuration file.
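A minimal configuration sketch (scrapyd reads its settings from a `scrapyd.conf` file; only the two keys shown here are assumed):

[scrapyd]
http_port = 6800
bind_address = 127.0.0.1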
Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders, and you need a way to manage and monitor them all in one place.
For a more feature-rich web interface, see ScrapydWeb.
Example Use
// Ayakashi scrapers are defined as modules and run through the Ayakashi CLI
module.exports = async function(ayakashi) {
    // navigate the browser
    await ayakashi.goTo("https://example.com/product");
    // define a prop (a named selector) for the product items
    ayakashi
        .select("productList")
        .where({class: {eq: "product-item"}});
    // then extract data from the current page using that prop
    const productList = await ayakashi.extract("productList");
    console.log(productList);
    return productList;
};
# start the scrapyd server
$ scrapyd
# schedule a spider run through the JSON API
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2