Scrapyd vs Kimurai
Scrapyd is a service for running Scrapy spiders. It is built in Python and designed around a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it through an HTTP API. With Scrapyd, you can schedule spider runs on a regular basis or on demand, cancel running jobs, and view the status of running spiders. You can also inspect the logs of completed spiders and manage spider settings and configurations.
You can install the package via pip by running `pip install scrapyd` and then start the service with the `scrapyd` command. By default it starts a web server on port 6800; to listen on a different port, set the `http_port` option in a `scrapyd.conf` configuration file.
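For reference, a minimal `scrapyd.conf` (Scrapyd looks for this file in the current working directory, among other locations) might look like the following; the values shown are illustrative rather than required:

```ini
[scrapyd]
# Interface and port that the Scrapyd API and web UI listen on
bind_address = 127.0.0.1
http_port    = 6800
```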
Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders, and you need a way to manage and monitor them all in one place.
For a richer web interface on top of Scrapyd, see ScrapydWeb.
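Since all of this goes through Scrapyd's HTTP JSON API (endpoints such as `schedule.json`, `listjobs.json`, and `cancel.json`), any language with an HTTP client can act as the client. As a rough sketch, a Ruby script could drive a Scrapyd server like this; the server address and the `myproject`/`myspider` names are placeholders for a project you have already deployed:

```ruby
require "net/http"
require "json"
require "uri"

SCRAPYD = "http://localhost:6800"  # address of a running Scrapyd server (assumed)

# Schedule a run of spider "myspider" from the already-deployed project "myproject".
res = Net::HTTP.post_form(URI("#{SCRAPYD}/schedule.json"),
                          "project" => "myproject", "spider" => "myspider")
job = JSON.parse(res.body)
puts "Scheduled job #{job['jobid']}"

# List pending, running, and finished jobs for the project.
jobs = JSON.parse(Net::HTTP.get(URI("#{SCRAPYD}/listjobs.json?project=myproject")))
puts "Running: #{jobs['running'].size}, finished: #{jobs['finished'].size}"

# Cancel the job scheduled above.
Net::HTTP.post_form(URI("#{SCRAPYD}/cancel.json"),
                    "project" => "myproject", "job" => job["jobid"])
```

Responses are plain JSON, so error handling amounts to checking the `status` field in each reply.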
Kimurai is a modern web scraping framework for Ruby, inspired by Python's Scrapy. It provides a structured approach to building web scrapers with built-in support for multiple browser engines, session management, and data pipelines.
Key features include:
- Multiple engine support: use Mechanize for simple HTTP requests, Selenium with headless Chrome or Firefox for JavaScript-rendered pages, or Poltergeist (PhantomJS) for lightweight rendering, depending on what the target site needs.
- Scrapy-like architecture: follows the spider pattern, where you define a spider class with start URLs and parsing methods and the framework handles crawling, scheduling, and data collection (a minimal spider is sketched after this list).
- Built-in data pipelines: save scraped data to JSON, CSV, or custom formats with configurable output pipelines.
- Session management: maintains browser sessions with automatic cookie handling and configurable delays between requests.
- Request scheduling: built-in request queue with configurable concurrency, delays, and retry logic.
- CLI tools: command-line tools for generating new spiders, running individual spiders, and managing scraping projects.
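Putting these pieces together, a minimal single-file Kimurai spider might look roughly like this; the URL, selector, and output file name are made up for illustration:

```ruby
require "kimurai"

# A minimal spider: fetch the start URL, extract the page title,
# and write the result through the built-in JSON pipeline.
class ExampleSpider < Kimurai::Base
  @name = "example_spider"
  @engine = :mechanize                       # plain HTTP engine, no JavaScript rendering
  @start_urls = ["https://example.com/"]
  @config = {
    before_request: { delay: 1..2 }          # polite random delay between requests
  }

  # Called for each start URL; `response` is a parsed Nokogiri document.
  def parse(response, url:, data: {})
    item = { url: url, title: response.css("title").text.strip }
    save_to "results.json", item, format: :pretty_json
  end
end

ExampleSpider.crawl!
```

With the gem installed (`gem install kimurai`), a single-file spider like this can be run directly with `ruby example_spider.rb`; larger projects would typically use the CLI generators instead.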
Kimurai is the closest Ruby equivalent to Scrapy. It's well-suited for structured scraping projects that need organization, multiple spiders, and data pipeline processing.
Note: Kimurai has not seen active development recently, but it remains a useful framework for Ruby scraping projects and is included as the most complete Ruby scraping framework available.