
dude vs kimurai

|         | license  | stars | downloads (month) | first release | latest version     |
|---------|----------|-------|-------------------|---------------|--------------------|
| dude    | AGPL-3.0 | 425   | 54                | Feb 20 2022   | 0.1.3 (Aug 1 2023) |
| kimurai | MIT      | 1,098 | 2.4 thousand      | Aug 23 2018   | 2.2.0              |

Dude (dude uncomplicated data extraction) is a very simple framework for writing web scrapers using Python decorators. Its design, inspired by Flask, makes it easy to build a web scraper in just a few lines of code, and its syntax is easy to learn.

The simplest web scraper will look like this:

```python
from dude import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}
```

dude supports multiple parser backends:

- playwright
- lxml
- parsel
- beautifulsoup
- pyppeteer
- selenium
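
The backend is chosen when the scraper is run. Below is a minimal sketch, assuming the `parser=` argument to `dude.run` used in the Parsel example under Example Use; note that the object handed to each handler follows the selected backend's API, so this handler uses the Parsel-style selector calls from that same example.

```python
# Minimal sketch, not taken from the dude docs: the same decorated scraper
# run against the Parsel backend by passing `parser=` to dude.run(), as in
# the Example Use section below.
from dude import select


# Parsel-style selector: ::attr(href) plus .get(), as in the example below.
@select(css="a::attr(href)")
def get_link(selector):
    return {"url": selector.get()}


if __name__ == "__main__":
    import dude

    # The `parser` argument selects the backend; the other backends listed
    # above are chosen the same way.
    dude.run(urls=["https://dude.ron.sh"], parser="parsel")
```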

Kimurai is a modern web scraping framework for Ruby, inspired by Python's Scrapy. It provides a structured approach to building web scrapers with built-in support for multiple browser engines, session management, and data pipelines.

Key features include:

  • Multiple engine support: pick the backend that fits the job, using Mechanize for simple HTTP requests, Selenium with headless Chrome/Firefox for JavaScript-rendered pages, or Poltergeist (PhantomJS) for lightweight rendering.
  • Scrapy-like architecture: follows the spider pattern, where you define a spider class with start URLs and parsing methods, and the framework handles crawling, scheduling, and data collection.
  • Built-in data pipelines: save scraped data to JSON, CSV, or custom formats with configurable output pipelines.
  • Session management: maintains browser sessions with automatic cookie handling and configurable delays between requests.
  • Request scheduling: built-in request queue with configurable concurrency, delays, and retry logic.
  • CLI tools: command-line tools for generating new spiders, running individual spiders, and managing scraping projects.

Kimurai is the closest Ruby equivalent to Scrapy. It's well-suited for structured scraping projects that need organization, multiple spiders, and data pipeline processing.

Note: Kimurai has not seen active development recently, but it remains a useful framework for Ruby scraping projects and is included as the most complete Ruby scraping framework available.

Highlights


middlewares, output-pipelines

Example Use


```python
from dude import select

"""
This example demonstrates how to use Parsel + async HTTPX

To access an attribute, use:
    selector.attrib["href"]

You can also access an attribute using the ::attr(name) pseudo-element,
for example "a::attr(href)", then:
    selector.get()

To get the text, use the ::text pseudo-element, then:
    selector.get()
"""


@select(css="a.url", priority=2)
async def result_url(selector):
    return {"url": selector.attrib["href"]}


# Option to get url using ::attr(name) pseudo-element
@select(css="a.url::attr(href)", priority=2)
async def result_url2(selector):
    return {"url2": selector.get()}


@select(css=".title::text", priority=1)
async def result_title(selector):
    return {"title": selector.get()}


@select(css=".description::text", priority=0)
async def result_description(selector):
    return {"description": selector.get()}


if __name__ == "__main__":
    import dude

    dude.run(urls=["https://dude.ron.sh"], parser="parsel")
```
```ruby
require 'kimurai'

class ProductSpider < Kimurai::Base
  @name = 'product_spider'
  @engine = :selenium_chrome # or :mechanize for simple pages
  @start_urls = ['https://example.com/products']

  def parse(response, url:, data: {})
    # Extract product data from current page
    response.css('.product').each do |product|
      item = {
        name: product.css('.name').text.strip,
        price: product.css('.price').text.strip,
        url: absolute_url(product.at_css('a')['href'], base: url),
      }

      # Send item to the pipeline
      save_to "products.json", item, format: :json
    end

    # Follow pagination links
    if next_page = response.at_css('a.next-page')
      request_to :parse, url: absolute_url(next_page['href'], base: url)
    end
  end
end

# Run the spider
ProductSpider.crawl!
```

Alternatives / Similar

