
mechanize vs kimurai

|                | mechanize              | kimurai             |
| -------------- | ---------------------- | ------------------- |
| License        | MIT                    | MIT                 |
| GitHub stars   | 4,440                  | 1,098               |
| Downloads      | 213.1 thousand / month | 2.4 thousand / month |
| First release  | Jul 25 2009            | Aug 23 2018         |
| Latest release | 2.14.0 (2025-01-05)    | 2.2.0 (2026-01-27)  |

Mechanize is a Ruby library for automating interaction with websites. It automatically stores and sends cookies, follows redirects, and can submit forms — making it behave like a web browser without needing an actual browser engine.
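
Because the cookie handling needs no configuration, a two-request session looks like plain GETs. A minimal sketch (the example.com URLs are placeholders):

```ruby
require 'mechanize'

agent = Mechanize.new

# Any Set-Cookie headers in this response are stored in the agent's cookie jar...
agent.get('https://example.com/login')

# ...and sent back automatically on the next request to the same site
agent.get('https://example.com/account')

# Inspect what the jar currently holds
agent.cookie_jar.each { |cookie| puts "#{cookie.name}=#{cookie.value}" }
```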

Key features include:

  • Automatic cookie management: Stores cookies received from servers and sends them back on subsequent requests, maintaining session state across multiple pages.
  • Form handling: Can find, fill in, and submit HTML forms programmatically. Supports text inputs, selects, checkboxes, radio buttons, and file uploads.
  • Link following: Navigate through pages by clicking links, matching them by text content, CSS selector, or href pattern.
  • History and back/forward: Maintains a browsing history, allowing you to go back and forward through visited pages.
  • HTTP authentication: Supports basic and digest HTTP authentication (see the sketch after this list).
  • Proxy support: Can route requests through HTTP proxies.
  • Redirect handling: Automatically follows HTTP redirects (configurable).
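
The authentication, proxy, redirect, and history features above are one-line settings on the agent. A hedged sketch (hosts, paths, and credentials are placeholders):

```ruby
require 'mechanize'

agent = Mechanize.new

# Register credentials for basic/digest auth on a protected URL
agent.add_auth('https://example.com/private', 'user', 'secret')

# Route all requests through an HTTP proxy
agent.set_proxy('proxy.example.com', 8080)

# Turn off automatic redirect following (it is on by default)
agent.redirect_ok = false

agent.get('https://example.com/private')
agent.get('https://example.com/reports')

# Step back through the browsing history to /private
agent.back
```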

Mechanize is one of the oldest and most established web interaction libraries in Ruby. It is best suited for scraping traditional server-rendered websites with forms and multi-page workflows. For JavaScript-heavy sites, a browser automation tool like Selenium or Playwright is recommended instead.

Kimurai is a modern web scraping framework for Ruby, inspired by Python's Scrapy. It provides a structured approach to building web scrapers with built-in support for multiple browser engines, session management, and data pipelines.

Key features include:

  • Multiple engine support: Can use different backends depending on the scraping needs, with Mechanize for simple HTTP requests, Selenium with headless Chrome/Firefox for JavaScript-rendered pages, and Poltergeist (PhantomJS) for lightweight rendering.
  • Scrapy-like architecture: Follows the spider pattern. Define a spider class with start URLs and parsing methods, and the framework handles crawling, scheduling, and data collection.
  • Built-in data pipelines: Save scraped data to JSON, CSV, or custom formats with configurable output pipelines.
  • Session management: Maintains browser sessions with automatic cookie handling and configurable delays between requests (see the config sketch after this list).
  • Request scheduling: Built-in request queue with configurable concurrency, delays, and retry logic.
  • CLI tools: Command-line tools for generating new spiders, running individual spiders, and managing scraping projects.
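
The session-management and scheduling options above are set in a per-spider @config hash. A rough sketch (option names follow the Kimurai README; the spider name, URLs, and values are placeholders):

```ruby
require 'kimurai'

class ThrottledSpider < Kimurai::Base
  @name = 'throttled_spider'
  @engine = :mechanize
  @start_urls = ['https://example.com/products']

  @config = {
    user_agent: 'Mozilla/5.0 (compatible; ExampleBot/1.0)',
    before_request: {
      delay: 2..5                             # sleep a random 2-5 seconds between requests
    },
    retry_request_errors: [Net::ReadTimeout], # retry requests that time out
    skip_duplicate_requests: true             # visit each URL at most once
  }

  def parse(response, url:, data: {})
    puts response.title
  end
end
```

The engine is also chosen per spider, so one project can mix :mechanize spiders for static pages with :selenium_chrome spiders for JavaScript-heavy ones.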

Kimurai is the closest Ruby equivalent to Scrapy. It's well-suited for structured scraping projects that need organization, multiple spiders, and data pipeline processing.

Note: Kimurai has not seen active development recently, but it remains a useful framework for Ruby scraping projects and is included here as the most complete scraping framework available in Ruby.

Highlights


  • mechanize: popular, production
  • kimurai: middlewares, output-pipelines

Example Use


```ruby
require 'mechanize'

agent = Mechanize.new

# Navigate to a page
page = agent.get('https://example.com')
puts page.title

# Find and click a link
page = page.link_with(text: 'Products').click

# Extract data from the page
page.search('.product').each do |product|
  name = product.at('.name').text
  price = product.at('.price').text
  puts "#{name}: #{price}"
end

# Fill in and submit a login form
login_page = agent.get('https://example.com/login')
form = login_page.form_with(action: '/login')
form['username'] = 'user@example.com'
form['password'] = 'password123'
dashboard = agent.submit(form)

# Cookies are maintained automatically
puts dashboard.title # "Dashboard"

# Download a file
agent.get('https://example.com/report.csv').save('report.csv')
```

```ruby
require 'kimurai'

class ProductSpider < Kimurai::Base
  @name = 'product_spider'
  @engine = :selenium_chrome # or :mechanize for simple pages
  @start_urls = ['https://example.com/products']

  def parse(response, url:, data: {})
    # Extract product data from the current page
    response.css('.product').each do |product|
      item = {
        name: product.css('.name').text.strip,
        price: product.css('.price').text.strip,
        url: absolute_url(product.at_css('a')['href'], base: url)
      }

      # Send the item to the output pipeline
      save_to "products.json", item, format: :json
    end

    # Follow pagination links
    if next_page = response.at_css('a.next-page')
      request_to :parse, url: absolute_url(next_page['href'], base: url)
    end
  end
end

# Run the spider
ProductSpider.crawl!
```
