
Katana vs Kimurai

Katana: MIT license · 16,499 stars · latest release v1.5.0
Kimurai: MIT license · 1,098 stars · ~2.4 thousand downloads/month · latest release 2.2.0

Katana is a next-generation web crawling and spidering framework written in Go by ProjectDiscovery. It is designed for fast, comprehensive endpoint and asset discovery and is widely used in the security research and bug bounty communities.

Katana offers multiple crawling modes:

  • Standard mode: Fast HTTP-based crawling without a browser. Parses HTML, JavaScript files, and other resources to discover endpoints and links.
  • Headless mode: Uses a headless Chrome browser for crawling JavaScript-rendered pages and single-page applications (SPAs).
  • Passive mode: Discovers URLs from external sources (Wayback Machine, CommonCrawl, etc.) without actively visiting the target.
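Each mode corresponds to a CLI flag. A minimal sketch of invoking the three modes (the target URL is a placeholder; flag names follow recent Katana releases):

```shell
# Standard (default) crawl, limited to depth 2
katana -u https://example.com -d 2

# Headless crawl for JavaScript-rendered pages and SPAs
katana -u https://example.com -headless

# Passive discovery from external sources (no requests to the target)
katana -u https://example.com -passive
```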

Key features include:

  • Scope control: Configurable crawl scope with regex patterns for including/excluding URLs, domains, and file extensions.
  • JavaScript parsing: Extracts endpoints from JavaScript files, inline scripts, and AJAX requests even in standard (non-headless) mode.
  • Customizable output: Filter and format output with field selection, JSON output, and custom templates.
  • Rate limiting: Built-in rate limiting and concurrency control to avoid overwhelming targets.
  • Proxy support: HTTP and SOCKS5 proxy support with rotation.
  • Form filling: Can detect and auto-fill forms to discover endpoints behind form submissions.
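Several of these features are exposed directly as CLI flags. A hedged example combining scope control, rate limiting, and proxying (the scope regex, proxy address, and output path are illustrative placeholders):

```shell
# Restrict scope with a regex, throttle requests, route through a proxy,
# and write results as JSON lines
katana -u https://example.com \
  -crawl-scope ".*example\.com.*" \
  -rate-limit 50 \
  -concurrency 5 \
  -proxy http://127.0.0.1:8080 \
  -jsonl -o results.jsonl
```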

While Katana was designed for security research and reconnaissance, its fast crawling capabilities and JavaScript parsing make it equally useful for web scraping discovery and sitemap generation.

Kimurai is a modern web scraping framework for Ruby, inspired by Python's Scrapy. It provides a structured approach to building web scrapers with built-in support for multiple browser engines, session management, and data pipelines.

Key features include:

  • Multiple engine support: Can use different backends depending on the scraping needs: Mechanize for simple HTTP requests, Selenium with headless Chrome/Firefox for JavaScript-rendered pages, and Poltergeist (PhantomJS) for lightweight rendering.
  • Scrapy-like architecture: Follows the spider pattern: define a spider class with start URLs and parsing methods, and the framework handles crawling, scheduling, and data collection.
  • Built-in data pipelines: Save scraped data to JSON, CSV, or custom formats with configurable output pipelines.
  • Session management: Maintains browser sessions with automatic cookie handling and configurable delays between requests.
  • Request scheduling: Built-in request queue with configurable concurrency, delays, and retry logic.
  • CLI tools: Command-line tools for generating new spiders, running individual spiders, and managing scraping projects.
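The CLI workflow for a structured project might look like this (the project and spider names are illustrative; subcommands follow the Kimurai documentation):

```shell
# Generate a new project skeleton and a spider inside it
kimurai generate project shop_scraper
cd shop_scraper
kimurai generate spider product_spider

# Run a single spider, or all spiders in the project in parallel
kimurai crawl product_spider
kimurai runner --jobs 3
```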

Kimurai is the closest Ruby equivalent to Scrapy. It's well-suited for structured scraping projects that need organization, multiple spiders, and data pipeline processing.

Note: Kimurai has not seen active development recently, but it remains a useful framework for Ruby scraping projects and is included here as the most complete Ruby scraping framework available.

Highlights


Katana: fast · popular · large-scale
Kimurai: middlewares · output-pipelines

Example Use


```go
package main

import (
	"fmt"
	"math"

	"github.com/projectdiscovery/katana/pkg/engine/standard"
	"github.com/projectdiscovery/katana/pkg/output"
	"github.com/projectdiscovery/katana/pkg/types"
)

func main() {
	// Configure crawl options
	options := &types.Options{
		MaxDepth:     3,
		FieldScope:   "rdn", // restrict scope to the root domain name
		BodyReadSize: math.MaxInt,
		Timeout:      10,
		Concurrency:  10,
		Parallelism:  10,
		Delay:        0,
		RateLimit:    150,
		Strategy:     "depth-first",
		OnResult: func(result output.Result) {
			// Process each discovered URL
			fmt.Println(result.Request.URL)
		},
	}

	// Create and run the crawler
	crawlerOptions, err := types.NewCrawlerOptions(options)
	if err != nil {
		panic(err)
	}
	defer crawlerOptions.Close()

	crawler, err := standard.New(crawlerOptions)
	if err != nil {
		panic(err)
	}
	defer crawler.Close()

	// Start crawling
	if err := crawler.Crawl("https://example.com"); err != nil {
		panic(err)
	}
}
```
```ruby
require 'kimurai'

class ProductSpider < Kimurai::Base
  @name = 'product_spider'
  @engine = :selenium_chrome # or :mechanize for simple pages
  @start_urls = ['https://example.com/products']

  def parse(response, url:, data: {})
    # Extract product data from the current page
    response.css('.product').each do |product|
      item = {
        name: product.css('.name').text.strip,
        price: product.css('.price').text.strip,
        url: absolute_url(product.at_css('a')['href'], base: url)
      }

      # Send the item to the output pipeline
      save_to "products.json", item, format: :json
    end

    # Follow pagination links
    if next_page = response.at_css('a.next-page')
      request_to :parse, url: absolute_url(next_page['href'], base: url)
    end
  end
end

# Run the spider
ProductSpider.crawl!
```
