
scrapyd vs katana

Scrapyd: BSD-3-Clause license, 3,087 GitHub stars, ~44.4 thousand downloads/month, first release Sep 04 2013, latest version 1.6.0 (2025-07-22).
Katana: MIT license, 16,499 GitHub stars, first release Nov 07 2022, latest version v1.5.0 (2026-03-10).

Scrapyd is a service for running Scrapy spiders. It lets you schedule spiders to run at regular intervals or on demand, and run them on remote machines. Built in Python, it follows a client-server architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it through an HTTP API, as well as view the status of running spiders.

You can also view the logs of completed spiders and manage spider settings and configurations. The same HTTP API lets you schedule runs, cancel running jobs, and check spider status. Install the package with pip install scrapyd and start it by running the scrapyd command. By default it starts a web server on port 6800; the port can be changed via the http_port setting in the Scrapyd configuration file.
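A minimal shell sketch of that workflow, assuming a deployed project; the project name and JOB_ID below are placeholders:

```shell
# Install and start Scrapyd (web server and JSON API on port 6800 by default)
$ pip install scrapyd
$ scrapyd

# Check the daemon status and list jobs for a deployed project
$ curl http://localhost:6800/daemonstatus.json
$ curl "http://localhost:6800/listjobs.json?project=myproject"

# Cancel a running job by its job ID
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=JOB_ID
```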

Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine, or if you need to schedule spider runs on a regular basis. It's also useful if you have multiple spiders, and you need a way to manage and monitor them all in one place.

For a richer web interface, see ScrapydWeb.

Katana is a next-generation web crawling and spidering framework written in Go by ProjectDiscovery. It is designed for fast, comprehensive endpoint and asset discovery and is widely used in the security research and bug bounty communities.

Katana offers multiple crawling modes; example commands for each follow the list:

  • Standard mode: Fast HTTP-based crawling without a browser. Parses HTML, JavaScript files, and other resources to discover endpoints and links.
  • Headless mode: Uses a headless Chrome browser for crawling JavaScript-rendered pages and single-page applications (SPAs).
  • Passive mode: Discovers URLs from external sources (Wayback Machine, CommonCrawl, etc.) without actively visiting the target.
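Each mode is selected with a command-line flag. A minimal sketch against a placeholder target, assuming a recent Katana release:

```shell
# Standard mode: fast HTTP-based crawl to depth 2
$ katana -u https://example.com -d 2

# Headless mode: render JavaScript-heavy pages in headless Chrome
$ katana -u https://example.com -headless

# Passive mode: gather URLs from external sources without visiting the target
$ katana -u example.com -passive -passive-source waybackarchive,commoncrawl
```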

Key features include the following (a combined command-line example appears after the list):

  • Scope control: Configurable crawl scope with regex patterns for including/excluding URLs, domains, and file extensions.
  • JavaScript parsing: Extracts endpoints from JavaScript files, inline scripts, and AJAX requests even in standard (non-headless) mode.
  • Customizable output: Filter and format output with field selection, JSON output, and custom templates.
  • Rate limiting: Built-in rate limiting and concurrency control to avoid overwhelming targets.
  • Proxy support: HTTP and SOCKS5 proxy support with rotation.
  • Form filling: Can detect and auto-fill forms to discover endpoints behind form submissions.
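A sketch combining several of these options, with placeholder values and assuming a recent Katana release (flag spellings follow the long-form names in katana -h):

```shell
# Limit the crawl with a scope regex, skip image extensions, and write JSONL output
$ katana -u https://example.com -crawl-scope "example\.com" -extension-filter png,jpg,gif -jsonl -o results.jsonl

# Throttle requests, route traffic through a local proxy, and auto-fill discovered forms
$ katana -u https://example.com -rate-limit 50 -concurrency 5 -proxy http://127.0.0.1:8080 -automatic-form-fill
```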

While Katana was designed for security research and reconnaissance, its fast crawling capabilities and JavaScript parsing make it equally useful for web scraping discovery and sitemap generation.

Highlights


fast · popular · large-scale

Example Use


```shell
$ scrapyd
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
```
```go
package main

import (
	"math"

	"github.com/projectdiscovery/katana/pkg/engine/standard"
	"github.com/projectdiscovery/katana/pkg/output"
	"github.com/projectdiscovery/katana/pkg/types"
)

func main() {
	// Configure crawl options
	options := &types.Options{
		MaxDepth:     3,
		FieldScope:   "rdn", // restrict to root domain
		BodyReadSize: math.MaxInt,
		Timeout:      10,
		Concurrency:  10,
		Parallelism:  10,
		Delay:        0,
		RateLimit:    150,
		Strategy:     "depth-first",
		OnResult: func(result output.Result) {
			// Process each discovered URL
			println(result.Request.URL)
		},
	}

	// Create and run the crawler
	crawlerOptions, _ := types.NewCrawlerOptions(options)
	defer crawlerOptions.Close()

	crawler, _ := standard.New(crawlerOptions)
	defer crawler.Close()

	// Start crawling
	_ = crawler.Crawl("https://example.com")
}
```

Alternatives / Similar

