
Crawlee vs Gocrawl

Crawlee: Apache-2.0 license, 22,720 GitHub stars, 341.9 thousand downloads per month, latest release 3.16.0 (2026-04-09).
Gocrawl: BSD-3-Clause license, 2,053 GitHub stars, 58.1 thousand downloads per month, last updated 2021-05-19.

Crawlee is a modern web scraping and browser automation framework for JavaScript and TypeScript, built by Apify. It is the successor to the Apify SDK and provides a unified interface for building reliable web scrapers and crawlers that can scale from simple scripts to large-scale data extraction projects.

Crawlee supports multiple crawling strategies through different crawler classes:

  • CheerioCrawler: fast, lightweight HTML scraping using Cheerio (no browser needed); best for static pages (a minimal sketch follows this list).
  • PlaywrightCrawler: full browser automation via Playwright; handles JavaScript-rendered pages, SPAs, and complex interactions.
  • PuppeteerCrawler: like PlaywrightCrawler, but with Puppeteer as the browser automation backend.
  • HttpCrawler: a minimal crawler for raw HTTP requests without HTML parsing.
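
For example, a minimal sketch of the Cheerio-based approach (the target URL is a placeholder):

```javascript
import { CheerioCrawler } from 'crawlee';

// Static pages need no browser: CheerioCrawler fetches HTML over plain
// HTTP and hands the handler a parsed Cheerio object ($).
const crawler = new CheerioCrawler({
    async requestHandler({ request, $, enqueueLinks }) {
        console.log(`${request.url}: ${$('title').text()}`);
        await enqueueLinks(); // follow same-hostname links by default
    },
});

await crawler.run(['https://example.com']);
```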

Key features include:

  • Automatic request queue management with configurable concurrency and rate limiting
  • Built-in proxy rotation with session management
  • Persistent request queue and dataset storage (local or cloud via Apify)
  • Automatic retry and error handling with configurable strategies
  • TypeScript-first design with full type safety
  • Middleware-like request/response hooks (preNavigationHooks, postNavigationHooks)
  • Output pipelines for storing extracted data
  • Easy deployment to the Apify cloud platform
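
As a rough sketch of how several of these options combine on one crawler (the proxy URLs are placeholders, and the pre-navigation hook here only logs):

```javascript
import { CheerioCrawler, ProxyConfiguration } from 'crawlee';

// Rotate outgoing requests across a pool of proxies (placeholder URLs).
const proxyConfiguration = new ProxyConfiguration({
    proxyUrls: ['http://proxy-1.example.com:8000', 'http://proxy-2.example.com:8000'],
});

const crawler = new CheerioCrawler({
    proxyConfiguration,
    maxConcurrency: 10,   // upper bound on parallel requests
    maxRequestRetries: 3, // retry failed requests before giving up
    preNavigationHooks: [
        // Runs before each HTTP request is sent.
        async ({ request }) => {
            console.log(`About to fetch ${request.url}`);
        },
    ],
    async requestHandler({ request, $ }) {
        console.log(`${request.url}: ${$('title').text()}`);
    },
});

await crawler.run(['https://example.com']);
```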

Crawlee is widely considered the most feature-complete web scraping framework in the JavaScript/TypeScript ecosystem: comparable to Python's Scrapy, but with native browser automation support.

Gocrawl is a polite, slim and concurrent web crawler library written in Go. It is designed to be simple and easy to use, while still providing a high degree of flexibility and control over the crawling process.

One of Gocrawl's key features is politeness: it obeys a website's robots.txt file and respects any crawl-delay specified there. It can also take a page's last-modified date into account to avoid recrawling unchanged pages. Together these behaviours reduce the load on the target website and help avoid potential legal issues. Gocrawl is also highly concurrent, crawling large numbers of pages in parallel to shorten the overall crawl.
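
A minimal sketch of these politeness options (the robot name, visit cap, and target URL are illustrative):

```go
package main

import (
	"time"

	"github.com/PuerkitoBio/gocrawl"
)

func main() {
	// DefaultExtender supplies default behaviour for every callback,
	// so this crawl only exercises the politeness options.
	opts := gocrawl.NewOptions(new(gocrawl.DefaultExtender))
	opts.RobotUserAgent = "ExampleBot" // name matched against robots.txt rules (illustrative)
	opts.CrawlDelay = 5 * time.Second  // used when robots.txt specifies no crawl-delay
	opts.SameHostOnly = true           // don't wander off the starting host
	opts.MaxVisits = 10                // illustrative cap to keep the sketch short
	c := gocrawl.NewCrawlerWithOptions(opts)
	c.Run("https://example.com/")
}
```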

The library also offers a high degree of flexibility in customizing the crawling process: you supply custom callbacks and handlers for different kinds of pages, such as error pages and redirects, and process each as your application requires. Gocrawl additionally supports cookies, custom user-agent strings, automatic detection of links, and automatic detection of sitemaps.
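
For instance, an error-page handler can be added by overriding just the Error callback on the embedded DefaultExtender; a minimal sketch (ErrorLoggingExtender and its logging behaviour are illustrative, not part of gocrawl):

```go
package main

import (
	"log"

	"github.com/PuerkitoBio/gocrawl"
)

// ErrorLoggingExtender keeps gocrawl's default behaviour for every
// callback except Error, which it overrides to record failed fetches.
type ErrorLoggingExtender struct {
	gocrawl.DefaultExtender
}

// Error is invoked whenever a crawl error occurs (fetch failures,
// robots.txt denials, and so on).
func (x *ErrorLoggingExtender) Error(err *gocrawl.CrawlError) {
	log.Printf("crawl error (%v): %v", err.Kind, err)
}

func main() {
	opts := gocrawl.NewOptions(new(ErrorLoggingExtender))
	opts.MaxVisits = 5 // illustrative cap
	c := gocrawl.NewCrawlerWithOptions(opts)
	c.Run("https://example.com/")
}
```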

Highlights


popular, typescript, extendible, middlewares, output-pipelines, large-scale, proxy

Example Use


```javascript
import { PlaywrightCrawler, Dataset } from 'crawlee';

// Create a crawler with Playwright for JS rendering
const crawler = new PlaywrightCrawler({
    // Limit concurrency to avoid overwhelming the target
    maxConcurrency: 5,
    // This function is called for each URL
    async requestHandler({ request, page, enqueueLinks }) {
        const title = await page.title();
        // Extract data from the page
        const products = await page.$$eval('.product', (els) =>
            els.map((el) => ({
                name: el.querySelector('.name')?.textContent,
                price: el.querySelector('.price')?.textContent,
            }))
        );
        // Store extracted data
        await Dataset.pushData({
            url: request.url,
            title,
            products,
        });
        // Follow links to crawl more pages
        await enqueueLinks({
            globs: ['https://example.com/products/**'],
        });
    },
});

// Start crawling
await crawler.run(['https://example.com/products']);
```
```go
package main

import (
	"net/http"
	"regexp"
	"time"

	"github.com/PuerkitoBio/gocrawl"
	"github.com/PuerkitoBio/goquery"
)

// Only enqueue the root and paths beginning with an "a"
var rxOk = regexp.MustCompile(`http://duckduckgo\.com(/a.*)?$`)

// Create the Extender implementation, based on the gocrawl-provided
// DefaultExtender, because we don't want/need to override all methods.
type ExampleExtender struct {
	gocrawl.DefaultExtender // Will use the default implementation of all but Visit and Filter
}

// Override Visit for our need.
func (x *ExampleExtender) Visit(ctx *gocrawl.URLContext, res *http.Response, doc *goquery.Document) (interface{}, bool) {
	// Use the goquery document or res.Body to manipulate the data
	// ...

	// Return nil and true - let gocrawl find the links
	return nil, true
}

// Override Filter for our need.
func (x *ExampleExtender) Filter(ctx *gocrawl.URLContext, isVisited bool) bool {
	return !isVisited && rxOk.MatchString(ctx.NormalizedURL().String())
}

func ExampleCrawl() {
	// Set custom options
	opts := gocrawl.NewOptions(new(ExampleExtender))

	// Should always set your robot name so that it looks for the most
	// specific rules possible in robots.txt.
	opts.RobotUserAgent = "Example"
	// And reflect that in the user-agent string used to make requests,
	// ideally with a link so site owners can contact you if there's an issue
	opts.UserAgent = "Mozilla/5.0 (compatible; Example/1.0; +http://example.com)"

	opts.CrawlDelay = 1 * time.Second
	opts.LogFlags = gocrawl.LogAll

	// Play nice with ddgo when running the test!
	opts.MaxVisits = 2

	// Create crawler and start at root of duckduckgo
	c := gocrawl.NewCrawlerWithOptions(opts)
	c.Run("https://duckduckgo.com/")

	// Remove "x" before Output: to activate the example (will run on go test)
	// xOutput: voluntarily fail to see log output
}
```
