
botasaurus vs gocrawl

Botasaurus: MIT license · 4,321 GitHub stars · 35.5 thousand downloads/month · latest release 4.0.97 (2026-01-06)
gocrawl: BSD-3-Clause license · 2,053 GitHub stars · 58.1 thousand downloads/month · last updated 2021-05-19

Botasaurus is an all-in-one Python web scraping framework that combines browser automation, anti-detection, and scaling features into a single package. It aims to simplify the entire web scraping workflow from development to deployment.

Key features include:

  • Anti-detect browser: Ships with a stealth-patched browser that passes common bot detection tests and automatically handles fingerprinting, user agent rotation, and other anti-detection measures.
  • Decorator-based API: Uses Python decorators (@browser, @request) to define scraping tasks, keeping code clean and easy to organize.
  • Built-in parallelism: Runs scraping tasks in parallel across multiple browser instances with configurable concurrency.
  • Caching: A built-in caching layer avoids re-scraping pages during development and debugging.
  • Profile persistence: Saves and reuses browser profiles (cookies, localStorage) across scraping sessions to maintain login state (a short sketch follows this list).
  • Output handling: Automatically writes results to JSON, CSV, or custom formats with built-in data filtering.
  • Web dashboard: Includes a web UI for monitoring scraping progress, viewing results, and managing tasks.
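
The profile persistence and output handling features can be combined in a single decorated task. Below is a minimal sketch; it assumes the @browser decorator accepts profile and output keyword arguments as the feature list describes, and the URL and CSS selector are hypothetical.

```python
from botasaurus.browser import browser, Driver

# Sketch only: `profile` is assumed to name a reusable browser profile
# (cookies, localStorage) and `output` the file the returned data is written to.
@browser(profile="my-account", cache=True, output="account_status")
def check_login(driver: Driver, url: str):
    driver.get(url)
    # If the saved profile still holds a valid session, the account name
    # should render without logging in again (selector is hypothetical).
    driver.wait_for_element(".account-name")
    return {"url": url, "account": driver.select(".account-name").text}

# Hypothetical URL used only for illustration.
check_login("https://example.com/dashboard")
```

Because the profile keeps cookies and localStorage between runs, a login performed once can be reused by later scraping sessions without re-authenticating.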

Botasaurus is designed for developers who want a batteries-included framework that handles anti-detection automatically, without needing to manually configure stealth settings or manage browser fingerprints.

Gocrawl is a polite, slim and concurrent web crawler library written in Go. It is designed to be simple and easy to use, while still providing a high degree of flexibility and control over the crawling process.

One of gocrawl's key features is politeness: it obeys each site's robots.txt file, respects any crawl-delay specified there, and can use a page's last-modified date to avoid recrawling unchanged pages. This keeps the load on the target site low and keeps the crawler within the site's stated rules. Gocrawl is also highly concurrent, fetching many pages in parallel to shorten the overall crawl time; the example under Example Use below shows the crawl delay and robot user agent being configured.

The library also offers a high degree of flexibility in customizing the crawling process. You can supply custom callbacks and handlers, by overriding methods of the Extender interface as in the example below, to deal with different kinds of pages such as error pages and redirects, and to process each fetched page however you need. Gocrawl additionally supports cookies, custom user-agent strings, automatic link discovery, and sitemap auto-detection.

Highlights


anti-detect · stealth · large-scale

Example Use


```python
from botasaurus.browser import browser, Driver
from botasaurus.request import request, Request

# Browser-based scraping with anti-detection
@browser(parallel=3, cache=True)
def scrape_products(driver: Driver, url: str):
    driver.get(url)
    # Wait for content to load
    driver.wait_for_element(".product-list")

    # Extract product data
    products = []
    for el in driver.select_all(".product-card"):
        products.append({
            "name": el.select(".product-name").text,
            "price": el.select(".product-price").text,
            "url": el.select("a").get_attribute("href"),
        })
    return products

# HTTP-based scraping (no browser needed)
@request(parallel=5, cache=True)
def scrape_api(req: Request, url: str):
    response = req.get(url)
    return response.json()

# Run the scraper
results = scrape_products(
    ["https://example.com/page/1", "https://example.com/page/2"]
)
```
```go
package example

import (
	"net/http"
	"regexp"
	"time"

	"github.com/PuerkitoBio/gocrawl"
	"github.com/PuerkitoBio/goquery"
)

// Only enqueue the root and paths beginning with an "a"
var rxOk = regexp.MustCompile(`http://duckduckgo\.com(/a.*)?$`)

// Create the Extender implementation, based on the gocrawl-provided DefaultExtender,
// because we don't want/need to override all methods.
type ExampleExtender struct {
	gocrawl.DefaultExtender // Will use the default implementation of all but Visit and Filter
}

// Override Visit for our need.
func (x *ExampleExtender) Visit(ctx *gocrawl.URLContext, res *http.Response, doc *goquery.Document) (interface{}, bool) {
	// Use the goquery document or res.Body to manipulate the data
	// ...

	// Return nil and true - let gocrawl find the links
	return nil, true
}

// Override Filter for our need.
func (x *ExampleExtender) Filter(ctx *gocrawl.URLContext, isVisited bool) bool {
	return !isVisited && rxOk.MatchString(ctx.NormalizedURL().String())
}

func ExampleCrawl() {
	// Set custom options
	opts := gocrawl.NewOptions(new(ExampleExtender))

	// should always set your robot name so that it looks for the most
	// specific rules possible in robots.txt.
	opts.RobotUserAgent = "Example"
	// and reflect that in the user-agent string used to make requests,
	// ideally with a link so site owners can contact you if there's an issue
	opts.UserAgent = "Mozilla/5.0 (compatible; Example/1.0; +http://example.com)"

	opts.CrawlDelay = 1 * time.Second
	opts.LogFlags = gocrawl.LogAll

	// Play nice with ddgo when running the test!
	opts.MaxVisits = 2

	// Create crawler and start at root of duckduckgo
	c := gocrawl.NewCrawlerWithOptions(opts)
	c.Run("https://duckduckgo.com/")

	// Remove "x" before Output: to activate the example (will run on go test)
	// xOutput: voluntarily fail to see log output
}
```

Alternatives / Similar

