
botasaurus vs pydoll


Botasaurus is an all-in-one Python web scraping framework that combines browser automation, anti-detection, and scaling features into a single package. It aims to simplify the entire web scraping workflow from development to deployment.

Key features include:

  • Anti-detect browser: Ships with a stealth-patched browser that passes common bot detection tests. Automatically handles fingerprinting, user agent rotation, and other anti-detection measures.
  • Decorator-based API: Uses Python decorators (@browser, @request) to define scraping tasks, keeping code clean and easy to organize.
  • Built-in parallelism: Runs scraping tasks in parallel across multiple browser instances with configurable concurrency.
  • Caching: Built-in caching layer to avoid re-scraping pages during development and debugging.
  • Profile persistence: Saves and reuses browser profiles (cookies, localStorage) across scraping sessions to maintain login state (see the sketch below).
  • Output handling: Automatic output to JSON, CSV, or custom formats with built-in data filtering.
  • Web dashboard: Includes a web UI for monitoring scraping progress, viewing results, and managing tasks.

Botasaurus is designed for developers who want a batteries-included framework that handles anti-detection automatically, without needing to manually configure stealth settings or manage browser fingerprints.
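
The profile persistence and output handling mentioned above plug into the same decorator-based API. Below is a minimal sketch, assuming the @browser decorator accepts `profile` and `output` keyword arguments as the feature list describes; the exact argument names should be verified against the Botasaurus docs for your version.

```python
from botasaurus.browser import browser, Driver

# Assumed keyword arguments: `profile` (reuse cookies/localStorage between
# runs) and `output` (write results to a named JSON file). Verify both
# against the Botasaurus documentation before relying on them.
@browser(
    profile="my-account",
    cache=True,
    output="dashboard",
)
def scrape_dashboard(driver: Driver, url: str):
    driver.get(url)
    # A profile that is already logged in lands here directly; a manual
    # login during the first run is persisted for later runs.
    driver.wait_for_element(".dashboard")
    return {"heading": driver.select(".dashboard h1").text}

scrape_dashboard("https://example.com/dashboard")
```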

Pydoll is a Python library for browser automation that uses the Chrome DevTools Protocol (CDP) directly, designed to be undetectable by anti-bot systems. Unlike Selenium-based tools, Pydoll does not use WebDriver and avoids the common detection vectors that anti-bot systems look for.

Key features include:

  • Native CDP communication: Connects directly to Chrome/Chromium over the CDP WebSocket without intermediary drivers, avoiding the automation flags and fingerprints that WebDriver-based tools leave behind.
  • Event-driven architecture: Built around an async event system that can listen for and react to browser events such as network requests, console messages, and DOM changes (sketched below).
  • Network interception: Can intercept, modify, and mock network requests and responses, useful for blocking unnecessary resources or modifying API responses during scraping.
  • Async-first design: Fully asynchronous API built on Python's asyncio for efficient concurrent automation.
  • Clean API: Provides a high-level, Pythonic API for common browser automation tasks while still allowing direct CDP command execution for advanced use cases.
  • Multi-browser support: Can manage multiple browser instances and pages concurrently.

Pydoll fills a similar niche to nodriver and camoufox — browser automation with a focus on avoiding detection — but takes a different approach by providing more granular control over CDP communication and network interception.
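
The event-driven architecture pairs naturally with network interception: callbacks can be registered for CDP events and fire as the page loads. The sketch below is a rough illustration only; the `enable_network_events` helper, the `on()` registration method, and the "Network.requestWillBeSent" event name are assumptions drawn from the feature description, so check the Pydoll docs for the exact identifiers.

```python
import asyncio

from pydoll.browser import Chrome


async def main():
    async with Chrome() as browser:
        page = await browser.new_page()

        async def log_request(event):
            # CDP events arrive as dicts mirroring the protocol payloads
            print(event["params"]["request"]["url"])

        # Assumed API: enable CDP network events, then attach a listener.
        await page.enable_network_events()
        await page.on("Network.requestWillBeSent", log_request)

        # Every request the page makes is now printed as it happens
        await page.go_to("https://example.com")


asyncio.run(main())
```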

Highlights


Botasaurus: anti-detect, stealth, large-scale
Pydoll: anti-detect, cdp, async

Example Use


```python
from botasaurus.browser import browser, Driver
from botasaurus.request import request, Request


# Browser-based scraping with anti-detection
@browser(parallel=3, cache=True)
def scrape_products(driver: Driver, url: str):
    driver.get(url)

    # Wait for content to load
    driver.wait_for_element(".product-list")

    # Extract product data
    products = []
    for el in driver.select_all(".product-card"):
        products.append({
            "name": el.select(".product-name").text,
            "price": el.select(".product-price").text,
            "url": el.select("a").get_attribute("href"),
        })
    return products


# HTTP-based scraping (no browser needed)
@request(parallel=5, cache=True)
def scrape_api(req: Request, url: str):
    response = req.get(url)
    return response.json()


# Run the scraper
results = scrape_products(
    ["https://example.com/page/1", "https://example.com/page/2"]
)
```
```python
import asyncio

from pydoll.browser import Chrome
from pydoll.constants import By


async def main():
    async with Chrome() as browser:
        # Open a new page
        page = await browser.new_page()
        await page.go_to("https://example.com")

        # Find and interact with elements
        search_input = await page.find_element(By.CSS, "input[name='q']")
        await search_input.type_text("web scraping")
        submit_btn = await page.find_element(By.CSS, "button[type='submit']")
        await submit_btn.click()

        # Wait for results and extract content
        await page.wait_element(By.CSS, ".results")
        results = await page.find_elements(By.CSS, ".result-item")
        for result in results:
            title = await result.get_text()
            print(title)

        # Network interception example
        await page.enable_network_interception()
        # intercept and analyze API calls made by the page


asyncio.run(main())
```
