Botasaurus vs Dataflow Kit

Botasaurus: MIT license, 4,321 GitHub stars, ~35.5 thousand downloads/month, first released Oct 01 2023, latest version 4.0.97 (2026-01-06).
Dataflow Kit: BSD-3-Clause license, 711 GitHub stars, first released Feb 09 2017, latest release 2026-03-21.

Botasaurus is an all-in-one Python web scraping framework that combines browser automation, anti-detection, and scaling features into a single package. It aims to simplify the entire web scraping workflow from development to deployment.

Key features include:

  • Anti-detect browser: ships with a stealth-patched browser that passes common bot-detection tests, automatically handling fingerprinting, user-agent rotation, and other anti-detection measures.
  • Decorator-based API: uses Python decorators (@browser, @request) to define scraping tasks, keeping code clean and easy to organize.
  • Built-in parallelism: easy parallel execution of scraping tasks across multiple browser instances, with configurable concurrency.
  • Caching: a built-in caching layer avoids re-scraping pages during development and debugging.
  • Profile persistence: saves and reuses browser profiles (cookies, localStorage) across scraping sessions to maintain login state (see the sketch below).
  • Output handling: automatic output to JSON, CSV, or custom formats, with built-in data filtering.
  • Web dashboard: includes a web UI for monitoring scraping progress, viewing results, and managing tasks.

Botasaurus is designed for developers who want a batteries-included framework that handles anti-detection automatically, without needing to manually configure stealth settings or manage browser fingerprints.
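
As one example of profile persistence, the following minimal sketch assumes Botasaurus's documented profile argument to @browser, which names a reusable on-disk browser profile; the URL and selector are illustrative placeholders, not part of any real site.

```python
from botasaurus.browser import browser, Driver

# Minimal sketch of profile persistence. The `profile` argument is assumed
# to name a saved browser profile, so cookies and localStorage carry over
# between runs.
@browser(profile="shop-login", cache=True)
def check_account(driver: Driver, url: str):
    driver.get(url)
    # If the saved profile still holds a valid session cookie, this page
    # loads without re-authenticating. ".account-name" is a placeholder
    # selector for this sketch.
    return driver.select(".account-name").text

# Reusing the same profile name on a later run restores the session.
result = check_account("https://example.com/account")
```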

Dataflow kit ("DFK") is a Web Scraping framework for Gophers. It extracts data from web pages, following the specified CSS Selectors. You can use it in many ways for data mining, data processing or archiving.

A web-scraping pipeline consists of three general components:

  • Downloading an HTML web page (Fetch Service);
  • Parsing the HTML page and retrieving the data we're interested in (Parse Service);
  • Encoding the parsed data to CSV, MS Excel, JSON, JSON Lines, or XML format.

For fetching, Dataflow Kit provides two types of page fetchers (a configuration sketch follows this list):

  • Base fetcher uses the standard Go HTTP client to fetch pages as-is. It is faster than the Chrome fetcher, but it cannot render dynamic, JavaScript-driven web pages.
  • Chrome fetcher renders dynamic, JavaScript-based content by sending requests to Chrome running in headless mode.
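
The fetcher is selected per job via the configuration's fetcherType field (visible in the full example under Example Use below). A minimal sketch, assuming "base" is the accepted value for the base fetcher just as "chrome" is for the Chrome fetcher:

```json
{
  "request": { "url": "https://example.com" },
  "fetcherType": "base"
}
```

Switching the same job to "fetcherType": "chrome" renders the page in headless Chrome before parsing.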

For parsing, Dataflow Kit extracts data from the downloaded web page following the rules listed in a JSON configuration file. The extracted data is returned in CSV, MS Excel, JSON, or XML format.
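
Each parse rule pairs a CSS selector with an extractor that names the attributes to pull and the filters to apply. For instance, this fragment, excerpted from the full configuration under Example Use below, extracts the trimmed, lower-cased text and the href of matching links:

```json
{
  "name": "Title",
  "selector": ".product-container a",
  "extractor": {
    "types": ["text", "href"],
    "filters": ["trim", "lowerCase"],
    "params": { "includeIfEmpty": false }
  }
}
```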

Some Dataflow Kit features:

  • Scraping of JavaScript-generated pages;
  • Data extraction from paginated websites;
  • Processing of infinitely scrolled pages;
  • Scraping of websites behind a login form;
  • Cookie and session handling;
  • Following links and processing detail pages;
  • Managing delays between requests per domain;
  • Following robots.txt directives;
  • Saving intermediate data in Diskv or MongoDB (the storage interface is flexible enough to add more storage types easily);
  • Encoding results to CSV, MS Excel, JSON (Lines), or XML formats;
  • Dataflow Kit is fast: it takes about 4-6 seconds to fetch and then parse 50 pages;
  • Dataflow Kit handles quite large volumes of data: our tests show that parsing approximately 4 million pages takes about 7 hours.

Highlights


anti-detect · stealth · large-scale

Example Use


```python
from botasaurus.browser import browser, Driver
from botasaurus.request import request, Request

# Browser-based scraping with anti-detection
@browser(parallel=3, cache=True)
def scrape_products(driver: Driver, url: str):
    driver.get(url)
    # Wait for content to load
    driver.wait_for_element(".product-list")
    # Extract product data
    products = []
    for el in driver.select_all(".product-card"):
        products.append({
            "name": el.select(".product-name").text,
            "price": el.select(".product-price").text,
            "url": el.select("a").get_attribute("href"),
        })
    return products

# HTTP-based scraping (no browser needed)
@request(parallel=5, cache=True)
def scrape_api(req: Request, url: str):
    response = req.get(url)
    return response.json()

# Run the scraper
results = scrape_products(
    ["https://example.com/page/1", "https://example.com/page/2"]
)
```
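
Because output handling is built in, the list returned by scrape_products is also written to disk automatically (in recent Botasaurus releases, as a JSON file under an output/ directory; the exact location may vary by version).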
Dataflow Kit uses a JSON configuration like the following, which is then ingested through a CLI command:

```json
{
  "name": "collection",
  "request": { "url": "https://example.com" },
  "fields": [
    {
      "name": "Title",
      "selector": ".product-container a",
      "extractor": {
        "types": ["text", "href"],
        "filters": ["trim", "lowerCase"],
        "params": { "includeIfEmpty": false }
      }
    },
    {
      "name": "Image",
      "selector": "#product-container img",
      "extractor": {
        "types": ["alt", "src", "width", "height"],
        "filters": ["trim", "upperCase"]
      }
    },
    {
      "name": "Buyinfo",
      "selector": ".buy-info",
      "extractor": {
        "types": ["text"],
        "params": { "includeIfEmpty": false }
      }
    }
  ],
  "paginator": { "selector": ".next", "attr": "href", "maxPages": 3 },
  "format": "json",
  "fetcherType": "chrome",
  "paginateResults": false
}
```

