ayakashi vs botasaurus
Ayakashi is a web scraping library for Node.js that allows developers to easily extract structured data from websites. It is built on top of the popular "puppeteer" library and provides a simple and intuitive API for defining and querying the structure of a website.
Features:
- Powerful querying and data models
  Ayakashi's way of finding things in the page and using them is done with props and domQL. Directly inspired by the relational database world (and SQL), domQL makes DOM access easy and readable no matter how obscure the page's structure is. Props package domQL expressions as reusable structures which can then be passed around to actions or used as models for data extraction.
- High-level built-in actions
  Ready-made actions so you can focus on what matters. Easily handle infinite scrolling, single-page navigation, events and more. Plus, you can always build your own actions, either from scratch or by composing other actions.
- Preload code on pages
  Need to include a bunch of code, a library you made or a third-party module and make it available on a page? Preloaders have you covered.
Botasaurus is an all-in-one Python web scraping framework that combines browser automation, anti-detection, and scaling features into a single package. It aims to simplify the entire web scraping workflow from development to deployment.
Key features include:
- Anti-detect browser: Ships with a stealth-patched browser that passes common bot detection tests. Automatically handles fingerprinting, user agent rotation, and other anti-detection measures.
- Decorator-based API: Uses Python decorators (@browser, @request) to define scraping tasks, making code clean and easy to organize.
- Built-in parallelism: Easy parallel execution of scraping tasks across multiple browser instances with configurable concurrency.
- Caching: Built-in caching layer to avoid re-scraping pages during development and debugging.
- Profile persistence: Can save and reuse browser profiles (cookies, localStorage) across scraping sessions to maintain login state.
- Output handling: Automatic output to JSON, CSV, or custom formats with built-in data filtering.
- Web dashboard: Includes a web UI for monitoring scraping progress, viewing results, and managing tasks.
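To make the caching feature concrete, here is a minimal pure-Python memoizing decorator that shows the idea behind `cache=True`: repeated calls for the same URL skip the expensive fetch. This is a sketch of the concept only, not Botasaurus's actual implementation (its real cache persists results to disk per task), and `fake_scrape` is a hypothetical stand-in for a real scraping function:

```python
import functools

def cache_results(func):
    """Memoize results per URL so repeated dev runs skip re-fetching.

    Simplified illustration of what a cache=True option provides;
    Botasaurus's real cache is disk-backed and per-task.
    """
    store = {}

    @functools.wraps(func)
    def wrapper(url):
        if url in store:
            return store[url]  # cache hit: no fetch happens
        result = func(url)
        store[url] = result
        return result
    return wrapper

@cache_results
def fake_scrape(url):
    # Stand-in for a real page fetch and parse
    return {"url": url, "title": "Example"}

first = fake_scrape("https://example.com")
second = fake_scrape("https://example.com")  # served from the cache
```

The second call returns the cached dictionary without re-running the function body, which is exactly why a built-in cache speeds up development and debugging loops.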
Botasaurus is designed for developers who want a batteries-included framework that handles anti-detection automatically, without needing to manually configure stealth settings or manage browser fingerprints.
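The built-in parallelism can be pictured as fanning a list of URLs out over a bounded worker pool. The sketch below uses only Python's standard library to illustrate what a `parallel=N` option does under the hood; the names `scrape_one` and `run_parallel` are illustrative, not part of Botasaurus's API:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_one(url):
    # Stand-in for a per-URL scraping task
    return {"url": url, "status": "ok"}

def run_parallel(urls, parallel=3):
    """Run scrape_one over many URLs with at most `parallel` workers,
    similar in spirit to a parallel=N decorator option."""
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        # map preserves input order in the returned results
        return list(pool.map(scrape_one, urls))

results = run_parallel(
    [f"https://example.com/page/{i}" for i in range(1, 6)]
)
```

Bounding the pool size matters for scraping: each worker may hold a browser instance, so concurrency is a trade-off between throughput and memory.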
Highlights
anti-detect · stealth · large-scale
Example Use
```javascript
const ayakashi = require("ayakashi");

(async function() {
    const myAyakashi = ayakashi.init();
    // navigate the browser
    await myAyakashi.goTo("https://example.com/product");
    // parsing HTML:
    // first by defining a selector
    myAyakashi
        .select("productList")
        .where({class: {eq: "product-item"}});
    // then executing the selector on the current HTML
    const productList = await myAyakashi.extract("productList");
    console.log(productList);
})();
```
```python
from botasaurus.browser import browser, Driver
from botasaurus.request import request, Request

# Browser-based scraping with anti-detection
@browser(parallel=3, cache=True)
def scrape_products(driver: Driver, url: str):
    driver.get(url)
    # Wait for content to load
    driver.wait_for_element(".product-list")
    # Extract product data
    products = []
    for el in driver.select_all(".product-card"):
        products.append({
            "name": el.select(".product-name").text,
            "price": el.select(".product-price").text,
            "url": el.select("a").get_attribute("href"),
        })
    return products

# HTTP-based scraping (no browser needed)
@request(parallel=5, cache=True)
def scrape_api(req: Request, url: str):
    response = req.get(url)
    return response.json()

# Run the scraper
results = scrape_products(
    ["https://example.com/page/1", "https://example.com/page/2"]
)
```
Alternatives / Similar
katana, puppeteer-extra, crawl4ai, scrapling, crawlee, mechanize, scrapegraphai, botasaurus, goutte, kimurai, firecrawl