
dude vs Firecrawl

dude: AGPL-3.0 license, first release Feb 20 2022, latest version 0.1.3 (released 2023-08-01). Firecrawl: repository dates from Apr 2024.

Dude (dude uncomplicated data extraction) is a very simple framework for writing web scrapers using Python decorators. Its design, inspired by Flask, makes it possible to build a web scraper in just a few lines of code, and the syntax is easy to learn.

The simplest web scraper will look like this:

```python
from dude import select


@select(css="a")
def get_link(element):
    return {"url": element.get_attribute("href")}
```

dude supports multiple parser backends, selected at run time (see the sketch after this list):

- playwright
- lxml
- parsel
- beautifulsoup
- pyppeteer
- selenium
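
The backend is chosen when the scraper is run: `dude.run` takes a `parser` argument, as in the minimal sketch below. The `"parsel"` value is taken from the Parsel example under Example Use; the other backend identifiers (e.g. for beautifulsoup or selenium) are assumed to follow the same mechanism and may differ in spelling.

```python
from dude import select


@select(css="a.url::attr(href)")
def get_link(selector):
    # With the Parsel backend, handlers receive a Parsel selector;
    # .get() returns the first matched value (here, the href attribute).
    return {"url": selector.get()}


if __name__ == "__main__":
    import dude

    # "parsel" mirrors the Example Use section below; swapping in another
    # backend name from the list above is assumed to work the same way.
    dude.run(urls=["https://dude.ron.sh"], parser="parsel")
```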

Firecrawl is an AI-powered web scraping API that converts web pages into clean Markdown or structured data, optimized for use with large language models (LLMs) and retrieval-augmented generation (RAG) pipelines. It handles JavaScript rendering, anti-bot bypass, and content extraction automatically.

Firecrawl offers multiple modes:

  • Scrape: Convert a single URL into clean Markdown, HTML, or structured data. Handles JavaScript rendering and anti-bot protections automatically.
  • Crawl: Crawl an entire website starting from a URL, with configurable depth, URL patterns, and page limits. Returns all pages as clean Markdown.
  • Map: Quickly discover all URLs on a website without fully scraping each page. Useful for sitemap generation and crawl planning.
  • Extract: Use LLMs to extract specific structured data from pages based on a schema definition.

Key features:

  • Clean Markdown output ideal for LLM context windows
  • Automatic JavaScript rendering with headless browsers
  • Built-in anti-bot bypass for protected websites
  • Structured extraction with JSON schemas
  • Batch crawling with webhook notifications
  • Python and JavaScript SDKs

Firecrawl is a commercial API service (requires API key, has a free tier) backed by Y Combinator. It has become one of the most popular tools for feeding web content into AI applications and is widely used in the LLM/RAG ecosystem.

Note: while the primary service is an API, the core is open source and can be self-hosted.
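
For self-hosting, a minimal sketch of pointing the Python SDK at a local deployment; the `api_url` argument and the `http://localhost:3002` endpoint are assumptions here, so check the repository's self-hosting guide for the actual configuration.

```python
from firecrawl import FirecrawlApp

# Assumption: FirecrawlApp accepts an api_url for non-default endpoints,
# and a self-hosted instance is listening on http://localhost:3002.
app = FirecrawlApp(api_key="self-hosted-key", api_url="http://localhost:3002")

result = app.scrape_url("https://example.com")
print(result["markdown"])  # same response shape as the hosted API
```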

Highlights


ai-powered · popular · async

Example Use


```python
from dude import select

"""
This example demonstrates how to use Parsel + async HTTPX

To access an attribute, use:
    selector.attrib["href"]
You can also access an attribute using the ::attr(name) pseudo-element,
for example "a::attr(href)", then:
    selector.get()
To get the text, use the ::text pseudo-element, then:
    selector.get()
"""


@select(css="a.url", priority=2)
async def result_url(selector):
    return {"url": selector.attrib["href"]}


# Option to get url using ::attr(name) pseudo-element
@select(css="a.url::attr(href)", priority=2)
async def result_url2(selector):
    return {"url2": selector.get()}


@select(css=".title::text", priority=1)
async def result_title(selector):
    return {"title": selector.get()}


@select(css=".description::text", priority=0)
async def result_description(selector):
    return {"description": selector.get()}


if __name__ == "__main__":
    import dude

    dude.run(urls=["https://dude.ron.sh"], parser="parsel")
```
```python
from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key="YOUR_API_KEY")

# Scrape a single page - get clean markdown
result = app.scrape_url("https://example.com/blog/article")
print(result["markdown"])  # clean markdown content

# Extract structured data with a schema
result = app.scrape_url(
    "https://example.com/product/123",
    params={
        "formats": ["extract"],
        "extract": {
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "number"},
                    "description": {"type": "string"},
                },
            }
        },
    },
)
print(result["extract"])  # {"name": "...", "price": 29.99, ...}

# Crawl an entire website
crawl_result = app.crawl_url(
    "https://example.com",
    params={"limit": 100, "scrapeOptions": {"formats": ["markdown"]}},
)
for page in crawl_result["data"]:
    print(page["metadata"]["title"], page["markdown"][:100])

# Map all URLs on a site
map_result = app.map_url("https://example.com")
print(f"Found {len(map_result['links'])} URLs")
```
