
ScrapeGraphAI vs Browser-use

ScrapeGraphAI: MIT license, 23,278 GitHub stars, 59.6 thousand downloads per month, latest version 1.76.0 (first release Jan 15 2024)
Browser-use: MIT license, 87,251 GitHub stars, 8.9 million downloads per month, latest version 0.12.6 (first release Nov 01 2024)

ScrapeGraphAI is a Python library that uses large language models (LLMs) to create web scraping pipelines automatically. Instead of writing CSS selectors or XPath expressions, you describe what data you want in natural language and provide a Pydantic schema — the library handles the rest.

Key features include:

  • Natural language extraction: Describe what you want to extract in plain English (e.g., "Extract all product names and prices") and the LLM figures out how to find and extract the data.
  • Pydantic schema output: Define the expected output structure using Pydantic models for type-safe, validated extraction results.
  • Graph-based pipeline: Built on a directed graph architecture where each node performs a specific task (fetching, parsing, extracting, merging). This makes pipelines modular and debuggable.
  • Multiple graph types: SmartScraperGraph (single page), SearchGraph (search + scrape), SpeechGraph (audio output), and more specialized pipelines (a minimal SearchGraph sketch follows this list).
  • Multiple LLM providers: Works with OpenAI, Anthropic, Google, Groq, local models via Ollama, and more.
  • HTML and JSON support: Can extract data from both HTML pages and JSON API responses.
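
The graph types above share a common constructor. Below is a minimal sketch of SearchGraph, which runs a web search for the prompt and then scrapes the results; the prompt, model name, and API key are placeholders, and the config is assumed to follow the same shape as the SmartScraperGraph example in the Example Use section.

```python
from scrapegraphai.graphs import SearchGraph

# Same config shape as SmartScraperGraph; model name and API key are placeholders
graph_config = {
    "llm": {
        "model": "openai/gpt-4o",
        "api_key": "YOUR_API_KEY",
    },
}

# SearchGraph searches the web for the prompt, then scrapes the result pages
search_graph = SearchGraph(
    prompt="List the top Python web scraping libraries and their main use cases",
    config=graph_config,
)

result = search_graph.run()
print(result)
```

Swapping one graph class for another, rather than rewriting the pipeline, is the main payoff of the graph-based design.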

ScrapeGraphAI is particularly useful for rapid prototyping of scrapers and for extracting data from pages with complex or frequently changing layouts where traditional selectors would be brittle.

Browser-use is a Python library that enables AI agents to control web browsers using natural language instructions. It connects large language models (LLMs) to browser automation, allowing you to describe what you want done in plain English instead of writing explicit selectors and interaction code.

Key features include:

  • Natural language browser control: Describe tasks like "go to Amazon and find the cheapest laptop under $500" and the AI agent will navigate, interact with elements, and extract the requested information.
  • Multi-step task execution: Can handle complex workflows that require multiple pages, form filling, clicking, scrolling, and waiting for dynamic content.
  • Vision support: Uses screenshot analysis (multimodal LLMs) to understand page layout and find elements visually, not just through DOM inspection.
  • Multiple LLM providers: Works with OpenAI, Anthropic Claude, Google Gemini, and other LLM providers.
  • Playwright backend: Uses Playwright under the hood for reliable browser automation across Chrome, Firefox, and Safari.
  • Structured output: Can return extracted data in structured formats defined by Pydantic models (see the sketch after this list).
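
To make the structured-output feature concrete, here is a sketch that pairs a Pydantic model with a Controller so the agent returns validated data instead of free text. It assumes the Controller(output_model=...) pattern and the history.final_result() accessor described in the browser-use documentation; the API has shifted between versions, so treat the exact names as illustrative.

```python
import asyncio
from typing import List

from pydantic import BaseModel
from langchain_openai import ChatOpenAI
from browser_use import Agent, Controller

# Output schema the agent is asked to fill in
class Post(BaseModel):
    title: str
    score: int

class Posts(BaseModel):
    posts: List[Post]

# The Controller tells the agent which structure to return
controller = Controller(output_model=Posts)

async def main():
    agent = Agent(
        task="Go to reddit.com/r/webscraping and extract the top 5 posts "
             "with their titles and scores",
        llm=ChatOpenAI(model="gpt-4o"),
        controller=controller,
    )
    history = await agent.run()

    # final_result() returns a JSON string matching the output model (or None)
    raw = history.final_result()
    if raw:
        posts = Posts.model_validate_json(raw)
        for post in posts.posts:
            print(f"{post.title}: {post.score}")

asyncio.run(main())
```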

Browser-use represents a different paradigm for web scraping: instead of writing brittle selectors, you describe the extraction task and let the AI agent figure out how to navigate and extract the data. This is especially useful for scraping diverse sites with varying layouts.

Highlights


ScrapeGraphAI: ai-powered, popular
Browser-use: ai-powered, natural-language, async

Example Use


```python
from scrapegraphai.graphs import SmartScraperGraph
from pydantic import BaseModel, Field
from typing import List

# Define the output schema
class Product(BaseModel):
    name: str = Field(description="Product name")
    price: float = Field(description="Price in USD")
    rating: float = Field(description="Customer rating out of 5")

class ProductList(BaseModel):
    products: List[Product]

# Create a scraping graph with a natural language instruction
graph = SmartScraperGraph(
    prompt="Extract all products with their names, prices, and ratings",
    source="https://example.com/products",
    schema=ProductList,
    config={
        "llm": {
            "model": "openai/gpt-4o",
            "api_key": "YOUR_API_KEY",
        },
    },
)

# Run the graph
result = graph.run()
for product in result["products"]:
    print(f"{product['name']}: ${product['price']} ({product['rating']}/5)")
```
```python
from browser_use import Agent
from langchain_openai import ChatOpenAI
import asyncio

async def main():
    # Create an AI agent with a language model
    agent = Agent(
        task="Go to reddit.com/r/webscraping, find the top 5 posts "
             "from today, and extract their titles and scores",
        llm=ChatOpenAI(model="gpt-4o"),
    )

    # Run the agent - it navigates and extracts automatically
    result = await agent.run()
    print(result)

    # More complex multi-step task
    agent = Agent(
        task="Go to example.com/login, log in with user@test.com "
             "and password 'test123', then navigate to the dashboard "
             "and extract all notification messages",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    result = await agent.run()
    print(result)

asyncio.run(main())
```
