Gracy vs Mechanize
Gracy is an API client library based on httpx that provides an extra stability layer with:
- Retry logic
- Logging
- Connection throttling
- Tracking/Middleware
In web scraping, Gracy can be a convenient tool for creating scraper-based API clients.
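Retries, for example, are declared once in the client configuration rather than wrapped around individual calls. A minimal sketch, assuming Gracy's `GracefulRetry` settings object as shown in its README (parameter names and defaults may differ between versions, so verify against the release you install):

```python
# Hedged sketch: a Gracy client with retries enabled in its configuration.
# Parameter names follow Gracy's README and may change between versions.
from gracy import Gracy, GracyConfig, GracefulRetry, LogEvent, LogLevel

retry = GracefulRetry(
    delay=1,              # wait 1s before the first retry
    max_attempts=3,       # give up after 3 failed attempts
    delay_modifier=1.5,   # back off progressively: 1s, 1.5s, 2.25s, ...
    retry_on=None,        # None = retry on any unsuccessful status code
    log_before=None,
    log_after=LogEvent(LogLevel.WARNING),
    log_exhausted=LogEvent(LogLevel.CRITICAL),
    behavior="pass",      # keep going instead of raising once retries run out
)

class StableAPI(Gracy[str]):
    class Config:
        BASE_URL = "https://example.com/api/"  # hypothetical base URL
        SETTINGS = GracyConfig(retry=retry)    # applied to every request
```

Connection throttling is configured in the same declarative style through Gracy's `GracefulThrottle` settings object.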
Mechanize is a Ruby library for automating interaction with websites. It automatically stores and sends cookies, follows redirects, and can submit forms — making it behave like a web browser without needing an actual browser engine.
Key features include:
- Automatic cookie management: Stores cookies received from servers and sends them back on subsequent requests, maintaining session state across multiple pages.
- Form handling: Can find, fill in, and submit HTML forms programmatically. Supports text inputs, selects, checkboxes, radio buttons, and file uploads.
- Link following: Navigate through pages by clicking links using their text content, CSS selectors, or href patterns.
- History and back/forward: Maintains a browsing history, allowing you to go back and forward through visited pages.
- HTTP authentication: Supports basic and digest HTTP authentication.
- Proxy support: Can route requests through HTTP proxies.
- Redirect handling: Automatically follows HTTP redirects (configurable).
Mechanize is one of the oldest and most established web interaction libraries in Ruby. It is best suited for scraping traditional server-rendered websites with forms and multi-page workflows. For JavaScript-heavy sites, a browser automation tool like Selenium or Playwright is recommended instead.
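The example below walks through forms, links, and cookies; HTTP authentication, proxy routing, and history navigation look roughly like this (a sketch against Mechanize's agent API; the proxy host, URLs, and credentials are placeholders):

```ruby
require 'mechanize'

agent = Mechanize.new

# Route all requests through an HTTP proxy (placeholder host/port)
agent.set_proxy('proxy.example.com', 8080)

# Register credentials for HTTP basic/digest authentication
agent.add_auth('https://example.com/private/', 'user', 'secret')

# Visit a couple of pages...
agent.get('https://example.com/')
agent.get('https://example.com/products')

# ...then step back through the history like a browser
previous_page = agent.back
puts previous_page.uri    # the page visited before /products
puts agent.history.size   # number of pages visited so far
```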
Highlights
popular, production
Example Use
```python
# 0. Import
import asyncio

import httpx
from gracy import BaseEndpoint, Gracy, GracyConfig, LogEvent, LogLevel

# 1. Define your endpoints
class PokeApiEndpoint(BaseEndpoint):
    GET_POKEMON = "/pokemon/{NAME}"  # 👈 Put placeholders as needed

# 2. Define your Graceful API
class GracefulPokeAPI(Gracy[str]):
    class Config:  # type: ignore
        BASE_URL = "https://pokeapi.co/api/v2/"  # 👈 Optional BASE_URL
        # 👇 Define settings to apply for every request
        SETTINGS = GracyConfig(
            log_request=LogEvent(LogLevel.DEBUG),
            log_response=LogEvent(LogLevel.INFO, "{URL} took {ELAPSED}"),
            parser={
                "default": lambda r: r.json()
            },
        )

    async def get_pokemon(self, name: str) -> dict:
        return await self.get(PokeApiEndpoint.GET_POKEMON, {"NAME": name})

    # Note: since Gracy is based on httpx, we can customize the underlying
    # client with custom headers etc.
    def _create_client(self) -> httpx.AsyncClient:
        client = super()._create_client()
        client.headers = {"User-Agent": "My Scraper"}
        return client

pokeapi = GracefulPokeAPI()

async def main():
    try:
        pokemon = await pokeapi.get_pokemon("pikachu")
        print(pokemon)
    finally:
        pokeapi.report_status("rich")

asyncio.run(main())
```
```ruby
require 'mechanize'
agent = Mechanize.new
# Navigate to a page
page = agent.get('https://example.com')
puts page.title
# Find and click a link
page = page.link_with(text: 'Products').click
# Extract data from the page
page.search('.product').each do |product|
  name = product.at('.name').text
  price = product.at('.price').text
  puts "#{name}: #{price}"
end
# Fill in and submit a login form
login_page = agent.get('https://example.com/login')
form = login_page.form_with(action: '/login')
form['username'] = 'user@example.com'
form['password'] = 'password123'
dashboard = agent.submit(form)
# Cookies are maintained automatically
puts dashboard.title # "Dashboard"
# Download a file
agent.get('https://example.com/report.csv').save('report.csv')
```
Alternatives / Similar
- katana
- primp
- crawl4ai
- scrapling
- crawlee
- mechanize
- scrapegraphai
- botasaurus
- goutte
- kimurai
- firecrawl