
treq vs mechanize

treq: NOASSERTION license · 212.8 thousand downloads/month · first released Dec 28 2012 · latest version 25.5.0 (2025-06-03)

mechanize: MIT license · 213.1 thousand downloads/month · first released Jul 25 2009 · latest version 2.14.0 (2025-01-05)

treq is a Python library for making HTTP requests that provides a simple, convenient API for interacting with web services. It is inspired by the popular requests library, but powered by Twisted's asynchronous engine, which enables promise-based concurrency.

treq provides a simple, high-level API for making HTTP requests, including methods for GET, POST, PUT, DELETE, etc. It also allows for easy handling of JSON data, automatic decompression of gzipped responses, and connection pooling.

treq is a lightweight, easy-to-use library, making it a good choice for small to medium-sized projects where ease of use matters more than raw performance.

In web scraping treq isn't commonly used, as it doesn't support HTTP/2, but it is the only Twisted-based HTTP client. treq is also based on callback/errback Deferreds (like Scrapy), which can be easier to understand and maintain than asyncio's coroutines.

Mechanize is a Ruby library for automating interaction with websites. It automatically stores and sends cookies, follows redirects, and can submit forms — making it behave like a web browser without needing an actual browser engine.

Key features include:

  • Automatic cookie management: Stores cookies received from servers and sends them back on subsequent requests, maintaining session state across multiple pages.
  • Form handling: Can find, fill in, and submit HTML forms programmatically. Supports text inputs, selects, checkboxes, radio buttons, and file uploads.
  • Link following: Navigate through pages by clicking links using their text content, CSS selectors, or href patterns.
  • History and back/forward: Maintains a browsing history, allowing you to go back and forward through visited pages.
  • HTTP authentication: Supports basic and digest HTTP authentication.
  • Proxy support: Can route requests through HTTP proxies.
  • Redirect handling: Automatically follows HTTP redirects (configurable).
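Two of the features above, HTTP authentication and history navigation, are not shown in the example further down, so here is a minimal sketch. The URLs and credentials are placeholders; `Mechanize#add_auth` and `Mechanize#back` are real API methods, while `browse_with_history` is just an illustrative wrapper.

```ruby
require 'mechanize'

# Sketch: Basic auth plus history navigation (URLs/credentials are placeholders)
def browse_with_history(base = 'https://example.com')
  agent = Mechanize.new
  # Register Basic auth credentials to be sent when this site challenges us
  agent.add_auth(base, 'user', 'secret')

  agent.get(base)
  agent.get("#{base}/about")
  # back pops the previously visited page off the agent's history
  previous = agent.back
  puts previous.uri
  previous
end

browse_with_history if $PROGRAM_NAME == __FILE__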

Mechanize is one of the oldest and most established web interaction libraries in Ruby. It is best suited for scraping traditional server-rendered websites with forms and multi-page workflows. For JavaScript-heavy sites, a browser automation tool like Selenium or Playwright is recommended instead.

Highlights


uses-twisted · no-http2
popular · production

Example Use


```python
from twisted.internet import reactor
from twisted.internet.task import react
from twisted.internet.defer import ensureDeferred
import treq

# treq can be used with Twisted's reactor with callbacks
response_deferred = treq.get("http://httpbin.org/get")

# or POST
response_deferred = treq.post(
    "http://httpbin.org/post",
    json={"key": "value"},  # JSON
    data={"key": "value"},  # form data
)

# add callback or errback
def handle_response(response):
    print(response.code)
    response.text().addCallback(lambda body: print(body))

def handle_error(failure):
    print(failure)

# this callback will be called when the request completes:
response_deferred.addCallback(handle_response)
# this errback will be called if the request fails:
response_deferred.addErrback(handle_error)
# this will be called whether the request completes or fails:
response_deferred.addBoth(lambda _: reactor.stop())  # stop Twisted once finished

if __name__ == '__main__':
    reactor.run()

# Note that treq can also be used with async/await
# (run this variant on its own -- the reactor cannot be restarted):
async def main():
    # content() reads the response body and get() sends a GET request:
    print(await treq.content(await treq.get("https://example.com/")))

if __name__ == '__main__':
    react(lambda reactor: ensureDeferred(main()))
```
```ruby
require 'mechanize'

agent = Mechanize.new

# Navigate to a page
page = agent.get('https://example.com')
puts page.title

# Find and click a link
page = page.link_with(text: 'Products').click

# Extract data from the page
page.search('.product').each do |product|
  name = product.at('.name').text
  price = product.at('.price').text
  puts "#{name}: #{price}"
end

# Fill in and submit a login form
login_page = agent.get('https://example.com/login')
form = login_page.form_with(action: '/login')
form['username'] = 'user@example.com'
form['password'] = 'password123'
dashboard = agent.submit(form)

# Cookies are maintained automatically
puts dashboard.title # "Dashboard"

# Download a file
agent.get('https://example.com/report.csv').save('report.csv')
```
