
PycURL vs Mechanize

|                   | PycURL                  | Mechanize               |
|-------------------|-------------------------|-------------------------|
| License           | LGPL-2.1                | MIT                     |
| GitHub stars      | 1,147                   | 4,440                   |
| First release     | Feb 25 2003             | Jul 25 2009             |
| Latest version    | 7.45.7 (2025-09-24)     | 2.14.0 (2025-01-05)     |
| Downloads (month) | 5.2 million             | 213.1 thousand          |

PycURL is a Python interface to libcurl, a multi-protocol file transfer library written in C. PycURL allows developers to use a variety of network protocols in their Python programs, including HTTP, FTP, SMTP, POP3, and many more.

PycURL is often used in web scraping, data analysis, and automation tasks, as it allows developers to send and receive data over the internet. It can be used to perform various types of requests, such as GET, POST, PUT, and DELETE, and can also handle file uploads and downloads, cookies, and redirects.

One of the key features of PycURL is its support for SSL and proxy servers, which allows developers to securely transfer data over the internet and work around any network restrictions. PycURL also supports a wide range of authentication methods, such as Basic, Digest, and NTLM, and allows developers to easily set custom headers and query parameters.
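As a minimal sketch of the authentication and header options, the snippet below configures a handle for HTTP Basic auth and custom request headers; the URL, credentials, and header values are placeholders, and `perform()` is left commented out so nothing is sent:

```python
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/basic-auth/user/passwd')  # placeholder URL
# Pick an authentication scheme; libcurl also supports HTTPAUTH_DIGEST and HTTPAUTH_NTLM
c.setopt(c.HTTPAUTH, c.HTTPAUTH_BASIC)
c.setopt(c.USERPWD, 'user:passwd')
# Custom request headers are passed as a list of "Name: value" strings
c.setopt(c.HTTPHEADER, ['X-Api-Key: example-key', 'Accept: application/json'])
c.setopt(c.WRITEFUNCTION, buf.write)
# c.perform() would send the request; omitted to keep the sketch offline
c.close()
```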

Just like cURL itself, PycURL is highly configurable and allows fine-grained control over many aspects of the transfer, such as timeouts, redirect limits, buffer sizes, and verbosity levels. Because PycURL exposes the underlying libcurl options directly, developers can also reach advanced functionality that higher-level Python HTTP clients do not offer.
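The transfer-control options above can be sketched as follows; the values are illustrative, and `perform()` is commented out so the example stays offline:

```python
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/get')  # placeholder URL
c.setopt(c.CONNECTTIMEOUT, 5)      # seconds allowed for connection setup
c.setopt(c.TIMEOUT, 30)            # hard limit for the whole transfer
c.setopt(c.LOW_SPEED_LIMIT, 1000)  # abort if slower than 1000 bytes/s ...
c.setopt(c.LOW_SPEED_TIME, 10)     # ... for 10 consecutive seconds
c.setopt(c.BUFFERSIZE, 102400)     # preferred receive buffer size in bytes
c.setopt(c.VERBOSE, True)          # log protocol details to stderr
c.setopt(c.WRITEFUNCTION, buf.write)
# c.perform() would run the transfer under these limits; omitted here
c.close()
```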

It's important to note that PycURL is a wrapper around the libcurl library and therefore provides the same functionality and performance as libcurl.

PycURL's main strength is that it builds on cURL, one of the most feature-rich low-level HTTP clients. The downside is that it is a very low-level client (see the examples below) with a complex API, which makes it difficult and niche to use for web scraping.

Mechanize is a Ruby library for automating interaction with websites. It automatically stores and sends cookies, follows redirects, and can submit forms — making it behave like a web browser without needing an actual browser engine.

Key features include:

  • Automatic cookie management: Stores cookies received from servers and sends them back on subsequent requests, maintaining session state across multiple pages.
  • Form handling: Can find, fill in, and submit HTML forms programmatically. Supports text inputs, selects, checkboxes, radio buttons, and file uploads.
  • Link following: Navigate through pages by clicking links using their text content, CSS selectors, or href patterns.
  • History and back/forward: Maintains a browsing history, allowing you to go back and forward through visited pages.
  • HTTP authentication: Supports basic and digest HTTP authentication.
  • Proxy support: Can route requests through HTTP proxies.
  • Redirect handling: Automatically follows HTTP redirects (configurable).

Mechanize is one of the oldest and most established web interaction libraries in Ruby. It is best suited for scraping traditional server-rendered websites with forms and multi-page workflows. For JavaScript-heavy sites, a browser automation tool like Selenium or Playwright is recommended instead.

Highlights


uses-curl, http2, multi-part, response-streaming, http-proxy
popular, production

Example Use


```python
import pycurl
from io import BytesIO

buf = BytesIO()
headers = BytesIO()
c = pycurl.Curl()
c.setopt(c.HTTP_VERSION, c.CURL_HTTP_VERSION_2_0)  # set to use http2
# set proxy
c.setopt(c.PROXY, 'http://proxy.example.com:8080')
c.setopt(c.PROXYUSERNAME, 'username')
c.setopt(c.PROXYPASSWORD, 'password')
# make a request
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.WRITEFUNCTION, buf.write)  # where to save response body
c.setopt(c.HEADERFUNCTION, headers.write)  # where to save response headers
# to make a POST request, enable the POST option:
# c.setopt(c.POST, 1)
# c.setopt(c.POSTFIELDS, 'key1=value1&key2=value2')
c.perform()  # send request
# read response
data = buf.getvalue().decode()
headers = headers.getvalue().decode()  # headers as a string
headers = dict([h.split(': ') for h in headers.splitlines() if ': ' in h])  # headers as a dict
c.close()

# multiple concurrent requests can be made using a CurlMulti object:
multi = pycurl.CurlMulti()
# Set the maximum number of cached connections (note the M_ prefix for multi options)
multi.setopt(pycurl.M_MAXCONNECTS, 5)

# Create a list to store the Curl objects
curls = []

# Add the first request
c1 = pycurl.Curl()
c1.setopt(c1.URL, 'https://httpbin.org/get')
c1.setopt(c1.WRITEFUNCTION, BytesIO().write)
multi.add_handle(c1)
curls.append(c1)

# Add the second request
c2 = pycurl.Curl()
c2.setopt(c2.URL, 'https://httpbin.org/')
c2.setopt(c2.WRITEFUNCTION, BytesIO().write)
multi.add_handle(c2)
curls.append(c2)

# Start the requests
while True:
    ret, num_handles = multi.perform()
    if ret != pycurl.E_CALL_MULTI_PERFORM:
        break

# Wait for the transfers to finish
while num_handles:
    multi.select(1.0)
    while True:
        ret, num_handles = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break

# Close the connections
for c in curls:
    multi.remove_handle(c)
    c.close()
```
```ruby
require 'mechanize'

agent = Mechanize.new

# Navigate to a page
page = agent.get('https://example.com')
puts page.title

# Find and click a link
page = page.link_with(text: 'Products').click

# Extract data from the page
page.search('.product').each do |product|
  name = product.at('.name').text
  price = product.at('.price').text
  puts "#{name}: #{price}"
end

# Fill in and submit a login form
login_page = agent.get('https://example.com/login')
form = login_page.form_with(action: '/login')
form['username'] = 'user@example.com'
form['password'] = 'password123'
dashboard = agent.submit(form)

# Cookies are maintained automatically
puts dashboard.title # "Dashboard"

# Download a file
agent.get('https://example.com/report.csv').save('report.csv')
```
