
PycURL vs treq

PycURL: LGPL-2.1 license, 1,058 GitHub stars, ~1.2 million downloads/month, first released Feb 25 2003, latest version 7.45.3 (4 months ago)
treq: NOASSERTION license, 585 GitHub stars, ~93.2 thousand downloads/month, first released Dec 28 2012, latest version 23.11.0 (8 months ago)

PycURL is a Python interface to libcurl, a multi-protocol file transfer library written in C. PycURL allows developers to use a variety of network protocols in their Python programs, including HTTP, FTP, SMTP, POP3, and many more.

PycURL is often used in web scraping, data analysis, and automation tasks, as it allows developers to send and receive data over the internet. It can be used to perform various types of requests, such as GET, POST, PUT, and DELETE, and can also handle file uploads and downloads, cookies, and redirects.
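
For example, following redirects and persisting cookies only takes a couple of extra options. A minimal sketch (the httpbin.org URL and the cookies.txt file name are placeholders used for illustration):

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/cookies/set/foo/bar')
c.setopt(c.FOLLOWLOCATION, True)       # follow 3xx redirects
c.setopt(c.MAXREDIRS, 5)               # but not indefinitely
c.setopt(c.COOKIEFILE, 'cookies.txt')  # read cookies from this file (enables the cookie engine)
c.setopt(c.COOKIEJAR, 'cookies.txt')   # write received cookies back to it on close
c.setopt(c.WRITEFUNCTION, buf.write)
c.perform()
c.close()
print(buf.getvalue().decode())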

One of the key features of PycURL is its support for SSL and proxy servers, which allows developers to securely transfer data over the internet and work around any network restrictions. PycURL also supports a wide range of authentication methods, such as Basic, Digest, and NTLM, and allows developers to easily set custom headers and query parameters.
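
For instance, HTTP authentication and custom headers are set through the corresponding libcurl options. A minimal sketch (the URL, credentials, and header values are placeholders):

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/digest-auth/auth/user/passwd')
c.setopt(c.HTTPAUTH, c.HTTPAUTH_DIGEST)  # or HTTPAUTH_BASIC, HTTPAUTH_NTLM, HTTPAUTH_ANY
c.setopt(c.USERPWD, 'user:passwd')       # credentials as "username:password"
c.setopt(c.HTTPHEADER, ['Accept: application/json', 'X-Client: my-scraper'])  # custom headers
c.setopt(c.WRITEFUNCTION, buf.write)
c.perform()
print(c.getinfo(c.RESPONSE_CODE))        # 200 once authentication succeeds
c.close()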

Just like cURL itself, PycURL is highly configurable and allows for fine-grained control over various aspects of the transfer, such as timeouts, low-speed limits, buffer sizes, and verbosity levels. Additionally, PycURL provides easy access to the underlying libcurl library, which lets developers reach advanced functionality that is not exposed by the PycURL API.
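
A rough sketch of that fine-grained configuration (the option values below are arbitrary examples, not recommendations):

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.CONNECTTIMEOUT, 5)        # seconds allowed to establish the connection
c.setopt(c.TIMEOUT, 30)              # seconds allowed for the whole transfer
c.setopt(c.LOW_SPEED_LIMIT, 1000)    # abort if slower than 1000 bytes/s ...
c.setopt(c.LOW_SPEED_TIME, 10)       # ... for 10 seconds
c.setopt(c.BUFFERSIZE, 16 * 1024)    # preferred receive buffer size in bytes
c.setopt(c.VERBOSE, True)            # print protocol-level details to stderr
c.setopt(c.WRITEFUNCTION, buf.write)
c.perform()
c.close()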

It's important to note that PycURL is a wrapper around the libcurl library and therefore provides the same functionality and performance as libcurl.

The main strength of PycURL is that it uses cURL, one of the most feature-rich low-level HTTP clients. The downside is that it is a very low-level client (see the examples below) with a complex API, which makes it difficult and niche to use in web scraping.

treq is a Python library for making HTTP requests that provides a simple, convenient API for interacting with web services. It is inspired by the popular requests library, but powered by the Twisted asynchronous engine, which allows promise-based concurrency.

treq provides a simple, high-level API for making HTTP requests, including methods for GET, POST, PUT, DELETE, etc. It also allows for easy handling of JSON data, automatic decompression of gzipped responses, and connection pooling.
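
A minimal sketch of that high-level API using async/await (httpbin.org is just a placeholder endpoint):

from twisted.internet.defer import ensureDeferred
from twisted.internet.task import react
import treq

async def main():
    # GET request; decode the response body as JSON
    response = await treq.get("https://httpbin.org/json")
    print(response.code)
    print(await response.json())

    # POST a JSON body
    response = await treq.post("https://httpbin.org/post", json={"key": "value"})
    print(await treq.json_content(response))

react(lambda reactor: ensureDeferred(main()))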

treq is a lightweight library that is easy to use, making it a good choice for small to medium-sized projects where ease of use is more important than performance.

In web scraping treq isn't commonly used as it doesn't support HTTP/2, but it is the only Twisted-based HTTP client. treq is also based on callback/errback promises (like Scrapy), which can be easier to understand and maintain compared to asyncio's coroutines; both styles are shown in the example below.

Highlights


PycURL: uses-curl, http2, multi-part, response-streaming, http-proxy
treq: uses-twisted, no-http2

Example Use


PycURL example:

import pycurl
from io import BytesIO

buf = BytesIO()
headers = BytesIO()

c = pycurl.Curl()
c.setopt(c.HTTP_VERSION, c.CURL_HTTP_VERSION_2_0)  # set to use http2
# set proxy
c.setopt(c.PROXY, 'http://proxy.example.com:8080') 
c.setopt(c.PROXYUSERNAME, 'username')
c.setopt(c.PROXYPASSWORD, 'password')

# make a request
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.WRITEFUNCTION, buf.write)  # where to save response body
c.setopt(c.HEADERFUNCTION, headers.write)  # where to save response headers
# to make post request enable POST option:
# c.setopt(c.POST, 1)
# c.setopt(c.POSTFIELDS, 'key1=value1&key2=value2')
c.perform()  # send request

# read response
data = buf.getvalue().decode()
headers = headers.getvalue().decode()  # headers as a string
headers = dict([h.split(': ', 1) for h in headers.splitlines() if ': ' in h])  # headers as a dict
c.close()

# multiple concurrent requests can be made using CurlMulti object:
# Create a CurlMulti object
multi = pycurl.CurlMulti()
# Set the number of maximum connections
multi.setopt(pycurl.M_MAXCONNECTS, 5)

# Create a list to store the Curl objects
curls = []

# Add the first request
buf1 = BytesIO()  # keep a reference so the response body can be read later
c1 = pycurl.Curl()
c1.setopt(c1.URL, 'https://httpbin.org/get')
c1.setopt(c1.WRITEFUNCTION, buf1.write)
multi.add_handle(c1)
curls.append(c1)

# Add the second request
buf2 = BytesIO()
c2 = pycurl.Curl()
c2.setopt(c2.URL, 'https://httpbin.org/')
c2.setopt(c2.WRITEFUNCTION, buf2.write)
multi.add_handle(c2)
curls.append(c2)

# Start the requests and run them until all transfers complete
while True:
    ret, num_handles = multi.perform()
    if ret != pycurl.E_CALL_MULTI_PERFORM:
        break
while num_handles:
    multi.select(1.0)  # wait for socket activity
    while True:
        ret, num_handles = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break

# Close the connections
for c in curls:
    multi.remove_handle(c)
    c.close()

treq example:

from twisted.internet import reactor
from twisted.internet.task import react
from twisted.internet.defer import ensureDeferred
import treq

# treq can be used with twisted's reactor with callbacks
response_deferred = treq.get(
    "http://httpbin.org/get"
)
# or POST
response_deferred = treq.post(
    "http://httpbin.org/post",
    json={"key": "value"},  # JSON body
    # data={"key": "value"},  # or form data (json= and data= are mutually exclusive)
)

# add callback or errback
def handle_response(response):
    print(response.code)
    response.text().addCallback(lambda body: print(body))
def handle_error(failure):
    print(failure)
# this callback will be called when request completes:
response_deferred.addCallback(handle_response)
# this errback will be called if request fails
response_deferred.addErrback(handle_error)
# this will be called if request completes or fails:
response_deferred.addBoth(lambda _: reactor.stop())  # close twisted once finished

if __name__ == '__main__':
    reactor.run()

# Note that treq can also be used with async/await:
async def main():
    # content reads response data and get sends a get request:
    print(await treq.content(await treq.get("https://example.com/")))

if __name__ == '__main__':
    react(lambda reactor: ensureDeferred(main()))

Alternatives / Similar

