httpx vs PycURL
httpx is a fully featured HTTP client for Python 3 that provides both sync and async APIs and supports HTTP/1.1 and HTTP/2. It is designed as a replacement for the popular requests package, with the added benefit of first-class support for Python's async features.
One of the main features of httpx is its support for asynchronous programming, which lets it send multiple requests at the same time without blocking the execution of your program. This can lead to significant performance improvements, especially when working with many small requests, or when dealing with slow or unreliable network connections.
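For example, a handful of URLs can be fetched concurrently in just a few lines (a minimal sketch, using httpbin.org as a stand-in target):

import asyncio
import httpx

async def fetch_all(urls):
    # all requests are in flight at the same time rather than one after another
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(client.get(url) for url in urls))

responses = asyncio.run(fetch_all(["http://httpbin.org/get"] * 3))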
httpx also supports sending HTTP/2 requests, which makes more efficient use of network connections and can result in faster transfers.
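As a quick sketch, the negotiated protocol can be checked on the response object (nghttp2.org is just an example of an HTTP/2-capable host):

import httpx

# HTTP/2 support requires the optional httpx[http2] extra
with httpx.Client(http2=True) as client:
    response = client.get("https://nghttp2.org/httpbin/get")
    print(response.http_version)  # "HTTP/2" when the server negotiates it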
Another strength of httpx is its streaming mode for response data: the response can be processed as it comes in, instead of waiting for the entire body to be received. This is useful when working with large files, or when the data needs to be processed in real time.
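For instance, a large response can be consumed chunk by chunk as it arrives (a sketch; the URL and chunk handling are illustrative):

import httpx

with httpx.stream("GET", "http://httpbin.org/stream-bytes/100000") as response:
    for chunk in response.iter_bytes():
        # each chunk is handled as soon as it arrives instead of
        # buffering the entire body in memory first
        print(len(chunk))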
Additionally, httpx provides a number of other features common in modern HTTP clients, such as support for sending and receiving cookies, handling redirects, and working with multipart file uploads. It also includes support for well-known authentication schemes such as BasicAuth and DigestAuth.
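For example, redirect following, a multipart upload, and basic authentication combine in a single call (a sketch; the credentials and file contents are placeholders):

import httpx

response = httpx.post(
    "http://httpbin.org/post",
    auth=httpx.BasicAuth("user", "password"),  # or httpx.DigestAuth(...)
    files={"upload": ("report.csv", b"a,b\n1,2\n")},  # multipart file upload
    follow_redirects=True,  # redirects are opt-in in httpx
)
print(response.status_code)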
PycURL is a Python interface to libcurl, a multi-protocol file transfer library written in C. PycURL allows developers to use a variety of network protocols in their Python programs, including HTTP, FTP, SMTP, POP3, and many more.
PycURL is often used in web scraping, data analysis, and automation tasks. It can perform all of the common request types, such as GET, POST, PUT, and DELETE, and can also handle file uploads and downloads, cookies, and redirects.
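To illustrate, a download can be streamed straight to disk and other verbs selected explicitly (a sketch; the URLs and file name are placeholders):

import pycurl

# download a file to disk
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/image/png')
with open('image.png', 'wb') as f:
    c.setopt(c.WRITEDATA, f)  # write the response body straight to the file
    c.perform()
c.close()

# other verbs such as DELETE are set explicitly:
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/delete')
c.setopt(c.CUSTOMREQUEST, 'DELETE')
c.perform()  # without a WRITEFUNCTION the body is printed to stdout
c.close()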
One of the key features of PycURL is its support for SSL and proxy servers, which allows developers to securely transfer data over the internet and work around any network restrictions. PycURL also supports a wide range of authentication methods, such as Basic, Digest, and NTLM, and allows developers to easily set custom headers and query parameters.
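For example, custom headers, CA-verified TLS, and NTLM authentication are each a single setopt call (a sketch; the credentials are placeholders and certifi is used for the CA bundle):

import pycurl
import certifi
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://httpbin.org/get?query=hello')  # query parameters go in the URL
c.setopt(c.WRITEFUNCTION, buf.write)
c.setopt(c.CAINFO, certifi.where())  # verify TLS against a known CA bundle
c.setopt(c.HTTPHEADER, ['X-Example: 1', 'Accept: application/json'])
c.setopt(c.HTTPAUTH, c.HTTPAUTH_NTLM)  # or HTTPAUTH_BASIC / HTTPAUTH_DIGEST
c.setopt(c.USERPWD, 'username:password')
c.perform()
c.close()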
Just like cURL itself, PycURL is highly configurable and allows fine-grained control over many aspects of the transfer, such as timeouts, buffer sizes, and verbosity levels (libcurl has no built-in retry logic, so retries are handled in application code). Additionally, PycURL provides easy access to the underlying libcurl handle, which exposes advanced functionality not wrapped by the higher-level PycURL API.
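A sketch of that tuning (the option values are illustrative, and the retry loop lives in application code since libcurl itself does not retry):

import pycurl
from io import BytesIO

def fetch(url, attempts=3):
    for attempt in range(attempts):  # simple application-level retry loop
        buf = BytesIO()
        c = pycurl.Curl()
        c.setopt(c.URL, url)
        c.setopt(c.WRITEFUNCTION, buf.write)
        c.setopt(c.CONNECTTIMEOUT, 5)   # seconds allowed to establish the connection
        c.setopt(c.TIMEOUT, 30)         # seconds allowed for the whole transfer
        c.setopt(c.BUFFERSIZE, 102400)  # preferred receive buffer size in bytes
        c.setopt(c.VERBOSE, 1)          # log protocol details to stderr
        try:
            c.perform()
            return buf.getvalue()
        except pycurl.error:
            if attempt == attempts - 1:
                raise
        finally:
            c.close()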
It's important to note that PycURL is a wrapper around the libcurl library and therefore provides the same functionality and performance as libcurl.
PycURL's main strength is that it wraps cURL, one of the most feature-rich low-level HTTP clients available. The downside is that it is a very low-level client (see the examples below) with a complex API, which makes it a difficult and niche choice for web scraping.
Example Use
import httpx

# Just like requests, httpx can be used directly:
response = httpx.get("http://webscraping.fyi/")
response.status_code
# 200
response.text
# "text"
response.content
# b"bytes"

# HTTP/2 needs to be enabled explicitly through a Client object
# (and the httpx[http2] extra) and is recommended for web scraping:
with httpx.Client(http2=True) as client:
    response = client.get("http://webscraping.fyi/")

# httpx can automatically convert JSON responses to Python dictionaries:
response = httpx.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}
# for POST requests it can ingest Python dictionaries as JSON:
response = httpx.post("http://httpbin.org/post", json={"query": "hello world"})
# or as form data:
response = httpx.post("http://httpbin.org/post", data={"query": "hello world"})
# a persistent client can be established using the Client object;
# this allows setting default values and automatically tracks cookies:
from httpx import Client

c = Client(headers={"User-Agent": "webscraping.fyi"}, http2=True)
c.get('http://httpbin.org/cookies/set/foo/bar')
print(c.cookies['foo'])
# 'bar'
print(c.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}
# for asynchronous requests the AsyncClient must be used:
import asyncio
from httpx import AsyncClient

async def example_use():
    async with AsyncClient(headers={"User-Agent": "webscraping.fyi"}) as client:
        response = await client.get("http://httpbin.org/get")
        # to schedule multiple requests concurrently use asyncio.gather or as_completed:
        three_concurrent_responses = await asyncio.gather(
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
        )

asyncio.run(example_use())
For comparison, the equivalent usage with PycURL:

import pycurl
from io import BytesIO
buf = BytesIO()
headers = BytesIO()
c = pycurl.Curl()
c.setopt(c.HTTP_VERSION, c.CURL_HTTP_VERSION_2_0) # set to use http2
# set proxy
c.setopt(c.PROXY, 'http://proxy.example.com:8080')
c.setopt(c.PROXYUSERNAME, 'username')
c.setopt(c.PROXYPASSWORD, 'password')
# make a request
c.setopt(c.URL, 'https://httpbin.org/get')
c.setopt(c.WRITEFUNCTION, buf.write) # where to save response body
c.setopt(c.HEADERFUNCTION, headers.write) # where to save response headers
# to make post request enable POST option:
# c.setopt(c.POST, 1)
# c.setopt(c.POSTFIELDS, 'key1=value1&key2=value2')
c.perform() # send request
# read response
data = buf.getvalue().decode()
headers = headers.getvalue().decode() # headers as a string
headers = dict([h.split(': ') for h in headers.splitlines() if ': ' in h]) # headers as a dict
c.close()
# multiple concurrent requests can be made using the CurlMulti object:
multi = pycurl.CurlMulti()
# set the maximum number of cached connections (a multi-interface option):
multi.setopt(pycurl.M_MAXCONNECTS, 5)
# keep references to the Curl objects and their response buffers
curls = []
buffers = []
# add the first request
buf1 = BytesIO()
c1 = pycurl.Curl()
c1.setopt(c1.URL, 'https://httpbin.org/get')
c1.setopt(c1.WRITEFUNCTION, buf1.write)
multi.add_handle(c1)
curls.append(c1)
buffers.append(buf1)
# add the second request
buf2 = BytesIO()
c2 = pycurl.Curl()
c2.setopt(c2.URL, 'https://httpbin.org/')
c2.setopt(c2.WRITEFUNCTION, buf2.write)
multi.add_handle(c2)
curls.append(c2)
buffers.append(buf2)
# start the transfers and wait until they all complete
while True:
    ret, num_handles = multi.perform()
    if ret != pycurl.E_CALL_MULTI_PERFORM:
        break
while num_handles:
    multi.select(1.0)  # wait for network activity
    while True:
        ret, num_handles = multi.perform()
        if ret != pycurl.E_CALL_MULTI_PERFORM:
            break
# close the connections
for c in curls:
    multi.remove_handle(c)
    c.close()
# the response bodies are now available via buffers[i].getvalue()