
httpx vs crul

httpx: BSD-3-Clause license, 12,702 stars, 63.9 million downloads/month, first release Jul 26 2019, latest release 0.27.0 (4 months ago)
crul: MIT license, 102 stars, 74.3 thousand downloads/month, first release Nov 09 2016, latest release 1.4.2 (1 year, 1 month ago)

httpx is a fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2. It is designed to be a replacement for the popular requests package, with the added benefit of being fully compatible with Python 3's async features.

One of the main features of httpx is its support for asynchronous programming. This means that it can send multiple requests at the same time, without blocking the execution of your program. This can lead to significant performance improvements, especially when working with many small requests, or when dealing with slow or unreliable network connections.

httpx also supports sending HTTP/2 requests, which allows for more efficient use of network resources and can result in faster page loads.

Another strength of httpx is its support for streaming responses. This lets you process the response data as it arrives instead of waiting for the entire body to be received, which is useful when downloading large files or when you need to process the data in real time.
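As a minimal sketch of what that looks like (the URL below is just a placeholder), httpx exposes streaming through httpx.stream() and the response's iter_bytes() iterator:

import httpx

# stream() returns a context manager, so the body is never loaded into memory all at once
with httpx.stream("GET", "http://webscraping.fyi/") as response:
    for chunk in response.iter_bytes():
        ...  # process each chunk as it arrives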

Additionally, httpx provides a number of other features that are common in modern HTTP clients, such as support for sending and receiving cookies, handling redirects, and working with multipart file uploads. It also includes support for well-known authentication schemes through classes like BasicAuth and DigestAuth, and custom schemes such as bearer-token auth can be plugged in.
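A brief sketch of basic authentication and a multipart upload (httpbin.org serves only as a test endpoint here, and report.csv is a hypothetical local file):

import httpx

# basic authentication via the built-in BasicAuth helper
response = httpx.get(
    "http://httpbin.org/basic-auth/user/pass",
    auth=httpx.BasicAuth("user", "pass"),
)

# multipart file upload via the files= argument (report.csv is hypothetical)
with open("report.csv", "rb") as f:
    response = httpx.post("http://httpbin.org/post", files={"file": f})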

crul is an R library for sending HTTP requests and web scraping. It is designed to be simple and easy to use while still providing powerful functionality for working with HTTP requests and scraping web pages.

One of the main features of crul is its intuitive and easy-to-use syntax for sending HTTP requests. It allows you to easily specify the HTTP method, headers, and body of a request, and also provides a simple way to handle the response.

crul also has the ability to handle different types of requests and responses, including GET, POST, PUT, DELETE, and PATCH. It also supports handling redirects, cookies, and authentication.
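As a rough sketch of that (example.com and the resource paths are placeholders; the calls follow crul's HttpClient methods, though details may vary by version):

library(crul)

# default headers can be set on the client itself
client <- HttpClient$new(
  url = "https://www.example.com",
  headers = list(`User-Agent` = "webscraping.fyi")
)

# the other HTTP verbs are available as methods on the same client
response <- client$put(path = "resource/1", body = list(name = "new name"))
response <- client$patch(path = "resource/1", body = list(name = "patched name"))
response <- client$delete(path = "resource/1")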

Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr libraries. It also lets you easily filter the extracted data based on specific criteria.

crul also supports parallel scraping, which allows you to make multiple requests at the same time, thus speeding up the scraping process.

In addition to these features, crul works well with other R packages such as the tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
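For instance, a small sketch of that combination (the urls are placeholders and purrr is assumed to be installed):

library(crul)
library(purrr)

urls <- c("https://www.example1.com", "https://www.example2.com")
responses <- Async$new(urls = urls)$get()

# purrr's typed map helpers make it easy to pull fields out of the response objects
status_codes <- map_int(responses, function(x) x$status_code)
bodies <- map_chr(responses, function(x) x$parse("UTF-8"))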

Highlights


httpx: asyncio, trio, sync, http2
crul: http2, uses-curl, async

Example Use


import httpx

# Just like requests httpx can be used directly
response = httpx.get("http://webscraping.fyi/")
response.status_code
200
response.text
"text"
response.content
b"bytes"

# HTTP/2 needs to be enabled explicitly and is recommended for web scraping.
# It is a Client option (not an argument to httpx.get()) and requires the
# optional extra: pip install 'httpx[http2]'
client = httpx.Client(http2=True)
response = client.get("http://webscraping.fyi/")

# httpx can automatically convert json responses to Python dictionaries:
response = httpx.get("http://httpbin.org/json")
print(response.json())
{'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}

# for POST requests it can ingest Python dictionaries as JSON:
response = httpx.post("http://httpbin.org/post", json={"query": "hello world"})
# or form data:
response = httpx.post("http://httpbin.org/post", data={"query": "hello world"})

# a persistent client can be established using the Client object
# this allows setting default values and automatically tracking cookies
from httpx import Client

c = Client(headers={"User-Agent": "webscraping.fyi"}, http2=True)
c.get('http://httpbin.org/cookies/set/foo/bar')
print(c.cookies['foo'])
'bar'
print(c.get('http://httpbin.org/cookies').json())
{'cookies': {'foo': 'bar'}}

# for asynchronous requests AsyncClient must be used:
import asyncio
from httpx import AsyncClient 

async def example_use():
    async with AsyncClient(headers={"User-Agent": "webscraping.fyi"}) as client:
        response = await client.get("http://httpbin.org/get")
        # to schedule multiple requests concurrently use asyncio.gather or asyncio.as_completed
        three_concurrent_responses = await asyncio.gather(
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
        )

asyncio.run(example_use())
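The equivalent basics with crul in R: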
library(crul)

# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()
# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)

# Extracting the status code and body of the response
status_code <- response$status_code
body <- response$parse("UTF-8")

# crul also allows easy asynchronous requests via the Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")
# Creating an Async client that wraps all of the urls
async_client <- Async$new(urls = urls)

# Sending the GET requests concurrently; the result is a list of response objects
responses <- async_client$get()

# Extracting the status code and body of the responses
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))


