node-fetch vs httpx
node-fetch is a lightweight library that provides a fetch()-like API for making HTTP requests in Node.js. It is a minimal implementation of the Fetch API that is mostly compatible with the browser's version.
Because node-fetch mirrors the fetch() function built into web browsers, it offers the same familiar API, making it a great starting point for people coming from a front-end environment.
httpx is a fully featured HTTP client for Python 3 that provides both sync and async APIs and supports HTTP/1.1 as well as HTTP/2. It is designed as a replacement for the popular requests package, with the added benefit of being fully compatible with Python 3's async features.
One of the main features of httpx is its support for asynchronous programming. It can send multiple requests concurrently without blocking the execution of your program, which can lead to significant performance improvements, especially when making many small requests or when dealing with slow or unreliable network connections.
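As a brief sketch (a fuller example appears in the usage section below), concurrent requests with the async API look roughly like this; the URLs are placeholders:
import asyncio
import httpx

async def fetch_all(urls):
    # one client reuses connections; gather runs the requests concurrently
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(client.get(url) for url in urls))

responses = asyncio.run(fetch_all(["https://httpbin.org/get"] * 3))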
httpx also supports HTTP/2, which allows for more efficient use of network resources and can result in faster response times.
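A minimal sketch of enabling it, assuming the optional httpx[http2] extra is installed (whether HTTP/2 is actually used depends on what the server negotiates):
import httpx

client = httpx.Client(http2=True)
response = client.get("https://httpbin.org/get")
print(response.http_version)  # "HTTP/2" when negotiated, otherwise "HTTP/1.1"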
One of the strengths of httpx is its streaming mode for response data. This means you can process the response as it arrives, instead of waiting for the entire body to be received, which is useful when working with large files or when you need to process data in real time.
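A minimal streaming sketch, assuming a large response body:
import httpx

# the body is consumed in chunks rather than loaded into memory at once
with httpx.stream("GET", "https://httpbin.org/bytes/10000") as response:
    for chunk in response.iter_bytes():
        print(len(chunk))  # each chunk can be processed as it arrives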
Additionally, httpx provides a number of other features that are common in modern HTTP clients, such as support for sending and receiving cookies, handling redirects, and working with multipart file uploads. It also includes built-in support for well-known authentication schemes such as BasicAuth and DigestAuth, and custom schemes (e.g. bearer tokens) can be implemented through its auth interface.
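For illustration, a few of these features in use; the URLs and credentials are placeholders:
import httpx

# redirects are not followed unless requested explicitly
response = httpx.get("http://httpbin.org/redirect/1", follow_redirects=True)

# built-in auth helpers
response = httpx.get(
    "http://httpbin.org/basic-auth/user/pass",
    auth=httpx.BasicAuth("user", "pass"),
)

# multipart file upload
response = httpx.post(
    "http://httpbin.org/post",
    files={"file": ("data.csv", b"a,b,c")},
)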
Example Use
// node-fetch v2 (CommonJS); v3 is ESM-only and is loaded with import instead
const fetch = require('node-fetch');
// fetch supports both Promises and async/await
fetch('http://httpbin.org/get')
  .then(res => res.text())
  .then(body => console.log(body))
  .catch(err => console.error(err));
const response = await fetch('http://httpbin.org/get');
// for concurrent scraping Promise.all can be used
const results = await Promise.all([
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
]);
// POST requests
await fetch('http://httpbin.org/post', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'John Doe' }),
})
// Proxy use: node-fetch has no built-in proxy option; a proxy agent
// such as the https-proxy-agent package is commonly used instead:
const { HttpsProxyAgent } = require('https-proxy-agent');
const agent = new HttpsProxyAgent('http://proxy.example.com:8080');
await fetch('https://httpbin.org/ip', { agent })
// setting headers and cookies
const headers = new fetch.Headers();
headers.append('Cookie', 'myCookie=123');
headers.append('X-My-Header', 'myValue');
await fetch('https://httpbin.org/headers', { headers })
import httpx
# Just like requests, httpx can be used directly:
response = httpx.get("http://webscraping.fyi/")
response.status_code
200
response.text
"text"
response.content
b"bytes"
# HTTP2 needs to be enabled explicitly and is recommended for web scraping;
# it requires the httpx[http2] extra and is only available on Client objects:
client = httpx.Client(http2=True)
response = client.get("http://webscraping.fyi/")
# httpx can automatically convert json responses to Python dictionaries:
response = httpx.get("http://httpbin.org/json")
print(response.json())
{'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}
# for POST requests it can ingest Python dictionaries as JSON:
response = httpx.post("http://httpbin.org/post", json={"query": "hello world"})
# or form data:
response = httpx.post("http://httpbin.org/post", data={"query": "hello world"})
# a persistent client can be established using the Client object;
# this allows setting default values and automatically tracks cookies
from httpx import Client
c = Client(headers={"User-Agent": "webscraping.fyi"}, http2=True)
c.get('http://httpbin.org/cookies/set/foo/bar')
print(c.cookies['foo'])
bar
print(c.get('http://httpbin.org/cookies').json())
{'cookies': {'foo': 'bar'}}
# for asynchronous requests the AsyncClient must be used:
import asyncio
from httpx import AsyncClient

async def example_use():
    async with AsyncClient(headers={"User-Agent": "webscraping.fyi"}) as client:
        response = await client.get("http://httpbin.org/get")
        # to schedule multiple requests concurrently use asyncio.gather or as_completed
        three_concurrent_responses = await asyncio.gather(
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
            client.get("http://httpbin.org/get"),
        )

asyncio.run(example_use())