got vs curl-cffi
Got is a lightweight and powerful HTTP client for Node.js. It is built on top of Node.js's native http and https modules and provides a simple, consistent API for making HTTP requests.
Got is one of the most feature-rich HTTP clients in the Node.js ecosystem, offering HTTP/2, proxy and asynchronous support, which makes it ideal for web scraping.
Got also has domain-specific integrations like AWS, as well as plugins for various public APIs like GitHub.
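For example, HTTP/2 can be enabled per request through the http2 option. Here's a minimal sketch (httpbin.org is used as a stand-in target; the exact response fields may vary by Got version):

import got from 'got';

// request over HTTP/2 when the server supports it
const response = await got('https://httpbin.org/anything', {http2: true});
console.log(response.httpVersion); // '2.0' when HTTP/2 was negotiated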
Note that Got has some behaviors that are inconvenient for web scraping use. For example, it normalizes HTTP header names, which is undesirable in scraping (normalized headers make the client easier to fingerprint) and should be disabled.
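To see this in practice, here's a small sketch that echoes the request headers back through httpbin.org, revealing how the client actually sent them (the exact normalization behavior may vary by Got version):

import got from 'got';

// httpbin echoes the headers exactly as received,
// exposing any normalization applied by the client
const {body} = await got('https://httpbin.org/headers', {
    headers: {'X-Custom-Header': 'value'},
});
console.log(body);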
Curl-cffi is a Python library binding curl-impersonate, an HTTP client that appears as one of the popular web browsers like:
- Google Chrome
- Microsoft Edge
- Safari
- Firefox
Unlike requests and httpx, which are native Python libraries, curl-cffi uses cURL and inherits its powerful features, like extensive HTTP protocol support and detection patches for TLS and HTTP fingerprinting.
Using curl-cffi, web scrapers can bypass TLS and HTTP fingerprinting.
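Impersonation is selected through the impersonate parameter, as in this minimal sketch (the "chrome" target is illustrative; the set of available targets depends on the installed curl-cffi version):

from curl_cffi import requests

# impersonate Chrome's TLS and HTTP fingerprints
response = requests.get(
    "https://httpbin.org/headers",
    impersonate="chrome",
)
print(response.status_code)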
Example Use
Got in Node.js:

import got from 'got';
import {CookieJar} from 'tough-cookie';
import {HttpsProxyAgent} from 'hpagent';

// GET requests are the default and can be made by calling the module as is:
const response = await got('https://api.example.com');
console.log(response.body);

// POST requests can send JSON data through the json option:
const postResponse = await got.post('https://api.example.com', {
    json: {name: 'John Doe'},
});
console.log(postResponse.body);

// handling cookies
const cookieJar = new CookieJar();
await cookieJar.setCookie('foo=bar', 'https://httpbin.org');
await got('https://httpbin.org/anything', {cookieJar});

// using a proxy
await got('https://httpbin.org/ip', {
    agent: {
        https: new HttpsProxyAgent({
            keepAlive: true,
            keepAliveMsecs: 1000,
            maxSockets: 256,
            maxFreeSockets: 256,
            scheduling: 'lifo',
            proxy: 'https://localhost:8080',
        }),
    },
});
And curl-cffi in Python:

from curl_cffi import requests

response = requests.get("https://httpbin.org/json")
print(response.json())

# or using sessions
session = requests.Session()
response = session.get("https://httpbin.org/json")

# also supports async requests using asyncio
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
    "http://httpbin.org/html",
    "http://httpbin.org/html",
    "http://httpbin.org/html",
]

async def scrape():
    async with AsyncSession() as s:
        tasks = []
        for url in urls:
            task = s.get(url)
            tasks.append(task)
        # scrape concurrently:
        responses = await asyncio.gather(*tasks)

asyncio.run(scrape())

# also supports websocket connections
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()