
wreck vs curl-cffi

                 wreck                    curl-cffi
License          BSD-3-Clause             MIT
GitHub stars     382                      1,751
Downloads        233.8 thousand / month   594.9 thousand / month
First release    Aug 06 2011              Feb 23 2022
Latest version   18.1.0 (4 months ago)    0.7.1 (4 months ago)

Wreck is an HTTP client library for Node.js. It provides a simple, consistent API for making HTTP requests, including support for both the client and server side of an HTTP transaction.

Wreck is very minimal but stable, as it's part of the Hapi web framework project. For web scraping, it doesn't offer essential features like proxy configuration or HTTP/2 support, so it's not recommended.

Curl-cffi is a Python binding for curl-impersonate, an HTTP client that appears as one of the popular web browsers:

- Google Chrome
- Microsoft Edge
- Safari
- Firefox

Unlike requests and httpx, which are native Python libraries, curl-cffi uses cURL and inherits its powerful features like extensive HTTP protocol support and detection patches for TLS and HTTP fingerprinting.

Using curl-cffi, web scrapers can bypass TLS and HTTP fingerprint-based blocking.
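
For example, a minimal sketch of a fingerprint-impersonating request; the impersonate argument selects the browser profile to mimic:

from curl_cffi import requests

# TLS and HTTP fingerprints will match a recent Chrome release
response = requests.get("https://httpbin.org/json", impersonate="chrome")
print(response.status_code)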

Highlights


bypass, http2, tls-fingerprint, http-fingerprint, sync, async

Example Use


// v18+ promise-based API, published on npm as @hapi/wreck
const Wreck = require('@hapi/wreck');

(async () => {
    // GET request
    const { payload } = await Wreck.get('http://example.com');
    console.log(payload.toString());

    // POST request
    const options = {
        headers: { 'content-type': 'application/json' },
        payload: JSON.stringify({ name: 'John Doe' })
    };

    const { payload: body } = await Wreck.post('http://example.com', options);
    console.log(body.toString());
})();

curl-cffi can be used both as a low-level curl client and as an easy high-level HTTP client (a sketch of the low-level interface follows the examples below):
from curl_cffi import requests

response = requests.get('https://httpbin.org/json')
print(response.json())

# or using sessions
session = requests.Session()
response = session.get('https://httpbin.org/json')

# also supports async requests using asyncio
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
    "http://httpbin.org/html",
    "http://httpbin.org/html",
    "http://httpbin.org/html",
]

async def scrape():
    async with AsyncSession() as s:
        # schedule all requests, then await them concurrently
        tasks = [s.get(url) for url in urls]
        return await asyncio.gather(*tasks)

responses = asyncio.run(scrape())

# also supports websocket connections
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()
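
The low-level interface mentioned above mirrors libcurl's handle API. A minimal sketch, assuming curl-cffi's Curl and CurlOpt bindings and the handle-level impersonate() method:

from io import BytesIO
from curl_cffi import Curl, CurlOpt

buffer = BytesIO()
curl = Curl()
curl.setopt(CurlOpt.URL, b"https://httpbin.org/json")
curl.setopt(CurlOpt.WRITEDATA, buffer)  # collect the raw response body
curl.impersonate("chrome110")  # apply a browser fingerprint at the curl level
curl.perform()
curl.close()
print(buffer.getvalue().decode())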

