
curl-cffi vs httpclient

curl-cffi (Python): MIT license · 1,732 GitHub stars · 463.9 thousand downloads/month · first release Feb 23 2022 · latest version 0.7.1 (25 days ago)
httpclient (Ruby): 699 GitHub stars · 3.9 million downloads/month · first release Jul 25 2009 · latest version 2.8.3 (7 years ago)

Curl-cffi is a Python library that wraps curl-impersonate, an HTTP client that can appear as one of several popular web browsers:

  • Google Chrome
  • Microsoft Edge
  • Safari
  • Firefox

Unlike requests and httpx, which are native Python libraries, curl-cffi uses cURL and inherits its powerful features, such as extensive HTTP protocol support and detection patches for TLS and HTTP fingerprinting.

Using curl-cffi, web scrapers can bypass TLS and HTTP fingerprinting.
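
For example, a request can borrow a real browser's TLS and HTTP/2 fingerprint through the impersonate argument. A minimal sketch; the URL is just an illustration, and the exact impersonation targets available depend on the installed curl-cffi version:

from curl_cffi import requests

# impersonate Chrome's TLS (JA3) and HTTP/2 fingerprints;
# "chrome" resolves to the latest Chrome target bundled with
# the installed curl-cffi release
response = requests.get(
    "https://httpbin.org/headers",
    impersonate="chrome",
)
print(response.status_code)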

HTTPClient is a Ruby gem that provides a simple and flexible interface for making HTTP requests. It's a full-featured HTTP client library with support for cookies, redirects, proxies, and more. Its design is modeled after libwww-perl (LWP), a widely used, robust, and well-documented Perl library.

Features:

  • methods like GET/HEAD/POST/* via HTTP/1.1.
  • HTTPS (SSL), cookies, proxies, authentication (Digest, NTLM, Basic), etc. (see the sketch after this list)
  • asynchronous and streaming HTTP requests.
  • debug mode CLI.

And, in contrast with net/http in the standard distribution:

  • cookie support
  • MT-safe (usable from multiple threads)
  • streaming POST (POST with a File/IO body)
  • Digest auth
  • Negotiate/NTLM auth for WWW-Authenticate (requires the net/ntlm module; rubyntlm gem)
  • NTLM auth for Proxy-Authenticate (requires the 'win32/sspi' module; rubysspi gem)
  • extensible via a filter interface
  • no need to manage HTTP/1.1 persistent connections yourself (httpclient handles them for you)
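
To make the proxy, cookie, and authentication features concrete, here is a brief Ruby sketch; the proxy address, credentials, and cookie file name are hypothetical placeholders:

require 'httpclient'

# route all requests through an HTTP proxy
client = HTTPClient.new('http://proxy.example.com:8080')

# send credentials for a protected area (Basic/Digest negotiated automatically)
client.set_auth('https://secure.example.com/', 'user', 'passwd')

# persist cookies to a local file between runs
client.set_cookie_store('cookies.dat')

response = client.get('https://secure.example.com/private')
puts response.status

client.save_cookie_store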

Highlights


bypass · http2 · tls-fingerprint · http-fingerprint · sync · async

Example Use


curl-cffi can be used as a low-level cURL client as well as an easy high-level HTTP client:
from curl_cffi import requests

response = requests.get('https://httpbin.org/json')
print(response.json())

# or using sessions
session = requests.Session()
response = session.get('https://httpbin.org/json')

# also supports async requests using asyncio
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
  "http://httpbin.org/html",
  "http://httpbin.org/html",
  "http://httpbin.org/html",
]

async def scrape_all():
    async with AsyncSession() as s:
        tasks = []
        for url in urls:
            # AsyncSession.get() returns an awaitable
            task = s.get(url)
            tasks.append(task)
        # scrape concurrently:
        return await asyncio.gather(*tasks)

responses = asyncio.run(scrape_all())

# also supports websocket connections
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()

And the same basic requests using httpclient in Ruby:
require 'httpclient'

client = HTTPClient.new
# GET requests
response = client.get("http://httpbin.org/get")
puts response.content

# POST requests
data = { name: "value" }
response = client.post("http://httpbin.org/post", data)
puts response.content
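
The asynchronous request support from the feature list streams the response body while it downloads; a short sketch, with the URL purely illustrative:

require 'httpclient'

client = HTTPClient.new

# get_async fires the request in the background and returns a connection
connection = client.get_async("http://httpbin.org/get")

# pop blocks until the response arrives; content is an IO-like object
# that can be read in chunks while the body is still downloading
io = connection.pop.content
while chunk = io.read(4096)
  print chunk
end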

Alternatives / Similar

