
curl-cffi vs em-http-request

curl-cffi:
- License: MIT
- GitHub stars: 1,751
- Downloads: 594.9 thousand / month
- First release: Feb 23 2022
- Latest version: 0.7.1 (3 months ago)

em-http-request:
- License: MIT
- GitHub stars: 1,219
- Downloads: 260.4 thousand / month
- First release: Oct 25 2009
- Latest version: 1.1.7 (4 years ago)

curl-cffi is a Python library implementing curl-impersonate, an HTTP client that presents itself as one of the popular web browsers:

- Google Chrome
- Microsoft Edge
- Safari
- Firefox

Unlike requests and httpx, which are native Python libraries, curl-cffi uses cURL and inherits its powerful features, such as extensive HTTP protocol support and detection patches for TLS and HTTP fingerprinting.

Using curl-cffi, web scrapers can bypass TLS and HTTP fingerprinting.
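For example, here is a minimal sketch of a browser-impersonated request (the target URL is illustrative, and the set of available impersonation targets depends on the installed curl-cffi version):

from curl_cffi import requests

# present a Chrome TLS/HTTP fingerprint instead of Python's default
response = requests.get(
    "https://httpbin.org/get",
    impersonate="chrome",  # other targets include "edge" and "safari"
)
print(response.status_code)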

em-http-request is a Ruby gem for making asynchronous HTTP requests using EventMachine. It allows you to perform multiple requests simultaneously and handle the responses as they come in, rather than waiting for each request to complete before making the next one.

In short it supports:

- Asynchronous HTTP API for single & parallel request execution (see the sketch after this list)
- Keep-Alive and HTTP pipelining support
- Auto-follow 3xx redirects with max depth
- Automatic gzip & deflate decoding
- Streaming response processing
- Streaming file uploads
- HTTP proxy and SOCKS5 support
- Basic Auth & OAuth
- Connection-level & global middleware support
- HTTP parser via http_parser.rb
- Works wherever EventMachine runs: Rubinius, JRuby, MRI
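As a sketch of the parallel execution mentioned above, em-http-request bundles an EventMachine::MultiRequest aggregator (the URLs here are placeholders):

require 'eventmachine'
require 'em-http'

EventMachine.run do
  multi = EventMachine::MultiRequest.new

  # issue both requests concurrently
  multi.add :html, EventMachine::HttpRequest.new('http://httpbin.org/html').get
  multi.add :json, EventMachine::HttpRequest.new('http://httpbin.org/json').get

  # fires once every request has either succeeded or failed
  multi.callback do
    p multi.responses[:callback].keys # successful requests
    p multi.responses[:errback].keys  # failed requests
    EventMachine.stop
  end
end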

Highlights


bypass, http2, tls-fingerprint, http-fingerprint, sync, async

Example Use


curl-cffi can be used as a low-level cURL client as well as an easy high-level HTTP client:
from curl_cffi import requests

response = requests.get('https://httpbin.org/json')
print(response.json())

# or using sessions
session = requests.Session()
response = session.get('https://httpbin.org/json')

# also supports async requests using asyncio
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
    "http://httpbin.org/html",
    "http://httpbin.org/html",
    "http://httpbin.org/html",
]

async def main():
    async with AsyncSession() as s:
        # schedule all requests, then scrape them concurrently:
        tasks = [s.get(url) for url in urls]
        responses = await asyncio.gather(*tasks)

asyncio.run(main())

# also supports websocket connections
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()
Here is em-http-request's callback-based API in action:

require 'eventmachine'
require 'em-http'

EventMachine.run {
  http = EventMachine::HttpRequest.new('http://google.com/').get :query => {'keyname' => 'value'}

  # add callback for errors:
  http.errback { p 'Uh oh'; EM.stop }

  # add callback for successful requests
  http.callback {
    p http.response_header.status
    p http.response_header
    p http.response

    EventMachine.stop
  }
}
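The streaming features follow the same callback style; a minimal sketch of streaming response processing (the URL is a placeholder):

require 'eventmachine'
require 'em-http'

EventMachine.run {
  http = EventMachine::HttpRequest.new('http://httpbin.org/stream/10').get

  # invoked for each body chunk as it arrives, before the request completes
  http.stream { |chunk| print chunk }

  http.callback { EventMachine.stop }
  http.errback  { EventMachine.stop }
}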
