
superagent vs curl-cffi

|                  | superagent              | curl-cffi                |
|------------------|-------------------------|--------------------------|
| License          | MIT                     | MIT                      |
| GitHub stars     | 16,610                  | 1,751                    |
| Downloads        | 41.1 million (month)    | 594.9 thousand (month)   |
| First release    | Aug 22 2011             | Feb 23 2022              |
| Latest version   | 10.1.1 (2024-10-22)     | 0.7.1 (2024-07-13)       |

superagent is an HTTP client library for Node.js that provides a simple, flexible, and powerful API for making HTTP requests. It supports all major HTTP methods, and has a clean and easy-to-use interface for handling responses and errors.

What differentiates superagent from other HTTP clients is its simple, declarative API.

curl-cffi is a Python binding for curl-impersonate, an HTTP client that presents itself as one of the popular web browsers:

- Google Chrome
- Microsoft Edge
- Safari
- Firefox

Unlike requests and httpx, which are native Python libraries, curl-cffi is built on cURL and inherits its powerful features, such as extensive HTTP protocol support and patches that counter TLS and HTTP fingerprinting.

Using curl-cffi, web scrapers can bypass TLS and HTTP fingerprinting.

Highlights


superagent: declarative, proxy, popular
curl-cffi: bypass, http2, tls-fingerprint, http-fingerprint, sync, async

Example Use


```javascript
const superagent = require('superagent');

// superagent supports both Promises and async/await
superagent.get('https://httpbin.org/get')
  .then(res => console.log(res.text))
  .catch(err => console.error(err));

// or with async/await (inside an async function):
const response = await superagent.get('https://httpbin.org/get');

// POST requests:
superagent.post('https://httpbin.org/post').send({ name: 'John Doe' });

// setting a proxy (via the superagent-proxy plugin)
superagent.get('https://httpbin.org/ip').proxy('http://proxy.example.com:8080');

// setting headers and cookies
superagent.get('https://httpbin.org/headers')
  .set('Cookie', 'myCookie=123')
  .set('X-My-Header', 'myValue');
```
curl-cffi can be accessed as a low-level curl client as well as an easy high-level HTTP client:

```python
from curl_cffi import requests

response = requests.get('https://httpbin.org/json')
print(response.json())

# or using sessions
session = requests.Session()
response = session.get('https://httpbin.org/json')

# also supports async requests using asyncio
import asyncio
from curl_cffi.requests import AsyncSession

urls = [
    "http://httpbin.org/html",
    "http://httpbin.org/html",
    "http://httpbin.org/html",
]

async def main():
    async with AsyncSession() as s:
        # scrape concurrently:
        tasks = [s.get(url) for url in urls]
        responses = await asyncio.gather(*tasks)

asyncio.run(main())

# also supports websocket connections
from curl_cffi.requests import Session, WebSocket

def on_message(ws: WebSocket, message):
    print(message)

with Session() as s:
    ws = s.ws_connect(
        "wss://api.gemini.com/v1/marketdata/BTCUSD",
        on_message=on_message,
    )
    ws.run_forever()
```

Alternatives / Similar

