
Wreck vs hrequests

|           | License      | First release | Downloads (month) | Latest version      |
|-----------|--------------|---------------|-------------------|---------------------|
| Wreck     | BSD-3-Clause | Aug 06 2011   | 300.2k            | 18.1.0 (2025-07-24) |
| hrequests | MIT          | Feb 23 2022   | 33.3k             | 0.9.2 (2024-12-01)  |

Wreck is an HTTP client library for Node.js. It provides a simple, consistent API for making HTTP requests, including support for both the client and server side of an HTTP transaction.

Wreck is minimal but stable, as it is part of the Hapi web framework project. For web scraping, however, it lacks essential features such as proxy configuration and HTTP/2 support, so it is not recommended for that use case.

hrequests is a feature-rich, modern replacement for Python's famous requests library. It provides an HTTP client capable of resisting popular scraper-identification techniques:

- Seamless transition between headless-browser and HTTP-client based requests
- Integrated HTML parser
- Mimicking of real browser TLS fingerprints
- JavaScript rendering
- HTTP/2 support
- Realistic browser headers
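To see why realistic browser headers matter, here is a stdlib-only sketch contrasting what a stock Python client announces with a Chrome-like header set. The Chrome values below are illustrative assumptions for demonstration, not what hrequests actually sends:

```python
import urllib.request

# A stock urllib opener identifies itself as Python -- trivially
# flagged by anti-bot systems before the request is even inspected.
opener = urllib.request.OpenerDirector()
print(opener.addheaders)  # e.g. [('User-agent', 'Python-urllib/3.12')]

# A Chrome-like header set (illustrative values only):
chrome_headers = {
    "User-Agent": (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/122.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}
print(chrome_headers["User-Agent"])
```

Libraries like hrequests manage such header sets (and the matching TLS fingerprint) automatically, keeping them consistent with the browser profile being mimicked.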

Highlights


`bypass`, `http2`, `tls-fingerprint`, `http-fingerprint`, `sync`, `async`

Example Use


```javascript
// Modern Wreck (v14+) is published as @hapi/wreck and uses promises
// rather than the old callback API.
const Wreck = require('@hapi/wreck');

const example = async () => {
    // GET request
    const { payload } = await Wreck.get('http://example.com');
    console.log(payload.toString());

    // POST request with a JSON body
    const options = {
        headers: { 'content-type': 'application/json' },
        payload: JSON.stringify({ name: 'John Doe' })
    };
    const { payload: postPayload } = await Wreck.post('http://example.com', options);
    console.log(postPayload.toString());
};

example();
```
hrequests has an almost identical API and UX to requests; here's a quick overview:

```python
import hrequests

# perform HTTP client requests
resp = hrequests.get('https://httpbin.org/html')
print(resp.status_code)  # 200

# use headless browsers and sessions:
session = hrequests.Session('chrome', version=122, os="mac")

# supports asyncio and easy concurrency
requests = [
    hrequests.async_get('https://www.google.com/', browser='firefox'),
    hrequests.async_get('https://www.duckduckgo.com/'),
    hrequests.async_get('https://www.yahoo.com/'),
    hrequests.async_get('https://www.httpbin.org/'),
]
responses = hrequests.map(requests, size=3)  # max 3 concurrent requests
```

Alternatives / Similar

