Wreck is an HTTP client library for Node.js.
It provides a simple, consistent API for making HTTP requests, including support for both the client and server side of an HTTP transaction.
Wreck is minimal but stable, as it is part of the hapi web framework project.
For web scraping, however, it lacks essential features such as proxy configuration and HTTP/2 support, so it is not recommended for that use case.
hrequests is a feature-rich, modern replacement for Python's famous requests library.
It provides an HTTP client capable of resisting popular scraper identification techniques:
- Seamless transition between headless browser and http client based requests
- Integrated HTML parser
- Mimicking of real browser TLS fingerprints
- JavaScript rendering
- HTTP/2 support
- Realistic browser headers
```javascript
// modern versions of Wreck are published as @hapi/wreck and use promises
const Wreck = require('@hapi/wreck');

(async () => {
    // GET request
    const { payload } = await Wreck.get('http://example.com');
    console.log(payload.toString());

    // POST request
    const options = {
        headers: { 'content-type': 'application/json' },
        payload: JSON.stringify({ name: 'John Doe' })
    };
    const { payload: body } = await Wreck.post('http://example.com', options);
    console.log(body.toString());
})();
```
hrequests has an almost identical API and UX to requests; here's a quick overview:
```python
import hrequests
# perform HTTP client requests
resp = hrequests.get('https://httpbin.org/html')
print(resp.status_code)
# 200
# use headless browsers and sessions:
session = hrequests.Session('chrome', version=122, os="mac")
# supports asyncio and easy concurrency
requests = [
    hrequests.async_get('https://www.google.com/', browser='firefox'),
    hrequests.async_get('https://www.duckduckgo.com/'),
    hrequests.async_get('https://www.yahoo.com/'),
    hrequests.async_get('https://www.httpbin.org/'),
]
responses = hrequests.map(requests, size=3)  # max 3 concurrent requests
```