node-fetch is a lightweight Node.js library that implements the Fetch API for making HTTP requests,
and it is mostly compatible with the fetch() built into web browsers. Because it mirrors the browser
API, it is a great starting point for developers coming from a front-end environment.
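One node-fetch behavior worth knowing up front: like the browser fetch(), it only rejects its promise on network failures, so an HTTP 404 or 500 still resolves successfully. A minimal sketch of a status-checking helper (checkStatus is an illustrative name, not part of node-fetch's API; the response-shaped objects below stand in for real responses so the snippet runs without network access):

```javascript
// node-fetch, like browser fetch(), resolves for any HTTP status.
// This illustrative helper turns error statuses into thrown errors.
function checkStatus(res) {
  if (res.ok) {
    // res.ok is true for 2xx status codes
    return res;
  }
  throw new Error(`HTTP error ${res.status}: ${res.statusText}`);
}

// Demonstrated with plain objects shaped like fetch responses:
const okRes = { ok: true, status: 200, statusText: 'OK' };
console.log(checkStatus(okRes).status); // 200

try {
  checkStatus({ ok: false, status: 404, statusText: 'Not Found' });
} catch (err) {
  console.log(err.message); // HTTP error 404: Not Found
}
```

In real code you would call `fetch(url).then(checkStatus)` before reading the body.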
Primp is a Python HTTP client that impersonates real web browsers by replicating their
TLS fingerprints, HTTP/2 settings, and header ordering. It is a lightweight alternative
to curl-cffi for bypassing bot detection that relies on TLS and HTTP fingerprinting.
Key features include:
- Browser impersonation
Can impersonate Chrome, Firefox, Safari, Edge, and OkHttp clients by replicating their
exact TLS fingerprints (JA3/JA4), HTTP/2 frame settings, header ordering, and other
connection-level characteristics.
- HTTP/2 support
Full HTTP/2 support with configurable settings that match real browser behavior.
- Lightweight
Smaller and simpler than curl-cffi while providing similar impersonation capabilities.
Built on Rust for performance.
- Familiar API
Provides a requests-like API with Session support, making it easy to adopt for
developers familiar with the Python requests library.
- Proxy support
HTTP and SOCKS5 proxy support with authentication.
- Cookie management
Automatic cookie handling across requests within a session.
Primp fills a similar niche to curl-cffi and hrequests — HTTP clients designed to avoid
TLS/HTTP fingerprinting — but takes a Rust-powered approach for better performance. It is
particularly useful when you need to bypass bot detection that relies on connection-level
fingerprinting without using a full browser.
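To make "connection-level fingerprinting" concrete: a JA3 fingerprint is the MD5 hash of a comma-separated string of five TLS ClientHello fields (version, cipher suites, extensions, elliptic curves, and point formats). The sketch below hashes a made-up field list purely for illustration; the numeric IDs are examples, not taken from any real browser:

```python
import hashlib

# JA3 concatenates five ClientHello fields, each a dash-separated list:
# SSLVersion,Ciphers,Extensions,EllipticCurves,EllipticCurvePointFormats
fields = [
    "771",             # 0x0303, i.e. TLS 1.2 in the ClientHello
    "4865-4866-4867",  # example cipher suite IDs
    "0-23-65281",      # example extension IDs
    "29-23-24",        # example supported curve IDs
    "0",               # example point format
]
ja3_string = ",".join(fields)
ja3_hash = hashlib.md5(ja3_string.encode()).hexdigest()
print(ja3_string)
print(ja3_hash)
```

Two clients share a JA3 hash only if every field matches exactly, which is why clients like Primp must replicate the full ClientHello rather than just the User-Agent header.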
```javascript
const fetch = require('node-fetch');
// fetch supports both Promises and async/await
fetch('http://httpbin.org/get')
  .then(res => res.text())
  .then(body => console.log(body))
  .catch(err => console.error(err));
// with async/await (inside an async function or an ES module)
const response = await fetch('http://httpbin.org/get');
// for concurrent scraping Promise.all can be used
const results = await Promise.all([
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
]);
// POST requests
await fetch('http://httpbin.org/post', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'John Doe' }),
});
// Proxy use: node-fetch accepts a custom agent, so a proxy agent
// such as the https-proxy-agent package can be passed in
const { HttpsProxyAgent } = require('https-proxy-agent');
const agent = new HttpsProxyAgent('http://proxy.example.com:8080');
await fetch('https://httpbin.org/ip', { agent });
// setting headers and cookies
const headers = new fetch.Headers();
headers.append('Cookie', 'myCookie=123');
headers.append('X-My-Header', 'myValue');
await fetch('https://httpbin.org/headers', { headers })
```
```python
import primp
# Create a client that impersonates Chrome
client = primp.Client(impersonate="chrome_131")
# Make requests - TLS fingerprint matches real Chrome
response = client.get("https://example.com")
print(response.status_code)
print(response.text)
# POST with JSON data
response = client.post(
    "https://api.example.com/data",
    json={"key": "value"},
)
# With proxy
client = primp.Client(
    impersonate="firefox_133",
    proxy="http://user:pass@proxy.example.com:8080",
)
response = client.get("https://example.com")
# Different browser impersonation profiles
for browser in ["chrome_131", "firefox_133", "safari_18", "edge_131"]:
    client = primp.Client(impersonate=browser)
    resp = client.get("https://tls.peet.ws/api/all")
    print(f"{browser}: {resp.json()['tls']['ja3_hash']}")
```