
node-fetch vs treq

node-fetch: MIT license, 8,709 stars, 224.9 million downloads (month), first released Dec 28 2012, latest version 3.3.2 (7 months ago)
treq: NOASSERTION license, 585 stars, 93.2 thousand downloads (month), first released Dec 28 2012, latest version 23.11.0 (8 months ago)

node-fetch is a lightweight library that provides a fetch()-like API for making HTTP requests in Node.js. It is a minimal implementation of the Fetch API and is mostly compatible with the browser's version.

node-fetch is primarily known for mirroring the fetch() API that ships with web browsers, so it shares the same familiar interface. That makes it a great starting point for people coming from a front-end environment.

treq is a Python library for making HTTP requests that provides a simple, convenient API for interacting with web services. It is inspired by the popular requests library but powered by the Twisted asynchronous engine, which enables promise-based (Deferred) concurrency.

treq provides a simple, high-level API for making HTTP requests, including methods for GET, POST, PUT, DELETE, etc. It also allows for easy handling of JSON data, automatic decompression of gzipped responses, and connection pooling.
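
For illustration, here is a minimal sketch of that API (the httpbin.org URL and the header and parameter values are placeholders): a GET request with query parameters and custom headers, with the JSON response body decoded via treq.json_content.

```python
# Minimal sketch: GET with query parameters and headers, then decode the JSON body.
from twisted.internet.task import react
from twisted.internet.defer import ensureDeferred

import treq


async def main(reactor):
    response = await treq.get(
        "http://httpbin.org/get",
        params={"query": "web scraping"},      # appended to the URL as ?query=...
        headers={"User-Agent": "my-scraper"},  # custom request headers (placeholder value)
    )
    print(response.code)                        # HTTP status code
    data = await treq.json_content(response)    # read the body and parse it as JSON
    print(data["args"])                         # httpbin echoes the query parameters here


if __name__ == "__main__":
    react(lambda reactor: ensureDeferred(main(reactor)))
```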

treq is a lightweight library that is easy to use, making it a good choice for small to medium-sized projects where ease of use is more important than performance.

In web scraping treq isn't commonly used as it doesn't support HTTP/2, but it is the only Twisted-based HTTP client. treq is also based on callback/errback promises (Deferreds, like Scrapy), which can be easier to understand and maintain than asyncio's coroutines.
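
As a minimal, network-free sketch of that callback/errback model, a plain Twisted Deferred chains handlers much like a promise:

```python
# Minimal sketch of Twisted's callback/errback model (no HTTP involved):
from twisted.internet.defer import Deferred

d = Deferred()
d.addCallback(lambda value: value.upper())               # runs on success, transforms the result
d.addErrback(lambda failure: print("error:", failure))   # runs only if an earlier step failed
d.addBoth(lambda result: print("done:", result))         # runs either way, like finally

d.callback("hello")  # fire the chain -> prints "done: HELLO"
```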

Highlights


node-fetch: popular
treq: uses-twisted, no-http2

Example Use


```javascript
const fetch = require('node-fetch');

// fetch supports both Promises and async/await
fetch('http://httpbin.org/get')
    .then(res => res.text())
    .then(body => console.log(body))
    .catch(err => console.error(err));
// or with async/await (inside an async function or ES module):
const response = await fetch('http://httpbin.org/get');

// for concurrent scraping Promise.all can be used
const results = await Promise.all([
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
  fetch('http://httpbin.org/html'),
])

// POST requests
await fetch('http://httpbin.org/post', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'John Doe' }),
})

// Proxy use (requires the https-proxy-agent package):
const { HttpsProxyAgent } = require('https-proxy-agent');
const agent = new HttpsProxyAgent('http://proxy.example.com:8080');

await fetch('https://httpbin.org/ip', { agent })

// setting headers and cookies
const headers = new fetch.Headers();
headers.append('Cookie', 'myCookie=123');
headers.append('X-My-Header', 'myValue');

await fetch('https://httpbin.org/headers', { headers })
```

```python
from twisted.internet import reactor
from twisted.internet.task import react
from twisted.internet.defer import ensureDeferred
import treq

# treq can be used with twisted's reactor with callbacks
response_deferred = treq.get(
    "http://httpbin.org/get"
)
# or POST
response_deferred = treq.post(
    "http://httpbin.org/post",
    json={"key": "value"},    # JSON body
    # data={"key": "value"},  # or form data (mutually exclusive with json=)
)

# add callback or errback
def handle_response(response):
    print(response.code)
    response.text().addCallback(lambda body: print(body))
def handle_error(failure):
    print(failure)
# this callback will be called when request completes:
response_deferred.addCallback(handle_response)
# this errback will be called if request fails
response_deferred.addErrback(handle_error)
# this will be called if request completes or fails:
response_deferred.addBoth(lambda _: reactor.stop())  # close twisted once finished

if __name__ == '__main__':
    reactor.run()

# Alternatively, treq can also be used with async/await:
async def main():
    # content reads response data and get sends a get request:
    print(await treq.content(await treq.get("https://example.com/")))

if __name__ == '__main__':
    react(lambda reactor: ensureDeferred(main()))
```

Alternatives / Similar

