Got is a lightweight and powerful HTTP client for Node.js.
It is built on top of the http and https modules and provides a simple, consistent API for making HTTP requests.
Got is one of the most feature-rich HTTP clients in the Node.js ecosystem, offering HTTP/2, proxy, and asynchronous support,
which makes it well suited for web scraping.
Got also offers integrations for specific services such as AWS, as well as plugins for various public APIs like GitHub.
Note that Got has some behaviors that are inconvenient for web scraping.
For example, it normalizes HTTP headers,
which is usually undesirable when scraping and should be disabled.
The requests package is a popular library for making HTTP requests in Python.
It provides a simple, easy-to-use API for sending HTTP/1.1 requests, and it abstracts away many of the low-level details of working with HTTP.
One of the key features of requests is its simple API. You can send a GET request with a single line of code:
```python
import requests

response = requests.get('https://webscraping.fyi/lib/requests/')
```
requests makes it easy to send data along with your requests, including JSON data and files. It also automatically handles redirects and cookies, and it can handle both basic and digest authentication.
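For instance, both authentication schemes take a single `auth` argument; this is a minimal sketch against httpbin.org's demo routes (the user/pass credentials are httpbin's test values, not a real account):

```python
import requests
from requests.auth import HTTPDigestAuth

# basic auth: pass a (user, password) tuple and requests builds the
# "Authorization: Basic ..." header for you
response = requests.get("https://httpbin.org/basic-auth/user/pass",
                        auth=("user", "pass"))

# digest auth: use the explicit auth class, which handles the
# challenge/response round-trip automatically
response = requests.get("https://httpbin.org/digest-auth/auth/user/pass",
                        auth=HTTPDigestAuth("user", "pass"))
```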
Additionally, it provides robust functionality for handling exceptions, managing timeouts and sessions, and decoding a wide range of well-known content encodings.
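As a sketch of that error handling (the `fetch` helper and its URL are illustrative, not part of the requests API):

```python
import requests

def fetch(url):
    """Return the response body, or None if the request fails for any reason."""
    try:
        # timeout=(connect, read) in seconds; requests raises on expiry
        response = requests.get(url, timeout=(3, 10))
        # raise requests.HTTPError for 4xx/5xx status codes
        response.raise_for_status()
    except requests.exceptions.RequestException as error:
        # RequestException is the base class for timeouts, connection
        # errors, and HTTP errors alike
        print(f"request failed: {error}")
        return None
    return response.text
```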
One thing to keep in mind is that requests is a synchronous library, which means that your program will block (stop execution) while waiting for a response. In some situations, this may not be desirable, and you may want to use an asynchronous library like httpx or aiohttp.
You can install the requests package via the pip package manager:
```shell
pip install requests
```
requests is a very popular library with a large and active community, which means there are many third-party libraries that build on top of it, and it is used across a wide range of projects.
```javascript
import got from 'got';
import {CookieJar} from 'tough-cookie';
import {HttpsProxyAgent} from 'hpagent';

// GET requests are the default and can be made by calling the module directly:
const response = await got('https://api.example.com');
console.log(response.body);

// POST requests can send JSON bodies:
const postResponse = await got.post('https://api.example.com', {
  json: {name: 'John Doe'},
});
console.log(postResponse.body);

// handling cookies:
const cookieJar = new CookieJar();
await cookieJar.setCookie('foo=bar', 'https://httpbin.org');
await got('https://httpbin.org/anything', {cookieJar});

// using a proxy:
await got('https://httpbin.org/ip', {
  agent: {
    https: new HttpsProxyAgent({
      keepAlive: true,
      keepAliveMsecs: 1000,
      maxSockets: 256,
      maxFreeSockets: 256,
      scheduling: 'lifo',
      proxy: 'https://localhost:8080',
    }),
  },
});
```
```python
import requests

# GET request:
response = requests.get("http://webscraping.fyi/")
response.status_code  # 200
response.text         # body as a string, e.g. "text"
response.content      # body as raw bytes, e.g. b"bytes"

# requests can automatically convert JSON responses to Python dictionaries:
response = requests.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why WonderWidgets are great', 'Who buys WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}

# for POST requests it can ingest Python dictionaries as JSON:
response = requests.post("http://httpbin.org/post", json={"query": "hello world"})
# or as form data:
response = requests.post("http://httpbin.org/post", data={"query": "hello world"})

# a Session object can be used to automatically keep track of cookies and set defaults:
from requests import Session

s = Session()
s.headers = {"User-Agent": "webscraping.fyi"}
s.get('http://httpbin.org/cookies/set/foo/bar')
print(s.cookies['foo'])
# 'bar'
print(s.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}
```