requests vs httr
The requests package is a popular library for making HTTP requests in Python.
It provides a simple, easy-to-use API for sending HTTP/1.1 requests and abstracts away many of the low-level details of working with HTTP. For example, you can send a GET request with a single line of code:
import requests
response = requests.get('https://webscraping.fyi/lib/requests/')
Install it with pip:
pip install requests
On the R side, the aim of httr is to provide a wrapper for the curl package, customised to the demands of modern web APIs.
Key features:
- Functions for the most important HTTP verbs: GET(), HEAD(), PATCH(), PUT(), DELETE() and POST().
- Automatic connection sharing across requests to the same website (by default, curl handles are managed automatically), cookies are maintained across requests, and an up-to-date root-level SSL certificate store is used.
- Requests return a standard response object that captures the HTTP status line, headers and body, along with other useful information.
- Response content is available with content() as a raw vector (as = "raw"), a character vector (as = "text"), or parsed into an R object (as = "parsed"), currently for HTML, XML, JSON, PNG and JPEG.
- You can convert HTTP errors into R errors with stop_for_status().
- Config functions make it easier to modify the request in common ways: set_cookies(), add_headers(), authenticate(), use_proxy(), verbose(), timeout(), content_type(), accept(), progress() (see the sketch after this list).
- Support for OAuth 1.0 and 2.0 with oauth1.0_token() and oauth2.0_token(). The demo directory has eight OAuth demos: four for 1.0 (twitter, vimeo, withings and yahoo) and four for 2.0 (facebook, github, google, linkedin). OAuth credentials are automatically cached within a project.
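To make the list concrete, here is a minimal sketch (reusing httpbin.org, which also appears in the examples below) that combines config functions with the three content() forms:
library(httr)
# attach an extra header and a 5-second timeout to a single request:
resp <- GET("http://httpbin.org/get", add_headers(Accept = "application/json"), timeout(5))
content(resp, as = "raw") # body as a raw vector
content(resp, as = "text") # body as a character vector
content(resp, as = "parsed") # body parsed into an R object (a list, for JSON)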
Highlights
sync, ease-of-use, no-http2, no-async, popular
Example Use
import requests
# GET request:
response = requests.get("http://webscraping.fyi/")
response.status_code # 200
response.text # response body as str, e.g. "text"
response.content # response body as bytes, e.g. b"bytes"
# requests can automatically convert JSON responses to Python dictionaries:
response = requests.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}
# for POST requests it can serialize Python dictionaries as JSON:
response = requests.post("http://httpbin.org/post", json={"query": "hello world"})
# or send them as form data:
response = requests.post("http://httpbin.org/post", data={"query": "hello world"})
# a Session object can be used to automatically keep track of cookies and set defaults:
from requests import Session
s = Session()
s.headers = {"User-Agent": "webscraping.fyi"}
s.get('http://httpbin.org/cookies/set/foo/bar')
print(s.cookies['foo'])
# bar
print(s.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}
library(httr)
# GET requests:
resp <- GET("http://httpbin.org/get")
status_code(resp) # status code
headers(resp) # headers
str(content(resp)) # body
# POST requests (example url and body):
url <- "http://httpbin.org/post"
body <- list(query = "hello world")
# Form encoded
resp <- POST(url, body = body, encode = "form")
# Multipart encoded
resp <- POST(url, body = body, encode = "multipart")
# JSON encoded
resp <- POST(url, body = body, encode = "json")
# setting cookies:
resp <- GET("http://httpbin.org/cookies", set_cookies("MeWant" = "cookies"))
content(resp)$cookies # get response cookies
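The error-handling and authentication helpers from the feature list slot into the same calls; a short sketch against httpbin's test endpoints:
# convert an HTTP error response into an R error:
resp <- GET("http://httpbin.org/status/404")
stop_for_status(resp) # errors with something like "Not Found (HTTP 404)."
# basic authentication via the authenticate() config function:
resp <- GET("http://httpbin.org/basic-auth/user/passwd", authenticate("user", "passwd"))
status_code(resp) # 200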