
requests vs rvest

requests: Apache-2.0 license, 51,729 stars, 440.1 million downloads/month, first released Feb 14 2011, latest version 2.32.3 (a month ago)
rvest: MIT license, 1,485 stars, 483.1 thousand downloads/month, first released Nov 22 2014, latest version 1.0.4 (1 year, 10 months ago)

The requests package is a popular library for making HTTP requests in Python. It provides a simple, easy-to-use API for sending HTTP/1.1 requests and abstracts away many of the low-level details of working with HTTP. One of its key features is that simplicity: you can send a GET request with a single line of code:

import requests
response = requests.get('https://webscraping.fyi/lib/requests/')
requests makes it easy to send data along with your requests, including JSON data and files. It automatically handles redirects and cookies, and it supports both basic and digest authentication. It also provides robust functionality for handling exceptions, managing timeouts and sessions, and decoding a wide range of common content encodings. One thing to keep in mind is that requests is a synchronous library, which means your program blocks (stops execution) while waiting for a response. In some situations this may not be desirable, and you may want to use an asynchronous library like httpx or aiohttp instead. You can install the requests package via the pip package manager:
pip install requests
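
As a quick illustration of the timeout, authentication, and error handling mentioned above, a request might look like this (a minimal sketch; the httpbin.org URL and the credentials are placeholders):

import requests
from requests.auth import HTTPBasicAuth

try:
    response = requests.get(
        "http://httpbin.org/basic-auth/user/passwd",
        timeout=5,                             # raise Timeout if no response arrives within 5 seconds
        auth=HTTPBasicAuth("user", "passwd"),  # basic auth; HTTPDigestAuth covers digest
    )
    response.raise_for_status()                # raise HTTPError for 4xx/5xx status codes
except requests.exceptions.RequestException as exc:
    print(f"request failed: {exc}")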
requests is a very popular library with a large and active community, which means there are many third-party libraries built on top of it and it sees use in a wide range of projects.

rvest is a popular R library for web scraping and parsing HTML and XML documents. It is built on top of the xml2 and httr libraries and provides a simple and consistent API for interacting with web pages.

One of the main advantages of using rvest is its simplicity and ease of use. It provides a number of functions that make it easy to extract information from web pages, even for those who are not familiar with web scraping. The html_nodes and html_node functions allow you to select elements from an HTML document using CSS selectors, similar to how you would select elements in JavaScript.
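
For example, extracting text with a CSS selector takes only a couple of calls (a minimal sketch; the inline HTML is invented for the example, and html_element/html_elements are the newer names for the same functions):

library("rvest")

tree <- read_html('<p class="intro">Hello, <b>world</b>!</p>')
tree %>% html_node("p.intro") %>% html_text()     # first match: "Hello, world!"
tree %>% html_nodes("p.intro b") %>% html_text()  # every match: "world"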

rvest also provides functions for interacting with forms: html_form, html_form_set, and session_submit (known as set_values and submit_form in versions before 1.0). These functions make it easy to fill in forms and submit data to the server, which is useful when scraping sites that require authentication or when interacting with dynamic web pages.
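
For example, filling in and submitting a form might look like this (a minimal sketch using the rvest 1.x names and httpbin.org's test form; adjust the field names to the form you are targeting):

library("rvest")

sess <- session("http://httpbin.org/forms/post")  # start a session on a page containing a form
form <- html_form(sess)[[1]]                      # html_form returns a list of forms on the page
form <- html_form_set(form, custname = "Jane")    # fill in a field by name
resp <- session_submit(sess, form)                # submit the form within the session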

rvest also provides functions for parsing XML documents. It includes xml_nodes and xml_node functions, which also use CSS selectors to select elements from an XML document, as well as xml_attrs and xml_attr functions to extract attributes from elements.
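
For instance, with the underlying xml2 package that rvest is built on, parsing a small XML document might look like this (a minimal sketch; the document is invented for the example):

library("xml2")

doc <- read_xml('<products><item id="1">Cat Food</item><item id="2">Dog Food</item></products>')
items <- xml_find_all(doc, "//item")  # select elements with XPath
xml_text(items)
# [1] "Cat Food" "Dog Food"
xml_attr(items, "id")                 # pull one attribute from each element
# [1] "1" "2"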

Another advantage of rvest is its session support: a session object keeps cookies alive across requests while you scrape a website, and redirects are followed automatically.
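
A rough sketch of that workflow (assuming rvest 1.x, where session() replaced the older html_session()):

library("rvest")

# the first request sets a cookie and responds with a redirect, which the session follows:
sess <- session("http://httpbin.org/cookies/set/foo/bar")
# the cookie jar is carried along to later requests in the same session:
sess <- session_jump_to(sess, "http://httpbin.org/cookies")
# the response body now echoes {"cookies": {"foo": "bar"}}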

Highlights


sync, ease-of-use, no-http2, no-async, popular

Example Use


import requests

# GET request:
response = requests.get("http://webscraping.fyi/")
response.status_code  # 200
response.text         # "text" (response body as str)
response.content      # b"bytes" (response body as bytes)

# requests can automatically convert json responses to Python dictionaries:
response = requests.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}

# for POST requests, it can serialize Python dictionaries to JSON:
response = requests.post("http://httpbin.org/post", json={"query": "hello world"})
# or form data:
response = requests.post("http://httpbin.org/post", data={"query": "hello world"})

# Session object can be used to automatically keep track of cookies and set defaults:
from requests import Session
s = Session()
s.headers.update({"User-Agent": "webscraping.fyi"})  # set a default header without discarding requests' defaults
s.get('http://httpbin.org/cookies/set/foo/bar')
print(s.cookies['foo'])
# bar
print(s.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}

library("rvest")

# rvest can use a basic HTTP client to download remote HTML:
tree <- read_html("http://webscraping.fyi/lib/r/rvest")
# or read from string:
tree <- read_html('
<div class="products">
  <a href="/product/1">Cat Food</a>
  <a href="/product/2">Dog Food</a>
</div>
')

# to parse HTML trees with rvest we use R pipes (the %>% operator) and the html_element/html_elements functions:
# we can use CSS selectors (html_elements returns every match):
print(tree %>% html_elements(".products>a") %>% html_text())
# [1] "Cat Food" "Dog Food"

# or XPath:
print(tree %>% html_elements(xpath="//div[@class='products']/a") %>% html_text())
# [1] "Cat Food" "Dog Food"

# Additionally rvest offers many quality-of-life functions:
# html_text2 - removes leading/trailing whitespace and joins values:
print(tree %>% html_element("div") %>% html_text2())
# [1] "Cat Food Dog Food"

# html_attr - selects an element's attribute:
print(tree %>% html_element("div") %>% html_attr('class'))
# [1] "products"
