
requests vs crul

requests: Apache-2.0 license, 52,128 GitHub stars, 589.4 million downloads/month, first released Feb 14 2011, latest version 2.32.3 (4 months ago)
crul: MIT license, 106 GitHub stars, 30.7 thousand downloads/month, first released Nov 09 2016, latest version 1.5.0 (6 months ago)

The requests package is a popular library for making HTTP requests in Python. It provides a simple, easy-to-use API for sending HTTP/1.1 requests and abstracts away many of the low-level details of working with HTTP. Its API is compact enough that you can send a GET request with a single line of code:

import requests
response = requests.get('https://webscraping.fyi/lib/requests/')
requests makes it easy to send data along with your requests, including JSON payloads and file uploads. It automatically handles redirects and cookies, and it supports both basic and digest authentication. It also provides solid support for exception handling, timeouts, sessions, and the common content-encoding types. One thing to keep in mind is that requests is a synchronous library, which means your program blocks while waiting for a response. When that is not desirable, you may want to use an asynchronous library such as httpx or aiohttp. You can install the requests package via the pip package manager:
pip install requests
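
As a brief illustration of the timeout and exception handling mentioned above, here is a minimal sketch (httpbin.org is used purely as a stand-in test endpoint):

import requests

try:
    # a timeout (in seconds) prevents the call from blocking indefinitely
    response = requests.get("https://httpbin.org/delay/1", timeout=5)
    # raise requests.HTTPError for 4xx/5xx status codes
    response.raise_for_status()
except requests.exceptions.Timeout:
    print("request timed out")
except requests.exceptions.RequestException as err:
    print(f"request failed: {err}")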
requests is a very popular library with a large and active community, so many third-party libraries build on top of it and it sees use across a wide range of projects.

crul is an R library for sending HTTP requests, commonly used as the HTTP layer in web scraping. It is designed to be simple and easy to use while still providing powerful functionality for working with HTTP requests and fetching web pages.

One of the main features of crul is its intuitive and easy-to-use syntax for sending HTTP requests. It allows you to easily specify the HTTP method, headers, and body of a request, and also provides a simple way to handle the response.

crul can issue all of the common HTTP verbs, including GET, POST, PUT, DELETE, and PATCH, and it also supports redirects, cookies, and authentication.
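
A minimal sketch of how that looks with crul's HttpClient, assuming httpbin.org as a stand-in endpoint and placeholder credentials:

library(crul)

# client with default headers and basic authentication
client <- HttpClient$new(
  url = "https://httpbin.org",
  headers = list(`User-Agent` = "webscraping.fyi"),
  auth = auth(user = "user", pwd = "passwd")
)

# different HTTP verbs on the same client
res_get    <- client$get(path = "get", query = list(q = "hello"))
res_put    <- client$put(path = "put", body = list(param1 = "value1"))
res_delete <- client$delete(path = "delete")

res_get$status_code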

Another feature of crul is its role in web scraping. The library provides a simple and efficient way to fetch web pages, with an interface similar to that of the httr library, and the downloaded HTML can then be parsed and filtered with packages such as XML or xml2 to extract data matching specific criteria.
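
For instance, a rough sketch of that workflow, assuming the xml2 package is available for the parsing step (crul fetches the page, xml2 extracts the data):

library(crul)
library(xml2)

# fetch the page with crul
res <- HttpClient$new(url = "https://www.example.com")$get()
html <- read_html(res$parse("UTF-8"))

# extract and filter data with XPath expressions
titles <- xml_text(xml_find_all(html, "//h1"))
links  <- xml_attr(xml_find_all(html, "//a"), "href")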

crul also supports parallel requests, which lets you send multiple requests at the same time and speeds up the scraping process.

In addition to these features, crul works well alongside other R packages such as the tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
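
As a hedged sketch of that kind of pipeline, assuming purrr and tibble are installed and using placeholder URLs:

library(crul)
library(purrr)
library(tibble)

urls <- c("https://httpbin.org/get", "https://httpbin.org/headers")
# send both GET requests concurrently
responses <- Async$new(urls = urls)$get()

# collect the results into a tidy data frame
results <- tibble(
  url    = urls,
  status = map_int(responses, ~ .x$status_code),
  size   = map_int(responses, ~ length(.x$content))  # body size in bytes
)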

Highlights


requests: sync, ease-of-use, no-http2, no-async, popular
crul: http2, uses-curl, async

Example Use


import requests

# get request:
response = requests.get("http://webscraping.fyi/")
response.status_code
# 200
response.text
# "text"
response.content
# b"bytes"

# requests can automatically convert json responses to Python dictionaries:
response = requests.get("http://httpbin.org/json")
print(response.json())
# {'slideshow': {'author': 'Yours Truly', 'date': 'date of publication', 'slides': [{'title': 'Wake up to WonderWidgets!', 'type': 'all'}, {'items': ['Why <em>WonderWidgets</em> are great', 'Who <em>buys</em> WonderWidgets'], 'title': 'Overview', 'type': 'all'}], 'title': 'Sample Slide Show'}}

# for POST request it can ingest Python's dictionaries as JSON:
response = requests.post("http://httpbin.org/post", json={"query": "hello world"})
# or form data:
response = requests.post("http://httpbin.org/post", data={"query": "hello world"})

# Session object can be used to automatically keep track of cookies and set defaults:
from requests import Session
s = Session()
s.headers = {"User-Agent": "webscraping.fyi"}
s.get('http://httpbin.org/cookies/set/foo/bar')
print(s.cookies['foo'])
# 'bar'
print(s.get('http://httpbin.org/cookies').json())
# {'cookies': {'foo': 'bar'}}

library(crul)

# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()
# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)

# Extracting the status code and body of the response
status_code <- response$status_code
body <- response$parse("UTF-8")

# crul also allows easy asynchronous requests via the Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")
# Creating an Async client from the urls
cc <- Async$new(urls = urls)

# Sending all GET requests concurrently; returns a list of HttpResponse objects
responses <- cc$get()

# Extracting the status code and body of the responses
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))
