crul vs HTTParty
crul is an R library for sending HTTP requests and scraping web pages. It is designed to be simple and easy to use, while still providing powerful functionality for working with HTTP requests and responses.
One of crul's main features is its intuitive, easy-to-use syntax for sending HTTP requests. It lets you specify the HTTP method, headers, and body of a request, and provides a simple way to handle the response.
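For instance, headers can be set when the client is created and query parameters when the request is sent. A minimal sketch, using httpbin.org as a stand-in endpoint:
library(crul)
# Headers are set on the client; query parameters on the individual request
client <- HttpClient$new(
  url = "https://httpbin.org",
  headers = list(Accept = "application/json")
)
response <- client$get(path = "get", query = list(foo = "bar"))
response$parse("UTF-8")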
crul also handles the common request types, including GET, POST, PUT, DELETE, and PATCH, and it supports redirects, cookies, and authentication.
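A sketch of the other verbs and basic authentication (httpbin.org again stands in for a real API, and the credentials are placeholders):
library(crul)
client <- HttpClient$new(
  url = "https://httpbin.org",
  auth = auth(user = "user", pwd = "passwd")  # basic authentication
)
client$put(path = "put", body = list(a = 1))  # PUT request with a body
client$patch(path = "patch")                  # PATCH request
client$delete(path = "delete")                # DELETE request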
Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr libraries, and it lets you filter the extracted data based on specific criteria.
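crul itself returns the raw response; extracting and filtering data from a page is typically done by pairing it with a parser such as xml2. A sketch, assuming the target page contains h1 elements:
library(crul)
library(xml2)
# Fetch the page with crul, then parse it with xml2
res <- HttpClient$new("https://www.example.com")$get()
page <- read_html(res$parse("UTF-8"))
# Filter the extracted data with an XPath expression
headings <- xml_text(xml_find_all(page, "//h1"))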
crul also supports parallel scraping, letting you make multiple requests at the same time and thus speeding up the scraping process (see the asynchronous example under Example Use below).
In addition to these features, crul works well with other R packages such as the tidyverse and purrr, which makes it easier to manipulate the data obtained after scraping.
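For example, purrr's map functions make it straightforward to pull a field out of every response in a list. A small sketch (the URLs are placeholders):
library(crul)
library(purrr)
urls <- c("https://httpbin.org/get", "https://httpbin.org/ip")
responses <- map(urls, function(u) HttpClient$new(u)$get())
# Collect each response body into a character vector
bodies <- map_chr(responses, function(r) r$parse("UTF-8"))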
HTTParty is a Ruby library that makes it easy to work with HTTP requests and responses. It is built on top of Net::HTTP from the Ruby standard library and provides a simple, friendly interface for making requests and handling responses.
One of the main features of HTTParty is its ability to automatically parse response bodies as JSON, XML, or other formats. This allows developers to easily access the data returned by an API without having to manually parse the response.
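For example, when a server returns JSON, the parsed body is available directly as a Hash (httpbin.org used as a stand-in endpoint):
require 'httparty'
response = HTTParty.get('http://httpbin.org/get')
# The JSON body is parsed automatically based on the Content-Type header
puts response.parsed_response['url']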
Example Use
library(crul)
# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()
# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)
# Extracting the status code and body of the response
status_code <- response$status_code   # status_code is a field, not a method
body <- response$parse("UTF-8")       # parse() decodes the raw body to text
# crul also makes asynchronous requests easy via its Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")
# Creating an Async client over the urls and sending all requests in parallel
responses <- Async$new(urls = urls)$get()
# Extracting the status code and body of each response
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))
require 'httparty'
# get request:
response = HTTParty.get('http://httpbin.org/get')
puts response.body
puts response.code
puts response.message
puts response.headers.inspect
# post request
response = HTTParty.post('http://httpbin.org/post',
:body => { :title => 'foo', :body => 'bar', :userId => 1 }.to_json,
:headers => { 'Content-Type' => 'application/json' } )
puts response.body