Guzzle vs crul
Guzzle is a PHP HTTP client library that makes it easy to send HTTP requests and trivial to integrate with web services. It allows you to send HTTP/1.1 requests with various methods like GET, POST, PUT, DELETE, and others.
Guzzle also supports sending both synchronous and asynchronous requests, caching, and OAuth 1.0a. It can handle HTTP errors and follow redirects automatically, has built-in support for serializing and deserializing data in formats such as JSON and XML, and can send multipart file uploads.
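For example, here is a minimal sketch of a multipart file upload using Guzzle's multipart request option (the httpbin.org URL, file path, and field names are placeholders):
use GuzzleHttp\Client;
$client = new Client();
// each multipart part is an array with a name, contents, and an optional filename
$response = $client->post('https://httpbin.org/post', [
    'multipart' => [
        [
            'name'     => 'file',
            'contents' => fopen('/path/to/report.csv', 'r'),
            'filename' => 'report.csv',
        ],
        ['name' => 'description', 'contents' => 'monthly report'],
    ],
]);
printf("status: %s\n", $response->getStatusCode());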
Overall, Guzzle is an easy-to-use and powerful library for working with HTTP in PHP.
crul is an R library for sending HTTP requests and web scraping. It is designed to be simple and easy to use while still providing powerful functionality for working with HTTP requests and scraping web pages.
One of the main features of crul is its intuitive and easy-to-use syntax for sending HTTP requests. It allows you to easily specify the HTTP method, headers, and body of a request, and also provides a simple way to handle the response.
crul can handle different types of requests and responses, including GET, POST, PUT, DELETE, and PATCH, and it also supports redirects, cookies, and authentication.
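As a small sketch of the authentication support, using crul's auth() helper against an httpbin.org test endpoint (the URL and credentials are placeholders):
library(crul)
# basic authentication is configured when the client is created
x <- HttpClient$new(
  url = "https://httpbin.org/basic-auth/user/passwd",
  auth = auth(user = "user", pwd = "passwd")
)
res <- x$get()
res$status_code # 200 when the credentials are accepted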
Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr libraries, and it lets you easily filter the extracted data based on specific criteria.
crul also supports parallel scraping, which lets you make multiple requests at the same time and speeds up the scraping process.
In addition to these features, crul works well with other R packages such as the tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
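For instance, here is a minimal sketch of pairing crul's Async client with purrr to process a list of responses (the httpbin.org URLs are placeholders):
library(crul)
library(purrr)
urls <- c("https://httpbin.org/get", "https://httpbin.org/html")
# send both requests concurrently, then use purrr to work over the list of responses
responses <- Async$new(urls = urls)$get()
status_codes <- map_dbl(responses, ~ .x$status_code)
bodies <- map_chr(responses, ~ .x$parse("UTF-8"))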
Example Use
use GuzzleHttp\Client;
// the Promise namespace provides Utils::unwrap() and Each::of() used below
use GuzzleHttp\Promise;
// Create a client session:
$client = new Client();
// can also set session details like headers
$client = new Client([
'headers' => [
'User-Agent' => 'webscraping.fyi',
]
]);
// GET request:
$response = $client->get('http://httpbin.org/get');
// print all details
var_dump($response);
// or the important bits
printf("status: %s\n", $response->getStatusCode());
printf("headers: %s\n", json_encode($response->getHeaders(), JSON_PRETTY_PRINT));
printf("body: %s", $response->getBody()->getContents());
// POST request
$response = $client->post(
'https://httpbin.org/post',
// for JSON use json argument:
['json' => ['query' => 'foobar', 'page' => 2]]
// or for form data use form_params:
// ['form_params' => ['query' => 'foobar', 'page' => 2]]
);
// For async requests the getAsync method can be used:
$promise1 = $client->getAsync('https://httpbin.org/get');
$promise2 = $client->getAsync('https://httpbin.org/get?foo=bar');
// wait for both to complete:
$results = Promise\Utils::unwrap([$promise1, $promise2]);
foreach ($results as $result) {
echo $result->getBody();
}
// or attach a callback per promise; Each::of() returns a promise that must be waited on
Promise\Each::of([$promise1, $promise2], function ($response, $index) {
echo $response->getBody();
})->wait();
library(crul)
# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()
# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)
# Extracting the status code and body of the response
status_code <- response$status_code
body <- response$parse("UTF-8")
# crul also makes asynchronous requests easy via the Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")
# Sending all GET requests concurrently
responses <- Async$new(urls = urls)$get()
# Extracting the status code and body of each response
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))