
requests vs crul

requests: ISC license · 3,564 stars · 14.9 thousand downloads/month · created Oct 06 2013 · latest v2.0.11 (3 months ago)
crul: MIT license · 102 stars · 74.3 thousand downloads/month · created Nov 09 2016 · latest 1.4.2 (1 year, 1 month ago)

PHP library "Requests" is an HTTP library written in PHP, for making HTTP requests. It's heavily inspired by a popular Python library called Requests and aims for the same goals of simplifying HTTP client complexities.

It abstracts the complexities of making requests behind a simple API so that you can focus on interacting with services and consuming data in your application.

Requests allows you to send HTTP/1.1 HEAD, GET, POST, PUT, DELETE, and PATCH requests. You can add headers, form data, multipart files, and parameters with basic arrays, and access the response data in the same way.

Requests uses cURL and fsockopen, depending on what your system has available, but abstracts all the nasty stuff out of your way, providing a consistent API.

Features:

  • International Domains and URLs
  • Browser-style SSL Verification
  • Basic/Digest Authentication
  • Automatic Decompression
  • Connection Timeouts

crul is an R library for sending HTTP requests and scraping the web. It is designed to be simple and easy to use while still providing powerful functionality for working with HTTP requests and web pages.

One of crul's main features is its intuitive, easy-to-use syntax for sending HTTP requests. It lets you specify the HTTP method, headers, and body of a request, and provides a simple way to handle the response.

crul can handle different types of requests and responses, including GET, POST, PUT, DELETE, and PATCH. It also supports redirects, cookies, and authentication.
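
A rough sketch of that request-building syntax, showing custom headers, basic authentication, and redirect following; httpbin.org, the header value, and the credentials are placeholders:

library(crul)

# Build a client with custom headers, basic authentication,
# and redirect following enabled via a curl option
client <- HttpClient$new(
  url = "https://httpbin.org",
  headers = list(`User-Agent` = "my-client/0.1"),
  auth = auth(user = "user", pwd = "pass"),
  opts = list(followlocation = TRUE)
)

# GET with query parameters; POST with a body
res <- client$get(path = "get", query = list(q = "r"))
res <- client$post(path = "post", body = list(name = "Bob"))

# Inspect the response
res$status_code        # e.g. 200
res$response_headers   # named list of response headers
res$parse("UTF-8")     # body as text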

Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr packages, and lets you filter the extracted data by specific criteria.
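
For instance, a minimal scraping sketch: crul fetches the page, and xml2 (assumed here; crul itself only returns the raw response) parses it and pulls data out with XPath:

library(crul)
library(xml2)

# Fetch a page with crul
res <- HttpClient$new("https://www.example.com")$get()

# Parse the HTML body, then extract data with XPath
doc   <- read_html(res$parse("UTF-8"))
title <- xml_text(xml_find_first(doc, "//title"))
links <- xml_attr(xml_find_all(doc, "//a"), "href")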

crul also supports parallel scraping, which lets you make multiple requests at the same time and thus speeds up the scraping process.
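
A minimal sketch of that parallel mode using crul's Async class; the URLs are placeholders:

library(crul)

# Fire several GET requests concurrently; responses come back as a list
cc <- Async$new(urls = c("https://httpbin.org/get", "https://httpbin.org/ip"))
responses <- cc$get()
sapply(responses, function(z) z$status_code)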

In addition to these features, crul works well with other R packages such as tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
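
For example, the list of async responses maps cleanly over purrr (assuming the `responses` object from the previous sketch):

library(purrr)

# Pull one field out of every response
codes  <- map_int(responses, function(z) z$status_code)
bodies <- map_chr(responses, function(z) z$parse("UTF-8"))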

Highlights

  • http2
  • uses-curl
  • async

Example Use


<?php
require 'vendor/autoload.php';

use WpOrg\Requests\Requests;

// Make a GET request
$response = Requests::get('https://httpbin.org/get');
echo $response->status_code;

// Make a POST request with basic authentication
$data = array('name' => 'Bob', 'age' => 35);
$options = array('auth' => array('user', 'pass'));
$response = Requests::post('https://httpbin.org/post', array(), $data, $options);
echo $response->status_code;

library(crul)

# Send a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()

# Send a POST request with a body
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)

# Extract the status code and body of the response
status_code <- response$status_code    # status_code is a field, not a method
body <- response$parse("UTF-8")        # parse() decodes the raw body to text

# crul also allows easy asynchronous requests:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")

# Build (but do not yet send) one request object per URL
requests <- lapply(urls, function(url) HttpRequest$new(url)$get())

# Send the requests concurrently and collect the responses
res <- AsyncVaried$new(.list = requests)
res$request()
responses <- res$responses()

# Extract the status codes and bodies of the responses
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))

Alternatives / Similar

