
req vs crul

req (Go): MIT license · 4,266 stars · 58.1 thousand downloads/month · latest release v3.48.0 (13 days ago)
crul (R): MIT license · 106 stars · 30.7 thousand downloads/month · latest release 1.5.0 (6 months ago)

req is a simple, easy-to-use Go library for making HTTP requests. It is designed to make working with HTTP as painless as possible, providing a clean and consistent API for handling the common request types, including GET, POST, PUT, and DELETE.

One of the key features of req is its JSON handling: request bodies are serialized and response bodies deserialized automatically, making it easy to work with JSON APIs from Go. It also supports multipart file uploads and automatic decompression of gzip and deflate encoded responses.
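
As an illustration of that JSON handling and of a multipart upload, here is a minimal sketch assuming req v3 (import path github.com/imroc/req/v3); the Login and Token types and the example.com endpoints are placeholders, not part of the library.

package main

import (
    "fmt"
    "time"

    "github.com/imroc/req/v3"
)

// Login and Token are hypothetical types used only to show automatic JSON handling.
type Login struct {
    Username string `json:"username"`
    Password string `json:"password"`
}

type Token struct {
    AccessToken string `json:"access_token"`
}

func main() {
    client := req.C().SetTimeout(10 * time.Second)

    var token Token
    resp, err := client.R().
        SetBody(&Login{Username: "imroc", Password: "secret"}). // struct bodies are marshaled to JSON automatically
        SetResult(&token).                                       // 2xx responses are unmarshaled into token
        Post("https://example.com/api/login")                    // placeholder endpoint
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status, token.AccessToken)

    // Multipart upload: SetFile attaches a local file as a multipart form field.
    _, _ = client.R().
        SetFile("file", "./report.pdf").                // placeholder local file
        SetFormData(map[string]string{"note": "demo"}). // extra form fields sent alongside the file
        Post("https://example.com/api/upload")          // placeholder endpoint
}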

req also includes a number of convenience functions for working with common HTTP request types, such as sending GET and POST requests, handling redirects, and setting headers and query parameters. The library can also be easily extended with custom middleware and request handlers.
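
As a rough sketch of those extension points in req v3, the snippet below sets client-wide headers and query parameters, a redirect policy, and request/response middleware via OnBeforeRequest and OnAfterResponse; the logged messages and the "source" query parameter are purely illustrative.

package main

import (
    "log"

    "github.com/imroc/req/v3"
)

func main() {
    client := req.C().
        SetCommonHeader("Accept", "application/json"). // header applied to every request from this client
        SetCommonQueryParam("source", "docs").         // query parameter applied to every request (illustrative name)
        SetRedirectPolicy(req.MaxRedirectPolicy(5)).   // follow at most 5 redirects
        OnBeforeRequest(func(c *req.Client, r *req.Request) error {
            log.Println("sending", r.Method, r.RawURL) // request middleware runs before each request is sent
            return nil
        }).
        OnAfterResponse(func(c *req.Client, resp *req.Response) error {
            if resp.Err == nil { // response middleware also runs when the request errored, so check first
                log.Println("received status", resp.StatusCode)
            }
            return nil
        })

    if _, err := client.R().Get("https://httpbin.org/get"); err != nil {
        log.Fatal(err)
    }
}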

Overall, req is a powerful and flexible library that makes it easy to work with HTTP requests in Go. It is well-documented and actively maintained, making it a great choice for any Go project that needs to work with HTTP requests.

crul is an R library for sending HTTP requests and for web scraping. It is designed to be simple and easy to use, while still providing powerful functionality for working with HTTP requests and scraping web pages.

One of the main features of crul is its intuitive and easy-to-use syntax for sending HTTP requests. It allows you to easily specify the HTTP method, headers, and body of a request, and also provides a simple way to handle the response.

crul can handle the different request and response types, including GET, POST, PUT, DELETE, and PATCH. It also supports redirects, cookies, and authentication.
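
A minimal sketch of those verbs together with headers, basic authentication, and redirect following in crul; the httpbin.org endpoints and the credentials are placeholders.

library(crul)

cli <- HttpClient$new(
  url = "https://httpbin.org",
  headers = list(Accept = "application/json"),
  auth = auth(user = "user", pwd = "passwd"),   # basic auth with placeholder credentials
  opts = list(followlocation = TRUE)            # follow redirects via the underlying curl option
)

# PUT, PATCH, and DELETE requests against placeholder endpoints
res_put <- cli$put(path = "put", body = list(name = "value"))
res_patch <- cli$patch(path = "patch", body = list(name = "value"))
res_delete <- cli$delete(path = "delete")

res_put$status_code
res_put$parse("UTF-8")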

Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr libraries. It also allows you to easily filter the extracted data based on specific criteria.
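
As a hedged sketch of that workflow: crul fetches the page and returns the body, and the extraction and filtering below are done with the separate xml2 package; the URL and the XPath expression are placeholders.

library(crul)
library(xml2)

# Fetch the page with crul and parse the body text into an HTML document
res <- HttpClient$new("https://www.example.com")$get()
doc <- read_html(res$parse("UTF-8"))

# Extract all link targets and filter them on a specific criterion (absolute URLs only)
links <- xml_attr(xml_find_all(doc, "//a"), "href")
links <- links[grepl("^https?://", links)]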

crul also supports parallel scraping, which allows multiple requests to be made at the same time, speeding up the scraping process.

In addition to these features, crul works well with other R packages such as the tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
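
For example, a list of crul responses (such as the responses object produced in the asynchronous example further down) can be tidied with purrr and tibble; this is only a sketch and assumes those packages are installed.

library(purrr)
library(tibble)

# Collect the url, status, and body of each response into a tidy data frame
results <- tibble(
  url = map_chr(responses, ~ .x$url),
  status = map_dbl(responses, ~ .x$status_code),
  body = map_chr(responses, ~ .x$parse("UTF-8"))
)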

Highlights


http2
uses-curl
async

Example Use


req.DevMode() //  Use Client.DevMode to enable debugging details

// There are 2 ways to use req (like many other http clients)
// First way is to create a persistent session client:
client := req.C(). // defaults like timeout and headers can be set for the whole session
    SetUserAgent("my-custom-client").
    SetTimeout(5 * time.Second)
// defaults can be overridden and extended in each request
resp, err := client.R(). // Use R() to create a request and set with chainable request settings.
    SetHeader("Accept", "application/vnd.github.v3+json").
    SetPathParam("username", "imroc").
    SetQueryParam("page", "1").
    SetResult(&result). // Unmarshal response into struct automatically if status code >= 200 and <= 299.
    SetError(&errMsg). // Unmarshal response into struct automatically if status code >= 400.
    EnableDump(). // Enable dump at request level to help troubleshoot, log content only when an unexpected exception occurs.
    Get("https://api.github.com/users/{username}/repos")

// Alternatively, a request can be created directly from the client and configured in a chained style, then sent with Do():

resp := client.Get("https://api.github.com/users/{username}/repos"). // Create a GET request with specified URL.
    SetHeader("Accept", "application/vnd.github.v3+json").
    SetPathParam("username", "imroc").
    SetQueryParam("page", "1").
    SetResult(&result).
    SetError(&errMsg).
    EnableDump().
    Do() // Send request with Do.

library(crul)

# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()
# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)

# Extracting the status code and body of the response
status_code <- response$status_code
body <- response$parse("UTF-8")

# crul also allows easy asynchronous requests via the Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")

# Creating an Async client over the urls and sending all GET requests at once
cc <- Async$new(urls = urls)
responses <- cc$get()

# Extracting the status code and body of each response
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))

Alternatives / Similar

