crul vs em-http-request

crul: MIT license · 102 stars · 74.3 thousand downloads/month · first released Nov 09 2016 · latest version 1.4.2 (1 year, 1 month ago)
em-http-request: MIT license · 1,220 stars · 253.3 thousand downloads/month · first released Oct 25 2009 · latest version 1.1.7 (3 years ago)

crul is an R package for sending HTTP requests and web scraping. It is designed to be simple and easy to use, while still providing powerful functionality for working with HTTP requests and scraping web pages.

One of the main features of crul is its intuitive and easy-to-use syntax for sending HTTP requests. It allows you to easily specify the HTTP method, headers, and body of a request, and also provides a simple way to handle the response.
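
For example, a minimal sketch of setting headers and curl options on a client (the URL, header value, path, and timeout below are placeholders):

library(crul)

# build a client with a default header and a curl option
con <- HttpClient$new(
  url = "https://www.example.com",             # placeholder URL
  headers = list(`User-Agent` = "my-app/1.0"), # placeholder header
  opts = list(timeout_ms = 5000)               # curl option: 5 second timeout
)
res <- con$get(path = "api/items", query = list(page = 1))
res$status_code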

crul also has the ability to handle different types of requests and responses, including GET, POST, PUT, DELETE, and PATCH. It also supports handling redirects, cookies, and authentication.
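
As a minimal sketch, basic authentication can be attached to a client with crul's auth() helper (the URL and credentials here are placeholders):

library(crul)

# attach basic auth credentials to every request from this client
con <- HttpClient$new(
  url = "https://www.example.com",            # placeholder URL
  auth = auth(user = "jane", pwd = "s3cret")  # placeholder credentials
)
res <- con$get("protected/resource")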

Another feature of crul is its support for web scraping. The library provides a simple and efficient way to extract data from web pages, using a syntax similar to that of the XML and httr libraries. It also allows you to easily filter the extracted data based on specific criteria.
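
crul itself returns the raw response; one common pattern, sketched here with the xml2 package and a placeholder URL and filter, is to parse the fetched page and then filter the extracted nodes:

library(crul)
library(xml2)

# fetch a page with crul, then parse it with xml2
res <- HttpClient$new("https://www.example.com")$get()  # placeholder URL
page <- read_html(res$parse("UTF-8"))

# extract all link texts, then keep only those matching a criterion
links <- xml_text(xml_find_all(page, "//a"))
links[grepl("docs", links)]  # placeholder filter criterion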

crul also supports parallel scraping, which allows you to make multiple requests at the same time, speeding up the scraping process.
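
A minimal sketch of parallel requests with crul's AsyncVaried class (the URLs are placeholders):

library(crul)

# build requests without executing them, then fire them all at once
reqs <- list(
  HttpRequest$new("https://www.example1.com")$get(),  # placeholder URLs
  HttpRequest$new("https://www.example2.com")$get()
)
out <- AsyncVaried$new(.list = reqs)
out$request()      # execute all requests in parallel
out$status_code()  # status codes of all responses
out$parse("UTF-8") # bodies of all responses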

In addition to these features, crul has good compatibility with other R packages such as the tidyverse and purrr, which makes it easy to manipulate the data obtained after scraping.
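
For instance, a short sketch of post-processing a list of async responses with purrr (the URLs are placeholders):

library(crul)
library(purrr)

urls <- c("https://www.example1.com", "https://www.example2.com")  # placeholders
responses <- Async$new(urls = urls)$get()

# pull the status code of every response into a numeric vector
map_dbl(responses, ~ .x$status_code)

# and every body into a character vector
map_chr(responses, ~ .x$parse("UTF-8"))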

em-http-request is a Ruby gem for making asynchronous HTTP requests using EventMachine. It allows you to perform multiple requests simultaneously and handle the responses as they come in, rather than waiting for each request to complete before making the next one.

In short it supports:

- Asynchronous HTTP API for single & parallel request execution
- Keep-Alive and HTTP pipelining support
- Auto-follow 3xx redirects with max depth
- Automatic gzip & deflate decoding
- Streaming response processing
- Streaming file uploads
- HTTP proxy and SOCKS5 support
- Basic Auth & OAuth
- Connection-level & global middleware support
- HTTP parser via http_parser.rb
- Works wherever EventMachine runs: Rubinius, JRuby, MRI
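
A sketch of the parallel request execution mentioned above, using EventMachine::MultiRequest (the URLs and request names are placeholders):

require 'eventmachine'
require 'em-http-request'

EventMachine.run do
  multi = EventMachine::MultiRequest.new

  # issue two GET requests in parallel
  multi.add :google, EventMachine::HttpRequest.new('http://google.com/').get
  multi.add :yahoo,  EventMachine::HttpRequest.new('http://yahoo.com/').get

  multi.callback do
    # responses are keyed by the names given to #add
    p multi.responses[:callback].keys
    p multi.responses[:errback].keys
    EventMachine.stop
  end
end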

Highlights


http2 · uses-curl · async

Example Use


library(crul)

# Sending a GET request to a website
response <- HttpClient$new("https://www.example.com")$get()

# Sending a POST request to a website
request_body <- list(param1 = "value1", param2 = "value2")
response <- HttpClient$new("https://www.example.com")$post(body = request_body)

# Extracting the status code and body of the response
# (status_code is a field, not a method; parse() decodes the raw body)
status_code <- response$status_code
body <- response$parse("UTF-8")

# crul also makes asynchronous requests easy via the Async class:
urls <- c("https://www.example1.com", "https://www.example2.com", "https://www.example3.com")

# Sending GET requests to all urls concurrently
responses <- Async$new(urls = urls)$get()

# Extracting the status codes and bodies of the responses
status_codes <- lapply(responses, function(response) response$status_code)
bodies <- lapply(responses, function(response) response$parse("UTF-8"))

The same kind of request with em-http-request:

require 'eventmachine'
require 'em-http-request'

EventMachine.run {
  http = EventMachine::HttpRequest.new('http://google.com/').get :query => {'keyname' => 'value'}

  # add callback for errors:
  http.errback { p 'Uh oh'; EM.stop }

  # add callback for successful requests
  http.callback {
    p http.response_header.status
    p http.response_header
    p http.response

    EventMachine.stop
  }
}

