
rvest vs spidr

rvest: MIT license, 1,450 stars, 555.9k downloads/month, first released Nov 22 2014, latest version 1.0.4 (1 year, 6 months ago)
spidr: MIT license, 785 stars, 1.4k downloads/month, first released Jul 25 2009, latest version 0.7.1 (26 days ago)

rvest is a popular R library for web scraping and parsing HTML and XML documents. It is built on top of the xml2 and httr libraries and provides a simple and consistent API for interacting with web pages.

One of the main advantages of rvest is its simplicity and ease of use. It provides a number of functions that make it easy to extract information from web pages, even for those who are not familiar with web scraping. The html_elements() and html_element() functions (formerly html_nodes() and html_node()) let you select elements from an HTML document using CSS selectors, much like document.querySelectorAll() in JavaScript; both are demonstrated in the Example Use section below.

rvest also provides functions for interacting with forms: html_form(), html_form_set(), and html_form_submit() (known as set_values() and submit_form() in older releases). These make it easy to fill in and submit forms, which is useful when scraping sites that require authentication or when interacting with dynamic web pages.
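A minimal sketch of that form workflow using the current function names; the URL and the field names are placeholders, not a real login endpoint:

library(rvest)

# a hypothetical login flow; URL and field names are assumptions
s <- session("https://example.com/login")
form <- html_form(s)[[1]]                   # grab the first form on the page
form <- html_form_set(form,
                      username = "scraper",
                      password = "hunter2") # fill in the fields
s <- session_submit(s, form)                # submit while keeping the session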

rvest also covers XML documents. The xml_node() and xml_nodes() functions select elements from an XML document using CSS selectors, and xml_attr() and xml_attrs() from the underlying xml2 package extract attributes from elements.
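A short sketch of XML parsing using xml2 directly; the inline document is made up for illustration:

library(xml2)

doc <- read_xml('
<catalog>
  <item id="1">Cat Food</item>
  <item id="2">Dog Food</item>
</catalog>')

items <- xml_find_all(doc, "//item")  # XPath selection
print(xml_text(items))                # [1] "Cat Food" "Dog Food"
print(xml_attr(items, "id"))          # [1] "1" "2"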

Another advantage of rvest is its session support: session() (formerly html_session()) keeps cookies alive between requests while scraping a website and follows redirects for you.
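A minimal session sketch; both URLs are placeholders:

library(rvest)

s <- session("https://example.com")    # opens a session and stores its cookies
s <- session_jump_to(s, "/some/page")  # cookies are re-sent on follow-up requests
print(session_history(s))              # lists the pages visited so far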

Spidr is a Ruby gem that provides a simple and flexible way to spider and scrape websites. It allows you to easily navigate through a website, following links and scraping data as you go. It is built on top of Nokogiri, a popular Ruby gem for parsing and searching HTML and XML documents, and it provides a simple and intuitive API for defining and running web scraping operations.

One of the main features of Spidr is its ability to spider a website, following every link on a page and visiting every page of a site. This lets you scrape large amounts of data quickly without having to manually specify which pages to visit; the Example Use section below shows a basic crawl.

In addition to its spidering capabilities, Spidr provides a variety of other features that simplify the web scraping process. It can filter which links to follow and which pages to visit, it can handle cookies and authentication, it can store the scraped data in a database or file, and it offers built-in support for parallelism and queueing to speed up the scraping process.
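For instance, link filtering can be expressed with pattern options and URL callbacks. A sketch assuming example.com as a placeholder target; the patterns are illustrative, not part of a real scraper:

require 'spidr'

# skip binary downloads, and only react to product-looking URLs
Spidr.site('http://example.com/', ignore_links_like: [/\.(pdf|zip)$/]) do |spider|
  spider.every_url_like(%r{/product/}) do |url|
    puts "Product page: #{url}"
  end
end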

Example Use


library("rvest")

# rvest can use a basic HTTP client to download remote HTML:
tree <- read_html("http://webscraping.fyi/lib/r/rvest")
# or read from string:
tree <- read_html('
<div class="products">
  <a href="/product/1">Cat Food</a>
  <a href="/product/2">Dog Food</a>
</div>
')

# to parse HTML trees with rvest we use R pipes (the %>% operator) together with
# the html_element()/html_elements() functions.
# we can use CSS selectors:
print(tree %>% html_elements(".products>a") %>% html_text())
# [1] "Cat Food" "Dog Food"

# or XPath:
print(tree %>% html_elements(xpath = "//div[@class='products']/a") %>% html_text())
# [1] "Cat Food" "Dog Food"

# Additionally rvest offers many quality-of-life functions:
# html_text2 - trims leading/trailing whitespace and collapses the rest the way a browser would:
print(tree %>% html_element("div") %>% html_text2())
# [1] "Cat Food Dog Food"

# html_attr - selects an element's attribute:
print(tree %>% html_element("div") %>% html_attr('class'))
# [1] "products"
require 'spidr'

Spidr.start_at("http://example.com") do |spider|
  spider.every_page do |page|
    puts "Visiting: #{page.url}"

    # Extract data from the page using Nokogiri
    doc = Nokogiri::HTML(page.body)
    title = doc.css("title").text
    puts "Title: #{title}"
  end
end
