
lxml vs ralger


lxml is a low-level XML and HTML tree processor. It's used by many other libraries, such as parsel and BeautifulSoup, for higher-level HTML parsing.

One of the main features of lxml is its speed and efficiency.
It is built on top of the libxml2 and libxslt C libraries, which are known for their high performance and low memory footprint. This makes lxml well-suited for processing large and complex XML and HTML documents.
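As a rough sketch of how this plays out on large documents, lxml's iterparse can stream a file instead of loading the whole tree into memory. The file name "large.xml" and the tag name "item" below are made-up assumptions for the example:

from lxml import etree

def count_items(path):
    count = 0
    # iterparse yields each element as its closing tag is read
    for _event, elem in etree.iterparse(path, events=("end",), tag="item"):
        count += 1
        elem.clear()  # release the element to keep memory usage flat
    return count

print(count_items("large.xml"))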

One of the key components of lxml is its ElementTree API, modeled after the ElementTree API in the Python standard library's xml module. This API provides a simple and intuitive way to access and manipulate the elements and attributes of an XML or HTML document. It also provides a powerful and flexible XPath engine that lets you select elements based on their names, attributes, and contents.
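To make that concrete, here is a minimal sketch of building a small tree with the ElementTree-style API and querying it with XPath; the element names are invented for the example:

from lxml import etree

# build a tiny document with Element / SubElement
root = etree.Element("products")
item = etree.SubElement(root, "product", id="1")
etree.SubElement(item, "name").text = "Laptop"
etree.SubElement(item, "price", currency="USD").text = "999"

# select by element name, attribute and text content
root.xpath("//product[@id='1']/name/text()")          # ['Laptop']
root.xpath("//price[@currency='USD']/text()")         # ['999']
root.xpath("//product[name='Laptop']/price/text()")   # ['999']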

Another feature of lxml is its support for the XSLT transformation standard. The library provides a powerful and easy-to-use interface for applying XSLT stylesheets to XML documents, which can be used to transform and convert XML documents into other formats such as HTML, plain text, or other XML dialects.
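The snippet below is a small sketch of that workflow: it compiles an invented stylesheet with etree.XSLT and applies it to a document to produce an HTML list.

from lxml import etree

# a hypothetical stylesheet that renders <product> records as an HTML list
xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/products">
    <ul>
      <xsl:for-each select="product">
        <li><xsl:value-of select="name"/>: <xsl:value-of select="price"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")

doc = etree.XML("""
<products>
  <product><name>Laptop</name><price>$999</price></product>
  <product><name>Mouse</name><price>$25</price></product>
</products>
""")

transform = etree.XSLT(xslt)   # compile the stylesheet
result = transform(doc)        # apply it to the document
print(str(result))             # <ul><li>Laptop: $999</li><li>Mouse: $25</li></ul>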

For web scraping, it's usually best to use higher-level libraries built on lxml, such as parsel or BeautifulSoup (see the sketch below).
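As a rough illustration, here is how a similar selection looks with those higher-level libraries, both of which delegate parsing to lxml here (they must be installed separately):

from bs4 import BeautifulSoup
from parsel import Selector

html = '<div id="product"><span class="price">$10</span></div>'

# BeautifulSoup using the lxml parser backend
soup = BeautifulSoup(html, "lxml")
soup.select_one("span.price").text                    # '$10'

# parsel accepts both CSS and XPath selectors
sel = Selector(text=html)
sel.css("span.price::text").get()                     # '$10'
sel.xpath('//span[@class="price"]/text()').get()      # '$10'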

ralger is a small web scraping framework for R based on rvest and xml2.

Its goal is to simplify basic web scraping, and it provides a convenient, easy-to-use API.

It offers functions for retrieving pages, parsing HTML using CSS selectors, automatic table parsing, and automatic link, title, image, and paragraph extraction.

Highlights


low-level, fast

Example Use


from lxml import etree

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph 2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

tree = etree.HTML(html)

# for selecting elements, lxml's own API uses XPath:
tree.xpath('//span[@class="price"]')[0].text
"$10"

library("ralger")

url <- "http://www.shanghairanking.com/rankings/arwu/2021"

# retrieve HTML and select elements using CSS selectors:
best_uni <- scrap(link = url, node = "a span", clean = TRUE)
head(best_uni, 5)
#>  [1] "Harvard University"
#>  [2] "Stanford University"
#>  [3] "University of Cambridge"
#>  [4] "Massachusetts Institute of Technology (MIT)"
#>  [5] "University of California, Berkeley"

# ralger can also parse HTML attributes
attributes <- attribute_scrap(
  link = "https://ropensci.org/",
  node = "a", # the a tag
  attr = "class" # getting the class attribute
)

head(attributes, 10) # NA values are a tags without a class attribute
#>  [1] "navbar-brand logo" "nav-link"          NA
#>  [4] NA                  NA                  "nav-link"
#>  [7] NA                  "nav-link"          NA
#> [10] NA

# ralger can automatically scrape tables:
data <- table_scrap(link = "https://www.boxofficemojo.com/chart/top_lifetime_gross/?area=XWW")

head(data)
#> # A tibble: 6 × 4
#>    Rank Title                                      `Lifetime Gross`  Year
#>   <int> <chr>                                      <chr>            <int>
#> 1     1 Avatar                                     $2,847,397,339    2009
#> 2     2 Avengers: Endgame                          $2,797,501,328    2019
#> 3     3 Titanic                                    $2,201,647,264    1997
#> 4     4 Star Wars: Episode VII - The Force Awakens $2,069,521,700    2015
#> 5     5 Avengers: Infinity War                     $2,048,359,754    2018
#> 6     6 Spider-Man: No Way Home                    $1,901,216,740    2021
