lxml vs Nokogiri

lxml: 80.4 million downloads/month, latest release 5.2.2 (2 months ago), license: NOASSERTION
Nokogiri: 4.5 million downloads/month, latest release 1.16.6 (a month ago), license: MIT

lxml is a low-level XML and HTML tree processor. It's used by many other libraries, such as parsel or BeautifulSoup, for higher-level HTML parsing.

One of the main features of lxml is its speed and efficiency.
It is built on top of the libxml2 and libxslt C libraries, which are known for their high performance and low memory footprint. This makes lxml well-suited for processing large and complex XML and HTML documents.

One of the key components of lxml is the ElementTree API, which is modeled after the ElementTree API from the Python standard library's xml module. This API provides a simple and intuitive way to access and manipulate the elements and attributes of an XML or HTML document. It also provides a powerful and flexible XPath engine that allows you to select elements based on their names, attributes, and contents.
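
Here is a minimal sketch of both styles of access (the document, element names, and values below are made up for illustration):

from lxml import etree

xml = """<catalog>
  <product sku="A1">
    <name>Widget</name>
    <price>9.99</price>
  </product>
</catalog>"""

root = etree.fromstring(xml)

# ElementTree-style navigation: find child elements, read attributes and text
product = root.find("product")
print(product.get("sku"))        # A1
print(product.findtext("name"))  # Widget

# the same data selected through the XPath engine
print(root.xpath("//product/price/text()")[0])  # 9.99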

Another feature of lxml is its support for transforming XML documents using the XSLT standard. The library provides a powerful and easy-to-use interface for applying XSLT stylesheets to XML documents, which can be used to transform and convert XML into other formats, such as HTML, plain text, or other XML formats (including XSL-FO, which a separate processor can render to PDF).
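
A minimal sketch of that interface, using a made-up inline stylesheet and document:

from lxml import etree

# a small XSLT stylesheet that renders a <greeting> document as HTML
xslt_root = etree.XML("""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html><body><h1><xsl:value-of select="/greeting"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt_root)  # compiles the stylesheet into a reusable callable
doc = etree.XML("<greeting>Hello World!</greeting>")
result = transform(doc)
print(str(result))  # <html><body><h1>Hello World!</h1></body></html>

The compiled transform object can then be applied to any number of documents.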

For web scraping, it's usually best to use higher-level libraries built on top of lxml, such as parsel or BeautifulSoup.
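
For instance, here's a minimal sketch (assuming the parsel and beautifulsoup4 packages are installed, and using a made-up HTML snippet) of how each can use lxml under the hood:

from parsel import Selector
from bs4 import BeautifulSoup

html = '<div id="product"><span class="price">$10</span></div>'

# parsel wraps lxml and adds CSS selectors on top of its XPath support
print(Selector(text=html).css("span.price::text").get())  # $10

# BeautifulSoup can use lxml as its underlying parser backend
print(BeautifulSoup(html, "lxml").select_one("span.price").text)  # $10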

Nokogiri is a Ruby gem that provides a simple and powerful way to parse and search XML and HTML documents. It is built on top of the libxml2 C library, which is known for its speed and reliability.

Nokogiri provides a simple and intuitive API for parsing and searching XML and HTML documents, and it is widely used in the Ruby ecosystem for web scraping and data extraction.

One of the main features of Nokogiri is its ability to search and navigate XML and HTML documents using CSS or XPath selectors.

Nokogiri also provides a variety of other features that can simplify the process of working with XML and HTML documents. It can automatically handle character encodings and normalize documents, it can parse and search large documents with low memory usage, and it can validate documents against a DTD or schema.

Highlights


lxml: low-level, fast
Nokogiri: css-selectors, xpath, popular

Example Use


from lxml import etree

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph 2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

# parse as an HTML fragment (the snippet above isn't well-formed XML)
tree = etree.fromstring(html, parser=etree.HTMLParser())

# for element selection, lxml itself only supports XPath selectors:
tree.xpath('//span[@class="price"]')[0].text
"$10"

require 'nokogiri'

html_string = '<html><head><title>Page Title</title></head><body><h1 class="header-class">Hello World!</h1><p>This is a sample webpage.</p></body></html>'

# Parse the HTML string
doc = Nokogiri::HTML(html_string)

# Extract the class attribute of h1 tag using CSS selector
h1_class = doc.css("h1")[0]['class']
# or XPath
h1_class = doc.xpath("//h1")[0]['class']
puts "H1 class: #{h1_class}"
