
lxml vs selectolax

lxml: BSD-3-Clause license · 85.1 million downloads/month · latest release 5.2.1 (3 days ago)
selectolax: MIT license · 112.3 thousand downloads/month · latest release 0.3.21 (26 days ago)

lxml is a low-level XML and HTML tree processor. It's used by many other libraries such as parsel or beautifulsoup for higher level HTML parsing.

One of the main features of lxml is its speed and efficiency.
It is built on top of the libxml2 and libxslt C libraries, which are known for their high performance and low memory footprint. This makes lxml well-suited for processing large and complex XML and HTML documents.

One of the key components of lxml is the ElementTree API, which is modeled after the ElementTree API from the Python standard library's xml module. This API provides a simple and intuitive way to access and manipulate the elements and attributes of an XML or HTML document. It also provides a powerful and flexible XPath engine that allows you to select elements based on their names, attributes, and contents.
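To make this concrete, here is a minimal sketch of both styles side by side, using a small hypothetical XML snippet (the element names and values are illustrative, not from any real dataset):

```python
from lxml import etree

# a small illustrative document:
doc = etree.fromstring("<catalog><item id='1'>Book</item><item id='2'>Pen</item></catalog>")

# ElementTree-style navigation: find() returns the first matching child
first = doc.find("item")
print(first.get("id"), first.text)  # 1 Book

# XPath selection by attribute value:
pen = doc.xpath('//item[@id="2"]')[0]
print(pen.text)  # Pen
```

find() and friends cover simple lookups; xpath() is the tool of choice once you need conditions on attributes or text content.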

Another feature of lxml is its support for parsing and creating XML documents using the XSLT standard. The lxml library provides a powerful and easy-to-use interface for applying XSLT stylesheets to XML documents, which can be used to transform and convert XML documents into other formats, such as HTML, PDF, or even other XML formats.
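A minimal sketch of applying a stylesheet with lxml's etree.XSLT class (the stylesheet and input document here are made up for illustration):

```python
from lxml import etree

# an illustrative stylesheet that turns <name> elements into an HTML list:
xslt_doc = etree.fromstring("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/people">
    <ul><xsl:for-each select="name"><li><xsl:value-of select="."/></li></xsl:for-each></ul>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt_doc)

xml_doc = etree.fromstring("<people><name>Ada</name><name>Grace</name></people>")
result = transform(xml_doc)
print(str(result))  # serialized <ul> with one <li> per <name>
```

The result object can be serialized with str() or etree.tostring(), or queried further with XPath like any other lxml tree.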

For web scraping it's best to use higher-level libraries built on lxml, such as parsel or beautifulsoup.

selectolax is a fast and lightweight library for parsing HTML documents in Python. It is designed as an alternative to the popular BeautifulSoup library, with significantly faster performance.

selectolax uses a Cython-based parser to quickly parse and navigate through HTML and XML documents. It provides a simple and intuitive API for working with the document's structure, similar to BeautifulSoup.

To use selectolax, you first need to install it via pip by running `pip install selectolax`. Once it is installed, you can parse an HTML document with the HTMLParser class from selectolax.parser. For example:

from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag) # html
HTMLParser accepts either a string or bytes as input.

Once you have the parsed tree, you can use the css() method to search for elements in the document using CSS selectors, similar to BeautifulSoup's select(). For example:

body = root.css("body")[0]
print(body.text()) # "Hello, World!"

Like BeautifulSoup's find and find_all methods, selectolax supports the css_first() method, which returns the first matching element, and the css() method, which returns all matching elements.

Highlights


low-level
fast

Example Use


from lxml import etree

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph 2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

tree = etree.HTML(html)

# lxml's core API selects elements with XPath (CSS selectors need the cssselect add-on):
print(tree.xpath('//span[@class="price"]')[0].text)  # "$10"
from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag) # html

# use css selectors:
body = root.css("body")[0]
print(body.text()) # "Hello, World!"

# find first matching element:
body = root.css_first("body")
print(body.text()) # "Hello, World!"

# or all matching elements:
html_string = "<html><body><p>paragraph 1</p><p>paragraph 2</p></body></html>"
root = HTMLParser(html_string).root
for el in root.css("p"):
    print(el.text())
# will print:
# paragraph 1
# paragraph 2

Alternatives / Similar