lxml vs PyQuery
lxml is a low-level XML and HTML tree processor. It is used by many other libraries, such as parsel and BeautifulSoup, for higher-level HTML parsing.
One of the main features of lxml is its speed and efficiency.
It is built on top of the libxml2 and libxslt C libraries, which are known for their high performance and low memory footprint.
This makes lxml well-suited for processing large and complex XML and HTML documents.
One of the key components of lxml is its ElementTree API, which is modeled after the ElementTree API in the Python standard library's xml module. This API provides a simple and intuitive way to access and manipulate the elements and attributes of an XML or HTML document. It also provides a powerful and flexible XPath engine that lets you select elements based on their names, attributes, and contents.
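As a quick sketch of that API, here is a minimal example of building a small document and querying it with XPath (the element and attribute names are invented for illustration):

```python
from lxml import etree

# build a small XML tree with the ElementTree-style API
root = etree.Element("catalog")
item = etree.SubElement(root, "item", id="1")
item.text = "Widget"

# attributes and text are plain Python properties
print(item.get("id"))  # 1
print(item.text)       # Widget

# the XPath engine works on any element of the tree
print(root.xpath('//item[@id="1"]')[0].text)  # Widget
```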
Another feature of lxml is its support for transforming XML documents using the XSLT standard. The library provides a powerful and easy-to-use interface for applying XSLT stylesheets to XML documents, which can be used to convert them into other formats such as HTML, plain text, or other XML dialects.
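For example, a stylesheet can turn a small XML document into an HTML fragment (the document and stylesheet below are invented for illustration):

```python
from lxml import etree

xml = etree.XML("<greeting>Hello World</greeting>")
xslt = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/greeting">
    <h1><xsl:value-of select="."/></h1>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)  # compile the stylesheet
result = transform(xml)       # apply it to the document
print(str(result))            # <h1>Hello World</h1>
```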
For web scraping, it's best to use higher-level libraries built on lxml, such as parsel or BeautifulSoup.
PyQuery is a Python library for working with XML and HTML documents. It is similar to BeautifulSoup and is often used as a drop-in replacement for it.
PyQuery is inspired by JavaScript's jQuery and uses a similar API, allowing selection of HTML nodes through CSS selectors. This makes it easy for developers already familiar with jQuery to pick up PyQuery in Python.
Unlike lxml, PyQuery doesn't support XPath selectors and relies entirely on CSS selectors, though it offers similar HTML parsing features: selecting HTML elements, their attributes and text, as well as modifying the HTML tree.
PyQuery also comes with an HTTP client (through requests), so it can load and parse web URLs by itself.
Highlights
Example Use
from lxml import etree
# this is our HTML page:
html = """
<head>
<title>Hello World!</title>
</head>
<body>
<div id="product">
<h1>Product Title</h1>
<p>paragraph 1</p>
<p>paragraph 2</p>
<span class="price">$10</span>
</div>
</body>
"""
tree = etree.HTML(html)
# for parsing, LXML only supports XPath selectors:
print(tree.xpath('//span[@class="price"]')[0].text)
"$10"
from pyquery import PyQuery as pq
# this is our HTML page:
html = """
<head>
<title>Hello World!</title>
</head>
<body>
<div id="product">
<h1>Product Title</h1>
<p>paragraph 1</p>
<p>paragraph 2</p>
<span class="price">$10</span>
</div>
</body>
"""
doc = pq(html)
# we can use CSS selectors:
print(doc('#product .price').text())
"$10"
# it's also possible to modify HTML tree in various ways:
# insert text into selected element:
print(doc('h1').append('<span>discounted</span>'))
"<h1>Product Title<span>discounted</span></h1>"
# or remove elements
doc('p').remove()
print(doc('#product').html())
"""
<h1>Product Title<span>discounted</span></h1>
<span class="price">$10</span>
"""
# pyquery can also retrieve web documents using requests:
doc = pq(url='http://httpbin.org/html', headers={"User-Agent": "webscraping.fyi"})
print(doc('h1').html())