
lxml vs chompjs

|                    | lxml          | chompjs       |
|--------------------|---------------|---------------|
| License            | BSD-3-Clause  | MIT           |
| Downloads / month  | 270.5 million | 47.0 thousand |
| Latest version     | 6.0.3         | 1.4.0         |

lxml is a low-level XML and HTML tree processor. It's used by many other libraries, such as parsel or BeautifulSoup, for higher-level HTML parsing.

One of the main features of lxml is its speed and efficiency.
It is built on top of the libxml2 and libxslt C libraries, which are known for their high performance and low memory footprint. This makes lxml well-suited for processing large and complex XML and HTML documents.

One of the key components of lxml is the ElementTree API, which is modeled after the ElementTree API from the Python standard library's xml module. This API provides a simple and intuitive way to access and manipulate the elements and attributes of an XML or HTML document. It also provides a powerful and flexible XPath engine that allows you to select elements based on their names, attributes, and contents.
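As a quick sketch of what that looks like in practice (the HTML snippet and selectors here are made up for illustration), element iteration, attribute access, and XPath selection work like this:

```python
from lxml import etree

# a small, illustrative document (not from any real page)
doc = etree.fromstring(
    "<div><a href='/about'>About</a><a href='/blog'>Blog</a></div>"
)

# ElementTree-style iteration over elements and their attributes
for link in doc.iter("a"):
    print(link.get("href"), link.text)

# XPath selection by attribute value
print(doc.xpath('//a[@href="/blog"]/text()'))  # ['Blog']
```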

Another feature of lxml is its support for transforming XML documents using the XSLT standard. The lxml library provides a powerful and easy-to-use interface for applying XSLT stylesheets to XML documents, which can be used to transform and convert XML documents into other formats, such as HTML, plain text, or other XML dialects.
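A minimal sketch of applying a stylesheet with lxml's `etree.XSLT` (the stylesheet and input document here are invented for illustration):

```python
from lxml import etree

# a minimal XSLT stylesheet that turns <item> elements into HTML list items
xslt_doc = etree.XML("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <ul><xsl:apply-templates/></ul>
  </xsl:template>
  <xsl:template match="item">
    <li><xsl:value-of select="."/></li>
  </xsl:template>
</xsl:stylesheet>
""")
transform = etree.XSLT(xslt_doc)

xml_doc = etree.XML("<items><item>first</item><item>second</item></items>")
result = transform(xml_doc)
print(str(result))
```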

For web scraping, it's usually best to use higher-level libraries built on lxml, such as parsel or BeautifulSoup.

chompjs can be used in web scraping for turning JavaScript objects embedded in pages into valid Python dictionaries.

In web scraping this is particularly useful for parsing JavaScript variables like:

```python
import chompjs

js = """
var myObj = {
    myMethod: function(params) {
        // ...
    },
    myValue: 100
}
"""
chompjs.parse_js_object(js, json_params={'strict': False})
# {'myMethod': 'function(params) {\n // ...\n }', 'myValue': 100}
```

In practice this can be used to extract hidden JSON data, such as the data in `<script id="__NEXT_DATA__">` elements on Next.js (and similar) websites. Unlike `json.loads`, chompjs can ingest JSON documents that contain JavaScript natives like functions, making it an easy way to scrape hidden web data objects.

Highlights

- low-level
- fast

Example Use


```python
from lxml import etree

# this is our HTML page:
html = """
<html>
  <head><title>Hello World!</title></head>
  <body>
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph2</p>
    <span class="price">$10</span>
  </body>
</html>
"""
tree = etree.fromstring(html)
# for parsing, lxml only supports XPath selectors:
tree.xpath('//span[@class="price"]')[0].text
# "$10"
```
```python
# basic use
import chompjs

js = """
var myObj = {
    myMethod: function(params) {
        // ...
    },
    myValue: 100
}
"""
chompjs.parse_js_object(js, json_params={'strict': False})
# {'myMethod': 'function(params) {\n // ...\n }', 'myValue': 100}

# example how to use with hidden data parsing:
import httpx
import chompjs
from parsel import Selector

response = httpx.get("http://example.com")
hidden_script = Selector(response.text).css("script#__NEXT_DATA__::text").get()
data = chompjs.parse_js_object(hidden_script)
print(data['props'])
```
