
parsel vs xmltodict

parsel: BSD-3-Clause license, 1,101 GitHub stars, 1.3 million downloads/month, latest release 1.9.1 (3 months ago)
xmltodict: MIT license, 5,416 GitHub stars, 41.5 million downloads/month, latest release 0.13.0 (2 years ago)

parsel is a library for parsing HTML and XML using selectors, similar to beautifulsoup. It is built on top of the lxml library and allows easy extraction of data from HTML and XML documents using selectors, much like the CSS selectors used in web development. It is a lightweight library designed specifically for web scraping and parsing, so in some use cases it is faster and more efficient than beautifulsoup.

Some of the key features of parsel include:

  • CSS selector & XPath selector support:
    The two most common HTML querying languages are both supported in parsel. They allow selecting attributes, tags and text, as well as complex matching rules that use regular expressions or XPath functions.
  • Modifying data:
    parsel allows you to modify the parsed document: you can remove elements, and through the underlying lxml tree you can change or add content (see the sketch after this list).
  • Support for both HTML and XML:
    parsel supports both HTML and XML documents and you can use the same selectors for both formats.
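
For example, elements can be dropped from the parsed document directly through the selector API. A minimal sketch, assuming the .drop() method available in recent parsel releases (older versions expose the equivalent .remove()):

from parsel import Selector

html = """
<div id="product">
  <h1>Product Title</h1>
  <span class="ads">Sponsored banner</span>
  <span class="price">$10</span>
</div>
"""

selector = Selector(html)

# drop the advertising element from the document tree
# (.drop() in recent parsel releases; .remove() in older ones)
selector.css(".ads").drop()

# the dropped element no longer appears in subsequent queries:
print(selector.css("#product span::text").getall())
['$10']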

It is easy to use and less verbose than beautifulsoup, which makes it popular among developers working on web scraping projects that parse data from large volumes of web pages.

xmltodict is a Python library that allows you to work with XML data as if it were JSON. It allows you to parse XML documents and convert them to dictionaries, which can then be easily manipulated using standard dictionary operations.

You can also use the library to convert a dictionary back into an XML document. xmltodict is built on top of Python's built-in expat XML parser and provides a simple, intuitive API for working with XML data.

Note that conversion speeds can be quite slow for large XML documents, so in web scraping it should be used to parse specific snippets rather than whole HTML documents.
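
One way to follow that advice (a sketch, not an official recipe) is to select the relevant fragment with parsel first and hand only that snippet to xmltodict:

import xmltodict
from parsel import Selector

html = """
<html><body>
  <div id="product">
    <h1>Product Title</h1>
    <span class="price">$10</span>
  </div>
  <!-- imagine the rest of a large page we don't care about -->
</body></html>
"""

# extract just the fragment we need with parsel (fast, lxml-based)
snippet = Selector(html).css("#product").get()

# convert only that snippet; the fragment must be well-formed
# (all tags closed) for xmltodict to accept it
product = xmltodict.parse(snippet)
print(product["div"]["span"]["#text"])
$10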

xmltodict pairs well with JSON query tools like jmespath or jsonpath. Alternatively, it can be used in reverse: dictionaries can be turned back into XML (via unparse) and queried with HTML parsing tools like CSS selectors and XPath.
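
A small sketch of that pairing, assuming the jmespath package is installed alongside xmltodict:

import jmespath
import xmltodict

xml = """
<catalog>
  <book><title>The Great Gatsby</title><year>1925</year></book>
  <book><title>Tender Is the Night</title><year>1934</year></book>
</catalog>
"""

data = xmltodict.parse(xml)

# repeated <book> elements become a list, so JSON query tools
# like jmespath can search the converted structure directly
print(jmespath.search("catalog.book[].title", data))
['The Great Gatsby', 'Tender Is the Night']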

It can be installed via pip by running the pip install xmltodict command.

Highlights


css-selectors, xpath-selectors

Example Use


from parsel import Selector

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

selector = Selector(html)

# we can use CSS selectors:
selector.css("#product .price::text").get()
"$10"

# or XPath:
selector.xpath('//span[@class="price"]/text()').get()
"$10"

# or get all matching elements:
print(selector.css("#product p::text").getall())
["paragraph 1", "paragraph2"]

# parsel also comes with utility methods like regular expression parsing:
selector.xpath('//span[@class="price"]').re(r"\d+")
["10"]

import xmltodict

xml_string = """
<book>
    <title>The Great Gatsby</title>
    <author>F. Scott Fitzgerald</author>
    <publisher>Charles Scribner's Sons</publisher>
    <publication_date>1925</publication_date>
</book>
"""

book_dict = xmltodict.parse(xml_string)
print(book_dict)
{'book': {'title': 'The Great Gatsby',
          'author': 'F. Scott Fitzgerald',
          'publisher': "Charles Scribner's Sons",
          'publication_date': '1925'}}

# and to reverse:
book_xml = xmltodict.unparse(book_dict)
print(book_xml)

# the xml can be loaded and parsed using parsel or beautifulsoup:
from parsel import Selector
sel = Selector(book_xml)
print(sel.css('publication_date::text').get())
1925
