feedparser vs html5-parser
feedparser is a Python module for downloading and parsing syndicated feeds. It can handle RSS 0.90, Netscape RSS 0.91, Userland RSS 0.91, RSS 0.92, RSS 0.93, RSS 0.94, RSS 1.0, RSS 2.0, Atom 0.3, Atom 1.0, and CDF feeds. It also parses several popular extension modules, including Dublin Core and Apple's iTunes extensions.
To use Universal Feed Parser, you will need Python 3.6 or later. Universal Feed Parser is not meant to run standalone; it is a module for you to use as part of a larger Python program.
feedparser can be used to scrape data feeds, since it both downloads them and parses their XML-structured data.
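As a minimal sketch of that scraping workflow (the URL below is a placeholder, and the title and link fields are assumed to be present in the feed's entries):
import feedparser

# placeholder feed URL used only for illustration
data = feedparser.parse('https://example.com/feed.xml')
for entry in data.entries:
    # print each entry's title and link, if the feed provides them
    print(entry.get('title'), entry.get('link'))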
html5-parser is a Python library for parsing HTML documents.
It is a fast implementation of the HTML 5 parsing spec for Python: parsing is done in C using a variant of the gumbo parser, and the gumbo parse tree is then transformed into an lxml tree, also in C, yielding parse times that can be a thirtieth of html5lib's (roughly a 30x speedup). This differs, for instance, from the gumbo Python bindings, where the initial parsing is done in C but the transformation into the final tree is done in Python.
It is built on top of the popular lxml library and provides a simple and intuitive API for working with the document's structure.
html5-parser uses the HTML5 parsing algorithm, which is more lenient and forgiving than the traditional XML-based parsing algorithm. This means that it can parse HTML documents with malformed or missing tags and still produce a usable parse tree.
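For instance, a fragment with unclosed and stray tags still yields a usable lxml tree (a small sketch; the markup here is intentionally broken):
from html5_parser import parse

# intentionally malformed markup: unclosed <p> tags and a stray </div>
broken = "<p>First paragraph<p>Second paragraph</div>"
root = parse(broken)
# the HTML5 algorithm closes the paragraphs and wraps them in <html><body>
for p in root.iter("p"):
    print(p.text) # "First paragraph", then "Second paragraph"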
To use html5-parser, you first need to install it via pip by running pip install html5-parser.
Once it is installed, you can use the html5_parser.parse() function to parse an HTML document and create a parse tree. For example:
from html5_parser import parse
html_string = "<html><body>Hello, World!</body></html>"
root = parse(html_string)
print(root.tag) # html
Once you have a parse tree, you can use the find() and findall() methods to search for elements in the document, similar to BeautifulSoup.
html5-parser also supports searching with XPath expressions, similar to lxml.
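Because the returned tree is an lxml tree, the usual xpath() method is available on it. A brief sketch (the class name is just an illustration):
from html5_parser import parse

root = parse("<html><body><p class='greeting'>Hello</p></body></html>")
# select elements with an XPath expression, as on any lxml tree
for el in root.xpath("//p[@class='greeting']"):
    print(el.text) # "Hello"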
Example Use
import feedparser
# the feed can be loaded from a remote URL
data = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
# local path
data = feedparser.parse('/home/user/data.xml')
# or raw string
data = feedparser.parse('<xml>...</xml>')
# the result dataset is a nested python dictionary containing feed data:
data['feed']['title']
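The same FeedParserDict also allows attribute-style access, and the feed's items are collected in a list (a brief sketch; field availability depends on the feed):
data.feed.title # same value as data['feed']['title']
len(data.entries) # number of items in the feed
data.entries[0].title # fields of the first item, if the feed has any entries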
from html5_parser import parse
html_string = "<html><body><p>Hello</p><p>World!</p></body></html>"
root = parse(html_string)
print(root.tag) # html
# find the first matching element
body = root.find("body")
print(body.find("p").text) # "Hello"
# or find all matching elements
for el in root.findall(".//p"):
    print(el.text) # "Hello", then "World!"