soup vs html5-parser
soup is a Go library for parsing and querying HTML documents. It provides a simple and intuitive interface for extracting information from HTML pages. It is inspired by the popular Python web scraping library BeautifulSoup and exposes a similar API, implementing functions like Find and FindAll. soup can also use Go's built-in HTTP client to download HTML content. Note that unlike BeautifulSoup, soup does not support CSS selectors or XPath.
html5-parser is a Python library for parsing HTML and XML documents. It is a fast implementation of the HTML 5 parsing spec for Python. Parsing is done in C using a variant of the gumbo parser; the gumbo parse tree is then transformed into an lxml tree, also in C, yielding parse times that can be a thirtieth of html5lib's (a 30x speedup). This differs, for instance, from the gumbo Python bindings, where the initial parsing is done in C but the transformation into the final tree is done in Python.
It is built on top of the popular lxml library and provides a simple and intuitive API for working with the document's structure.
html5-parser uses the HTML5 parsing algorithm, which is more lenient and forgiving than the traditional XML-based parsing algorithm. This means that it can parse HTML documents with malformed or missing tags and still produce a usable parse tree.
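As a minimal sketch of that leniency (the HTML fragment here is made up for illustration), html5-parser repairs a snippet with unclosed tags into a complete tree:

from html5_parser import parse

# the parser adds the missing <html> and <body> elements
# and closes the unclosed <p> tags per the HTML5 algorithm
root = parse("<p>first<p>second")
print(root.tag)  # html
print([p.text for p in root.iter("p")])  # ['first', 'second']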
To use html5-parser, you first need to install it via pip by running pip install html5-parser.
Once it is installed, you can use the html5_parser.parse() function to parse an HTML document and create a parse tree. For example:
from html5_parser import parse
html_string = "<html><body>Hello, World!</body></html>"
root = parse(html_string)
print(root.tag) # html
Once you have a parse tree, you can use the find() and findall() methods to search for elements in the document, similar to BeautifulSoup. html5-parser also supports searching using XPath, similar to lxml.
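Because parse() returns an lxml tree, the xpath() method from lxml is available on the result. A brief sketch (the document string is made up for illustration):

from html5_parser import parse

root = parse("<html><body><p>one</p><p>two</p></body></html>")
# xpath() is inherited from lxml and runs against the HTML5 parse tree
for text in root.xpath("//p/text()"):
    print(text)  # prints "one", then "two"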
Example Use
package main

import (
	"fmt"
	"log"

	"github.com/anaskhan96/soup"
)

func main() {
	url := "https://www.bing.com/search?q=weather+Toronto"
	// soup has a basic HTTP client, though it's not recommended for scraping:
	resp, err := soup.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	// create a soup object from the HTML
	doc := soup.HTMLParse(resp)
	// HTML elements can be found using the Find or FindStrict methods,
	// in this case <div> elements whose "class" attribute matches some values:
	grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
	// note: to find all matching elements, the FindAll() method can be used the same way
	// elements can be searched further for descendants:
	heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
	conditions := grid.Find("div", "class", "wtr_condition")
	primaryCondition := conditions.Find("div")
	secondaryCondition := primaryCondition.FindNextElementSibling()
	temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
	others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
	caption := secondaryCondition.Find("div").Text()
	fmt.Println("City Name : " + heading)
	fmt.Println("Temperature : " + temp + "˚C")
	for _, i := range others {
		fmt.Println(i.Text())
	}
	fmt.Println(caption)
}
And for html5-parser:

from html5_parser import parse

html_string = "<html><body><p>Hello, World!</p><p>Goodbye, World!</p></body></html>"
root = parse(html_string)
print(root.tag)  # html
body = root.find("body")
print(body.find("p").text)  # Hello, World!
# or find all matching elements
for el in body.findall("p"):
    print(el.text)  # Hello, World!, then Goodbye, World!