soup vs parsel
soup is a Go library for parsing and querying HTML documents. It provides a simple and intuitive interface for extracting information from HTML pages. It's inspired by the popular Python web scraping library BeautifulSoup and exposes a similar API, implementing functions like Find and FindAll (both demonstrated in the example below). soup can also use Go's built-in HTTP client to download HTML content. Note that unlike BeautifulSoup, soup does not support CSS selectors or XPath.
parsel is a Python library for parsing HTML and XML using selectors, similar to BeautifulSoup. It is built on top of the lxml library and allows for easy extraction of data from HTML and XML files using selectors, much like you would use CSS selectors in web development. It is a lightweight library designed specifically for web scraping and parsing, which makes it more efficient and faster than BeautifulSoup in some use cases.
Some of the key features of parsel include:
- CSS selector & XPath selector support: the two most common HTML path languages are both supported in parsel. This allows selecting attributes, tags, and text, as well as complex matching rules that use regular expressions or XPath functions.
- Modifying data: parsel allows you to modify the contents of an element, remove elements, or add new elements to a document (see the sketch after this section).
- Support for both HTML and XML: parsel supports both HTML and XML documents, and you can use the same selectors for both formats.
parsel is easy to use and less verbose than BeautifulSoup, so it's quite popular among developers working on web scraping projects that parse data from large volumes of web pages.
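Here's a minimal sketch of the modification and XML features mentioned above (the markup is invented for this demo, and drop() assumes parsel 1.8+; older releases offer the same behavior as remove()):

from parsel import Selector

html = """
<div id="product">
  <h1>Product Title</h1>
  <p class="ad">sponsored content</p>
  <span class="price">$10</span>
</div>
"""

sel = Selector(html)
# drop() removes the matched elements from the document tree:
sel.css("p.ad").drop()
print(sel.css("#product p").getall())  # [] - the ad paragraph is gone

# the same selector API works for XML documents:
xml = '<catalog><item sku="a1"><price>10</price></item></catalog>'
xsel = Selector(text=xml, type="xml")
print(xsel.xpath("//item/@sku").get())  # "a1"
print(xsel.css("item > price::text").get())  # "10"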
Example Use
Here's soup in action in Go, scraping the weather widget from Bing search results:
package main

import (
    "fmt"
    "log"

    "github.com/anaskhan96/soup"
)

func main() {
    url := "https://www.bing.com/search?q=weather+Toronto"
    // soup includes a basic HTTP client, though it's not recommended for serious scraping:
    resp, err := soup.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    // create a soup object from the HTML
    doc := soup.HTMLParse(resp)
    // HTML elements can be found using the Find or FindStrict methods -
    // in this case, <div> elements whose "class" attribute matches these values:
    grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
    // note: to find all matching elements, the FindAll() method can be used the same way
    // elements can be searched further for their descendants:
    heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
    conditions := grid.Find("div", "class", "wtr_condition")
    primaryCondition := conditions.Find("div")
    secondaryCondition := primaryCondition.FindNextElementSibling()
    temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
    others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
    caption := secondaryCondition.Find("div").Text()
    fmt.Println("City Name : " + heading)
    fmt.Println("Temperature : " + temp + "˚C")
    for _, i := range others {
        fmt.Println(i.Text())
    }
    fmt.Println(caption)
}
And here's the equivalent introduction to parsel in Python:

from parsel import Selector

# this is our example HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

selector = Selector(html)
# we can use CSS selectors:
selector.css("#product .price::text").get()
'$10'
# or XPath:
selector.xpath('//span[@class="price"]/text()').get()
'$10'
# or get all matching elements:
selector.css("#product p::text").getall()
['paragraph 1', 'paragraph2']
# parsel also comes with utility methods like regular expression parsing:
selector.xpath('//span[@class="price"]').re(r"\d+")
['10']