goquery vs parsel
goquery brings a syntax and a set of features similar to jQuery to the Go language. goquery is a popular and easy-to-use library for Go that allows you to use a CSS selector-like syntax to select elements from an HTML document.
It is based on Go's net/html package and the CSS Selector library cascadia. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML. See the wiki for various options to do this. Syntax-wise, it is as close as possible to jQuery, with the same function names when possible, and that warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, I felt it was better for a similar HTML-manipulating library to follow its API than to start anew (in the same spirit as Go's fmt package), even though some of its methods are less than intuitive (looking at you, index()...).
goquery can download HTML by itself (using Go's built-in HTTP client), though this is not recommended for web scraping, as plain requests are likely to be blocked.
parsel is a library for parsing HTML and XML using selectors, similar to beautifulsoup. It is built on top of the lxml library and allows for easy extraction of data from HTML and XML files using selectors, similar to how you would use CSS selectors in web development. It is a lightweight library specifically designed for web scraping and parsing, so it is more efficient and faster than beautifulsoup in some use cases.
Some of the key features of parsel include:
- CSS selector & XPath selector support: the two most common HTML parsing path languages are both supported in parsel. This allows selecting attributes, tags, and text, as well as complex matching rules that use regular expressions or XPath functions.
- Modifying data: parsel allows you to modify the contents of an element, remove elements, or add new elements to a document.
- Support for both HTML and XML: parsel supports both HTML and XML documents, and you can use the same selectors for both formats.
It is easy to use and less verbose than beautifulsoup, so it is quite popular among developers working on web scraping projects that parse data from large volumes of web pages.
Example Use
package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// goquery.NewDocument loads an HTML document directly from a URL:
	// doc, err := goquery.NewDocument("http://example.com")
	// goquery.NewDocumentFromReader parses HTML from any io.Reader,
	// such as an HTML string wrapped in strings.NewReader:
	doc, err := goquery.NewDocumentFromReader(strings.NewReader(`<a href="http://example.com">example</a>`))
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	// Use the Selection.Find method to select elements from the document
	doc.Find("a").Each(func(i int, s *goquery.Selection) {
		// Selection.Text returns the text content of the element
		fmt.Println(s.Text())
		// Selection.Attr returns the attribute value and whether it exists
		href, exists := s.Attr("href")
		if exists {
			fmt.Println(href)
		}
	})
}
from parsel import Selector
# this is our HTML page:
html = """
<head>
<title>Hello World!</title>
</head>
<body>
<div id="product">
<h1>Product Title</h1>
<p>paragraph 1</p>
<p>paragraph2</p>
<span class="price">$10</span>
</div>
</body>
"""
selector = Selector(html)
# we can use CSS selectors:
selector.css("#product .price::text").get()
"$10"
# or XPath:
selector.xpath('//span[@class="price"]').get()
"$10"
# or get all matching elements:
print(selector.css("#product p::text").getall())
["paragraph 1", "paragraph2"]
# parsel also comes with utility methods like regular expression parsing:
selector.xpath('//span[@class="price"]').re(r"\d+")
["10"]