htmlquery vs feedparser
htmlquery is a Go library for parsing HTML documents and extracting data from them with XPath expressions. It provides a simple, intuitive API for traversing and querying the HTML tree, and is built on top of Go's golang.org/x/net/html parser and the antchfx/xpath package.
feedparser is a Python module for downloading and parsing syndicated feeds. It can handle RSS 0.90, Netscape RSS 0.91, Userland RSS 0.91, RSS 0.92, RSS 0.93, RSS 0.94, RSS 1.0, RSS 2.0, Atom 0.3, Atom 1.0, and CDF feeds. It also parses several popular extension modules, including Dublin Core and Apple's iTunes extensions.
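As a quick illustration of this format handling, the parsed result reports which of these formats was detected via a short version string. The sketch below assumes a placeholder feed URL; only the attribute access is what feedparser actually provides.

import feedparser

# Placeholder URL used purely for illustration.
data = feedparser.parse('https://example.com/feed.xml')

# feedparser identifies the detected format as a short version string,
# e.g. 'rss20' for RSS 2.0 or 'atom10' for Atom 1.0.
print(data.version)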
To use feedparser (also known as Universal Feed Parser), you will need Python 3.6 or later. It is not meant to run standalone; it is a module for you to use as part of a larger Python program.
feedparser is well suited to scraping data feeds, since it can both download them over HTTP and parse the structured XML data, as in the sketch below.
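A minimal sketch of embedding feedparser in a larger scraping script, assuming a hypothetical feed URL; the title and link fields are normalized across RSS and Atom in the parsed entries.

import feedparser

# Hypothetical URL; substitute the feed you actually want to scrape.
FEED_URL = 'https://example.com/feed.xml'

def scrape_feed(url):
    data = feedparser.parse(url)
    # Each entry is a dictionary-like object; .get() guards against
    # entries that omit a field.
    return [(entry.get('title', ''), entry.get('link', '')) for entry in data.entries]

if __name__ == '__main__':
    for title, link in scrape_feed(FEED_URL):
        print(title, link)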
Example Use
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/antchfx/htmlquery"
)

func main() {
	// Parse the HTML string (htmlquery.Parse expects an io.Reader)
	doc, err := htmlquery.Parse(strings.NewReader(`
<html>
  <body>
    <h1>Hello, World!</h1>
    <ul>
      <li>Item 1</li>
      <li>Item 2</li>
      <li>Item 3</li>
    </ul>
  </body>
</html>
`))
	if err != nil {
		log.Fatal(err)
	}

	// Extract the text of the first <h1> element
	h1 := htmlquery.FindOne(doc, "//h1")
	fmt.Println(htmlquery.InnerText(h1)) // "Hello, World!"

	// Extract the text of all <li> elements
	lis := htmlquery.Find(doc, "//li")
	for _, li := range lis {
		fmt.Println(htmlquery.InnerText(li))
	}
	// "Item 1"
	// "Item 2"
	// "Item 3"
}
import feedparser
# the feed can be loaded from a remote URL
data = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
# local path
data = feedparser.parse('/home/user/data.xml')
# or raw string
data = feedparser.parse('<xml>...</xml>')
# the parsed result is a nested Python dictionary-like object containing the feed data:
data['feed']['title']
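The same parsed object also exposes the individual entries and basic error information. The snippet below is a small sketch that reuses the data object from one of the parse calls above.

# iterate over the parsed entries
for entry in data.entries:
    print(entry.get('title', ''), entry.get('link', ''))

# 'bozo' is truthy when the feed was not well formed;
# the underlying parser error is kept in 'bozo_exception'
if data.bozo:
    print('feed had problems:', data.bozo_exception)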