goquery vs feedparser
goquery brings a syntax and a set of features similar to jQuery to the Go language. It is a popular, easy-to-use library that lets you select elements from an HTML document using a CSS-selector-like syntax.
It is based on Go's net/html package and the CSS Selector library cascadia. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8-encoded HTML. See the wiki for various ways to do this.
Syntax-wise, goquery stays as close as possible to jQuery, using the same function names where possible and the same warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, following its API seemed better than starting anew (in the same spirit as Go's fmt package), even though some of its methods are less than intuitive (looking at you, index()...).
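Because goquery does not verify the encoding for you, it can be worth checking that the input really is valid UTF-8 before parsing. A minimal sketch using only the standard library's unicode/utf8 package; the isUTF8 helper name is illustrative and not part of goquery's API:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// isUTF8 reports whether raw bytes are valid UTF-8 and therefore
// safe to hand to goquery / net/html. (Illustrative helper name.)
func isUTF8(b []byte) bool {
	return utf8.Valid(b)
}

func main() {
	body := []byte("<html><body>héllo</body></html>")
	if isUTF8(body) {
		fmt.Println("safe to parse with goquery")
	} else {
		fmt.Println("transcode to UTF-8 first")
	}
}
```

For documents in other encodings, a transcoding reader (for example from the golang.org/x/text or golang.org/x/net/html/charset packages) can convert the stream to UTF-8 before it reaches goquery.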
goquery can also download HTML by itself using its built-in HTTP client, though this is not recommended for web scraping, as requests made this way are likely to be blocked.
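A common mitigation is to fetch the page yourself with net/http, set a browser-like User-Agent header, and hand the response body to goquery.NewDocumentFromReader. A minimal sketch using only the standard library and a local test server; the buildRequest helper and the User-Agent string are illustrative assumptions, not part of goquery:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// buildRequest prepares a GET request with a custom User-Agent so the
// fetch does not advertise Go's default client string. (Illustrative
// helper; the response body would then be passed to goquery.)
func buildRequest(url, userAgent string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("User-Agent", userAgent)
	return req, nil
}

func main() {
	// Local test server that echoes back the User-Agent it received,
	// so no real website is contacted in this sketch.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "UA was: %s", r.UserAgent())
	}))
	defer srv.Close()

	req, err := buildRequest(srv.URL, "Mozilla/5.0 (example scraper)")
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
	// The body could now be parsed with goquery.NewDocumentFromReader.
}
```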
feedparser is a Python module for downloading and parsing syndicated feeds. It can handle RSS 0.90, Netscape RSS 0.91, Userland RSS 0.91, RSS 0.92, RSS 0.93, RSS 0.94, RSS 1.0, RSS 2.0, Atom 0.3, Atom 1.0, and CDF feeds. It also parses several popular extension modules, including Dublin Core and Apple's iTunes extensions.
To use Universal Feed Parser, you will need Python 3.6 or later. Universal Feed Parser is not meant to run standalone; it is a module for you to use as part of a larger Python program.
feedparser can be used to scrape data feeds as it can download them and parse the XML structured data.
Example Use
package main

import (
	"fmt"
	"strings"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// goquery.NewDocument loads and parses an HTML document from a URL
	// (deprecated in recent releases in favor of fetching the page yourself):
	// doc, err := goquery.NewDocument("http://example.com")

	// goquery.NewDocumentFromReader parses HTML from any io.Reader,
	// such as an HTML string wrapped in strings.NewReader:
	doc, err := goquery.NewDocumentFromReader(strings.NewReader("<html>...</html>"))
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	// Use Selection.Find to select elements from the document
	doc.Find("a").Each(func(i int, s *goquery.Selection) {
		// Selection.Text returns the combined text of the element
		fmt.Println(s.Text())
		// Selection.Attr returns the attribute value and whether it exists
		if href, ok := s.Attr("href"); ok {
			fmt.Println(href)
		}
	})
}
import feedparser

# the feed can be loaded from a remote URL
data = feedparser.parse('http://feedparser.org/docs/examples/atom10.xml')
# from a local path
data = feedparser.parse('/home/user/data.xml')
# or from a raw string
data = feedparser.parse('<xml>...</xml>')

# the result is a nested Python dictionary containing the feed data:
print(data['feed']['title'])