goquery vs requests-html
goquery brings a syntax and a set of features similar to jQuery to the Go language. It is a popular, easy-to-use library that lets you select elements from an HTML document using a CSS-selector-like syntax.
It is based on Go's net/html package and the CSS Selector library cascadia. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML (see the goquery wiki for ways to do this). Syntax-wise, it stays as close as possible to jQuery, with the same function names where possible and the same warm and fuzzy chainable interface. Because jQuery is such a widely used library, goquery's author felt it was better to follow its API than to start anew (in the same spirit as Go's fmt package), even though some of its methods are less than intuitive (looking at you, index()...).
goquery can also download HTML by itself (using Go's built-in HTTP client), though this is not recommended for web scraping, since bare default requests are easily detected and blocked; a sketch of the more common fetch-then-parse pattern follows.
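As a minimal sketch (not part of the original comparison), the usual pattern is to fetch the page yourself with Go's net/http client, so headers such as the User-Agent stay under your control, and then hand the response body to goquery.NewDocumentFromReader. The URL and User-Agent string below are placeholders.

package main

import (
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// Build the request yourself so the headers stay under your control.
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	// Placeholder User-Agent; set whatever your target site expects.
	req.Header.Set("User-Agent", "Mozilla/5.0 (compatible; example-scraper)")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	defer resp.Body.Close()

	// Parse the downloaded HTML with goquery.
	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println(doc.Find("title").Text())
}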
requests-html is a Python package that makes it easy to send HTTP requests and parse the HTML content of web pages. It is built on top of the popular requests package and parses HTML with lxml, which makes it fast and efficient. The package is designed to provide a simple, convenient API for web scraping, and it supports features such as JavaScript rendering, CSS selectors, and form submissions.
It also provides cookie, session, and proxy support, which makes it well suited to web scraping and web-automation tasks.
In short, requests-html offers:
- Full JavaScript support!
- CSS Selectors (a.k.a. jQuery-style, thanks to PyQuery).
- XPath Selectors, for the faint of heart.
- Mocked user-agent (like a real web browser).
- Automatic following of redirects.
- Connection-pooling and cookie persistence.
- The Requests experience you know and love, with magical parsing abilities.
- Async Support
Example Use
goquery:

package main

import (
	"fmt"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// goquery.NewDocument loads and parses an HTML document from a URL...
	doc, err := goquery.NewDocument("http://example.com")
	// ...or parse an HTML string instead with:
	//   doc, err := goquery.NewDocumentFromReader(strings.NewReader("<html>...</html>"))
	if err != nil {
		fmt.Println("Error:", err)
		return
	}

	// Find selects elements with a CSS selector; Each iterates over the matches.
	doc.Find("a").Each(func(i int, s *goquery.Selection) {
		// Text returns the combined text contents of the element.
		fmt.Println(s.Text())
		// Attr returns an attribute's value and whether it exists.
		href, _ := s.Attr("href")
		fmt.Println(href)
	})
}
requests-html:

from requests_html import HTMLSession
session = HTMLSession()
r = session.get('https://www.example.com')
# print the HTML content of the page
print(r.html.html)
# use CSS selectors to find specific elements on the page
title = r.html.find('title', first=True)
print(title.text)
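The snippet above only works with the static HTML returned by the server. As a rough sketch (not from the original example), requests-html can also execute a page's JavaScript with r.html.render(), which downloads a Chromium build on first use and re-parses the rendered DOM; the URL and sleep value below are arbitrary.

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.example.com')

# Run the page's JavaScript; the first call downloads a Chromium build.
# sleep gives scripts a moment to finish before the DOM is re-parsed.
r.html.render(sleep=1)

# Selectors now run against the rendered DOM.
for link in r.html.find('a'):
    print(link.text, link.attrs.get('href'))

session.close()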