
soup vs selectolax

soup: MIT license, 2,176 stars, 58.1 thousand downloads/month, first released Apr 29 2017, latest release v1.2.5 (2 years ago)
selectolax: MIT license, 1,118 stars, 1.4 million downloads/month, first released Mar 01 2018, latest release 0.3.25 (2 days ago)

soup is a Go library for parsing and querying HTML documents.

It provides a simple and intuitive interface for extracting information from HTML pages. It is inspired by the popular Python web scraping library BeautifulSoup and shares a similar API, implementing functions like Find and FindAll.

soup can also use Go's built-in HTTP client to download HTML content.

Note that unlike BeautifulSoup, soup does not support CSS selectors or XPath.

selectolax is a fast and lightweight library for parsing HTML documents in Python. It is positioned as a significantly faster alternative to the popular BeautifulSoup library, though its API is smaller and not a drop-in replacement.
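As an illustration of the performance difference, here is a minimal benchmark sketch (not from the library docs) using Python's timeit; it assumes both beautifulsoup4 and selectolax are installed, and the exact numbers will vary by machine:

from timeit import timeit

# build a moderately large document to parse repeatedly
html = "<html><body>" + "<p>hello</p>" * 1000 + "</body></html>"

bs_time = timeit(
    "BeautifulSoup(html, 'html.parser')",
    setup="from bs4 import BeautifulSoup",
    globals={"html": html},
    number=100,
)
sx_time = timeit(
    "HTMLParser(html)",
    setup="from selectolax.parser import HTMLParser",
    globals={"html": html},
    number=100,
)

print(f"BeautifulSoup: {bs_time:.3f}s  selectolax: {sx_time:.3f}s")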

Under the hood, selectolax uses Cython bindings to the Modest and Lexbor HTML engines (written in C) to quickly parse and navigate HTML documents. It provides a simple and intuitive API for working with the document's structure, similar to BeautifulSoup.

To use selectolax, first install it via pip by running `pip install selectolax`. Once installed, you can use the `selectolax.parser.HTMLParser` class to parse an HTML document and get a parsed tree. For example:

from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag) # html
HTMLParser also accepts bytes input, so raw HTTP response bodies can be parsed without decoding them first.
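A minimal sketch of parsing bytes directly (the markup is made up for illustration):

from selectolax.parser import HTMLParser

# HTMLParser accepts bytes as well as str
html_bytes = b"<html><body>Hello, bytes!</body></html>"
root = HTMLParser(html_bytes).root
print(root.tag) # html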

Once you have a parsed tree, you can use the css() method to search for elements in the document using CSS selectors, similar to BeautifulSoup's select(). For example:

body = root.css("body")[0]
print(body.text()) # "Hello, World!"

Like BeautifulSoup's find and find_all methods, selectolax also supports the css_first() method, which returns the first matching element, and the css() method, which returns all matching elements.
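CSS selectors can also match classes and attributes, which covers most of what BeautifulSoup's find(tag, attrs) calls are used for. A minimal sketch (the markup and class names are made up for illustration):

from selectolax.parser import HTMLParser

html = (
    '<div class="item"><a href="/a">first</a></div>'
    '<div class="item"><a href="/b">second</a></div>'
)
tree = HTMLParser(html)

# css_first() returns the first match, or None if nothing matches
first = tree.css_first("div.item a")
print(first.attributes["href"]) # /a

# css() returns a list of all matches
for link in tree.css("div.item a"):
    print(link.text()) # first, then second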

Example Use


package main

import (
  "fmt"
  "log"

  "github.com/anaskhan96/soup"
)

func main() {

  url := "https://www.bing.com/search?q=weather+Toronto"

  // soup has a basic HTTP client, though it's not recommended for scraping:
  resp, err := soup.Get(url)
  if err != nil {
    log.Fatal(err)
  }

  // create a soup object from the HTML
  doc := soup.HTMLParse(resp)

  // HTML elements can be found using the Find or FindStrict methods:
  // in this case, find <div> elements whose "class" attribute matches these values:
  grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
  // note: to find all matching elements, the FindAll() method can be used the same way

  // elements can be searched further for descendants:
  heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
  conditions := grid.Find("div", "class", "wtr_condition")
  primaryCondition := conditions.Find("div")
  secondaryCondition := primaryCondition.FindNextElementSibling()
  temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
  others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
  caption := secondaryCondition.Find("div").Text()

  fmt.Println("City Name : " + heading)
  fmt.Println("Temperature : " + temp + "˚C")
  for _, i := range others {
    fmt.Println(i.Text())
  }
  fmt.Println(caption)
}
An example of selectolax usage in Python:

from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag) # html

# use css selectors:
body = root.css("body")[0]
print(body.text()) # "Hello, World!"

# find the first matching element:
body = root.css_first("body")
print(body.text()) # "Hello, World!"

# or all matching elements:
html_string = "<html><body><p>paragraph1</p><p>paragraph2</p></body></html>"
root = HTMLParser(html_string).root
for el in root.css("p"):
  print(el.text()) 
# will print:
# paragraph1
# paragraph2
