
htmlquery vs soup

htmlquery: MIT license, 693 stars, latest release v1.3.0, ~58.1 thousand downloads/month
soup: MIT license, 2,116 stars, latest release v1.2.5, ~58.1 thousand downloads/month

htmlquery is a Go library that allows you to parse and extract data from HTML documents using XPath expressions. It provides a simple and intuitive API for traversing and querying the HTML tree structure, and it is built on top of Go's golang.org/x/net/html parser and the antchfx/xpath package.

soup is a Go library for parsing and querying HTML documents.

It provides a simple and intuitive interface for extracting information from HTML pages. It is inspired by BeautifulSoup, the popular Python web-scraping library, and exposes a similar API with functions like Find and FindAll.

soup can also use Go's built-in HTTP client to download HTML content.

Note that, unlike BeautifulSoup, soup supports neither CSS selectors nor XPath.

Example Use


package main

import (
  "fmt"
  "log"
  "strings"

  "github.com/antchfx/htmlquery"
)

func main() {
  // Parse the HTML string; htmlquery.Parse expects an io.Reader
  doc, err := htmlquery.Parse(strings.NewReader(`
    <html>
      <body>
        <h1>Hello, World!</h1>
        <ul>
          <li>Item 1</li>
          <li>Item 2</li>
          <li>Item 3</li>
        </ul>
      </body>
    </html>
  `))
  if err != nil {
    log.Fatal(err)
  }

  // Extract the text of the first <h1> element
  h1 := htmlquery.FindOne(doc, "//h1")
  fmt.Println(htmlquery.InnerText(h1)) // "Hello, World!"

  // Extract the text of all <li> elements
  lis := htmlquery.Find(doc, "//li")
  for _, li := range lis {
    fmt.Println(htmlquery.InnerText(li))
  }
  // "Item 1"
  // "Item 2"
  // "Item 3"
}
package main

import (
  "fmt"
  "log"

  "github.com/anaskhan96/soup"
)

func main() {

  url := "https://www.bing.com/search?q=weather+Toronto"

  // soup has a basic HTTP client, though it's not recommended for serious scraping:
  resp, err := soup.Get(url)
  if err != nil {
    log.Fatal(err)
  }

  // create a soup object from the HTML
  doc := soup.HTMLParse(resp)

  // HTML elements can be found using the Find or FindStrict methods;
  // here, find <div> elements whose "class" attribute matches certain values:
  grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
  // note: the FindAll() method can be used the same way to find all matching elements

  // matched elements can be searched further for descendants:
  heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
  conditions := grid.Find("div", "class", "wtr_condition")
  primaryCondition := conditions.Find("div")
  secondaryCondition := primaryCondition.FindNextElementSibling()
  temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
  others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
  caption := secondaryCondition.Find("div").Text()

  fmt.Println("City Name : " + heading)
  fmt.Println("Temperature : " + temp + "˚C")
  for _, i := range others {
    fmt.Println(i.Text())
  }
  fmt.Println(caption)
}

Alternatives / Similar