
soup vs PyQuery

soup:    MIT license · 2,176 stars · 58.1 thousand downloads/month · first released Apr 29 2017 · latest v1.2.5 (2 years ago)
PyQuery: NOASSERTION license · 2,298 stars · 1.9 million downloads/month · first released Dec 05 2008 · latest 2.0.1 (a month ago)

soup is a Go library for parsing and querying HTML documents.

It provides a simple and intuitive interface for extracting information from HTML pages. It's inspired by the popular Python web scraping library BeautifulSoup and shares a similar API, implementing functions like Find and FindAll.

soup can also use Go's built-in HTTP client to download HTML content.

Note that, unlike BeautifulSoup, soup does not support CSS selectors or XPath.

PyQuery is a Python library for working with XML and HTML documents. It is similar to BeautifulSoup and is often used as a drop-in replacement for it.

PyQuery is inspired by JavaScript's jQuery and uses a similar API, allowing selection of HTML nodes through CSS selectors. This makes it easy for developers who are already familiar with jQuery to use PyQuery in Python.

Unlike jQuery, PyQuery doesn't support XPath selectors and relies entirely on CSS selectors, though it offers similar HTML parsing features: selection of HTML elements, their attributes, and text, as well as HTML tree modification.

PyQuery also comes with an HTTP client (through requests), so it can load and parse web URLs by itself.

Highlights


css-selectors

Example Use


package main

import (
  "fmt"
  "log"

  "github.com/anaskhan96/soup"
)

func main() {

  url := "https://www.bing.com/search?q=weather+Toronto"

  // soup has a basic HTTP client, though it's not recommended for serious scraping:
  resp, err := soup.Get(url)
  if err != nil {
    log.Fatal(err)
  }

  // create a soup object from the HTML
  doc := soup.HTMLParse(resp)

  // HTML elements can be found using the Find or FindStrict methods:
  // in this case, find <div> elements whose "class" attribute matches these values:
  grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
  // note: FindAll() can be used the same way to find all matching elements

  // elements can be further searched for descendants:
  heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
  conditions := grid.Find("div", "class", "wtr_condition")
  primaryCondition := conditions.Find("div")
  secondaryCondition := primaryCondition.FindNextElementSibling()
  temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
  others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
  caption := secondaryCondition.Find("div").Text()

  fmt.Println("City Name : " + heading)
  fmt.Println("Temperature : " + temp + "˚C")
  for _, i := range others {
    fmt.Println(i.Text())
  }
  fmt.Println(caption)
}

from pyquery import PyQuery as pq

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

doc = pq(html)

# we can use CSS selectors:
print(doc('#product .price').text())
"$10"
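To see what the CSS selector above saves you, here is a rough equivalent of `doc('#product .price').text()` written with only Python's standard-library html.parser — a minimal sketch, hand-coded for this one selector (the class name PriceExtractor is illustrative, not part of any library):

```python
from html.parser import HTMLParser

# Hand-written equivalent of the CSS selector "#product .price":
# track when we are inside <div id="product">, then inside an
# element whose class list contains "price", and collect its text.
class PriceExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_product = False
        self.in_price = False
        self.price = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("id") == "product":
            self.in_product = True
        elif self.in_product and "price" in attrs.get("class", "").split():
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.price += data

    def handle_endtag(self, tag):
        if self.in_price and tag == "span":
            self.in_price = False

html = '<div id="product"><span class="price">$10</span></div>'
parser = PriceExtractor()
parser.feed(html)
print(parser.price)  # $10
```

Every new selector would need its own state machine like this, which is exactly the boilerplate PyQuery's CSS selectors replace.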


# it's also possible to modify the HTML tree in various ways:
# insert text into selected element:
print(doc('h1').append('<span>discounted</span>'))
"<h1>Product Title<span>discounted</span></h1>"

# or remove elements
doc('p').remove()
print(doc('#product').html())
"""
<h1>Product Title<span>discounted</span></h1>
<span class="price">$10</span>
"""


# pyquery can also retrieve web documents using requests:
doc = pq(url='http://httpbin.org/html', headers={"User-Agent": "webscraping.fyi"})
print(doc('h1').html())
