soup vs untangle
soup is a Go library for parsing and querying HTML documents. It provides a simple, intuitive interface for extracting information from HTML pages. It is inspired by the popular Python web scraping library BeautifulSoup and exposes a similar API, implementing functions such as `Find` and `FindAll`. soup can also use Go's built-in HTTP client to download HTML content. Note that unlike BeautifulSoup, soup does not support CSS selectors or XPath.
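For instance, here is a minimal sketch of the Find/FindAll API; the HTML fragment and its URLs below are made up purely for illustration:
package main

import (
	"fmt"

	"github.com/anaskhan96/soup"
)

func main() {
	// a small hand-written HTML fragment, used only to illustrate the API
	html := `<ul id="links">
	<li><a href="https://example.com/a">First</a></li>
	<li><a href="https://example.com/b">Second</a></li>
</ul>`

	doc := soup.HTMLParse(html)

	// Find returns the first matching element; FindAll returns every match
	list := doc.Find("ul", "id", "links")
	for _, link := range list.FindAll("a") {
		// Text() returns the element's text, Attrs() its attribute map
		fmt.Println(link.Text(), "->", link.Attrs()["href"])
	}
}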
untangle is a simple library for parsing XML documents in Python. It allows you to access data in an XML file as if it were a Python object, making it easy to work with the data in your code.
To use untangle, you first need to install it via pip by running `pip install untangle`. Once it is installed, you can use the `untangle.parse()` function to parse an XML file and create a Python object.
For example:
import untangle
obj = untangle.parse("example.xml")
print(obj.root.element.child)
You can also pass a file-like object or a string containing XML data to the untangle.parse() function. Once you have an untangle object, you can access elements in the XML document using dot notation.
You can also access the attributes of an element with dictionary-style indexing, e.g. `obj.root.element['attrib_name']`.
It also supports iteration over an element's children using `obj.root.element.children`:
for child in obj.root.element.children:
print(child)
Example Use
package main

import (
	"fmt"
	"log"

	"github.com/anaskhan96/soup"
)

func main() {
	url := "https://www.bing.com/search?q=weather+Toronto"

	// soup has a basic HTTP client, though it's not recommended for heavy scraping:
	resp, err := soup.Get(url)
	if err != nil {
		log.Fatal(err)
	}

	// create a soup object from the downloaded HTML
	doc := soup.HTMLParse(resp)

	// HTML elements can be found using the Find or FindStrict methods;
	// here we find the <div> whose "class" attribute matches the given values:
	grid := doc.FindStrict("div", "class", "b_antiTopBleed b_antiSideBleed b_antiBottomBleed")
	// note: to find all matching elements, the FindAll() method can be used the same way

	// elements can be searched further for descendants:
	heading := grid.Find("div", "class", "wtr_titleCtrn").Find("div").Text()
	conditions := grid.Find("div", "class", "wtr_condition")
	primaryCondition := conditions.Find("div")
	secondaryCondition := primaryCondition.FindNextElementSibling()
	temp := primaryCondition.Find("div", "class", "wtr_condiTemp").Find("div").Text()
	others := primaryCondition.Find("div", "class", "wtr_condiAttribs").FindAll("div")
	caption := secondaryCondition.Find("div").Text()

	fmt.Println("City Name : " + heading)
	fmt.Println("Temperature : " + temp + "˚C")
	for _, i := range others {
		fmt.Println(i.Text())
	}
	fmt.Println(caption)
}
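If you need more control over the request (timeouts, custom headers, proxies), you can build your own http.Client and hand it to soup. A minimal sketch, assuming the GetWithClient helper found in recent soup releases and reusing the same Bing URL as above:
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/anaskhan96/soup"
)

func main() {
	// use Go's net/http client with an explicit timeout instead of soup's default
	client := &http.Client{Timeout: 10 * time.Second}

	resp, err := soup.GetWithClient("https://www.bing.com/search?q=weather+Toronto", client)
	if err != nil {
		log.Fatal(err)
	}

	// parse and query the result exactly as in the example above
	doc := soup.HTMLParse(resp)
	fmt.Println(doc.Find("title").Text())
}
The same quick-start for untangle in Python: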
import untangle
obj = untangle.parse("example.xml")
print(obj.root.element.child)
# access attributes:
print(obj.root.element['attrib_name'])
# iterate over child elements:
for child in obj.root.element.children:
    print(child)