
xml2 vs gazpacho

xml2: MIT license, ~624.5 thousand downloads/month, first released Apr 20 2015, latest version 1.3.6 (9 months ago)
gazpacho: MIT license, ~9.6 thousand downloads/month, first released Dec 28 2012, latest version 1.1 (3 years ago)

The xml2 package is a binding to libxml2, making it easy to work with HTML and XML from R. The API is somewhat inspired by jQuery.

xml2 can be used to parse HTML documents using XPath selectors and is a successor to R's XML package with a few improvements:

  • xml2 takes care of memory management for you. It will automatically free the memory used by an XML document as soon as the last reference to it goes away.
  • xml2 has a very simple class hierarchy, so you don't need to think about exactly what type of object you have; xml2 will just do the right thing.
  • More convenient handling of namespaces in XPath expressions - see xml_ns() and xml_ns_strip() to get started, and the sketch after this list.
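
For instance, here is a minimal sketch of the namespace helpers (the XML input is made up for illustration):

library("xml2")

# A document whose elements live in a default namespace
x <- read_xml('<root xmlns="http://example.com/ns"><bar>text</bar></root>')

xml_find_all(x, ".//bar")    # no matches: <bar> is inside the default namespace
xml_ns(x)                    # xml2 auto-assigns a prefix (d1) to the default namespace
xml_find_all(x, ".//d1:bar") # that prefix can be used directly in XPath
xml_ns_strip(x)              # or strip default namespaces altogether...
xml_find_all(x, ".//bar")    # ...and match bare element names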

gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: it downloads pages with Python's standard-library urllib and parses HTML with its own parser built on the standard library's html.parser. It provides a way to search for elements in the page by tag name and attributes, similar to BeautifulSoup.

To use gazpacho, first install it via pip by running pip install gazpacho. Once it is installed, you can use the get() function to download a web page and the Soup class to parse it. For example:

from gazpacho import get, Soup

url = "https://en.wikipedia.org/wiki/Web_scraping"
html = get(url)
soup = Soup(html)
print(soup.find('title').text)
You can also construct a Soup directly from any HTML string you already have, such as markup read from a local file.
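
A minimal sketch, assuming a local file named page.html exists:

from gazpacho import Soup

# read markup from disk instead of the network (file name assumed)
with open("page.html") as f:
    html = f.read()

soup = Soup(html)  # Soup accepts any HTML string
print(soup.find("title").text)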

Once you have a Soup object, you can use its find() method to search for elements by tag name and attributes, similar to BeautifulSoup.

find() takes a mode argument: mode="first" returns the first matching element, mode="all" returns a list of every match, and the default mode="auto" chooses between the two based on the number of matches. A partial argument (True by default) matches attribute values by substring.
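
For example, reusing the soup from above (the tag and attribute names here depend on the page being scraped):

# mode="all" always returns a list of matches
links = soup.find("a", mode="all")

# mode="first" returns only the first matching element
first = soup.find("a", mode="first")

# partial=True (the default) matches attribute values by substring,
# so {"class": "item"} also matches class="item featured"
cards = soup.find("div", {"class": "item"}, partial=True)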

Example Use


library("xml2")
x <- read_xml("<foo> <bar> text <baz/> </bar> </foo>")
x

xml_name(x)
xml_children(x)
xml_text(x)
xml_find_all(x, ".//baz")

h <- read_html("<html><p>Hi <b>!")
h
xml_name(h)
from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = "https://webscraping.fyi/"
html = get(url)
# and parse them:
soup = Soup(html)
print(soup.find('title').text)

# search for elements like BeautifulSoup:
item = soup.find("div", {"class": "item"})
print(item.text)
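
Matched elements are themselves Soup objects, so text and attributes can be read straight off them. A short sketch (tag and attribute names assumed):

# collect every link on the page and read each href attribute
links = soup.find("a", mode="all")
for link in links:
    print(link.attrs.get("href"), link.text)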

Alternatives / Similar