gazpacho vs xmltodict

gazpacho: MIT license, 738 GitHub stars, ~5.9 thousand downloads/month, latest release 1.1 (3 years ago)
xmltodict: MIT license, 5,416 GitHub stars, ~41.5 million downloads/month, latest release 0.13.0 (2 years ago)

gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: it downloads pages using Python's built-in urllib and parses HTML with a parser built on the standard library. It provides a way to search for elements in the page by tag name and attributes, similar to BeautifulSoup.

To use gazpacho, you first need to install it via pip by running pip install gazpacho. Once it is installed, you can use the get() function to download a web page and the Soup class to parse it. For example:

from gazpacho import get, Soup

url = "https://en.wikipedia.org/wiki/Web_scraping"
html = get(url)
soup = Soup(html)
print(soup.find('title').text)
Because Soup() accepts any HTML string, you can also parse markup read from a file or fetched with another HTTP client, not just pages downloaded with get().
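
For example, a minimal sketch of parsing HTML that was saved to disk (the file name here is just an illustration):

from gazpacho import Soup

# read previously saved HTML from a local file (hypothetical path)
with open("saved_page.html") as f:
    html = f.read()

soup = Soup(html)
print(soup.find("title").text)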

Once you have a Soup object, you can use the find() method to search for elements by tag name and an attribute dictionary, much like BeautifulSoup's find() and find_all().

find() takes a mode argument: mode="first" returns the first matching element, mode="all" returns a list of all matches, and the default mode="auto" chooses between the two based on how many elements match. A partial flag controls whether attribute values are matched partially or exactly.
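
A minimal sketch of the mode argument, assuming the current find() signature:

from gazpacho import Soup

html = '<div><p class="row">a</p><p class="row">b</p></div>'
soup = Soup(html)

# mode="all" always returns a list of every match
rows = soup.find("p", {"class": "row"}, mode="all")
print([row.text for row in rows])  # ['a', 'b']

# mode="first" returns only the first match
print(soup.find("p", {"class": "row"}, mode="first").text)  # 'a'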

xmltodict is a Python library that allows you to work with XML data as if it were JSON. It parses XML documents into dictionaries, which can then be easily manipulated using standard dictionary operations.

You can also use the library to convert a dictionary back into an XML document with unparse(). xmltodict is built on Python's built-in expat XML parser and provides a simple, intuitive API for working with XML data.
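
As a small sketch of the dictionary shape, assuming xmltodict's default settings (attributes get an @ prefix and text content the #text key):

import xmltodict

doc = xmltodict.parse('<book id="1">The Great Gatsby</book>')
print(doc)  # -> {'book': {'@id': '1', '#text': 'The Great Gatsby'}}

# the same structure converts back to XML with unparse()
print(xmltodict.unparse(doc))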

Note that conversion can be quite slow for large XML documents, so in web scraping xmltodict is best used to parse specific snippets rather than whole HTML documents, which are often not well-formed XML in the first place.
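
A sketch of that workflow: cut out a well-formed snippet with an HTML parser first, then hand only that snippet to xmltodict (the markup and selector here are illustrative):

from parsel import Selector
import xmltodict

html = """
<html><body>
  <table id="specs"><tr><td>weight</td><td>1kg</td></tr></table>
</body></html>
"""

# extract just the table element, then convert that snippet to a dict
snippet = Selector(html).css("#specs").get()
print(xmltodict.parse(snippet))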

xmltodict pairs well with JSON querying tools like jmespath or jsonpath. Alternatively, it can be used in reverse: JSON-like data can be converted to XML with unparse() and then queried with HTML/XML tools such as CSS selectors and XPath.
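
For example, a converted document can be queried with a JSON-style expression; this sketch assumes the jmespath package is installed:

import xmltodict
import jmespath

data = xmltodict.parse("""
<library>
  <book><title>The Great Gatsby</title><year>1925</year></book>
  <book><title>Tender Is the Night</title><year>1934</year></book>
</library>
""")

# pick out every book title with a JSON-style query
print(jmespath.search("library.book[].title", data))
# ['The Great Gatsby', 'Tender Is the Night']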

It can be installed via pip by running the pip install xmltodict command.

Example Use


from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = "https://webscraping.fyi/"
html = get(url)
# and parse them:
soup = Soup(html)
print(soup.find('title').text)

# search for elements, similar to BeautifulSoup:
body = soup.find("div", {"class": "item"})
print(body.text)

import xmltodict

xml_string = """
<book>
    <title>The Great Gatsby</title>
    <author>F. Scott Fitzgerald</author>
    <publisher>Charles Scribner's Sons</publisher>
    <publication_date>1925</publication_date>
</book>
"""

book_dict = xmltodict.parse(xml_string)
print(book_dict)
# {'book': {'title': 'The Great Gatsby',
#           'author': 'F. Scott Fitzgerald',
#           'publisher': "Charles Scribner's Sons",
#           'publication_date': '1925'}}

# and to reverse:
book_xml = xmltodict.unparse(book_dict)
print(book_xml)

# the XML can be loaded and parsed with parsel or BeautifulSoup:
from parsel import Selector
sel = Selector(book_xml)
print(sel.css('publication_date::text').get())
# '1925'
