
gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: it downloads pages using Python's standard library and parses the HTML with its own built-in parser. It provides a way to search for elements by tag name and attributes, similar to BeautifulSoup.

To use gazpacho, you first need to install it via pip by running pip install gazpacho. Once it is installed, you can use the gazpacho.get() function to download a web page's HTML and the Soup class to parse it. For example:

from gazpacho import get, Soup

url = ""
html = get(url)
soup = Soup(html)
You can also pass any HTML string to Soup directly, whether it came from a file, a cached response, or another HTTP library.

Once you have a Soup object, you can use the find() method to search for elements by tag name and attribute values, much like BeautifulSoup.

find() also takes a mode argument: mode="first" returns the first matching element, mode="all" returns a list of all matches, and the default mode="auto" chooses between the two based on how many elements match.

xmltodict is a Python library that allows you to work with XML data as if it were JSON. It allows you to parse XML documents and convert them to dictionaries, which can then be easily manipulated using standard dictionary operations.

You can also use the library to convert a dictionary back into an XML document. xmltodict is built on top of Python's built-in expat parser and provides a simple, intuitive API for working with XML data.

Note that conversion speeds can be quite slow for large XML documents, so in web scraping xmltodict is best used to parse specific snippets rather than whole HTML documents.

xmltodict pairs well with JSON querying tools like jmespath or jsonpath. Alternatively, it can be used in reverse: converting dictionaries to XML so the data can be queried with HTML parsing tools like CSS selectors and XPath.

It can be installed via pip by running pip install xmltodict.

Example Use

from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = ""
html = get(url)
# and parse them:
soup = Soup(html)

# search for elements like BeautifulSoup:
body = soup.find("div", {"class": "item"})

import xmltodict

xml_string = """
<book>
    <title>The Great Gatsby</title>
    <author>F. Scott Fitzgerald</author>
    <publisher>Charles Scribner's Sons</publisher>
    <publication_date>1925</publication_date>
</book>
"""

book_dict = xmltodict.parse(xml_string)
# {'book': {'title': 'The Great Gatsby',
#           'author': 'F. Scott Fitzgerald',
#           'publisher': "Charles Scribner's Sons",
#           'publication_date': '1925'}}

# and in reverse:
book_xml = xmltodict.unparse(book_dict)

# the XML can then be loaded and parsed with parsel or BeautifulSoup:
from parsel import Selector

sel = Selector(text=book_xml)
title = sel.xpath("//title/text()").get()  # The Great Gatsby

Alternatives / Similar