gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: it downloads pages with the standard library and parses the HTML with a built-in parser. It provides a way to search for elements in the page by tag name and attributes, similar to BeautifulSoup.

To use gazpacho, first install it via pip by running pip install gazpacho. Once it is installed, you can use the get() function to download a web page and the Soup class to parse it. For example:

from gazpacho import get, Soup

url = ""
html = get(url)
soup = Soup(html)
get() also accepts optional query parameters and request headers.

Once you have a Soup object, you can use the find() method to search for elements by tag name and an attribute dictionary, much like BeautifulSoup's find() and find_all().

find() also takes a mode argument: mode="first" returns the first matching element, while mode="all" returns a list of all matching elements.

PyQuery is a Python library for working with XML and HTML documents. It is often used as an alternative to BeautifulSoup.

PyQuery is inspired by JavaScript's jQuery and uses a similar API, allowing selection of HTML nodes through CSS selectors. This makes it easy for developers who are already familiar with jQuery to use PyQuery in Python.

PyQuery doesn't support XPath selectors and relies entirely on CSS selectors, though it offers familiar HTML parsing features: selection of HTML elements, their attributes and text, as well as HTML tree modification.

PyQuery also comes with an HTTP client (through requests), so it can load and parse web URLs by itself.



Example Use

from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = ""
html = get(url)
# and parse them:
soup = Soup(html)

# search for elements like BeautifulSoup:
body = soup.find("div", {"class": "item"})

from pyquery import PyQuery as pq

# this is our HTML page:
html = """
  <title>Hello World!</title>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <span class="price">$10</span>
  </div>
"""

doc = pq(html)

# we can use CSS selectors:
print(doc('#product .price').text())

# it's also possible to modify the HTML tree in various ways:
# insert markup into a selected element:
doc('h1').append('<span>discounted</span>')
print(doc('h1'))
# "<h1>Product Title<span>discounted</span></h1>"

# or remove elements:
doc('p').remove()
print(doc('#product'))
# the <p> is gone; only the <h1> and the price <span> remain

# pyquery can also retrieve web documents using requests:
doc = pq(url='', headers={"User-Agent": ""})

Alternatives / Similar