
gazpacho vs pyquery

gazpacho: MIT license, 738 GitHub stars, 5.9 thousand downloads/month, first release Dec 28 2012, latest release 1.1 (3 years ago)
pyquery: NOASSERTION license, 2,279 GitHub stars, 2.2 million downloads/month, first release Dec 05 2008, latest release 2.0.0 (1 year, 6 months ago)

gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: it downloads pages with Python's standard-library urllib and parses HTML with its own parser built on html.parser. It lets you search the parsed document for elements by tag name and attributes, similar to BeautifulSoup's find().

To use gazpacho, first install it via pip by running pip install gazpacho. Once installed, use the gazpacho.get() function to download a web page as an HTML string and the Soup class to parse it. For example:

from gazpacho import get, Soup

url = "https://en.wikipedia.org/wiki/Web_scraping"
html = get(url)
soup = Soup(html)
print(soup.find('title').text)
gazpacho.get() also accepts optional query parameters and request headers, and Soup.get(url) combines the download and parse steps into a single call.
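
A rough sketch of both options (the endpoint, query parameters, and header values below are illustrative assumptions, not anything the library requires):

from gazpacho import get, Soup

# query parameters and headers can be passed along with the request
html = get(
    "https://httpbin.org/anything",           # hypothetical endpoint for illustration
    params={"q": "web scraping"},             # appended to the URL as a query string
    headers={"User-Agent": "gazpacho-demo"},  # sent as a request header
)

# Soup.get() downloads and parses in a single step
soup = Soup.get("https://en.wikipedia.org/wiki/Web_scraping")
print(soup.find("title").text)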

Once you have a Soup object, you can use the find() method to search for elements by tag name and an attributes dictionary, much like BeautifulSoup. By default attribute values are matched partially (partial=True); pass partial=False to require exact matches.

find() also takes a mode argument that controls whether it returns the first match, a list of all matches, or picks automatically based on how many elements match. Results are themselves Soup objects, so you can call find() again on them and read their text, attrs, and html properties.
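
A short sketch of those search options, using made-up HTML and class names:

from gazpacho import Soup

html = """
<div class="product-card">
  <h2>Widget</h2>
  <span class="price sale">$10</span>
</div>
<div class="product-card">
  <h2>Gadget</h2>
  <span class="price">$25</span>
</div>
"""

soup = Soup(html)

# partial matching (the default) lets "product" match class="product-card"
cards = soup.find("div", {"class": "product"}, mode="all")

for card in cards:
    # results are Soup objects, so nested searches work
    name = card.find("h2").text
    price = card.find("span", {"class": "price"}).text
    print(name, price)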

PyQuery is a Python library for working with XML and HTML documents. It covers much of the same ground as BeautifulSoup and is often used as an alternative to it, although its API is modeled on jQuery rather than on BeautifulSoup.

PyQuery is inspired by JavaScript's jQuery and uses a similar API, allowing HTML nodes to be selected through CSS selectors. This makes it easy for developers who are already familiar with jQuery to pick up PyQuery in Python.

PyQuery is built on top of lxml but, unlike lxml, it doesn't expose XPath selectors and relies entirely on CSS selectors. It offers the familiar HTML parsing features: selecting HTML elements, reading their attributes and text, and modifying the HTML tree.
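
For example, attributes and text are read through jQuery-style accessors (the HTML fragment below is made up for illustration):

from pyquery import PyQuery as pq

doc = pq('<div><a class="link" href="https://example.com">Example</a></div>')

# read an attribute and the text content
print(doc('a').attr('href'))  # https://example.com
print(doc('a').text())        # Example

# set an attribute, jQuery-style
doc('a').attr('rel', 'nofollow')
print(doc('a').attr('rel'))   # nofollow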

PyQuery also comes with an HTTP client (it uses requests when that library is installed), so it can load and parse web URLs by itself, as shown at the end of the example below.

Highlights


css-selectors

Example Use


from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = "https://webscraping.fyi/"
html = get(url)
# and parse them:
soup = Soup(html)
print(soup.find('title').text)

# search for elements like BeautifulSoup:
body = soup.find("div", {"class": "item"})
print(body.text)

# the same kind of work, using pyquery:
from pyquery import PyQuery as pq

# this is our HTML page:
html = """
<head>
  <title>Hello World!</title>
</head>
<body>
  <div id="product">
    <h1>Product Title</h1>
    <p>paragraph 1</p>
    <p>paragraph2</p>
    <span class="price">$10</span>
  </div>
</body>
"""

doc = pq(html)

# we can use CSS selectors:
print(doc('#product .price').text())
"$10"


# it's also possible to modify HTML tree in various ways:
# insert text into selected element:
print(doc('h1').append('<span>discounted</span>'))
"<h1>Product Title<span>discounted</span></h1>"

# or remove elements
doc('p').remove()
print(doc('#product').html())
"""
<h1>Product Title<span>discounted</span></h1>
<span class="price">$10</span>
"""


# pyquery can also retrieve web documents using requests:
doc = pq(url='http://httpbin.org/html', headers={"User-Agent": "webscraping.fyi"})
print(doc('h1').html())
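
When a selector matches several nodes, .items() iterates over them as individual PyQuery objects; a small self-contained sketch:

from pyquery import PyQuery as pq

doc = pq('<ul><li>one</li><li>two</li><li>three</li></ul>')

# .items() yields each matched <li> wrapped as its own PyQuery object
for li in doc('li').items():
    print(li.text())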
