
gazpacho vs selectolax

gazpacho: MIT license, 5.9 thousand downloads/month, latest release 1.1 (3 years ago)
selectolax: MIT license, 791.7 thousand downloads/month, latest release 0.3.21 (4 months ago)

gazpacho is a Python library for scraping web pages. It is designed to make it easy to extract information from a web page by providing a simple and intuitive API for working with the page's structure.

gazpacho has no external dependencies: pages are downloaded with Python's built-in urllib and parsed with a parser built on the standard library's html.parser. It provides a way to search for elements in the page by tag name and attributes, similar to BeautifulSoup.

To use gazpacho, you first need to install it via pip by running pip install gazpacho. Once it is installed, you can use the get() function to download a web page and the Soup class to parse it into a searchable object. For example:

from gazpacho import get, Soup

url = "https://en.wikipedia.org/wiki/Web_scraping"
html = get(url)    # download the page; get() returns the HTML as a string
soup = Soup(html)  # parse the HTML into a searchable Soup object
print(soup.find('title').text)
get() also accepts optional params and headers dictionaries for query parameters and request headers.
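For example, a small sketch of passing both (the URL and header value below are arbitrary stand-ins):

from gazpacho import get

# example URL only; params are appended to the query string and
# headers are sent with the request
html = get(
    "https://httpbin.org/anything",
    params={"q": "web scraping"},
    headers={"User-Agent": "gazpacho-example"},
)
print(html[:200])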

Once you have a Soup object, you can use the find() method to search for elements by tag name and attribute filters (passed as a dict), similar to BeautifulSoup's find and find_all.

find() also takes a mode argument: "auto" (the default) returns a single element or a list depending on how many matches there are, "first" always returns the first match, and "all" always returns a list. Attribute matching is partial by default and can be turned off with partial=False.
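As a minimal sketch on an inline fragment (the class names here are made up), showing attribute filters, partial matching, and mode:

from gazpacho import Soup

html = '<div class="item">first</div><div class="item special">second</div>'
soup = Soup(html)

# partial matching is on by default, so {"class": "item"} also matches "item special"
items = soup.find("div", {"class": "item"}, mode="all")
print([item.text for item in items])  # ['first', 'second']

# mode="first" always returns a single element
first = soup.find("div", {"class": "item"}, mode="first")
print(first.text)  # 'first'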

selectolax is a fast and lightweight library for parsing HTML documents in Python. It is designed as a much faster alternative to BeautifulSoup, with a similar feel but a smaller API.

selectolax is a thin Cython wrapper around the Modest and Lexbor HTML engines, which are written in C, so parsing and tree traversal are very fast. It provides a simple and intuitive API for working with the document's structure, similar to BeautifulSoup.

To use selectolax, you first need to install it via pip by running pip install selectolax. Once it is installed, you can use the selectolax.parser.HTMLParser class to parse an HTML document into a tree of nodes. For example:

from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root  # parse the document and grab its root node
print(root.tag)  # "html"
HTMLParser also accepts the document as bytes. A second backend with the same interface, selectolax.lexbor.LexborHTMLParser, is available if you want to use the Lexbor engine instead of Modest.
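A short sketch of the Lexbor backend, which mirrors the HTMLParser interface:

from selectolax.lexbor import LexborHTMLParser

html_string = "<html><body><p>Hello from Lexbor</p></body></html>"
tree = LexborHTMLParser(html_string)   # same usage as HTMLParser
print(tree.css_first("p").text())      # "Hello from Lexbor"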

Once you have a parsed tree, you can use the css() method to search for elements in the document using CSS selectors, similar to BeautifulSoup's select(). For example:

body = root.css("body")[0]
print(body.text())  # "Hello, World!"

The closest equivalents to BeautifulSoup's find and find_all are css_first(), which returns the first matching element (or None when nothing matches), and css(), which returns a list of all matching elements.
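For example, a small sketch showing the difference on an inline document:

from selectolax.parser import HTMLParser

html_string = "<html><body><p>one</p><p>two</p></body></html>"
tree = HTMLParser(html_string)

# css_first() returns the first match, or None when nothing matches
print(tree.css_first("p").text())  # "one"
print(tree.css_first("span"))      # None

# css() returns a list of every match
print([p.text() for p in tree.css("p")])  # ['one', 'two']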

Example Use


from gazpacho import get, Soup

# gazpacho can retrieve web pages
url = "https://webscraping.fyi/"
html = get(url)
# and parse them:
soup = Soup(html)
print(soup.find('title').text)

# search for elements like BeautifulSoup
# (mode="first" forces a single element; the default mode returns
#  a list when several elements match)
body = soup.find("div", {"class": "item"}, mode="first")
print(body.text)

from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag) # html

# use CSS selectors:
body = root.css("body")[0]
print(body.text())  # "Hello, World!"

# find the first matching element:
body = root.css_first("body")
print(body.text())  # "Hello, World!"

# or all matching elements:
html_string = "<html><body><p>paragraph1</p><p>paragraph2</p></body></html>"
root = HTMLParser(html_string).root
for el in root.css("p"):
    print(el.text())
# will print:
# paragraph1
# paragraph2
