
selectolax vs scrapling

selectolax: MIT license · 1,607 stars · 4.5 million downloads/month · first release Mar 01 2018 · latest version 0.4.7 (2026-03-06)
scrapling: BSD-3-Clause license · 36,206 stars · 397.4 thousand downloads/month · first release Aug 01 2024 · latest version 0.4.5 (2026-04-07)

selectolax is a fast and lightweight library for parsing HTML documents in Python. It is designed as a significantly faster alternative to the popular BeautifulSoup library for common scraping tasks.

selectolax provides Cython bindings to fast C HTML engines (Modest and Lexbor) to quickly parse and navigate HTML documents. It exposes a simple and intuitive API for working with the document's structure, similar to BeautifulSoup.

To use selectolax, you first need to install it via pip by running `pip install selectolax`. Once it is installed, you can parse an HTML document with the `HTMLParser` class from `selectolax.parser`. For example:

```python
from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag)  # html
```

`HTMLParser` accepts either a `str` or `bytes` input. A second, WHATWG-compliant backend with the same API is available as `LexborHTMLParser` in `selectolax.lexbor`.

Once you have a parsed document, you can use the `css()` method to search for elements using CSS selectors, similar to BeautifulSoup's `select()`. For example: `body = root.css("body")[0]` followed by `print(body.text())  # "Hello, World!"`.

Analogous to BeautifulSoup's `find` and `find_all` methods, selectolax provides `css_first()`, which returns the first matching element (or `None` if nothing matches), and `css()`, which returns a list of all matching elements.

Scrapling is an adaptive web scraping framework for Python that introduces "self-healing" selectors — selectors that can track and find elements even when the website's DOM structure changes. This solves one of the biggest maintenance headaches in web scraping: broken selectors after website updates.

Key features include:

  • Self-healing selectors: Scrapling uses smart element matching that can identify target elements even after the page structure changes. It builds a fingerprint of the element from multiple attributes (text, position, siblings, attributes) and uses fuzzy matching to relocate it.
  • Multiple parsing backends: supports different parsing engines, including lxml (fast) and a custom engine, allowing you to choose the right balance of speed and features.
  • Scrapy-like Spider API: provides a familiar Spider class pattern for organizing crawling logic, similar to Scrapy but with the added benefit of adaptive selectors.
  • CSS and XPath selectors: full support for CSS selectors and XPath, plus the adaptive matching system on top.
  • Type hints and modern Python: built with full type annotations and 92% test coverage for reliability.
  • Async support: supports asynchronous crawling for efficient concurrent scraping.
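Scrapling's actual matching is more sophisticated, but the core idea behind self-healing selectors can be sketched with nothing more than stdlib fuzzy matching. The `fingerprint` and `best_match` helpers below are purely illustrative, not part of Scrapling's API:

```python
from difflib import SequenceMatcher

def fingerprint(tag, text, attrs, parent_tag):
    # Conceptual fingerprint: a flat string of the element's salient features
    attr_part = " ".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return f"{tag}|{text}|{attr_part}|{parent_tag}"

def best_match(target_fp, candidate_fps):
    # Fuzzy-match the stored fingerprint against every candidate element,
    # returning the highest similarity score and the winning fingerprint
    scored = [(SequenceMatcher(None, target_fp, fp).ratio(), fp) for fp in candidate_fps]
    return max(scored)

# Fingerprint saved from the old version of the page
old = fingerprint("div", "Product Title", {"class": "title"}, "section")

# After a redesign the class name and parent changed, but the element survives
candidates = [
    fingerprint("div", "Product Title", {"class": "product-name"}, "article"),
    fingerprint("div", "Checkout", {"class": "btn"}, "footer"),
]

score, match = best_match(old, candidates)
print(round(score, 2), match)
```

Because several weak signals (tag, text, attributes, position) are combined, a single change such as a renamed class no longer breaks the selector; the closest surviving candidate still wins.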

Scrapling gained massive traction in 2025 as one of the most starred new Python scraping libraries. It is particularly useful for scraping targets that frequently update their HTML structure, where traditional selector-based scrapers would break.

Highlights


Tags: css-selectors, xpath, fast, popular

Example Use


```python
from selectolax.parser import HTMLParser

html_string = "<html><body>Hello, World!</body></html>"
root = HTMLParser(html_string).root
print(root.tag)  # html

# use css selectors:
body = root.css("body")[0]
print(body.text())  # "Hello, World!"

# find the first matching element:
body = root.css_first("body")
print(body.text())  # "Hello, World!"

# or all matching elements:
html_string = "<div><p>paragraph 1</p><p>paragraph 2</p></div>"
root = HTMLParser(html_string).root
for el in root.css("p"):
    print(el.text())
# will print:
# paragraph 1
# paragraph 2
```
```python
from scrapling import Fetcher, StealthFetcher, PlayWrightFetcher

# Simple fetching with adaptive parsing
fetcher = Fetcher()
page = fetcher.get("https://example.com/products")

# CSS selectors work as expected
products = page.css(".product-card")
for product in products:
    name = product.css_first(".name").text()
    price = product.css_first(".price").text()
    print(f"{name}: {price}")

# Adaptive selector - finds the element even if DOM changes
# Uses element fingerprinting for resilient matching
element = page.find("Product Title", auto_match=True)

# Stealth fetching with anti-bot bypass
stealth = StealthFetcher()
page = stealth.get("https://protected-site.com")

# Playwright-based fetching for JS-rendered pages
pw = PlayWrightFetcher()
page = pw.get("https://spa-example.com", headless=True)
```

Alternatives / Similar

