It also offers features such as cookie, session, and proxy support, which make it an easy-to-use package for web scraping and web automation tasks.
In short, requests-html offers:
- CSS Selectors (a.k.a jQuery-style, thanks to PyQuery).
- XPath Selectors, for the faint of heart.
- Mocked user-agent (like a real web browser).
- Automatic following of redirects.
- Connection pooling and cookie persistence.
- The Requests experience you know and love, with magical parsing abilities.
- Async support.
```python
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.example.com')

# Print the HTML content of the page
print(r.html.html)

# Use CSS selectors to find specific elements on the page
title = r.html.find('title', first=True)
print(title.text)
```