
sax-js vs requests-html

                      sax-js                    requests-html
License               ISC                       MIT
GitHub stars          1,086                     13,699
Downloads (monthly)   142.6 million             1.1 million
First release         Feb 09 2011               Feb 25 2018
Latest version        1.4.1 (3 months ago)      0.10.0 (5 years ago)

sax-js is a streaming, event-driven XML parser for Node.js, written in pure JavaScript in the style of the classic SAX API. It is designed to be fast, low-memory, and easy to use, and it is commonly used for parsing large XML files because it lets you process the data incrementally rather than loading the entire file into memory at once.

sax-js is a low-level, event-based parser: it does not build a document tree and offers no query capabilities (such as CSS selectors), but its forgiving non-strict mode makes it a useful building block for HTML parsing and serialization.
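To make the event-based model concrete, here is a minimal sketch (illustrative, not taken from the sax-js documentation) that feeds an HTML-ish snippet to the in-memory parser in non-strict mode. The handler names (onopentag, ontext, onerror, onend), the lowercase option, and the write()/close() calls are part of the sax-js API; the input string is made up.

const sax = require("sax");

// strict = false gives the forgiving mode; lowercase normalizes tag names
const parser = sax.parser(false, { lowercase: true });

parser.onopentag = (node) => {
  // node.name is the tag name, node.attributes is a plain object
  console.log("open:", node.name, node.attributes);
};

parser.ontext = (text) => {
  if (text.trim()) console.log("text:", text.trim());
};

parser.onerror = (err) => {
  // in non-strict mode the parser can usually resume after an error
  console.error("parse error:", err.message);
  parser.resume();
};

parser.onend = () => {
  console.log("done");
};

// write() accepts chunks of markup; close() flushes and fires onend
parser.write("<p class=note>Hello <b>world</b></p>").close();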

requests-html is a Python package for making HTTP requests and parsing the HTML content of web pages. It is built on top of the popular requests library and parses HTML with lxml (via PyQuery), which makes it fast and efficient. The package is designed to provide a simple, convenient API for web scraping, and it supports features such as JavaScript rendering and both CSS and XPath selectors.

Because its HTMLSession extends the requests Session, it also provides cookie persistence, session reuse, and proxy support, which makes it an easy-to-use package for web scraping and web automation tasks.
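As a hedged sketch of those session-level features (HTMLSession subclasses requests.Session, so standard requests configuration applies; the URL and the commented-out proxy address below are placeholders):

from requests_html import HTMLSession

session = HTMLSession()

# Standard requests.Session configuration works on HTMLSession
session.headers.update({"Accept-Language": "en-US"})
# session.proxies = {"http": "http://127.0.0.1:8080"}  # placeholder proxy, enable if needed

# Cookies set by earlier responses are reused on later requests in the same session
first = session.get("https://example.com/")
second = session.get("https://example.com/")

print(second.status_code)
print(session.cookies.get_dict())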

In short, requests-html offers:

  • Full JavaScript support!
  • CSS Selectors (a.k.a jQuery-style, thanks to PyQuery).
  • XPath Selectors, for the faint of heart.
  • Mocked user-agent (like a real web browser).
  • Automatic following of redirects.
  • Connection–pooling and cookie persistence.
  • The Requests experience you know and love, with magical parsing abilities.
  • Async Support
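Before the basic examples under Example Use below, here is a hedged sketch of three of these features in combination: render() executes the page's JavaScript in headless Chromium (pyppeteer downloads a Chromium build on first use), xpath() complements the CSS-selector API, and AsyncHTMLSession is the async variant. The URL is a placeholder and the snippet is illustrative rather than canonical.

from requests_html import HTMLSession, AsyncHTMLSession

session = HTMLSession()
r = session.get("https://example.com/")  # placeholder URL

# Execute the page's JavaScript; the first call downloads a Chromium build
r.html.render()

# XPath selectors work alongside CSS selectors
print(r.html.xpath("//title/text()"))

# Async variant of the same workflow
asession = AsyncHTMLSession()

async def fetch_title(url):
    resp = await asession.get(url)
    return resp.html.find("title", first=True).text

print(asession.run(lambda: fetch_title("https://example.com/")))

# close the headless browser started by render()
session.close()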

Example Use


sax-js (Node.js):

const fs = require("fs");
const sax = require("sax");

// stream the file from disk and parse it incrementally
const xmlStream = fs.createReadStream("example.xml");
// strict = true enforces well-formed XML; pass false for a more forgiving parser
const saxParser = sax.createStream(true, {});

saxParser.on("opentag", function(node) {
    console.log(`<${node.name}>`);
});

saxParser.on("closetag", function(nodeName) {
    console.log(`</${nodeName}>`);
});

saxParser.on("text", function(text) {
    console.log(text);
});

xmlStream.pipe(saxParser);

requests-html (Python):

from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.example.com')

# print the HTML content of the page
print(r.html.html)

# use CSS selectors to find specific elements on the page
title = r.html.find('title', first=True)
print(title.text)
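
Continuing the same example, a couple of other commonly used accessors from the requests-html API (the output naturally depends on the page being scraped):

# all links found on the page, in relative and absolute form
print(r.html.links)
print(r.html.absolute_links)

# element attributes come back as a plain dict
first_link = r.html.find('a', first=True)
if first_link is not None:
    print(first_link.attrs)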
