
Photon vs Spidr

                 Photon                 Spidr
License          GPL-3.0                MIT
Stars            10,672                 795
Downloads        265 (month)            1.7 thousand (month)
Latest release   1.1.9 (5 years ago)    0.7.1 (5 months ago)

Photon is a fast, lightweight web crawler written in Python. It can be used to extract data from websites, and gathers the following while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, Amazon buckets, etc.)
  • Files (pdf, png, xml etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & Endpoints present in them
  • Strings matching custom regex patterns
  • Subdomains & DNS related data

The extracted information is saved in an organized directory layout, or it can be exported as JSON.
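
Photon is normally driven from the command line rather than imported. As a minimal sketch of a crawl plus JSON export, the snippet below shells out to a local clone of Photon and loads the export back; the script location, output directory, and export filename are assumptions to adapt to your setup:

import json
import subprocess
from pathlib import Path

# Crawl the target with Photon's CLI: -u url, -l crawl depth, -t threads,
# -o output directory, --export=json for a machine-readable dump.
target = "https://example.com"
out = Path("photon_results")

subprocess.run(
    ["python3", "photon.py",   # assumes Photon is cloned locally
     "-u", target,
     "-l", "2",
     "-t", "10",
     "-o", str(out),
     "--export=json"],
    check=True,
)

# Load whatever Photon exported (filename assumed; check the files your
# Photon version actually writes into the output directory).
results = json.loads((out / "exports.json").read_text())
print(sorted(results))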

Spidr is a Ruby gem that provides a simple, flexible way to spider and scrape websites: you walk a site, following links and scraping data as you go. Pages are parsed with Nokogiri, the popular Ruby gem for searching HTML and XML documents, and Spidr exposes an intuitive API for defining and running crawls.

Spidr's headline feature is spidering itself: starting from one page, it follows the links it finds and visits every page of a site in turn, so large amounts of data can be scraped quickly without manually listing which pages to fetch (see the example below).

Beyond spidering, Spidr has features that simplify a crawl. It can filter which links to follow and which pages to visit (black- and white-listing by host, port, link pattern, or file extension), it handles cookies and per-site HTTP Basic Auth credentials, and it maintains an internal queue of pending URLs that supports pausing, resuming, and enqueueing additional links.
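
Much of that filtering is configured through options on the agent. Below is a minimal sketch of a constrained crawl; the site, ignore pattern, delay, and credentials are placeholder values for illustration:

require 'spidr'

# Restrict the crawl to one site, skip uninteresting links up front,
# and slow down between requests.
Spidr.site('http://example.com/',
           ignore_links_like: [/logout/],
           delay: 1) do |spider|
  # Register HTTP Basic Auth credentials for a protected subtree
  # (an assumed path, for illustration).
  spider.authorized.add('http://example.com/private/', 'user', 'secret')

  spider.every_html_page do |page|
    puts "Fetched #{page.url} (#{page.code})"
  end
end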

Example Use


Photon (Python):

# NOTE: a schematic example. Photon does not publish a stable Python
# library API; Photon() and get_data below are illustrative only (in
# practice Photon is driven from the command line, as sketched above).
from photon import Photon

# Create a new Photon instance
ph = Photon()

# Extract data from a specific element of the website
url = "https://www.example.com"
selector = "div.main"
data = ph.get_data(url, selector)

# Print the extracted data
print(data)

# Extract data from multiple websites asynchronously
urls = ["https://www.example1.com", "https://www.example2.com"]
data = ph.get_data_async(urls)
print(data)
Spidr (Ruby):

require 'spidr'
require 'nokogiri'

Spidr.start_at("http://example.com") do |spider|
  spider.every_page do |page|
    puts "Visiting: #{page.url}"

    # Extract data from the page with Nokogiri (Spidr also exposes
    # page.doc, an already-parsed Nokogiri document)
    doc = Nokogiri::HTML(page.body)
    title = doc.css("title").text
    puts "Title: #{title}"
  end
end
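
Besides every_page, Spidr offers narrower hooks such as every_url, every_html_page, and every_failed_url, so a crawl can react only to the events it cares about.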
