Ferret is a web scraping system. It aims to simplify data extraction from the web for UI testing, machine learning, analytics, and more. Ferret lets users focus on the data: it abstracts away the technical details and complexity of the underlying technologies behind its own declarative language, and it is highly portable, extensible, and fast.


  • Declarative language
  • Support of both static and dynamic web pages
  • Embeddable
  • Extensible

Ferret itself is written in Go, but it can also be driven from Python through pyfer.

Photon is a Python library for web scraping. It is designed to be lightweight and fast, and can be used to extract data from websites and web pages. Photon can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters
  • Intel (emails, social media accounts, Amazon buckets, etc.)
  • Files (pdf, png, xml, etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & endpoints present in them
  • Strings matching custom regex patterns
  • Subdomains & DNS-related data

The extracted information is saved in an organized manner or can be exported as JSON.
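Photon performs this matching internally while it crawls. As a rough sketch of the kind of pattern-based extraction and JSON export described above (the regex patterns, sample page, and field names here are illustrative assumptions, not Photon's own code):

```python
# Illustrative sketch (not Photon's actual implementation) of regex-based
# extraction of URLs, emails, and files from a crawled page, exported as JSON.
import json
import re

page = """
Contact us at admin@example.com or visit https://example.com/gallery.php?id=2.
Docs: https://example.com/manual.pdf
"""

# Hypothetical patterns; Photon's real patterns are more thorough.
urls = [u.rstrip(".,") for u in re.findall(r"https?://\S+", page)]
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", page)
files = [u for u in urls if u.endswith((".pdf", ".png", ".xml"))]

report = {"urls": urls, "emails": emails, "files": files}
print(json.dumps(report, indent=2))
```

Each category Photon reports (URLs, intel, files, and so on) boils down to a set of such patterns applied to every fetched page, with the matches deduplicated and written out per category.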

Example Use

// Example scraper for Google in Ferret:
LET google = DOCUMENT("", {
    driver: "cdp",
    userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36"
})

HOVER(google, 'input[name="q"]')
INPUT(google, 'input[name="q"]', @criteria, 30)
CLICK(google, 'input[name="btnK"]')

WAITFOR EVENT "navigation" IN google

WAIT_ELEMENT(google, "#res")

LET results = ELEMENTS(google, X("//*[text() = 'Search Results']/following-sibling::*/*"))

FOR el IN results
    RETURN {
        title: INNER_TEXT(el, 'h3')?,
        description: INNER_TEXT(el, X("//em/parent::*")),
        url: ELEMENT(el, 'a')?.attributes.href
    }
# Example use of Photon:
from photon import Photon

# Create a new Photon instance
ph = Photon()

# Extract data from a specific element of the website
url = ""
selector = "div.main"
data = ph.get_data(url, selector)

# Print the extracted data
print(data)

# Extract data from multiple websites asynchronously
urls = ["", ""]
data = ph.get_data_async(urls)

Alternatives / Similar