Pholcus vs Photon
Pholcus is a minimalist web crawler library written in the Go programming language. It is designed to be flexible and easy to use, and it supports concurrent, distributed, and modular crawling.
Note that Pholcus is documented and maintained in Chinese; there are no English resources other than the source code itself.
Photon is a Python library for web scraping. It is designed to be lightweight and fast, and can be used to extract data from websites and web pages. Photon can extract the following data while crawling:
- URLs (in-scope & out-of-scope)
- URLs with parameters (example.com/gallery.php?id=2)
- Intel (emails, social media accounts, Amazon S3 buckets, etc.)
- Files (PDF, PNG, XML, etc.)
- Secret keys (auth/API keys & hashes)
- JavaScript files & Endpoints present in them
- Strings matching custom regex pattern
- Subdomains & DNS related data
The extracted information is saved in an organized manner or can be exported as JSON.
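The intel and custom-regex extraction listed above comes down to pattern matching over crawled page text. The following is a minimal, self-contained sketch of that technique in Python; it illustrates the idea only, and is not Photon's internal API (the regexes and the sample page text are made up for the example):

```python
import json
import re

# sample page text a crawler might have fetched (made up for illustration)
page = """
Contact us at support@example.com or sales@example.org.
Assets: https://cdn.example.com/report.pdf
Custom token: TOK-12345
"""

# simple patterns in the spirit of Photon's extractors
email_re = re.compile(r"[\w.+-]+@[\w-]+\.[a-zA-Z]{2,}")
file_re = re.compile(r"https?://\S+\.(?:pdf|png|xml)")
custom_re = re.compile(r"TOK-\d+")  # stands in for a user-supplied regex

extracted = {
    "emails": email_re.findall(page),
    "files": file_re.findall(page),
    "custom": custom_re.findall(page),
}

# export the organized results as JSON, as Photon can
print(json.dumps(extracted, indent=2))
```

Running the sketch prints a JSON object with one list per category, mirroring the organized output the description refers to.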
Example Use (Pholcus)
package main

import (
	"github.com/PuerkitoBio/goquery"
	"github.com/henrylee2cn/pholcus/exec"
	_ "github.com/henrylee2cn/pholcus/spider/standard" // standard spider
)

func main() {
	// create the spider object with a named task and a seed URL
	spider := exec.NewSpider(exec.NewTask("demo", "https://www.example.com"))

	// add a callback for URL routes matched by a regex pattern; in this case, any route:
	spider.AddRule(".*", "Parse")

	// start the spider
	spider.Start()
}

// Parse is the callback registered above; callbacks receive a reference to the
// parsed HTML document and can extract data from it.
func Parse(self *exec.Spider, doc *goquery.Document) {
	// extract data from doc here, e.g. doc.Find("a").Each(...)
}
Example Use (Photon)

from photon import Photon

# create a new Photon instance
ph = Photon()

# extract data from a specific element of the website
url = "https://www.example.com"
selector = "div.main"
data = ph.get_data(url, selector)

# print the extracted data
print(data)

# extract data from multiple websites asynchronously
urls = ["https://www.example1.com", "https://www.example2.com"]
data = ph.get_data_async(urls)
print(data)
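Fetching several URLs concurrently, as the asynchronous call above suggests, can be sketched with Python's standard library alone. This is a conceptual illustration using a thread pool and a stubbed fetch function, not Photon's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for a real HTTP request (e.g. urllib.request.urlopen);
    # stubbed here so the sketch runs without network access
    return f"<html>content of {url}</html>"

urls = ["https://www.example1.com", "https://www.example2.com"]

# fetch all URLs concurrently; pool.map preserves input order
with ThreadPoolExecutor(max_workers=8) as pool:
    pages = list(pool.map(fetch, urls))

for url, page in zip(urls, pages):
    print(url, "->", len(page), "characters")
```

With a real `fetch`, the thread pool overlaps the network waits of the individual requests, which is where the speedup of crawling many sites at once comes from.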