PHPScraper vs Colly
PHPScraper is a universal web utility for PHP. Its main goal is to get things done instead of getting distracted with selectors and with preparing and converting data structures; you simply go to a website and pull the information relevant to your project.
PHPScraper is a minimalistic scraper framework built on top of other popular scraping tools.
Features:
- Direct access to basic page data such as meta information, links, images, headings, content, and keywords.
- File downloading.
- RSS, Sitemap and other feed processing.
- CSV, XML and JSON file processing.
Colly is a popular web scraping library for the Go programming language. It's designed to be fast and easy to use, and it provides a simple and flexible API for traversing and extracting information from websites.
Colly supports:
- Concurrent scraping with a simple API
- Automatic handling of cookies and sessions
- Automatic handling of redirects
- Support for parsing HTML and XML
- Access to raw responses, e.g. for JSON and binary data (see the sketch after this list)
- Support for custom storage (e.g. scraping results to a database)
- Caching and robots.txt support (note: Colly fetches raw HTML and does not execute JavaScript)
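A minimal sketch of the callback-based parsing API (the URL is a placeholder): HTML elements are matched with CSS selectors via OnHTML, XML documents with XPath via OnXML, and raw payloads such as JSON arrive in OnResponse.
package main

import (
    "encoding/json"
    "fmt"

    "github.com/gocolly/colly/v2"
)

func main() {
    c := colly.NewCollector()

    // CSS selectors for HTML documents
    c.OnHTML("title", func(e *colly.HTMLElement) {
        fmt.Println("Title:", e.Text)
    })

    // XPath expressions for XML documents (e.g. RSS feeds)
    c.OnXML("//item/title", func(e *colly.XMLElement) {
        fmt.Println("RSS item:", e.Text)
    })

    // Raw response bodies, e.g. JSON endpoints or binary data
    c.OnResponse(func(r *colly.Response) {
        var payload map[string]interface{}
        if json.Unmarshal(r.Body, &payload) == nil {
            fmt.Println("JSON keys:", len(payload))
        }
    })

    // Placeholder URL - replace with a real target
    c.Visit("https://example.com/")
}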
Colly also provides several optional features, such as custom user agents, delays between requests, rate limiting, and proxy support.
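A rough sketch of these options (the user agent string and proxy address below are placeholders, not defaults):
package main

import (
    "fmt"
    "time"

    "github.com/gocolly/colly/v2"
)

func main() {
    c := colly.NewCollector(
        // Custom user agent (placeholder string)
        colly.UserAgent("my-scraper/1.0"),
        // Enable concurrent scraping
        colly.Async(true),
    )

    // Rate limiting: at most 2 parallel requests per domain,
    // with a randomized delay between requests
    c.Limit(&colly.LimitRule{
        DomainGlob:  "*",
        Parallelism: 2,
        Delay:       1 * time.Second,
        RandomDelay: 1 * time.Second,
    })

    // Route requests through a proxy (placeholder address)
    if err := c.SetProxy("http://127.0.0.1:8080"); err != nil {
        fmt.Println("proxy error:", err)
    }

    c.Visit("https://example.com/")
    // Wait until all concurrent requests have finished
    c.Wait()
}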
Colly's API is quite simple, and it is easy to get started with basic web scraping tasks. It is a good choice for moderate to heavy scraping workloads and covers a wide range of use cases, such as data mining and content extraction.
Additionally, you can use it together with goquery, a library that allows you to run jQuery-like queries on HTML documents; it is often used alongside Colly to simplify HTML parsing.
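Colly exposes goquery inside its callbacks: the HTMLElement type carries a DOM field of type *goquery.Selection, so jQuery-style traversal is available directly. A minimal sketch with a placeholder URL:
package main

import (
    "fmt"

    "github.com/PuerkitoBio/goquery"
    "github.com/gocolly/colly/v2"
)

func main() {
    c := colly.NewCollector()

    c.OnHTML("body", func(e *colly.HTMLElement) {
        // e.DOM is a *goquery.Selection, so jQuery-like
        // chaining and filtering work out of the box
        e.DOM.Find("p").Each(func(i int, s *goquery.Selection) {
            fmt.Printf("Paragraph %d: %s\n", i, s.Text())
        })
    })

    // Placeholder URL
    c.Visit("https://example.com/")
}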
Example Use
PHPScraper:
// create the scraper object
$web = new \Spekulatius\PHPScraper\PHPScraper;
// go to the URL
$web->go('https://test-pages.phpscraper.de/content/selectors.html');
// elements can be found using XPath:
echo $web->filter("//*[@id='by-id']")->text(); // "Content by ID"
// or via pre-defined properties covering basic page data:
$web->links; // all links on the page
$web->headings; // all headings (h1-h6)
$web->images; // all images
$web->contentKeywords; // keywords extracted from the content
$web->orderedLists; // all ordered lists
$web->unorderedLists; // all unordered lists
$web->paragraphs; // all paragraphs
$web->outline; // basic page outline built from the headings
$web->cleanOutlineWithParagraphs; // cleaned outline including the paragraphs
Colly:
package main

import (
    "fmt"

    "github.com/gocolly/colly/v2"
)

func main() {
    // Instantiate the default collector
    c := colly.NewCollector(
        // Visit only these domains: hackerspaces.org, wiki.hackerspaces.org
        colly.AllowedDomains("hackerspaces.org", "wiki.hackerspaces.org"),
    )

    // On every <a> element with an href attribute, call the callback
    c.OnHTML("a[href]", func(e *colly.HTMLElement) {
        link := e.Attr("href")
        // Print the link
        fmt.Printf("Link found: %q -> %s\n", e.Text, link)
        // Visit the link found on the page
        // (only links within AllowedDomains are visited)
        c.Visit(e.Request.AbsoluteURL(link))
    })

    // Before making a request, print "Visiting ..."
    c.OnRequest(func(r *colly.Request) {
        fmt.Println("Visiting", r.URL.String())
    })

    // Start scraping at https://hackerspaces.org
    c.Visit("https://hackerspaces.org/")
}