
Scrapyd vs ralger

scrapyd: BSD-3-Clause license, 2,955 stars, 46.7k downloads/month, first released Sep 04 2013, latest release 1.5.0 (25 days ago)
ralger: MIT license, 156 stars, 289 downloads/month, first released Dec 22 2019, latest release 2.2.4 (3 years ago)

Scrapyd is a service for running Scrapy spiders. It is built in Python and designed around a server-client architecture: the Scrapyd server runs on a remote machine, and clients schedule and control spider runs on it through an HTTP API. With Scrapyd you can schedule spider runs on demand or on a regular basis, run spiders on remote machines, and view the status of running spiders.

You can also view the logs of completed spiders and manage spider settings and configurations. The same HTTP API lets you schedule runs, cancel them, and check their status. Install the package with pip install scrapyd, then start the service by running the scrapyd command. By default it starts a web server on port 6800; the port can be changed via the `http_port` option in the Scrapyd configuration file.
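For example, a minimal scrapyd.conf overriding the default port — a sketch using documented [scrapyd] options; the values shown are illustrative:

[scrapyd]
# default is 6800
http_port = 6801
# default is 127.0.0.1; 0.0.0.0 accepts connections from remote clients
bind_address = 0.0.0.0

The job-control endpoints are plain HTTP calls; for instance (myproject and the job ID below are placeholders):

$ curl http://localhost:6800/listjobs.json?project=myproject
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=6487ec79947edab326d6db28a2d86511e8247444
$ curl http://localhost:6800/daemonstatus.json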

Scrapyd is a good solution if you need to run Scrapy spiders on a remote machine or schedule spider runs on a regular basis. It is also useful when you have multiple spiders and need one place to manage and monitor them all.

For a richer web interface on top of Scrapyd, see ScrapydWeb.

ralger is a small web scraping framework for R based on rvest and xml2.

Its goal is to simplify basic web scraping, and it provides a convenient, easy-to-use API.

It offers functions for retrieving pages, parsing HTML with CSS selectors, automatically parsing tables, and automatically extracting links, titles, images, and paragraphs (the extraction helpers are sketched after the table example below).

Example Use


# start the Scrapyd server, then schedule a spider run over its HTTP API
$ scrapyd
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=spider2
library("ralger")

url <- "http://www.shanghairanking.com/rankings/arwu/2021"

# retrieve HTML and select elements using CSS selectors:
best_uni <- scrap(link = url, node = "a span", clean = TRUE)
head(best_uni, 5)
#>  [1] "Harvard University"
#>  [2] "Stanford University"
#>  [3] "University of Cambridge"
#>  [4] "Massachusetts Institute of Technology (MIT)"
#>  [5] "University of California, Berkeley"

# ralger can also parse HTML attributes
attributes <- attribute_scrap(
  link = "https://ropensci.org/",
  node = "a", # the a tag
  attr = "class" # getting the class attribute
)

head(attributes, 10) # NA values are a tags without a class attribute
#>  [1] "navbar-brand logo" "nav-link"          NA
#>  [4] NA                  NA                  "nav-link"
#>  [7] NA                  "nav-link"          NA
#> [10] NA

# ralger can automatically scrape tables:
data <- table_scrap(link = "https://www.boxofficemojo.com/chart/top_lifetime_gross/?area=XWW")

head(data)
#> # A tibble: 6 × 4
#>    Rank Title                                      `Lifetime Gross`  Year
#>   <int> <chr>                                      <chr>            <int>
#> 1     1 Avatar                                     $2,847,397,339    2009
#> 2     2 Avengers: Endgame                          $2,797,501,328    2019
#> 3     3 Titanic                                    $2,201,647,264    1997
#> 4     4 Star Wars: Episode VII - The Force Awakens $2,069,521,700    2015
#> 5     5 Avengers: Infinity War                     $2,048,359,754    2018
#> 6     6 Spider-Man: No Way Home                    $1,901,216,740    2021
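Beyond tables and attributes, ralger also bundles the automatic extractors mentioned earlier. A minimal sketch, assuming the helpers documented in the ralger README (titles_scrap, paragraphs_scrap, weblink_scrap, images_scrap) and reusing the ropensci.org page from above; results depend on the live page:

# headings (h1/h2/h3) from a page
titles <- titles_scrap(link = "https://ropensci.org/")

# paragraph (p tag) text
paragraphs <- paragraphs_scrap(link = "https://ropensci.org/")

# all hyperlinks found on the page
links <- weblink_scrap(link = "https://ropensci.org/")

# download every .png image on the page into the working directory
images_scrap(link = "https://ropensci.org/", extn = "png")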
