Recursive website crawler

Description:

  • A recursive crawler that explores a website and returns all of the links it finds.
  • It works by scraping all the links on the starting page, then recursively scraping the links found on each of those pages (see the sketch below).

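For illustration, the following is a minimal sketch of this kind of recursive crawl, not the repository's website_crawler.py. It assumes the requests and beautifulsoup4 packages are available, and the function name crawl is hypothetical.

```python
# Minimal sketch of a recursive link crawler (illustrative only; not the
# repository's website_crawler.py). Assumes requests and beautifulsoup4.
import sys
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(url, seen=None):
    """Recursively collect links, staying on the starting domain."""
    if seen is None:
        seen = set()
    if url in seen:
        return seen
    seen.add(url)
    try:
        response = requests.get(url, timeout=10)
    except requests.RequestException:
        return seen
    soup = BeautifulSoup(response.text, "html.parser")
    for anchor in soup.find_all("a", href=True):
        link = urljoin(url, anchor["href"])
        # Only follow links on the same host to keep the recursion bounded.
        if urlparse(link).netloc == urlparse(url).netloc and link not in seen:
            crawl(link, seen)
    return seen


if __name__ == "__main__":
    for link in sorted(crawl(sys.argv[1])):
        print(link)
```
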
Usage:

python3 website_crawler.py <url>

More info:

  • To crawl through the Tor network, install the tor package and start the tor service first, as sketched below.
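
As a hedged example of what routing requests through Tor can look like (not necessarily how this script does it), the sketch below sends a request through Tor's default local SOCKS proxy on port 9050; it assumes the requests[socks] extra is installed and the tor service is running.

```python
# Illustrative only: route an HTTP request through a local Tor SOCKS proxy.
# Requires `pip install requests[socks]` and a running tor service (port 9050).
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get("https://check.torproject.org",
                        proxies=TOR_PROXIES, timeout=30)
print(response.status_code)
```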