As a cyber geek 💻, I find it fun to test web applications. Information gathering is one of the biggest parts of testing, and it is often important to know every link in a website, which is very useful when testing any web application. The practical way to retrieve all the links on a site is to crawl 🪲 the web application.
- Beautiful Soup - to retrieve and process HTML data
- urllib - to send requests to the web server
- xlwt - to save the retrieved data to an Excel sheet
- datetime - to get the current date and time
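The actual logic lives in Crawler.py; as a rough sketch of how these four libraries fit together (the function names and example URL below are illustrative, not taken from the repo):

```python
import urllib.request
from datetime import datetime

import xlwt
from bs4 import BeautifulSoup


def crawl_links(url):
    """Fetch a page and return every hyperlink found on it."""
    request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(request) as response:
        html = response.read()
    soup = BeautifulSoup(html, "html.parser")
    # Collect the href of every <a> tag that actually has one
    return [a["href"] for a in soup.find_all("a", href=True)]


def save_to_excel(links, filename):
    """Write the collected links to an Excel sheet, one per row."""
    workbook = xlwt.Workbook()
    sheet = workbook.add_sheet("Links")
    # Timestamp the crawl in the first row
    sheet.write(0, 0, "Crawled on: " + datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    for row, link in enumerate(links, start=1):
        sheet.write(row, 0, link)
    workbook.save(filename)


if __name__ == "__main__":
    links = crawl_links("https://example.com")  # hypothetical target URL
    save_to_excel(links, "links.xls")
    print(f"Saved {len(links)} links to links.xls")
```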
First, install all the required libraries with:

pip install -r requirements.txt
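Based on the dependency list above, requirements.txt would contain roughly the following (urllib and datetime ship with Python and need no entry; exact version pins are not specified here):

```
beautifulsoup4
xlwt
```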
Then run the script with Python in your command prompt/terminal:
python Crawler.py
Crawling web pages is restricted ⛔ on some sites without written permission 📝 from the site owner. Neither I nor my team take any responsibility for misuse of this tool. Use at your own risk 💯
Everyone who loves open source projects and web development can contribute to my project ❤️. Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Every contribution is valuable, and if you run into any issues with the project, feel free to open an issue ✊
Email: [email protected]
LinkedIn: www.linkedin.com/in/harshareddy794
Thank you to everyone who helped me build such an amazing project.