CONTRIBUTING.md


📁 Directory Structure

The repository follows a structured approach for organizing scraping scripts:

📦 web-scrapper-repo
┣ 📂 site1
┃ ┣ 📜 script.py
┃ ┣ 📜 README.md
┃ ┗ 📜 requirements.txt
┣ 📂 site2
┃ ┣ 📜 script.py
┃ ┣ 📜 README.md
┃ ┗ 📜 requirements.txt
┗ ...

Each folder (site1, site2, etc.) corresponds to a different website we have scraped. Inside each folder you'll find the scraping script (script.py), a requirements.txt listing its dependencies, and a README.md describing the scraped data.
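The layout above implies each script.py is a self-contained scraper. As a rough sketch of what such a script might look like (the target URL, the `<h2>` selector, and the sample HTML are hypothetical placeholders; real scripts in this repo may use third-party libraries such as requests or BeautifulSoup listed in their requirements.txt):

```python
# Hypothetical skeleton for a site folder's script.py.
# Uses only the standard library so it runs without extra dependencies.
from html.parser import HTMLParser


class TitleExtractor(HTMLParser):
    """Collects the text of every <h2> element on a page."""

    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.titles.append(data.strip())


def scrape(html: str) -> list[str]:
    """Parse an HTML document and return the extracted items."""
    parser = TitleExtractor()
    parser.feed(html)
    return parser.titles


if __name__ == "__main__":
    # A real script would fetch the page first, e.g.:
    # from urllib.request import urlopen
    # html = urlopen("https://example.com").read().decode()
    sample = "<html><body><h2>Item A</h2><p>...</p><h2>Item B</h2></body></html>"
    print(scrape(sample))  # ['Item A', 'Item B']
```

Keeping the fetch step separate from the parse step, as above, makes the parsing logic easy to test without network access.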

🚀 How to Contribute

We appreciate your contributions to make this repository better! Here's how you can contribute:

1. Create an Issue

If you encounter issues with the existing code, or if you have suggestions for improvements, please create an issue. Be sure to provide details and context about the problem or enhancement you're proposing.

2. Fork and Clone

To start contributing, fork this repository to your GitHub account, then clone your fork to your local machine (replace `<your-username>` with your GitHub username):

```bash
git clone https://github.com/<your-username>/web_scrapper.git
```

3. Create a New Feature Branch

Before making changes, create a new branch for your feature or bug fix:

```bash
git checkout -b feature-name
```

4. Make Changes and Commit

Make your changes in the appropriate folder (e.g., site1). Update the scraping script and the README.md file to include information about the data you've scraped.

For example, the folder's README.md might look like:

Data Scraped from Site1

- Data 1
- Data 2
- ...

Last Updated: [Date]
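Similarly, if your changes pull in new libraries, list them in the folder's requirements.txt so others can reproduce your environment. The package names and version pins below are illustrative, not a prescribed set:

```text
requests>=2.31
beautifulsoup4>=4.12
```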

Commit your changes and push them to your forked repository:

```bash
git add .
git commit -m "Added data from Site1"
git push origin feature-name
```

5. Create a Pull Request

Once you've made your changes and pushed them to your forked repository, create a pull request to merge your changes into the main repository. Provide a clear description of your changes.

We'll review your contribution and merge it if it aligns with the repository's goals.

Thank you for contributing to our Web Scrapper Repository! 🙌