Merge pull request #835 from maxime-peim/fix-snake-case
Snake case + bug fixes
maurosoria authored May 18, 2021
2 parents c866099 + dff0a84 commit 26a6b61
Showing 23 changed files with 779 additions and 749 deletions.
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -11,3 +11,5 @@ __pycache__/
db/test.txt

default.conf

.ropeproject/
1 change: 1 addition & 0 deletions CONTRIBUTORS.md
@@ -60,6 +60,7 @@
- [Kyle Nweeia](https://github.com/kyle-nweeia)
- [Xib3rR4dAr](https://github.com/Xib3rR4dAr)
- [Rohit Soni](https://github.com/StreetOfHackerR007/)
- [Maxime Peim](https://github.com/maxime-peim)
- [Christian Clauss](https://github.com/cclauss)

Special thanks to all the people who have helped dirsearch so far!
65 changes: 44 additions & 21 deletions README.md
@@ -36,8 +36,9 @@ Table of Contents
* [Proxies](#Proxies)
* [Reports](#Reports)
* [Some others commands](#Some-others-commands)
* [Tips](#Tips)
* [Support Docker](#Support-Docker)
* [References](#References)
* [Tips](#Tips)
* [License](#License)
* [Contributors](#Contributors)

@@ -52,7 +53,7 @@ Kali Linux
Installation & Usage
------------

**Requirement: python 3.x**
**Requirement: python 3.8 or higher**

Choose one of these installation options:

@@ -79,7 +80,7 @@ Wordlists (IMPORTANT)
---------------
**Summary:**
- A wordlist is a text file in which each line is a path.
- About extensions, unlike other tools, dirsearch will only replace the `%EXT%` keyword with extensions in **-e | --extensions** flag.
- About extensions, unlike other tools, dirsearch will only replace the `%EXT%` keyword with extensions in **-e** flag.
- For wordlists without `%EXT%` (like [SecLists](https://github.com/danielmiessler/SecLists)), the **-f | --force-extensions** switch is required to append extensions, as well as `/`, to every word in the wordlist. For entries that should not get extensions appended, add `%NOFORCE%` at the end of them.
- To use multiple wordlists, you can separate your wordlists with commas. Example: `wordlist1.txt,wordlist2.txt`.
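The wordlist rules above can be sketched in a few lines of Python. This is a simplified illustration of the documented behavior (`%EXT%` substitution, `--force-extensions`, `%NOFORCE%`), not dirsearch's actual implementation; the `expand_entry` helper is hypothetical.

```python
# Simplified sketch of the documented wordlist rules -- an illustration,
# not dirsearch's real code.
def expand_entry(entry, extensions, force=False):
    """Expand one wordlist line into the paths that would be requested."""
    if "%EXT%" in entry:
        # %EXT% is replaced with each extension passed via -e
        return [entry.replace("%EXT%", ext) for ext in extensions]
    if force and not entry.endswith("%NOFORCE%"):
        # --force-extensions appends a slash and every extension
        return [entry, entry + "/"] + [entry + "." + ext for ext in extensions]
    # %NOFORCE% marks entries that must never get extensions appended
    return [entry.replace("%NOFORCE%", "")]

print(expand_entry("admin.%EXT%", ["php", "asp"]))  # → ['admin.php', 'admin.asp']
print(expand_entry("admin", ["php"], force=True))   # → ['admin', 'admin/', 'admin.php']
```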

@@ -131,8 +132,8 @@ Options:
Target URL list file
--stdin Target URL list from STDIN
--cidr=CIDR Target CIDR
--raw=FILE File contains the raw request (use `--scheme` flag to
set the scheme)
--raw=FILE Load raw HTTP request from file (use `--scheme` flag
to set the scheme)
-e EXTENSIONS, --extensions=EXTENSIONS
Extension list separated by commas (Example: php,asp)
-X EXTENSIONS, --exclude-extensions=EXTENSIONS
@@ -165,6 +166,10 @@ Options:
-t THREADS, --threads=THREADS
Number of threads
-r, --recursive Brute-force recursively
--deep-recursive Perform recursive scan on every directory depth
(Example: api/users -> api/)
--force-recursive Do recursive brute-force for every found path, not
only paths end with slash
--recursion-depth=DEPTH
Maximum recursion depth
--recursion-status=CODES
@@ -218,8 +223,11 @@ Options:
-F, --follow-redirects
Follow HTTP redirects
--random-agent Choose a random User-Agent for each request
--auth-type=TYPE Authentication type (basic, digest, bearer, ntlm)
--auth=CREDENTIAL Authentication credential (user:password or bearer
token)
--user-agent=USERAGENT
--cookie=COOKIE
--cookie=COOKIE
Connection Settings:
--timeout=TIMEOUT Connection timeout
@@ -232,8 +240,7 @@ Options:
Proxy to replay with found paths
--scheme=SCHEME Default scheme (for raw request or if there is no
scheme in the URL)
--max-rate=REQUESTS
Max requests per second
--max-rate=RATE Max requests per second
--retries=RETRIES Number of retries for failed requests
-b, --request-by-hostname
By default dirsearch requests by IP for speed. This
@@ -242,7 +249,8 @@ Options:
--exit-on-error Exit whenever an error occurs
Reports:
-o FILE Output file
-o FILE, --output=FILE
Output file
--format=FORMAT Report format (Available: simple, plain, json, xml,
md, csv, html)
```
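The new `--deep-recursive` flag in the options above scans every directory depth of a found path (the listed example: `api/users -> api/`). The idea can be sketched as a small generator; this is a conceptual illustration under the README's description, not dirsearch's internal code, and `parent_dirs` is a hypothetical name.

```python
# Sketch of the --deep-recursive idea: every parent directory depth of a
# found path becomes a new scan target. Illustration only.
def parent_dirs(path):
    """Yield each directory depth of a path, shallowest first."""
    parts = [p for p in path.strip("/").split("/") if p]
    for i in range(1, len(parts)):
        yield "/".join(parts[:i]) + "/"

print(list(parent_dirs("api/v2/users")))  # → ['api/', 'api/v2/']
```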
@@ -272,7 +280,6 @@ recursion-depth = 0
exclude-subdirs = %%ff/
random-user-agents = False
max-time = 0
save-logs-home = False
full-url = False
quiet-mode = False
color = True
@@ -286,9 +293,11 @@ recursion-status = 200-399,401,403
# skip-on-status = 429,999

[reports]
# report-output = output.txt
report-format = plain
## Support: plain, simple, json, xml, md, csv
autosave-report = True
# report-output-folder = /home/user
# logs-location = /tmp
## Supported: plain, simple, json, xml, md, csv, html

[dictionary]
lowercase = False
@@ -584,15 +593,6 @@ python3 dirsearch.py -u https://target --remove-extensions
**There are more features, and you will need to discover them by yourself**


Tips
---------------
- Does the server have a request limit? You can bypass it by rotating proxies with `--proxy-list`
- Want to find config files or backups? Try `--suffixes ~` and `--prefixes .`
- For endpoints where you do not want extensions forced, add `%NOFORCE%` at the end of them
- Want to find only folders/directories? Combine `--remove-extensions` and `--suffixes /`!
- Combining `--cidr`, `-F`, `-q` and a low `--timeout` reduces most of the noise and false negatives when brute-forcing a CIDR
- Scanning a list of URLs but don't want a flood of 429s? `--skip-on-status 429` skips a target whenever it returns 429

Support Docker
---------------
### Install Docker Linux
@@ -620,6 +620,29 @@ docker run -it --rm "dirsearch:v0.4.1" -u target -e php,html,js,zip
```


References
---------------
- [Comprehensive Guide on Dirsearch](https://www.hackingarticles.in/comprehensive-guide-on-dirsearch/) by Shubham Sharma
- [Comprehensive Guide on Dirsearch Part 2](https://www.hackingarticles.in/comprehensive-guide-on-dirsearch-part-2/) by Shubham Sharma
- [GUÍA COMPLETA SOBRE EL USO DE DIRSEARCH](https://esgeeks.com/guia-completa-uso-dirsearch/?feed_id=5703&_unique_id=6076249cc271f) by ESGEEKS
- [How to use Dirsearch to detect web directories](https://www.ehacking.net/2020/01/how-to-find-hidden-web-directories-using-dirsearch.html) by EHacking
- [dirsearch how to](https://vk9-sec.com/dirsearch-how-to/) by VK9 Security
- [Find Hidden Web Directories with Dirsearch](https://null-byte.wonderhowto.com/how-to/find-hidden-web-directories-with-dirsearch-0201615/) by Wonder How To
- [Brute force directories and files in webservers using dirsearch](https://upadhyayraj.medium.com/brute-force-directories-and-files-in-webservers-using-dirsearch-613e4a7fa8d5) by Raj Upadhyay
- [Live Bug Bounty Recon Session on Yahoo (Amass, crts.sh, dirsearch) w/ @TheDawgyg](https://www.youtube.com/watch?v=u4dUnJ1U0T4) by Nahamsec
- [Dirsearch to find Hidden Web Directories](https://medium.com/@irfaanshakeel/dirsearch-to-find-hidden-web-directories-d0357fbe47b0) by Irfan Shakeel
- [Getting access to 25000 employees details](https://medium.com/@ehsahil/getting-access-to-25k-employees-details-c085d18b73f0) by Sahil Ahamad

Tips
---------------
- Does the server have a request limit? You can bypass it by rotating proxies with `--proxy-list`
- Want to find config files or backups? Try `--suffixes ~` and `--prefixes .`
- For endpoints where you do not want extensions forced, add `%NOFORCE%` at the end of them
- Want to find only folders/directories? Combine `--remove-extensions` and `--suffixes /`!
- Combining `--cidr`, `-F`, `-q` and a low `--timeout` reduces most of the noise and false negatives when brute-forcing a CIDR
- Scanning a list of URLs but don't want a flood of 429s? `--skip-on-status 429` skips a target whenever it returns 429
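The last tip can be sketched as a small scan loop that aborts a target on a skipped status code. This illustrates the documented `--skip-on-status` behavior only; `scan` and `fetch` are hypothetical names, not part of dirsearch's API.

```python
# Sketch of the --skip-on-status idea: stop scanning a target as soon as
# it returns one of the given status codes. Illustration only.
def scan(paths, fetch, skip_statuses=frozenset({429})):
    """Request each path; abort the target on a skipped status code."""
    results = {}
    for path in paths:
        status = fetch(path)
        if status in skip_statuses:
            break  # target is rate-limiting us; move on to the next one
        results[path] = status
    return results

# Fake fetch function: the third path triggers rate limiting.
codes = iter([200, 404, 429, 200])
print(scan(["a", "b", "c", "d"], lambda p: next(codes)))  # → {'a': 200, 'b': 404}
```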


License
---------------
Copyright (C) Mauro Soria ([email protected])
61 changes: 29 additions & 32 deletions lib/connection/requester.py
@@ -36,14 +36,14 @@ class Requester(object):
def __init__(
self,
url,
maxPool=1,
maxRetries=5,
max_pool=1,
max_retries=5,
timeout=20,
ip=None,
proxy=None,
proxylist=None,
redirect=False,
requestByHostname=False,
request_by_hostname=False,
httpmethod="get",
data=None,
scheme=None,
@@ -52,10 +52,6 @@ def __init__(
self.data = data
self.headers = {}

# If no trailing slash, append one
if not url.endswith("/"):
url += "/"

parsed = urllib.parse.urlparse(url)

# If no protocol specified, set http by default
@@ -67,19 +63,19 @@
raise RequestException({"message": "Unsupported URL scheme: {0}".format(parsed.scheme)})

if parsed.path.startswith("/"):
self.basePath = parsed.path[1:]
self.base_path = parsed.path[1:]
else:
self.basePath = parsed.path
self.base_path = parsed.path

# Safe-quote all special characters in basePath so they are not re-encoded when performing requests
self.basePath = urllib.parse.quote(self.basePath, safe="!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")
# Safe-quote all special characters in base_path so they are not re-encoded when performing requests
self.base_path = urllib.parse.quote(self.base_path, safe="!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")
self.protocol = parsed.scheme
self.host = parsed.netloc.split(":")[0]

# Resolve DNS to decrease overhead
if ip:
self.ip = ip
# A proxy could have a different DNS that would resolve the name. Therefore,
# A proxy could have a different DNS that would resolve the name. Therefore,
# resolving the name when using a proxy only to raise an error is pointless
elif not proxy and not proxylist:
try:
@@ -106,37 +102,37 @@
):
self.headers["Host"] += ":{0}".format(self.port)

self.maxRetries = maxRetries
self.maxPool = maxPool
self.max_retries = max_retries
self.max_pool = max_pool
self.timeout = timeout
self.pool = None
self.proxy = proxy
self.proxylist = proxylist
self.redirect = redirect
self.randomAgents = None
self.random_agents = None
self.auth = None
self.requestByHostname = requestByHostname
self.request_by_hostname = request_by_hostname
self.session = requests.Session()
self.url = "{0}://{1}:{2}/".format(
self.protocol,
self.host if self.requestByHostname else self.ip,
self.host if self.request_by_hostname else self.ip,
self.port,
)
self.baseUrl = "{0}://{1}:{2}/".format(
self.base_url = "{0}://{1}:{2}/".format(
self.protocol,
self.host,
self.port,
)

def setHeader(self, key, value):
def set_header(self, key, value):
self.headers[key.strip()] = value.strip() if value else value

def setRandomAgents(self, agents):
self.randomAgents = list(agents)
def set_random_agents(self, agents):
self.random_agents = list(agents)

def setAuth(self, type, credential):
def set_auth(self, type, credential):
if type == "bearer":
self.setHeader("Authorization", "Bearer {0}".format(credential))
self.set_header("Authorization", "Bearer {0}".format(credential))
else:
user = credential.split(":")[0]
try:
@@ -155,7 +151,7 @@ def request(self, path, proxy=None):
result = None
error = None

for i in range(self.maxRetries):
for i in range(self.max_retries):
try:
if not proxy:
if self.proxylist:
@@ -176,10 +172,10 @@
else:
proxies = None

url = self.url + self.basePath + path
url = self.url + self.base_path + path

if self.randomAgents:
self.headers["User-Agent"] = random.choice(self.randomAgents)
if self.random_agents:
self.headers["User-Agent"] = random.choice(self.random_agents)

request = requests.Request(
self.httpmethod,
@@ -189,6 +185,7 @@
data=self.data,
)
prepare = request.prepare()
prepare.url = url
response = self.session.send(
prepare,
proxies=proxies,
@@ -207,11 +204,11 @@
break

except requests.exceptions.SSLError:
self.url = self.baseUrl
self.url = self.base_url
continue

except requests.exceptions.TooManyRedirects:
error = "Too many redirects: {0}".format(self.baseUrl)
error = "Too many redirects: {0}".format(self.base_url)

except requests.exceptions.ProxyError:
error = "Error with the proxy: {0}".format(proxy)
@@ -220,7 +217,7 @@
error = "Cannot connect to: {0}:{1}".format(self.host, self.port)

except requests.exceptions.InvalidURL:
error = "Invalid URL: {0}".format(self.baseUrl)
error = "Invalid URL: {0}".format(self.base_url)

except requests.exceptions.InvalidProxyURL:
error = "Invalid proxy URL: {0}".format(proxy)
@@ -232,10 +229,10 @@
http.client.IncompleteRead,
socket.timeout,
):
error = "Request timeout: {0}".format(self.baseUrl)
error = "Request timeout: {0}".format(self.base_url)

except Exception:
error = "There was a problem in the request to: {0}".format(self.baseUrl)
error = "There was a problem in the request to: {0}".format(self.base_url)

if error:
raise RequestException({"message": error})
