More features and fixes

GKNSB committed Jul 11, 2021
1 parent 2ae3cc0 commit d3a2349
Showing 20 changed files with 286 additions and 62 deletions.
24 changes: 16 additions & 8 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -2,19 +2,27 @@

### --- New Features
1. Added Markov submodule
2. Added --flush flag to purge an entry from the database and exit
3. Signature for kayako takeover
4. Signature for ning takeover
5. Signature for moosend takeover
6. Added export functionality when ctrl+c is pressed up to the latest completed module - does not create diff.
7. New Project Discovery Chaos collector
8. New ZoomEye collector
2. Added RIPE database lookup for CIDRs to be used in reverse lookup
3. Added --flush flag to purge an entry from the database and exit
4. Signature for kayako takeover
5. Signature for ning takeover
6. Signature for moosend takeover
7. Added export functionality when ctrl+c is pressed, exporting results up to the latest completed module (does not create a diff)
8. New Project Discovery Chaos collector
9. New ZoomEye collector
10. New ThreatMiner collector
11. New FOFA collector

### --- Bug Fixes
1. Fixed Censys collector so that it doesn't waste requests and it identifies search result limit
1. Fixed bugs in Censys collector so that the search result limit is identified and result identification is improved
2. Fixed logical bug in portscan that caused very long duration of execution
3. Removed Entrust Certificates collector as it's no longer being used
4. Better exception handling in some minor cases
5. Fixed a bug in CertSpotter collector's result identification
6. Fixed a bug in DNSTrails collector's result identification
7. Fixed a bug in GoogleTransparency collector's response parsing
8. Fixed logic bugs in Shodan collector
9. Fixed a bug in ProjectCrobat collector

### --- Misc
1. Changed database name from findings.sqlite to lepusdb.sqlite
33 changes: 27 additions & 6 deletions README.md
@@ -44,6 +44,7 @@ The Collectors mode collects subdomains from the following services:
|[CertSpotter](https://sslmate.com/certspotter/)|No|
|[CRT](https://crt.sh/)|No|
|[DNSTrails](https://securitytrails.com/dns-trails/)|Yes|
|[FOFA](https://fofa.so/)|Yes|
|[Google Transparency](https://transparencyreport.google.com/)|No|
|[HackerTarget](https://hackertarget.com/)|No|
|[PassiveTotal](https://www.riskiq.com/products/passivetotal/)|Yes|
@@ -54,6 +55,7 @@ The Collectors mode collects subdomains from the following services:
|[Shodan](https://www.shodan.io/)|Yes|
|[Spyse](https://api-doc.spyse.com/)|Yes|
|[ThreatCrowd](https://www.threatcrowd.org/)|No|
|[ThreatMiner](https://www.threatminer.org/)|No|
|[VirusTotal](https://www.virustotal.com/)|Yes|
|[Wayback Machine](https://archive.org/web/)|No|
|[ZoomEye](https://www.zoomeye.org/)|Yes|
@@ -83,9 +85,13 @@ lepus.py --permutate -pw customsubdomains.txt yahoo.com
```

### ReverseDNS
The ReverseDNS mode will gather all IP addresses that were resolved and perform a reverse DNS on each one in order to detect more subdomains. For example, if `www.example.com` resolves to `1.2.3.4`, lepus will perform a reverse DNS for `1.2.3.4` and gather any other subdomains belonging to `example.com`, e.g. `www2`, `internal` or `oldsite`.

To run the ReverseDNS mode use the `--reverse` argument. Additionally, lepus supports the `--ranges` (or `-r`) argument. You can use it to make reverse DNS resolutions against CIDRs that belong to the target domain. Hint: lepus will identify `ASNs` and `Networks` during enumeration, so you can use these ranges to identify more subdomains. An example run would be:
To run the ReverseDNS module use the `--reverse` argument. Additionally, `--ripe` (or `-ripe`) can be used in order to instruct the module to query the RIPE database for potential network ranges. Moreover, lepus supports the `--ranges` (or `-r`) argument. You can use it to make reverse DNS resolutions against CIDRs that belong to the target domain.

By default, this module takes into account all previously identified IPs, then the defined ranges, then the ranges identified through the RIPE database. If you only want to run the module against specific or RIPE-identified ranges, and not against all already identified IPs, you can use the `--only-ranges` (`-or`) argument.

An example run would be:

```
lepus.py --reverse yahoo.com
@@ -94,13 +100,23 @@ lepus.py --reverse yahoo.com
or

```
lepus.py --reverse -r 172.216.0.0/16,183.177.80.0/23 yahoo.com
lepus.py --reverse -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com
```

or only against the defined ranges and those identified from RIPE:

```
lepus.py --reverse -or -ripe -r 172.216.0.0/16,183.177.80.0/23 yahoo.com
```

Hint: lepus will identify `ASNs` and `Networks` during enumeration, so you can also use these ranges to identify more subdomains with a subsequent run.
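
The reverse sweep this mode performs can be sketched as follows. `reverse_sweep` and `belongs_to` are illustrative names, not functions from the lepus codebase, and the real module reuses IPs resolved earlier in the run rather than only sweeping CIDRs:

```python
import ipaddress
import socket


def belongs_to(hostname, domain):
    """True if hostname is the domain itself or one of its subdomains."""
    return hostname == domain or hostname.endswith("." + domain)


def reverse_sweep(cidr, domain):
    """PTR-resolve every host in a CIDR and keep names under the target
    domain (illustrative sketch of what --reverse with -r/-ripe does)."""
    found = set()
    for ip in ipaddress.ip_network(cidr, strict=False).hosts():
        try:
            hostname, _, _ = socket.gethostbyaddr(str(ip))
        except (socket.herror, socket.gaierror):
            continue  # no PTR record for this address
        if belongs_to(hostname, domain):
            found.add(hostname)
    return found
```

A serial sweep like this is slow for large ranges; the suffix check is what keeps unrelated PTR records (e.g. ISP reverse zones) out of the results.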

### Markov
With this module, Lepus will utilize Markov chains in order to train itself and then generate subdomains based on the already known ones. The bigger the general surface, the better the tool will be able to train itself and, subsequently, the better the results will be.

The module can be activated with the `--markovify` argument. Parameters also include the Markov state size, the maximum length of the generated candidate addition, and the quantity of generated candidates. Predefined values are 3, 5 and 5 respectively. Those arguments can be changed with `-ms` (`--markov-state`), `-ml` (`--markov-length`) and `-mq` (`--markov-quantity`) to meet your needs. Keep in mind that the larger these values are, the more time Lepus will need to generate the candidates. It has to be noted that different executions of this module might generate different candidates, so feel free to run it a few times consecutively.
The module can be activated with the `--markovify` argument. Parameters also include the Markov state size, the maximum length of the generated candidate addition, and the quantity of generated candidates. Predefined values are 3, 5 and 5 respectively. Those arguments can be changed with `-ms` (`--markov-state`), `-ml` (`--markov-length`) and `-mq` (`--markov-quantity`) to meet your needs. Keep in mind that the larger these values are, the more time Lepus will need to generate the candidates.

It has to be noted that different executions of this module might generate different candidates, so feel free to run it a few times consecutively.
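
A minimal character-level sketch of the idea behind this module. The actual Markov submodule is not shown in this diff; `train` and `generate` are hypothetical names, with their parameters mirroring `-ms`, `-ml` and `-mq`:

```python
import random
from collections import defaultdict


def train(subdomains, state_size=3):
    """Build character-level transition counts from known subdomain labels."""
    model = defaultdict(list)
    for label in subdomains:
        padded = "^" * state_size + label + "$"  # start/end sentinels
        for i in range(len(padded) - state_size):
            model[padded[i:i + state_size]].append(padded[i + state_size])
    return model


def generate(model, state_size=3, max_length=5, quantity=5, seed=None):
    """Walk the chain to emit up to `quantity` unique candidate labels."""
    rng = random.Random(seed)
    candidates = set()
    for _ in range(quantity * 20):  # oversample, keep unique results
        state, out = "^" * state_size, ""
        while len(out) < max_length:
            choices = model.get(state)
            if not choices:
                break
            ch = rng.choice(choices)
            if ch == "$":  # end-of-label sentinel
                break
            out += ch
            state = state[1:] + ch
        if out:
            candidates.add(out)
        if len(candidates) >= quantity:
            break
    return sorted(candidates)
```

Larger state sizes make candidates more faithful to the training set but need more known subdomains to produce anything novel, which matches the note above about surface size.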

```
lepus.py --markovify yahoo.com
@@ -261,9 +277,13 @@ optional arguments:
lists/words.txt]
--reverse perform reverse dns lookups on resolved public IP
addresses
-ripe, --ripe query ripe database for possible networks to be used
for reverse lookups
-r RANGES, --ranges RANGES
comma separated ip ranges to perform reverse dns
lookups on
-or, --only-ranges use only ranges provided with -r or -ripe and not all
previously identified IPs
--portscan scan resolved public IP addresses for open ports
-p PORTS, --ports PORTS
set of ports to be used by the portscan module
@@ -278,14 +298,15 @@ optional arguments:
-mq MARKOV_QUANTITY, --markov-quantity MARKOV_QUANTITY
max quantity of markov results per candidate length
[default is 5]
-f, --flush purge all records of the specified domain from the database
-f, --flush purge all records of the specified domain from the
database
-v, --version show program's version number and exit
```

## Full command example
The following is an example run with all available active arguments:
```
./lepus.py python.org --wordlist lists/subdomains.txt --permutate -pw ~/mypermsword.lst --reverse -r 10.11.12.0/24 --portscan -p huge --takeover --markovify -ms 3 -ml 10 -mq 10
./lepus.py python.org --wordlist lists/subdomains.txt --permutate -pw ~/mypermsword.lst --reverse -ripe -r 10.11.12.0/24 --portscan -p huge --takeover --markovify -ms 3 -ml 10 -mq 10
```

The following command flushes all database entries for a specific domain:
4 changes: 2 additions & 2 deletions collectors/CRT.py
@@ -1,5 +1,5 @@
import json
import requests
from json import loads
from termcolor import colored


@@ -15,7 +15,7 @@ def init(domain):
response = requests.get("https://crt.sh/?", params=parameters, headers=headers)

if response.status_code == 200:
data = json.loads(response.text)
data = loads(response.text)

for d in data:
if "\n" in d["name_value"]:
8 changes: 4 additions & 4 deletions collectors/Censys.py
@@ -1,5 +1,5 @@
import re
import requests
from re import findall
from termcolor import colored
from configparser import RawConfigParser

@@ -29,8 +29,8 @@ def init(domain):
print(" \__", colored("Rate limit exceeded. See https://www.censys.io/account for rate limit details.", "red"))
return C

C = re.findall("CN=([\w\.\-\d]+)\." + domain, str(res.content))
numberOfPages = re.findall("pages\":\s(\d+)?}", str(res.content))
C = findall("CN=([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\.")), str(res.content))
numberOfPages = findall("pages\":\s(\d+)?}", str(res.content))

for page in range(2, int(numberOfPages[0]) + 1):
payload = {"query": domain, "page": page}
@@ -41,7 +41,7 @@ def init(domain):
break

else:
tempC = re.findall("CN=([\w\.\-\d]+)\." + domain, str(res.content))
tempC = findall("CN=([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\.")), str(res.content))
C = C + tempC

C = set(C)
2 changes: 1 addition & 1 deletion collectors/CertSpotter.py
@@ -4,7 +4,7 @@


def parseResponse(response, domain):
hostnameRegex = "([\w\.\-]+\.%s)" % (domain.replace(".", "\."))
hostnameRegex = "([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\."))
hosts = findall(hostnameRegex, str(response))

return [host.lstrip(".") for host in hosts]
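
The hostname pattern introduced here (and reused in the Censys, FOFA and Shodan collectors in this commit) can be exercised in isolation; `extract_hosts` is an illustrative wrapper, not part of the collector:

```python
from re import findall


def extract_hosts(text, domain):
    # Same shape as the updated pattern: first char must be a word
    # character, then word chars, dashes or dots, ending in the
    # (escaped) target domain.
    pattern = r"([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", r"\."))
    return [host.lstrip(".") for host in findall(pattern, text)]
```

Requiring an alphanumeric first character is what stops the old pattern's habit of capturing strings with a leading dot or dash.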
9 changes: 4 additions & 5 deletions collectors/DNSTrails.py
@@ -1,4 +1,5 @@
import requests
from json import loads
from termcolor import colored
from configparser import RawConfigParser

@@ -28,12 +29,10 @@ def init(domain):
return []

else:
payload = response.json()
payload = loads(response.text)

for k, v in list(payload.items()):
if v:
for dnsvalue in v:
DT.append(".".join([dnsvalue, domain]))
for item in payload["subdomains"]:
DT.append(".".join([item, domain]))

DT = set(DT)

78 changes: 78 additions & 0 deletions collectors/FOFA.py
@@ -0,0 +1,78 @@
import requests
from re import findall
from json import loads
from base64 import b64encode
from termcolor import colored
from configparser import RawConfigParser


def init(domain):
FOFA = []

print(colored("[*]-Searching FOFA...", "yellow"))

parser = RawConfigParser()
parser.read("config.ini")
FOFA_EMAIL = parser.get("FOFA", "FOFA_EMAIL")
FOFA_KEY = parser.get("FOFA", "FOFA_KEY")

if FOFA_EMAIL == "" or FOFA_KEY == "":
print(" \__", colored("No FOFA API credentials configured", "red"))
return []

size = 10000
page = 1
encodedDomain = b64encode(domain.encode("utf8")).decode("utf8")
parameters = {"email": FOFA_EMAIL, "key": FOFA_KEY, "qbase64": encodedDomain, "page": page, "size": size, "full": "true", "fields": "host,title,domain,header,banner,cert"}
headers = {"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0"}

try:
response = requests.get("https://fofa.so/api/v1/search/all", params=parameters, headers=headers)

if response.status_code == 200 and loads(response.text)["error"] is False:
data = loads(response.text)

resultNumber = data["size"]

if resultNumber % size == 0:
pagesToRequest = resultNumber // size
else:
pagesToRequest = (resultNumber // size) + 1

while page <= pagesToRequest:

if page != 1:
parameters = {"email": FOFA_EMAIL, "key": FOFA_KEY, "qbase64": encodedDomain, "page": page, "size": size, "full": "true", "fields": "host,title,domain,header,banner,cert"}
response = requests.get("https://fofa.so/api/v1/search/all", params=parameters, headers=headers)

FOFA.extend([item.lower() for item in findall("([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\.")), response.text)])
page += 1

FOFA = set(FOFA)

print(" \__ {0}: {1}".format(colored("Subdomains found", "cyan"), colored(len(FOFA), "yellow")))
return FOFA

else:
print(" \__", colored("Something went wrong!", "red"))
return []

except requests.exceptions.RequestException as err:
print(" \__", colored(err, "red"))
return []

except requests.exceptions.HTTPError as errh:
print(" \__", colored(errh, "red"))
return []

except requests.exceptions.ConnectionError as errc:
print(" \__", colored(errc, "red"))
return []

except requests.exceptions.Timeout as errt:
print(" \__", colored(errt, "red"))
return []

except Exception:
print(" \__", colored("Something went wrong!", "red"))
return []
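
The page-count arithmetic in the new FOFA collector is a ceiling division written out with a remainder check; a standalone sketch with hypothetical helper names:

```python
def pages_needed(result_count, page_size=10000):
    """Number of API pages required to fetch all results, mirroring the
    floor/remainder logic in the FOFA collector."""
    if result_count % page_size == 0:
        return result_count // page_size
    return result_count // page_size + 1


def pages_needed_ceil(result_count, page_size=10000):
    """The same value via Python's ceiling-division idiom."""
    return -(-result_count // page_size)
```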
8 changes: 6 additions & 2 deletions collectors/GoogleTransparency.py
@@ -5,8 +5,12 @@

def parseResponse(response, domain):
try:
token = response.split("\n]\n,[")[2].split("]\n")[0].split(",")[1].strip("\"")
hostnameRegex = "([\w\.\-]+\.%s)" % (domain.replace(".", "\."))
try:
token = findall("(\w+)\",[\w\"]+,\d+,\d+\]\]\]$", response)[0]
except Exception:
token = "null"

hostnameRegex = "([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\."))
hosts = findall(hostnameRegex, response)

return token, [host.lstrip(".") for host in hosts]
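
The updated parser extracts the pagination token with a regex instead of brittle string splitting. A sketch with a made-up response tail; the exact Google Transparency response format is an assumption here, and `extract_token` is an illustrative name:

```python
from re import findall


def extract_token(response):
    """Pull the pagination token from the tail of a Google Transparency
    style response; fall back to "null" when no token is present,
    mirroring the try/except added in this commit."""
    try:
        return findall(r"(\w+)\",[\w\"]+,\d+,\d+\]\]\]$", response)[0]
    except IndexError:
        return "null"
```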
10 changes: 5 additions & 5 deletions collectors/ProjectCrobat.py
@@ -1,5 +1,5 @@
import json
import requests
from json import loads
from termcolor import colored


@@ -14,8 +14,8 @@ def init(domain, ranges):
try:
response = requests.get(url, headers=headers)

if response.status_code == 200:
data = json.loads(response.text)
if response.status_code == 200 and response.text.strip() != "null":
data = loads(response.text)

for d in data:
Crobat.append(d)
@@ -26,7 +26,7 @@ def init(domain, ranges):
response = requests.get(rev_url, headers=headers)

if response.status_code == 200:
data = json.loads(response.text)
data = loads(response.text)

if "/" in r:
for ip in data:
@@ -35,7 +35,7 @@ def init(domain, ranges):
Crobat.append(item)

else:
data = json.loads(response.text)
data = loads(response.text)

for d in data:
if domain in d:
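
The new `response.text.strip() != "null"` check matters because `json.loads` turns a literal null body into `None`, and iterating over `None` raises a `TypeError`; a standalone sketch (`safe_parse` is an illustrative name, not part of the collector):

```python
from json import loads


def safe_parse(body):
    """Guard against Crobat-style endpoints returning the literal string
    'null' with HTTP 200, as the updated collector now does."""
    if body.strip() == "null":
        return []
    data = loads(body)
    return data if isinstance(data, list) else []
```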
1 change: 0 additions & 1 deletion collectors/ProjectSonar.py
@@ -1,4 +1,3 @@
import utilities.MiscHelpers
import requests
from json import loads
from termcolor import colored
11 changes: 7 additions & 4 deletions collectors/Shodan.py
@@ -1,4 +1,6 @@
import shodan
from re import findall
from json import dumps
from termcolor import colored
from configparser import RawConfigParser

@@ -19,11 +21,12 @@ def init(domain):

else:
try:
results = api.search("hostname:.{0}".format(domain))

try:
for res in results["matches"]:
SD.append("".join(res["hostnames"]))
for res in api.search_cursor("hostname:.{0}".format(domain)):
SD.extend([hostname for hostname in res["hostnames"] if ".{0}".format(domain) in hostname])

for res in api.search_cursor("ssl:.{0}".format(domain)):
SD.extend(findall("([\w\d][\w\d\-\.]*\.{0})".format(domain.replace(".", "\.")), dumps(res)))

except KeyError as errk:
print(" \__", colored(errk, "red"))
4 changes: 2 additions & 2 deletions collectors/ThreatCrowd.py
@@ -1,5 +1,5 @@
import json
import requests
from json import loads
from termcolor import colored


@@ -12,7 +12,7 @@ def init(domain):
result = requests.get("https://www.threatcrowd.org/searchApi/v2/domain/report/", params={"domain": domain})

try:
RES = json.loads(result.text)
RES = loads(result.text)
resp_code = int(RES["response_code"])

if resp_code == 1: