
Galileo - Web Application Audit Framework


Galileo is an open source penetration testing tool for web applications that helps developers and penetration testers identify and exploit vulnerabilities in their web applications.

Installation
$ git clone https://github.com/m4ll0k/Galileo.git galileo
$ cd galileo
Install requirements
$ pip install -r requirements.txt
or
$ apt-get install python-pysocks
For windows
$ python -m pip install pysocks
Run
$ python galileo.py

Usage
Set global options:
galileo #> set
Set A Context-Specific Variable To A Value
------------------------------------------
- Usage: set <option> <value>
- Usage: set COOKIE phpsess=hacker_test


Name Current Value Required Description
---------- ------------- -------- -----------
PAUTH no Proxy auth credentials (user:pass)
PROXY no Set proxy (host:port)
REDIRECT True no Set redirect
THREADS 5 no Number of threads
TIMEOUT 5 no Set timeout
USER-AGENT Mozilla/5.0 (X11; Ubuntu; Linux x86_64) yes Set user-agent
VERBOSITY 1 yes Verbosity level (0 = minimal,1 = verbose)
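As an illustration of the console model (not Galileo's actual code), a set command like the one above boils down to a lookup table keyed by option name:

```python
# Illustrative sketch only: how a console "set <option> <value>" command
# can update a global options table like the one shown above.
OPTIONS = {"THREADS": "5", "TIMEOUT": "5", "VERBOSITY": "1"}

def cmd_set(line):
    name, _, value = line.partition(" ")
    name = name.upper()
    if name not in OPTIONS:
        return "[-] unknown option: %s" % name
    OPTIONS[name] = value
    # echo the assignment back, mirroring the "HOST => ..." style output
    return "%s => %s" % (name, value)

print(cmd_set("THREADS 10"))
```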
Search module:
galileo #> search disclosure
[+] Searching for 'disclosure'...

Disclosure
----------
disclosure/code
disclosure/creditcard
disclosure/email
disclosure/privateip
Show modules:
galileo #> show modules

Bruteforce
----------
bruteforce/auth_brute
bruteforce/backup_brute
bruteforce/file_dir_brute

Disclosure
----------
disclosure/code
disclosure/creditcard
disclosure/email
disclosure/privateip

Exploits
--------
exploits/shellshock

Fingerprint
-----------
fingerprint/cms
fingerprint/framework
fingerprint/server

Injection
---------
injection/os_command_injection
injection/sql_injection

Scanner
-------
scanner/asp_trace

Tools
-----
tools/socket
Use module:
galileo #> use bruteforce/backup_brute
galileo bruteforce(backup_brute) #>
Set module options
galileo bruteforce(backup_brute) #> show options

Name Current Value Required Description
-------- ------------- -------- -----------
EXTS no Set backup extensions
HOST yes The target address
METHOD GET no HTTP method
PORT 80 no The target port
URL_PATH / no The target URL path
WORDLIST yes Common directory wordlist

galileo bruteforce(backup_brute) #> set HOST www.xxxxxxx.com
HOST => www.xxxxxxx.com
galileo bruteforce(backup_brute) #> set WORDLIST /home/m4ll0k/Desktop/all.txt
WORDLIST => /home/m4ll0k/Desktop/all.txt
Run:
galileo bruteforce(backup_brute) #> run
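What a backup brute-force module does under the hood can be sketched in a few lines of Python (names and extension list are illustrative, not Galileo's actual code): combine each wordlist entry with common backup extensions and probe the resulting URLs.

```python
# Illustrative sketch of a backup brute-force module: for every wordlist
# entry, generate candidate backup URLs to probe on the target.
EXTS = [".bak", ".old", ".zip", ".tar.gz", "~"]  # assumed common extensions

def candidates(base_url, words, exts=EXTS):
    for w in words:
        for e in exts:
            yield "%s/%s%s" % (base_url.rstrip("/"), w, e)

urls = list(candidates("http://www.example.com", ["index.php", "config"]))
# each URL would then be requested; a 200 response suggests a leaked backup
assert "http://www.example.com/index.php.bak" in urls
```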




Multitor - A Tool That Lets You Create Multiple TOR Instances With A Load-Balancing


A tool that lets you create multiple TOR instances with load-balanced traffic between them using HAProxy. It provides a single endpoint for clients. In addition, you can view previously started TOR processes and create a new identity for all or selected processes.

Multitor has been completely rewritten on the basis of:

Parameters
Provides the following options:
  Usage:
multitor <option|long-option>

Examples:
multitor --init 2 --user debian-tor --socks-port 9000 --control-port 9900
multitor --show-id --socks-port 9000

Options:
--help show this message
--debug displays information on the screen (debug mode)
--verbose displays more information about TOR processes
-i, --init <num> init new tor processes
-s, --show-id show specific tor process id
-n, --new-id regenerate tor circuit
-u, --user <string> set the user (only with -i|--init)
--socks-port <port_num|all> set socks port number
--control-port <port_num> set control port number
--proxy <socks|http> set load balancer

Requirements
Multitor requires the following external utilities to be installed before running:

How To Use
It's simple. To install:
./setup.sh install
To remove:
./setup.sh uninstall
  • symlink to bin/multitor is placed in /usr/local/bin
  • man page is placed in /usr/local/man/man8

Creating processes
An example of starting the tool:
multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900
Creates new TOR processes and specifies the number of processes to create:
  • --init 2
Specifies the user from which new processes will be created (the user must exist in the system):
  • -u debian-tor
Specifies the port number for TOR communication. Increased by 1 for each subsequent process:
  • --socks-port 9000
Specifies the port number of the TOR process control. Increased by 1 for each subsequent process:
  • --control-port 9900

Reviewing processes
Examples of obtaining information about a given TOR process created by multitor:
multitor --show-id --socks-port 9000
We want to get information about a given TOR process:
  • --show-id
You can use the all value to display all processes.
Specifies the port number for communication, allowing you to find the process by its port number:
  • --socks-port 9000

New TOR identity
There is a "Use new identity" button in TOR Browser or Vidalia. It sends a signal to TOR's control port to switch to a new identity. An alternative solution is to restart multitor or wait for the time defined in the NewCircuitPeriod variable, whose default value is 30s.
If there is a need to create a new identity:
multitor --new-id --socks-port 9000
We set up creating a new identity for TOR process:
  • --new-id
You can use the all value to regenerate the identity for all processes. An alternative way to obtain a new identity is to restart multitor.
Specifies the port number for communication, allowing you to find the process by its port number:
  • --socks-port 9000
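Under the hood, --new-id amounts to speaking the TOR control protocol: authenticate on the control port, then send SIGNAL NEWNYM. A minimal Python sketch of the idea (illustrative; multitor itself is a shell script with its own helpers):

```python
import socket

# Sketch of what "--new-id" does at the protocol level: authenticate to
# TOR's control port and request a new circuit with SIGNAL NEWNYM.
# The password is the one multitor generates at startup.
def new_identity(control_port, password):
    with socket.create_connection(("127.0.0.1", control_port)) as s:
        s.sendall(b'AUTHENTICATE "%s"\r\n' % password.encode())
        if not s.recv(1024).startswith(b"250"):
            raise RuntimeError("control-port authentication failed")
        s.sendall(b"SIGNAL NEWNYM\r\n")
        # "250 OK" means TOR accepted the request for a new identity
        return s.recv(1024).startswith(b"250")
```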

Proxy
See Load balancing.

Output example
If we created 2 TOR processes with Multitor, the example output will look like this:

Load balancing
Multitor uses two techniques to create a load-balancing mechanism: a socks proxy and an http proxy. Each type of load balancing is good, but their purposes differ slightly.
For browsing websites (generally for http/https traffic) it is recommended to use the http proxy. In this configuration the polipo service is used, which offers many useful features (including a cache), although in the case of TOR that is not always desirable. In addition, SSL traffic is handled more reliably.
The socks proxy type is also reliable; however, browsing websites through TOR nodes with it can cause more problems.
Multitor uses HAProxy to create a local proxy server for all created TOR or Polipo instances and distribute traffic between them. The default configuration is in templates/haproxy-template.cfg.
HAProxy uses port 16379 for communication, so any of your services that should use the load balancer must connect to this port.
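For orientation, a hypothetical fragment in the spirit of templates/haproxy-template.cfg might look like this (illustrative only; check the actual template shipped with multitor):

```
# Hypothetical HAProxy fragment: TCP-mode round-robin across the two
# TOR socks ports created in the example above.
frontend multitor_in
    bind 127.0.0.1:16379 name proxy
    mode tcp
    default_backend tor_pool

backend tor_pool
    mode tcp
    balance roundrobin
    server tor_9000 127.0.0.1:9000 check
    server tor_9001 127.0.0.1:9001 check
```

In TCP mode HAProxy simply forwards the connection, so each client speaks SOCKS5 directly to whichever TOR instance it lands on.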

SOCKS Proxy
Communication architecture:
Client
|
|--------> HAProxy (127.0.0.1:16379)
|
|--------> TOR Instance (127.0.0.1:9000)
|
|--------> TOR Instance (127.0.0.1:9001)
To run the load balancer you need to add the --proxy socks parameter to the command specified in the example.
multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy socks
After launching, let's see the working processes:
netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo"
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 28976/tor
tcp 0 0 127.0.0.1:9001 0.0.0.0:* LISTEN 29039/tor
tcp 0 0 127.0.0.1:9900 0.0.0.0:* LISTEN 28976/tor
tcp 0 0 127.0.0.1:9901 0.0.0.0:* LISTEN 29039/tor
tcp 0 0 127.0.0.1:16379 0.0.0.0:* LISTEN 29104/haproxy
tcp 0 0 127.0.0.1:16380 0.0.0.0:* LISTEN 29104/haproxy
In order to test the correctness of the setup, you can run the following command:
for i in $(seq 1 4) ; do \
printf "req %2d: " "$i" ; \
curl -k --location --socks5 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req 1: 5.254.79.66
req 2: 178.175.135.99
req 3: 5.254.79.66
req 4: 178.175.135.99
Communication through the socks proxy takes place without a cache (except for browsers, which have their own cache). curl and other low-level programs should work without any problems.

HTTP Proxy
Communication architecture:
Client
|
|--------> HAProxy (127.0.0.1:16379)
|
|--------> Polipo Instance (127.0.0.1:8000)
| |
| |---------> TOR Instance (127.0.0.1:9000)
|
|--------> Polipo Instance (127.0.0.1:8001)
|
|---------> TOR Instance (127.0.0.1:9001)
To run the load balancer you need to add the --proxy http parameter to the command specified in the example.
multitor --init 2 -u debian-tor --socks-port 9000 --control-port 9900 --proxy http
After launching, let's see the working processes:
netstat -tapn | grep LISTEN | grep "tor\|haproxy\|polipo"
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 32168/tor
tcp 0 0 127.0.0.1:9001 0.0.0.0:* LISTEN 32246/tor
tcp 0 0 127.0.0.1:9900 0.0.0.0:* LISTEN 32168/tor
tcp 0 0 127.0.0.1:9901 0.0.0.0:* LISTEN 32246/tor
tcp 0 0 127.0.0.1:16379 0.0.0.0:* LISTEN 32327/haproxy
tcp 0 0 127.0.0.1:16380 0.0.0.0:* LISTEN 32327/haproxy
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 32307/polipo
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 32320/polipo
In order to test the correctness of the setup, you can run the following command:
for i in $(seq 1 4) ; do \
printf "req %2d: " "$i" ; \
curl -k --location --proxy 127.0.0.1:16379 http://ipinfo.io/ip ; \
done

req 1: 178.209.42.84
req 2: 185.100.85.61
req 3: 178.209.42.84
req 4: 185.100.85.61
In the default configuration, the Polipo cache has been turned off (look at the configuration template). If you set the network configuration in the browser so that traffic passes through HAProxy, remember that browsers have their own cache, which can cause each visit to a page to come from the same IP address. This is not a big problem because it is not always the case. After clearing the browser cache, the web server will again receive requests from a different IP address.
You can check this, for example, in the Firefox browser by installing the "Empty Cache Button by mvm" add-on and visiting the http://myexternalip.com/ website.

Port convention
The port numbers for the TOR are set by the user using the --socks-port parameter. Additionally, the standard port on which HAProxy listens is 16379. Polipo uses ports 1000 smaller than those set for TOR.
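The convention above can be summarized in a short Python sketch (illustrative only):

```python
# Port convention described above: HAProxy is fixed at 16379, TOR ports
# increment by 1 per instance, and each Polipo instance listens 1000
# below its TOR socks port.
def multitor_ports(base_socks, count):
    return {
        "haproxy": 16379,
        "tor": [base_socks + i for i in range(count)],
        "polipo": [base_socks + i - 1000 for i in range(count)],
    }

print(multitor_ports(9000, 2))
```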

HAProxy stats interface
If you want to view traffic statistics, go to http://127.0.0.1:16380/stats.
Login: ha_admin
Password: automatically generated (see in etc/haproxy.cfg)

Polipo configuration interface
If you want to view or change Polipo parameters, go to http://127.0.0.1:8000/polipo/config (remember the right port number).

Gateway
If you are building a gateway for TOR connections, you can put HAProxy on an external IP address by changing the bind directive in haproxy-template.cfg:
bind 0.0.0.0:16379 name proxy

Password authentication
Multitor uses a password for authorization on the control port. The password is generated automatically and contains 18 random characters; it is displayed in the final report after the creation of new processes.

Logging
After running the script, the log/ directory is created, containing the following log files:
  • <script_name>.<date>.log - all _logger() function calls are saved in it
  • stdout.log - standard output and errors from _init_cmd() and other functions are written to it

Project architecture
|-- LICENSE.md                 # GNU GENERAL PUBLIC LICENSE, Version 3, 29 June 2007
|-- README.md                  # this simple documentation
|-- CONTRIBUTING.md            # principles of project support
|-- .gitignore                 # ignore untracked files
|-- .travis.yml                # continuous integration with Travis CI
|-- setup.sh                   # install multitor on the system
|-- bin
|   |-- multitor               # main script (init)
|-- doc                        # includes documentation, images and manuals
|   |-- man8
|       |-- multitor.8         # man page for multitor
|-- etc                        # contains configuration files
|-- lib                        # libraries, external functions
|-- log                        # contains logs, created after init
|-- src                        # includes external project files
|   |-- helpers                # contains core functions
|   |-- import                 # appends the contents of the lib directory
|   |-- __init__               # contains the __main__ function
|   |-- settings               # contains multitor settings
|-- templates                  # contains examples and template files
|   |-- haproxy-template.cfg   # example of HAProxy configuration
|   |-- polipo-template.cfg    # example of Polipo configuration


Archerysec - Open Source Vulnerability Assessment And Management Helps Developers And Pentesters To Perform Scans And Manage Vulnerabilities


Archery is an open-source vulnerability assessment and management tool that helps developers and pentesters perform scans and manage vulnerabilities. Archery uses popular open-source tools to perform comprehensive scanning of web applications and networks. It also performs dynamic authenticated web application scanning, covering whole applications by using Selenium. Developers can also utilize the tool in their DevOps CI/CD environments.

Documentation



Demo


Overview of the tool:
  • Performs web and network vulnerability scanning using open-source tools.
  • Correlates and collates all raw scan data and shows it in a consolidated manner.
  • Performs authenticated web scanning.
  • Performs web application scanning using Selenium.
  • Vulnerability management.
  • Enables REST APIs for developers to perform scanning and vulnerability management.
  • Useful for DevOps teams for vulnerability management.

Note
The project is currently in the development phase and a lot of work is still going on.

Requirement

Burp Scanner
Follow the instructions to enable the Burp REST API. You can manage and trigger scans using Archery once the REST API is enabled.

Installation
$ git clone https://github.com/archerysec/archerysec.git
$ cd archerysec
$ pip install -r requirements.txt
$ python manage.py collectstatic
$ python manage.py makemigrations networkscanners
$ python manage.py makemigrations webscanners
$ python manage.py makemigrations projects
$ python manage.py makemigrations APIScan
$ python manage.py makemigrations osintscan
$ python manage.py makemigrations jiraticketing
$ python manage.py migrate
$ python manage.py createsuperuser
$ python manage.py runserver
Note: These steps (except createsuperuser) should be performed after every git pull.

Docker Installation
The ArcherySec Docker image is available from Docker Hub:
$ docker pull archerysec/archerysec
$ docker run -it -p 8000:8000 archerysec/archerysec:latest

# For persistence

docker run -it -p 8000:8000 -v <your_local_dir>:/root/.archerysec archerysec/archerysec:latest

Setup Setting

ZAP running daemon mode
Windows :
zap.bat -daemon -host 0.0.0.0 -port 8080 -config api.disablekey=true -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true
Others :
zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.disablekey=true -config api.addrs.addr.name=.* -config api.addrs.addr.regex=true

Zap Setting
  1. Go to the Settings page
  2. Edit the ZAP settings or navigate to URL: http://host:port/setting_edit/
  3. Fill in the required information below.
    ZAP API Key: leave blank if you are running ZAP as a daemon with api.disablekey=true
    ZAP API Host: your ZAP API host or system IP, e.g. 127.0.0.1 or 192.168.0.2
    ZAP API Port: the port ZAP is running on, e.g. 8080

OpenVAS Setting
  1. Go to the Settings page
  2. Edit the OpenVAS settings or navigate to URL: http://host:port/networkscanners/openvas_setting
  3. Fill in all required information and click save.

Road Map
  • API Automated vulnerability scanning.
  • Perform Reconnaissance before scanning.
  • Concurrent Scans.
  • Vulnerability POC pictures.
  • Cloud Security scanning.
  • Dashboards
  • Easy installation.

Lead Developer
Anand Tiwari - https://github.com/anandtiwarics


Sn1per v4.4 - Automated Pentest Recon Scanner


Sn1per is an automated scanner that can be used during a penetration test to enumerate and scan for vulnerabilities.

DEMO VIDEO:
 

FEATURES:
  • Automatically collects basic recon (ie. whois, ping, DNS, etc.)
  • Automatically launches Google hacking queries against a target domain
  • Automatically enumerates open ports via NMap port scanning
  • Automatically brute forces sub-domains, gathers DNS info and checks for zone transfers
  • Automatically checks for sub-domain hijacking
  • Automatically runs targeted NMap scripts against open ports
  • Automatically runs targeted Metasploit scan and exploit modules
  • Automatically scans all web applications for common vulnerabilities
  • Automatically brute forces ALL open services
  • Automatically tests for anonymous FTP access
  • Automatically runs WPScan, Arachni and Nikto for all web services
  • Automatically enumerates NFS shares
  • Automatically tests for anonymous LDAP access
  • Automatically enumerates SSL/TLS ciphers, protocols and vulnerabilities
  • Automatically enumerates SNMP community strings, services and users
  • Automatically lists SMB users and shares, checks for NULL sessions and exploits MS08-067
  • Automatically exploits vulnerable JBoss, Java RMI and Tomcat servers
  • Automatically tests for open X11 servers
  • Auto-pwn added for Metasploitable, ShellShock, MS08-067, Default Tomcat Creds
  • Performs high level enumeration of multiple hosts and subnets
  • Automatically integrates with Metasploit Pro, MSFConsole and Zenmap for reporting
  • Automatically gathers screenshots of all web sites
  • Create individual workspaces to store all scan output

KALI LINUX INSTALL:
./install.sh

DOCKER INSTALL:
Credits: @menzow
Docker Install: https://github.com/menzow/sn1per-docker
Docker Build: https://hub.docker.com/r/menzo/sn1per-docker/builds/bqez3h7hwfun4odgd2axvn4/
Example usage:
$ docker pull menzo/sn1per-docker
$ docker run --rm -ti menzo/sn1per-docker sniper menzo.io

USAGE:
[*] NORMAL MODE
sniper -t|--target <TARGET>

[*] NORMAL MODE + OSINT + RECON
sniper -t|--target <TARGET> -o|--osint -re|--recon

[*] STEALTH MODE + OSINT + RECON
sniper -t|--target <TARGET> -m|--mode stealth -o|--osint -re|--recon

[*] DISCOVER MODE
sniper -t|--target <CIDR> -m|--mode discover -w|--workspace <WORKSPACE_ALIAS>

[*] SCAN ONLY SPECIFIC PORT
sniper -t|--target <TARGET> -m port -p|--port <portnum>

[*] FULLPORTONLY SCAN MODE
sniper -t|--target <TARGET> -fp|--fullportonly

[*] PORT SCAN MODE
sniper -t|--target <TARGET> -m|--mode port -p|--port <PORT_NUM>

[*] WEB MODE - PORT 80 + 443 ONLY!
sniper -t|--target <TARGET> -m|--mode web

[*] HTTP WEB PORT MODE
sniper -t|--target <TARGET> -m|--mode webporthttp -p|--port <port>

[*] HTTPS WEB PORT MODE
sniper -t|--target <TARGET> -m|--mode webporthttps -p|--port <port>

[*] ENABLE BRUTEFORCE
sniper -t|--target <TARGET> -b|--bruteforce

[*] AIRSTRIKE MODE
sniper -f|--file /full/path/to/targets.txt -m|--mode airstrike

[*] NUKE MODE WITH TARGET LIST, BRUTEFORCE ENABLED, FULLPORTSCAN ENABLED, OSINT ENABLED, RECON ENABLED, WORKSPACE & LOOT ENABLED
sniper -f|--file /full/path/to/targets.txt -m|--mode nuke -w|--workspace <WORKSPACE_ALIAS>

[*] ENABLE LOOT IMPORTING INTO METASPLOIT
sniper -t|--target <TARGET>

[*] LOOT REIMPORT FUNCTION
sniper -w <WORKSPACE_ALIAS> --reimport

[*] UPDATE SNIPER
sniper -u|--update

MODES:
  • NORMAL: Performs basic scan of targets and open ports using both active and passive checks for optimal performance.
  • STEALTH: Quickly enumerate single targets using mostly non-intrusive scans to avoid WAF/IPS blocking.
  • AIRSTRIKE: Quickly enumerates open ports/services on multiple hosts and performs basic fingerprinting. To use, specify the full location of the file containing all hosts/IPs to be scanned and run ./sn1per /full/path/to/targets.txt airstrike to begin scanning.
  • NUKE: Launch full audit of multiple hosts specified in text file of choice. Usage example: ./sniper /pentest/loot/targets.txt nuke.
  • DISCOVER: Parses all hosts on a subnet/CIDR (ie. 192.168.0.0/16) and initiates a sniper scan against each host. Useful for internal network scans.
  • PORT: Scans a specific port for vulnerabilities. Reporting is not currently available in this mode.
  • FULLPORTONLY: Performs a full detailed port scan and saves results to XML.
  • WEB: Adds full automatic web application scans to the results (port 80/tcp & 443/tcp only). Ideal for web applications but may increase scan time significantly.
  • WEBPORTHTTP: Launches a full HTTP web application scan against a specific host and port.
  • WEBPORTHTTPS: Launches a full HTTPS web application scan against a specific host and port.
  • UPDATE: Checks for updates and upgrades all components used by sniper.
  • REIMPORT: Reimport all workspace files into Metasploit and reproduce all reports.

SAMPLE REPORT:
https://gist.github.com/1N3/8214ec2da2c91691bcbc


Salt-Scanner - Linux Vulnerability Scanner Based On Salt Open And Vulners Audit API


A Linux vulnerability scanner based on the Vulners Audit API and Salt Open, with Slack notifications and JIRA integration.

Features
  • Slack notification and report upload
  • JIRA integration
  • OpsGenie integration

Requirements
  • Salt Open 2016.11.x (salt-master, salt-minion)¹
  • Python 2.7
  • salt (you may need to install gcc, gcc-c++, python dev)
  • slackclient
  • jira
  • opsgenie-sdk
Note: the Salt Master and Minion versions should match. Salt-Scanner supports Salt version 2016.11.x. If you are using version 2017.7.x, replace "expr_form" with "tgt_type" in salt-scanner.py.
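The note above boils down to choosing a keyword-argument name by Salt version. A small compatibility sketch (function name hypothetical, not from salt-scanner.py):

```python
# Sketch of a version shim for the Salt API rename noted above:
# the targeting keyword "expr_form" became "tgt_type" in Salt 2017.7.
def target_kwargs(salt_version, target_form):
    key = "tgt_type" if salt_version >= (2017, 7) else "expr_form"
    return {key: target_form}

# the resulting dict would be passed as **kwargs to LocalClient.cmd()
print(target_kwargs((2016, 11), "grain"))
```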

Usage
$ ./salt-scanner.py -h

==========================================================
Vulnerability scanner based on Vulners API and Salt Open
_____ _ _ _____
/ ___| | | | / ___|
\ `--. __ _| | |_ \ `--. ___ __ _ _ __ _ __ ___ _ __
`--. \/ _` | | __| `--. \/ __/ _` | '_ \| '_ \ / _ \ '__|
/\__/ / (_| | | |_ /\__/ / (_| (_| | | | | | | | __/ |
\____/ \__,_|_|\__| \____/ \___\__,_|_| |_|_| |_|\___|_|

Salt-Scanner 0.1 / by 0x4D31
==========================================================

usage: salt-scanner.py [-h] [-t TARGET_HOSTS] [-tF {glob,list,grain}]
[-oN OS_NAME] [-oV OS_VERSION]

optional arguments:
-h, --help show this help message and exit
-t TARGET_HOSTS, --target-hosts TARGET_HOSTS
-tF {glob,list,grain}, --target-form {glob,list,grain}
-oN OS_NAME, --os-name OS_NAME
-oV OS_VERSION, --os-version OS_VERSION

$ sudo SLACK_API_TOKEN="EXAMPLETOKEN" ./salt-scanner.py -t "*"

==========================================================
Vulnerability scanner based on Vulners API and Salt Open
_____ _ _ _____
/ ___| | | | / ___|
\ `--. __ _| | |_ \ `--. ___ __ _ _ __ _ __ ___ _ __
`--. \/ _` | | __| `--. \/ __/ _` | '_ \| '_ \ / _ \ '__|
/\__/ / (_| | | |_ /\__/ / (_| (_| | | | | | | | __/ |
\____/ \__,_|_|\__| \____/ \___\__,_|_| |_|_| |_|\___|_|

Salt-Scanner 0.1 / by 0x4D31
==========================================================

+ No default OS is configured. Detecting OS...
+ Detected Operating Systems:
- OS Name: centos, OS Version: 7
+ Getting the Installed Packages...
+ Started Scanning '10.10.10.55'...
- Total Packages: 357
- 6 Vulnerable Packages Found - Severity: Low
+ Started Scanning '10.10.10.56'...
- Total Packages: 392
- 6 Vulnerable Packages Found - Severity: Critical

+ Finished scanning 2 host (target hosts: '*').
2 Hosts are vulnerable!

+ Output file created: 20170622-093138_232826a7-983f-499b-ad96-7b8f1a75c1d7.txt
+ Full report uploaded to Slack
+ JIRA Issue created: VM-16
+ OpsGenie alert created
You can also use Salt Grains such as ec2_tags in target_hosts:
$ sudo ./salt-scanner.py --target-hosts "ec2_tags:Role:webapp" --target-form grain

Slack Alert



HTTPoxyScan - HTTPoxy Exploit Scanner


PoC/Exploit scanner to scan common CGI files on a target URL for the HTTPoxy vulnerability. Httpoxy is a set of vulnerabilities that affect application code running in CGI, or CGI-like environments. For more details, go to https://httpoxy.org.

REQUIREMENTS:
Requires ncat to establish a reverse session

USAGE:
./httpoxyscan.py https://target.com cgi_list.txt 10.1.2.243 3000
This will scan https://target.com with a list of common CGI files while injecting a Proxy header pointing back to the given IP:PORT. A reverse listener will catch the incoming connection to confirm the remote site is vulnerable.
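The underlying httpoxy mechanism is CGI's header-to-environment mapping; a minimal Python sketch of why a client-supplied Proxy header is dangerous:

```python
# CGI (RFC 3875) exposes each request header as HTTP_<NAME> in the
# environment, so a client-supplied "Proxy" header becomes HTTP_PROXY --
# an environment variable many HTTP client libraries honor as an
# outbound proxy. This collision is the httpoxy vulnerability.
def cgi_env(headers):
    return {"HTTP_" + k.upper().replace("-", "_"): v
            for k, v in headers.items()}

# the scanner injects "Proxy: <ip>:<port>" and waits for vulnerable
# CGI code to connect back through that attacker-controlled "proxy"
env = cgi_env({"Proxy": "10.1.2.243:3000", "User-Agent": "httpoxyscan"})
assert env["HTTP_PROXY"] == "10.1.2.243:3000"
```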


Burpa - A Burp Suite Automation Tool


A Burp Suite Automation Tool With Slack Integration.

Requirements

Usage
$ python burpa.py -h

###################################################
__
/ /_ __ ___________ ____ _
/ __ \/ / / / ___/ __ \/ __ `/
/ /_/ / /_/ / / / /_/ / /_/ /
/_.___/\__,_/_/ / .___/\__,_/
/_/
burpa version 0.1 / by 0x4D31

###################################################
usage: burpa.py [-h] [-a {scan,proxy-config,stop}] [-pP PROXY_PORT]
[-aP API_PORT] [-rT {HTML,XML}] [-r {in-scope,all}] [-sR]
[-sAT SLACK_API_TOKEN]
[--include-scope [INCLUDE_SCOPE [INCLUDE_SCOPE ...]]]
[--exclude-scope [EXCLUDE_SCOPE [EXCLUDE_SCOPE ...]]]
proxy_url

positional arguments:
proxy_url Burp Proxy URL

optional arguments:
-h, --help show this help message and exit
-a {scan,proxy-config,stop}, --action {scan,proxy-config,stop}
-pP PROXY_PORT, --proxy-port PROXY_PORT
-aP API_PORT, --api-port API_PORT
-rT {HTML,XML}, --report-type {HTML,XML}
-r {in-scope,all}, --report {in-scope,all}
-sR, --slack-report
-sAT SLACK_API_TOKEN, --slack-api-token SLACK_API_TOKEN
--include-scope [INCLUDE_SCOPE [INCLUDE_SCOPE ...]]
--exclude-scope [EXCLUDE_SCOPE [EXCLUDE_SCOPE ...]]

TEST:
$ python burpa.py http://127.0.0.1 --action proxy-config

###################################################
__
/ /_ __ ___________ ____ _
/ __ \/ / / / ___/ __ \/ __ `/
/ /_/ / /_/ / / / /_/ / /_/ /
/_.___/\__,_/_/ / .___/\__,_/
/_/
burpa version 0.1 / by 0x4D31

###################################################
[+] Checking the Burp proxy configuration ...
[-] Proxy configuration needs to be updated
[+] Updating the Burp proxy configuration ...
[-] Proxy configuration updated

$ python burpa.py http://127.0.0.1 --action scan --include-scope http://testasp.vulnweb.com --report in-scope --slack-report

###################################################
__
/ /_ __ ___________ ____ _
/ __ \/ / / / ___/ __ \/ __ `/
/ /_/ / /_/ / / / /_/ / /_/ /
/_.___/\__,_/_/ / .___/\__,_/
/_/
burpa version 0.1 / by 0x4D31

###################################################
[+] Retrieving the Burp proxy history ...
[-] Found 4 unique targets in proxy history
[+] Updating the scope ...
[-] http://testasp.vulnweb.com included in scope
[+] Active scan started ...
[-] http://testasp.vulnweb.com Added to the scan queue
[-] Scan in progress: %100
[+] Scan completed
[+] Scan issues for http://testasp.vulnweb.com:
- Issue: Robots.txt file, Severity: Information
- Issue: Cross-domain Referer leakage, Severity: Information
- Issue: Cleartext submission of password, Severity: High
- Issue: Frameable response (potential Clickjacking), Severity: Information
- Issue: Password field with autocomplete enabled, Severity: Low
- Issue: Cross-site scripting (reflected), Severity: High
- Issue: Unencrypted communications, Severity: Low
- Issue: Path-relative style sheet import, Severity: Information
- Issue: Cookie without HttpOnly flag set, Severity: Low
- Issue: File path traversal, Severity: High
- Issue: SQL injection, Severity: High
[+] Downloading HTML/XML report for http://testasp.vulnweb.com
[-] Scan report saved to /tmp/burp-report_20170807-235135_http-testasp.vulnweb.com.html
[+] Burp scan report uploaded to Slack



iOSRestrictionBruteForce v2.1.0 - Crack iOS Restriction Passcodes With Python


This version of the application is written in Python. It cracks the restriction passcode of an iPhone/iPad by taking advantage of a flaw in unencrypted backups that allows the hash and salt to be discovered.

DEPENDENCIES
This has been tested with Python 2.7 and Python 3.6.
Requires Passlib; install it with pip install passlib

Usage
usage: iOSCrack.py [-h] [-a] [-c] [-b folder] [-t]

a script to crack the restriction passcode of an iDevice

optional arguments:
-h, --help show this help message and exit
-a, --automatically automatically finds and cracks hashes
-c, --cli prompts user for input
-b folder, --backup folder
where backups are located
-t, --test runs unittest

How to Use
  1. Clone repository
     git clone https://github.com/thehappydinoa/iOSRestrictionBruteForce && cd iOSRestrictionBruteForce
     Make sure to use iTunes or libimobiledevice to back up the iOS device to the computer
  3. Run ioscrack.py with the auto option
     python ioscrack.py -a

How to Test
Run ioscrack.py with the test option
     python ioscrack.py -t

How It Works
Done by using the pbkdf2 hash with the Passlib Python module:
  1. Tries the top 20 four-digit PINs
  2. Tries birthdays between 1900-2017
  3. Brute-forces PINs from 1 to 9999
  4. Adds successful PINs to a local database
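The brute-force core can be sketched as follows. The KDF parameters here are assumptions (iOS restriction passcodes are widely reported as PBKDF2-HMAC-SHA1 with the salt stored next to the hash in the backup plist); the real script uses Passlib, while this sketch uses only the standard library:

```python
import hashlib

# Sketch of the brute-force loop: derive a key from each candidate PIN
# with the recovered salt and compare it to the recovered hash.
# Assumed parameters: PBKDF2-HMAC-SHA1, 1000 iterations.
def crack_pin(target_hash, salt, iterations=1000):
    for pin in range(10000):
        candidate = "%04d" % pin
        derived = hashlib.pbkdf2_hmac("sha1", candidate.encode(),
                                      salt, iterations)
        if derived == target_hash:
            return candidate
    return None
```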

How to Protect Against
  1. Encrypt backups
  2. Back up only on trusted computers



Terminator - Metasploit Payload Generator


Terminator Metasploit Payload Generator.

Payload List :

Binaries Payloads
1) Android
2) Windows
3) Linux
4) Mac OS

Scripting Payloads
1) Python
2) Perl
3) Bash

Web Payloads
1) ASP
2) JSP
3) War

Encrypters
1) APK Encrypter
2) Python Encrypter
  • The author does not hold any responsibility for misuse of this tool; remember that attacking targets without prior consent is illegal and punishable by law.


GyoiThon - A Growing Penetration Test Tool Using Machine Learning


GyoiThon is a growing penetration test tool using Machine Learning.
GyoiThon identifies the software installed on the web server (OS, middleware, framework, CMS, etc.) based on its learning data. After that, it executes valid exploits for the identified software using Metasploit. Finally, it generates reports of the scan results. GyoiThon executes all of the above processing automatically.
  • Processing steps

GyoiThon executes the above "Step 1" - "Step 4" fully automatically.
The user's only operation is to input the top URL of the target web server into GyoiThon.
It is very easy!
You can identify vulnerabilities of web servers without much time and effort.

Processing flow

Step 1. Gather HTTP responses.
GyoiThon gathers several HTTP responses from the target website while crawling.
The following are examples of HTTP responses gathered by GyoiThon.
  • Example.1
HTTP/1.1 200 OK
Date: Tue, 06 Mar 2018 03:01:57 GMT
Connection: close
Content-Type: text/html; charset=UTF-8
Etag: "409ed-183-53c5f732641c0"
Content-Length: 15271

...snip...
  • Example.2
HTTP/1.1 200 OK
Date: Tue, 06 Mar 2018 06:56:17 GMT
Connection: close
Content-Type: text/html; charset=UTF-8
Set-Cookie: f00e68432b68050dee9abe33c389831e=0eba9cd0f75ca0912b4849777677f587;
path=/;
Content-Length: 37496

...snip...
  • Example.3
HTTP/1.1 200 OK
Date: Tue, 06 Mar 2018 04:19:19 GMT
Connection: close
Content-Type: text/html; charset=UTF-8
Content-Length: 11819

...snip...

<script src="/core/misc/drupal.js?v=8.3.1"></script>

Step 2. Identify product name.
GyoiThon identifies the product names of software installed on a web server using the following two methods.

1. Based on Machine Learning.
By using Machine Learning (Naive Bayes), GyoiThon identifies software based on a combination of slightly different features (Etag value, Cookie value, specific HTML tags, etc.) for each software product. Naive Bayes is trained using training data like the example below (Training data). Unlike the signature-based approach, Naive Bayes identifies software probabilistically, based on the various features included in the HTTP response, even when the software cannot be identified from a single feature.
  • Example.1
Etag: "409ed-183-53c5f732641c0"
GyoiThon can identify the web server software Apache.
This is because GyoiThon learns features of Apache such as the Etag header value ("409ed-183-53c5f732641c0"). In our survey, Apache uses a combination of numerals and lower-case letters as the Etag value. Also, the Etag value is separated into 4-5 digits, 3-4 digits and 12 digits, and the final digit is 0 in many cases.
  • Example.2
Set-Cookie: f00e68432b68050dee9abe33c389831e=0eba9cd0f75ca0912b4849777677f587;
GyoiThon can identify the CMS Joomla!.
This is because GyoiThon learns features of Joomla! such as the Cookie name (f00e6 ... 9831e) and the Cookie value (0eba9 ... 7f587). In our survey, Joomla! uses 32 lower-case letters as the Cookie name and Cookie value in many cases.

Training data (One example)
  • Joomla! (CMS)
Set-Cookie: ([a-z0-9]{32})=[a-z0-9]{26,32};
Set-Cookie: [a-z0-9]{32}=([a-z0-9]{26,32});
...snip...
  • HeartCore (Japanese famous CMS)
Set-Cookie:.*=([A-Z0-9]{32});.*
<meta name=["'](author)["'] content=["']{2}.*
...snip...
  • Apache (Web server software)
Etag:.*(".*-[0-9a-z]{3,4}-[0-9a-z]{13}")[\r\n]
...snip...
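The signature-style training patterns above can be applied directly to raw HTTP headers. The sketch below (illustrative helper names and a reduced two-product signature set, not GyoiThon's actual code) shows the regex feature-extraction step that would precede any classification:

```python
import re

# Hypothetical, reduced subset of training signatures: product -> feature regexes.
SIGNATURES = {
    "joomla": [r"Set-Cookie: ([a-z0-9]{32})=[a-z0-9]{26,32};"],
    "apache": [r'Etag:.*(".*-[0-9a-z]{3,4}-[0-9a-z]{13}")'],
}

def extract_features(response_headers):
    """Return the products whose learned regex features appear in the raw headers."""
    hits = []
    for product, patterns in SIGNATURES.items():
        for pattern in patterns:
            if re.search(pattern, response_headers):
                hits.append(product)
                break
    return hits

headers = ('HTTP/1.1 200 OK\r\n'
           'Etag: "409ed-183-53c5f732641c0"\r\n'
           'Set-Cookie: f00e68432b68050dee9abe33c389831e=0eba9cd0f75ca0912b4849777677f587;\r\n')
print(sorted(extract_features(headers)))  # ['apache', 'joomla']
```

Both example responses from Step 1 trigger their respective features here; a Naive Bayes classifier would weight such hits probabilistically rather than treating any single one as decisive.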

2. Based on string matching.
GyoiThon can also identify software by plain string matching, as used in traditional penetration-testing tools. An example is shown below.
  • Example.3
<script src="/core/misc/drupal.js?v=8.3.1"></script>
GyoiThon can identify the CMS as Drupal.
This case is straightforward.

Step 3. Exploit using Metasploit.
GyoiThon runs the Metasploit exploits that correspond to the identified software and checks whether the software is affected by each vulnerability.

  • Running example
[*] exploit/multi/http/struts_code_exec_exception_delegator, target: 1, payload: linux/x86/shell/reverse_nonx_tcp, result: failure
[*] exploit/multi/http/struts_code_exec_exception_delegator, target: 1, payload: linux/x86/shell/reverse_tcp, result: failure
[*] exploit/multi/http/struts_code_exec_exception_delegator, target: 1, payload: linux/x86/shell/reverse_tcp_uuid, result: failure
[*] exploit/multi/http/struts_code_exec_exception_delegator, target: 1, payload: linux/x86/shell_bind_ipv6_tcp, result: failure
[*] exploit/multi/http/struts_code_exec_exception_delegator, target: 1, payload: linux/x86/shell_bind_tcp, result: failure

...snip...

[*] exploit/linux/http/apache_continuum_cmd_exec, target: 0, payload: generic/custom, result: failure
[*] exploit/linux/http/apache_continuum_cmd_exec, target: 0, payload: generic/debug_trap, result: failure
[*] exploit/linux/http/apache_continuum_cmd_exec, target: 0, payload: generic/shell_bind_tcp, result: failure
[*] exploit/linux/http/apache_continuum_cmd_exec, target: 0, payload: generic/shell_reverse_tcp, result: failure
[*] exploit/linux/http/apache_continuum_cmd_exec, target: 0, payload: generic/tight_loop, result: bingo!!

Step 4. Generate scan report.
GyoiThon generates an HTML report that summarizes the detected vulnerabilities.
  • sample


Demonstration movie


Usage

Step.0 Initialize Metasploit DB
First, initialize the Metasploit database (PostgreSQL) using the msfdb command.
root@kali:~# msfdb init

Step.1 Launch Metasploit Framework
Launch Metasploit on the remote server where the Metasploit Framework is installed, such as Kali Linux.
root@kali:~# msfconsole
______________________________________________________________________________
| |
| METASPLOIT CYBER MISSILE COMMAND V4 |
|______________________________________________________________________________|
\\ / /
\\ . / / x
\\ / /
\\ / + /
\\ + / /
* / /
/ . /
X / / X
/ ###
/ # % #
/ ###
. /
. / . * .
/
*
+ *

^
#### __ __ __ ####### __ __ __ ####
#### / \\ / \\ / \\ ########### / \\ / \\ / \\ ####
################################################################################
################################################################################
# WAVE 4 ######## SCORE 31337 ################################## HIGH FFFFFFFF #
################################################################################
https://metasploit.com


=[ metasploit v4.16.15-dev ]
+ -- --=[ 1699 exploits - 968 auxiliary - 299 post ]
+ -- --=[ 503 payloads - 40 encoders - 10 nops ]
+ -- --=[ Free Metasploit Pro trial: http://r-7.co/trymsp ]

msf >

Step.2 Launch RPC Server
Launch Metasploit's RPC server as follows:
msf> load msgrpc ServerHost=192.168.220.144 ServerPort=55553 User=test Pass=test1234
[*] MSGRPC Service: 192.168.220.144:55553
[*] MSGRPC Username: test
[*] MSGRPC Password: test1234
[*] Successfully loaded plugin: msgrpc
msgrpc option  description
ServerHost     IP address of the server running Metasploit (192.168.220.144 in the example above).
ServerPort     Any port number on the server running Metasploit (55553 in the example above).
User           Any user name for authentication (default: msf; "test" in the example above).
Pass           Any password for authentication (default: a random string; "test1234" in the example above).

Step.3 Edit config file.
Change the following values in config.ini:
...snip...

[GyoiExploit]
server_host : 192.168.220.144
server_port : 55553
msgrpc_user : test
msgrpc_pass : test1234
timeout : 10
LHOST : 192.168.220.144
LPORT : 4444

...snip...
config       description
server_host  IP address of the server running Metasploit (your ServerHost value from Step 2).
server_port  Port number of the server running Metasploit (your ServerPort value from Step 2).
msgrpc_user  Metasploit user name for authentication (your User value from Step 2).
msgrpc_pass  Metasploit password for authentication (your Pass value from Step 2).
LHOST        IP address of the server running Metasploit (your ServerHost value from Step 2).

Step.4 Edit target file.
GyoiThon reads its target servers from host.txt, so edit host.txt before running GyoiThon.
  • sample of host.txt
    target server => 192.168.220.148
    target port => 80
    target path => /oscommerce/catalog/
192.168.220.148 80 /oscommerce/catalog/
Separate the IP address, port number, and target path with a single space.
Note
The current gyoithon.py is a provisional version without a crawling function. We will upgrade gyoithon.py by April 9; after that, the target path will no longer be necessary.
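A minimal sketch of parsing one host.txt entry (a hypothetical helper for illustration, not GyoiThon's actual parser):

```python
def parse_host_line(line):
    """Split one host.txt entry into (ip, port, path); fields are space-separated."""
    ip, port, path = line.strip().split(" ")
    return ip, int(port), path

print(parse_host_line("192.168.220.148 80 /oscommerce/catalog/"))
# ('192.168.220.148', 80, '/oscommerce/catalog/')
```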

Step.5 Run GyoiThon
Run GyoiThon with the following command:
local@client:~$ python gyoithon.py

Step.6 Check scan report
Check the scan report in any web browser:
local@client:~$ firefox "gyoithon root path"/classifier4gyoithon/report/gyoithon_report.html

Tips

1. How to add string matching patterns.
The signatures directory contains four files, one per product category.
local@client:~$ ls "gyoithon root path"/signatures/
signature_cms.txt
signature_framework.txt
signature_os.txt
signature_web.txt
  • signature_cms.txt
    It includes string matching patterns of CMS.
  • signature_framework.txt
    It includes string matching patterns of FrameWork.
  • signature_os.txt
    It includes string matching patterns of Operating System.
  • signature_web.txt
    It includes string matching patterns of Web server software.
To add new string-matching patterns, append them as new lines at the end of the appropriate file.
ex) Adding a new string-matching pattern for a CMS to signature_cms.txt.
tikiwiki@(Powered by TikiWiki)
wordpress@<.*=(.*/wp-).*/.*>
wordpress@(<meta name="generator" content="WordPress).*>

...snip...

typo@.*(href="fileadmin/templates/).*>
typo@(<meta name="generator" content="TYPO3 CMS).*>
"new product name"@"regex pattern"
[EOF]
Note
The new product name must be one that Metasploit can recognize, and the product name and the regex pattern must be separated by @.
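The product@regex format can be loaded with a few lines of Python (hypothetical helper names, not GyoiThon's actual loader; the first @ acts as the separator):

```python
import re

def load_signatures(lines):
    """Parse 'product@regex' lines, skipping blanks; split on the first '@' only."""
    sigs = []
    for line in lines:
        line = line.strip()
        if not line or "@" not in line:
            continue
        product, pattern = line.split("@", 1)
        sigs.append((product, re.compile(pattern)))
    return sigs

sigs = load_signatures([
    'wordpress@(<meta name="generator" content="WordPress).*>',
    "tikiwiki@(Powered by TikiWiki)",
])
html = '<meta name="generator" content="WordPress 4.9" />'
print([name for name, rx in sigs if rx.search(html)])  # ['wordpress']
```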

2. How to add learning data.
The train_data directory contains four files, one per product category.
local@client:~$ ls "gyoithon root path"/classifier4gyoithon/train_data/
train_cms_in.txt
train_framework_in.txt
train_os_in.txt
train_web_in.txt
  • train_cms_in.txt
    It includes learning data of CMS.
  • train_framework_in.txt
    It includes learning data of FrameWork.
  • train_os_in.txt
    It includes learning data of Operating System.
  • train_web_in.txt
    It includes learning data of Web server software.
To add new learning data, append it as new lines at the end of the appropriate file.
ex) Adding new learning data for a CMS to train_cms_in.txt.
joomla@(Set-Cookie: [a-z0-9]{32}=.*);
joomla@(Set-Cookie: .*=[a-z0-9]{26,32});

...snip...

xoops@(xoops\.js)
xoops@(xoops\.css)
"new product name"@"regex pattern"
[EOF]
Note
The new product name must be one that Metasploit can recognize, and the product name and the regex pattern must be separated by @.
You must also delete the previously trained data (*.pkl) so the classifier is retrained:
local@client:~$ ls "gyoithon root path"/classifier4gyoithon/trained_data/
train_cms_out.pkl
train_framework_out.pkl
train_web_out.pkl
local@client:~$ rm "gyoithon root path"/classifier4gyoithon/trained_data/*.pkl

3. How to change "Exploit module's option".
When GyoiThon runs exploits, it uses the default values of each exploit module's options.
If you want to change an option's value, set "user_specify" in exploit_tree.json, as follows.

"unix/webapp/joomla_media_upload_exec": {
"targets": {
"0": [
"generic/custom",
"generic/shell_bind_tcp",
"generic/shell_reverse_tcp",

...snip...

"TARGETURI": {
"type": "string",
"required": true,
"advanced": false,
"evasion": false,
"desc": "The base path to Joomla",
"default": "/joomla",
"user_specify": "/my_original_dir/"
},
The example above changes the TARGETURI option of the exploit module exploit/unix/webapp/joomla_media_upload_exec from "/joomla" to "/my_original_dir/".
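The override rule can be sketched as follows: a non-empty "user_specify" wins over "default". The JSON fragment and helper below are illustrative (a simplified shape of exploit_tree.json, not GyoiThon's actual code):

```python
import json

# Hypothetical fragment in the shape of exploit_tree.json option entries.
options_json = """
{
  "TARGETURI": {"type": "string", "required": true,
                "default": "/joomla", "user_specify": "/my_original_dir/"},
  "RPORT":     {"type": "integer", "required": true,
                "default": 80, "user_specify": ""}
}
"""

def resolve_options(raw):
    """Use user_specify when non-empty, otherwise fall back to the module default."""
    opts = json.loads(raw)
    return {name: (o["user_specify"] if o.get("user_specify") not in ("", None) else o["default"])
            for name, o in opts.items()}

print(resolve_options(options_json))
# {'TARGETURI': '/my_original_dir/', 'RPORT': 80}
```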

Operation check environment
  • Kali Linux 2017.3 (for Metasploit)
    • Memory: 8.0GB
    • Metasploit Framework 4.16.15-dev
  • ubuntu 16.04 LTS (Host OS)
    • CPU: Intel(R) Core(TM) i5-5200U 2.20GHz
    • Memory: 8.0GB
    • Python 3.6.1(Anaconda3)
    • docopt 0.6.2
    • jinja2 2.10
    • msgpack-python 0.4.8
    • pandas 0.20.3

Contact
gyoiler3@gmail.com


pwnedOrNot - Tool To Find Passwords For Compromised Email Accounts Using HaveIBeenPwned API


pwnedOrNot is a Python script that checks whether an email account has been compromised in a data breach; if it has, the script proceeds to find passwords for the compromised account.
It uses the haveibeenpwned v2 API to test email accounts and searches Pastebin dumps for the password.
This script has been tested on Kali Linux 2018.2 and Ubuntu 18.04.

Installation
It is a pure Python script that relies on common Python modules and needs no installation:
  • os
  • re
  • time
  • json
  • requests

Usage
git clone https://github.com/thewhiteh4t/pwnedOrNot.git
cd pwnedOrNot/
python pwnedornot.py

Features
haveibeenpwned offers a lot of information about a compromised email; this script displays the most useful fields:
  • Name of Breach
  • Domain Name
  • Date of Breach
  • Fabrication status
  • Verification Status
  • Retirement status
  • Spam Status
  • Source of Dump
  • ID of Dump
With all this information, pwnedOrNot can easily find passwords for compromised emails, provided the dump is accessible and contains the password.
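The attributes listed above map onto fields of the haveibeenpwned v2 breach model. A sketch of the kind of filtering they enable (the sample records below are invented for illustration, not real breach data):

```python
# Illustrative breach records using haveibeenpwned v2 field names.
breaches = [
    {"Name": "ExampleBreach", "Domain": "example.com", "BreachDate": "2017-01-01",
     "IsVerified": True, "IsFabricated": False, "IsRetired": False, "IsSpamList": False},
    {"Name": "SpamDump", "Domain": "", "BreachDate": "2016-05-05",
     "IsVerified": False, "IsFabricated": False, "IsRetired": False, "IsSpamList": True},
]

def usable_breaches(records):
    """Keep only verified, non-fabricated, non-retired, non-spam breaches."""
    return [b["Name"] for b in records
            if b["IsVerified"] and not b["IsFabricated"]
            and not b["IsRetired"] and not b["IsSpamList"]]

print(usable_breaches(breaches))  # ['ExampleBreach']
```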

Screenshots



Lama - Tool To Obtain A Custom Password Dictionary To A Particular Target

Lama, the application that does not mince words.

Description
Lama is a GNU/Linux tool to generate a word list. The goal is to build a custom password dictionary for a particular target, whether an individual or an organization.
It is therefore important that the words in this list correspond to the target.
Keep in mind that Lama generates a simple password list, not a complex one; the goal is to be fast and targeted rather than slow and exhaustive.

Compilation

Install
Note that make install must be run as root, because the binary is copied to /bin and configuration files to /etc/lama.
make
make install
or
make all
or
make re

Uninstall
Note that make uninstall must be run as root, because the binary is removed from /bin and configuration files from /etc/lama.
make uninstall

Clean
make clean
or
make fclean

Usage


First, create a word list with personal information about the target. Then use lama to mix the given word list:
lama 1 4 /tmp/list -ncCyh > /tmp/dico
For more details, see man lama.
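As an illustration of word-list mixing (a sketch of the general idea, not Lama's actual algorithm), seed words can be concatenated in every order and filtered by length:

```python
from itertools import permutations

def mix_words(words, min_len, max_len):
    """Concatenate permutations of the seed words (1 or 2 at a time),
    keeping candidates whose length falls inside the given range."""
    out = set()
    for r in (1, 2):
        for combo in permutations(words, r):
            candidate = "".join(combo)
            if min_len <= len(candidate) <= max_len:
                out.add(candidate)
    return sorted(out)

print(mix_words(["bob", "1984"], 4, 8))
# ['1984', '1984bob', 'bob1984']
```

The two numeric arguments play the same role as the min/max bounds in the `lama 1 4 /tmp/list` invocation above: they keep the list fast and targeted instead of exhaustive.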


Diskover - File System Crawler, Storage Search Engine And Analytics Powered By Elasticsearch


diskover is an open source file system crawler and disk usage tool that uses Elasticsearch to index and manage data across heterogeneous storage systems. Using diskover, you can search and organize files more effectively, and system administrators can manage storage infrastructure, efficiently provision storage, monitor and report on storage use, and make informed decisions about new infrastructure purchases.
As the amount of file data generated by businesses continues to expand, so does the stress on expensive storage infrastructure, users, system administrators, and IT budgets.
Using diskover, users can identify old and unused files and give better insights into data change, file duplication and wasted space.
diskover is written and maintained by Chris Park (shirosai) and runs on Linux and OS X/macOS using Python 2/3.

Screenshots
diskover-web (diskover's web file manager, analytics app, file system search engine, rest-api)





Kibana dashboards/saved searches/visualizations and support for Gource



Diskover Gource videos





Installation Guide

Requirements
  • Linux or OS X/macOS (tested on OS X 10.11.6, Ubuntu 16.04)
  • Python 2.7 or Python 3.5/3.6 (tested on Python 2.7.14, 3.5.3, 3.6.4)
  • Python elasticsearch client module
  • Python requests module
  • Python scandir module
  • Python progressbar2 module
  • Python redis module
  • Python rq module
  • Elasticsearch 5 (local or AWS ES Service, tested on Elasticsearch 5.4.2, 5.6.4) Elasticsearch 6 is not supported yet.
  • Redis (tested on 4.0.8)
Install the above Python modules using pip.

Optional Installs
  • diskover-web (diskover's web file manager and analytics app)
  • Redis RQ Dashboard (for monitoring redis queue)
  • sharesniffer (for scanning your network for file shares and auto-mounting for crawls)
  • Kibana (for visualizing Elasticsearch data, tested on Kibana 5.4.2, 5.6.4)
  • X-Pack (Kibana plugin for graphs, reports, monitoring and http auth)
  • Gource (for Gource visualizations of diskover Elasticsearch data, see videos above)

Download
$ git clone https://github.com/shirosaidev/diskover.git
$ cd diskover
Download latest version

Requirements
You need at least Python 2.7 or Python 3.5 and must have installed the required Python dependencies using pip.
$ pip install -r requirements.txt

Getting Started
Copy diskover config diskover.cfg.sample to diskover.cfg and edit for your environment.
Start diskover worker bots (as many as you want, a good number might be cores x 2) with:
$ cd /path/with/diskover
$ python diskover_worker_bot.py
Worker bots can be added during a crawl to help with the queue. To run a worker bot in burst mode (quit after all jobs done), use the -b flag. If the queue is empty these bots will die, so use rq info or rq-dashboard to see if they are running. Run diskover-bot-launcher.sh to spawn and kill multiple bots.
Start diskover main job dispatcher and file tree crawler with:
$ python /path/to/diskover.py -d /rootpath/you/want/to/crawl -i diskover-indexname -a
With no flags, the crawl defaults to indexing from . (the current directory), files larger than 0 bytes, and a modified time of 0 days. Empty files and directories are skipped (unless you use the -s 0 and -e flags). Use -h to see the CLI options.
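A sketch of the default skip-empty-files behaviour (illustrative only, not diskover's actual crawler):

```python
import os
import tempfile

def crawl(rootdir, minsize=1):
    """Walk a tree and collect names of files of at least minsize bytes,
    skipping empty files, mimicking diskover's default of indexing files > 0 bytes."""
    found = []
    for dirpath, dirnames, filenames in os.walk(rootdir):
        for name in filenames:
            if os.path.getsize(os.path.join(dirpath, name)) >= minsize:
                found.append(name)
    return sorted(found)

root = tempfile.mkdtemp()
open(os.path.join(root, "empty.txt"), "w").close()   # 0 bytes: skipped
with open(os.path.join(root, "data.txt"), "w") as f:
    f.write("hello")                                  # 5 bytes: indexed
print(crawl(root))  # ['data.txt']
```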

User Guide
Read the wiki for more documentation on how to use diskover.


Attackintel - Tool To Query The MITRE ATT&CK API For Tactics, Techniques, Mitigations, & Detection Methods For Specific Threat Groups


A simple python script to query the MITRE ATT&CK API for tactics, techniques, mitigations, & detection methods for specific threat groups.

Goals
  • Quickly align updated tactics, techniques, mitigation, and detection information from MITRE ATT&CK API for a specific threat
  • Brush up on my python skills and get familiar with GIT while drinking coffee

How To
Use one of two methods:
  • If (python3 is installed):
    • Download script from git
    • python3 attackintel.py
  • Else:
  • Select a threat number from the menu to get tactics, techniques, mitigation, and detection information

Resources

Requirements
  • Python ver.3+


    Prowler - Distributed Network Vulnerability Scanner


    Prowler is a Network Vulnerability Scanner implemented on a Raspberry Pi Cluster, first developed during Singapore Infosec Community Hackathon - HackSmith v1.0.

    Capabilities
    • Scan a network (a particular subnet or a list of IP addresses) for all IP addresses associated with active network devices
    • Determine the type of devices using fingerprinting
    • Determine if there are any open ports on the device
    • Associate the ports with common services
    • Test devices against a dictionary of factory default and common credentials
    • Notify users of security vulnerabilities through a dashboard. Dashboard tour

    Planned Capabilities
    • Greater variety of vulnerability assessment capabilities (webapp etc.)
    • Select wordlist based on fingerprint

    Hardware
    • Raspberry Pi Cluster HAT (with 4 * Pi Zero W)
    • Raspberry Pi 3
    • Networking device


    Software Stack
    • Raspbian Stretch (Controller Pi)
    • Raspbian Stretch Lite (Worker Pi Zero)
    • Note: For ease of setup, use the images provided by Cluster Hat! Instructions
    • Python 3 (not tested on Python 2)
    • Python packages see requirements.txt
    • Ansible for managing the cluster as a whole (/playbooks)
    Key Python Packages:
    • dispy (website) is the star of the show. It allows us to create a job queue that will be processed by the worker nodes.
    • python-libnmap is the python wrapper around nmap, an open source network scanner. It allows us to scan for open ports on devices.
    • paramiko is a python wrapper around SSH. We use it to probe SSH on devices to test for common credentials.
    • eel is used for the web dashboard (separate repository, here)
    • rabbitmq (website) is used to pass the results from the cluster to the eel server that is serving the dashboard page.

    Ansible Playbooks
    For the playbooks to work, ansible must be installed (sudo pip3 install ansible). Configure the IP addresses of the nodes at /etc/ansible/hosts. WARNING: Your mileage may vary as these were only tested on my setup
    • shutdown.yml and reboot.yml self-explanatory
    • clone_repos.yml clone prowler and dispy repositories (required!) on the worker nodes
    • setup_node.yml installs all required packages on the worker nodes. Does not clone the repositories!

    Deploying Prowler
    1. Clone the git repository: git clone https://github.com/tlkh/prowler.git
    2. Install dependencies by running sudo pip3 install -r requirements.txt on the controller Pi
    3. Run ansible-playbook playbooks/setup_node.yml to install the required packages on worker nodes.
    4. Clone the prowler and dispy repositories to the worker nodes using ansible-playbook playbooks/clone_repos.yml
    5. Run clusterhat on on the controller Pi to ensure that all Pi Zeros are powered up.
    6. Run python3 cluster.py on the controller Pi to start Prowler
    To edit the range of IP addresses being scanned, edit the following lines in cluster.py:
    test_range = []
    for i in range(0, 1):
        for j in range(100, 200):
            test_range.append("172.22." + str(i) + "." + str(j))
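For comparison, the same 172.22.0.100-172.22.0.199 range can be generated with Python's standard ipaddress module (an alternative sketch, not Prowler's code):

```python
import ipaddress

# Integer arithmetic on IPv4Address yields consecutive addresses.
test_range = [str(ipaddress.IPv4Address("172.22.0.0") + j) for j in range(100, 200)]

print(len(test_range), test_range[0], test_range[-1])
# 100 172.22.0.100 172.22.0.199
```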

    Old Demos

    Useful Snippets
    • To run ssh command on multiple devices, install pssh and pssh -h pssh-hosts -l username -A -i "command"
    • To create the cluster (in compute.py): cluster = dispy.JobCluster(compute, nodes='pi0_ip', ip_addr='pi3_ip')
    • Check connectivity: ansible all -m ping or ping p1.local -c 1 && ping p2.local -c 1 && ping p3.local -c 1 && ping p4.local -c 1
    • Temperature Check: /opt/vc/bin/vcgencmd measure_temp && pssh -h workers -l pi -A -i "/opt/vc/bin/vcgencmd measure_temp" | grep temp
    • rpimonitor (how to install):



    Sharesniffer - Network Share Sniffer And Auto-Mounter For Crawling Remote File Systems


    sharesniffer is a network analysis tool for finding open and closed file shares on your local network. It includes auto-network discovery and auto-mounting of any open cifs and nfs shares.

    How to use
    Example to find all hosts in 192.168.56.0/24 network and auto-mount at /mnt:
    python sniffshares.py -l 4 --hosts 192.168.56.0/24 -a -m /mnt

    Requirements
    • Python 2.7 or 3.5
    • Linux or macOS
    • Nmap https://nmap.org in PATH
    • Nmap scripts (.nse) in PATH (on Linux/macOS they are usually in /usr/local/share/nmap/); if you don't have them, the required scripts are also included in the root directory of sharesniffer.
    • python-nmap (pip install python-nmap)
    • netifaces (pip install netifaces)

    Download
    $ git clone https://github.com/shirosaidev/sharesniffer.git
    $ cd sharesniffer

    CLI Options
    usage: sniffshares.py [-h] [--hosts HOSTS] [-e EXCLUDEHOSTS] [-l SPEEDLEVEL]
                          [-n] [--nfsmntopt NFSMNTOPT] [-s]
                          [--smbmntopt SMBMNTOPT] [--smbtype SMBTYPE]
                          [--smbuser SMBUSER] [--smbpass SMBPASS] [-a]
                          [-m MOUNTPOINT] [-p MOUNTPREFIX] [-v] [--debug] [-q]
                          [-V]

    optional arguments:
      -h, --help            show this help message and exit
      --hosts HOSTS         Hosts to scan, example: 10.10.56.0/22 or 10.10.56.2
                            (default: scan all hosts)
      -e EXCLUDEHOSTS, --excludehosts EXCLUDEHOSTS
                            Hosts to exclude from scan, example:
                            10.10.56.1,10.10.56.254
      -l SPEEDLEVEL, --speedlevel SPEEDLEVEL
                            Scan speed aggressiveness level from 3-5, lower for
                            more accuracy (default: 4)
      -n, --nfs             Scan network for nfs shares
      --nfsmntopt NFSMNTOPT
                            nfs mount options (default: ro,nosuid,nodev,noexec,
                            udp,proto=udp,noatime,nodiratime,rsize=1024,
                            dsize=1024,vers=3,rdirplus)
      -s, --smb             Scan network for smb shares
      --smbmntopt SMBMNTOPT
                            smb mount options (default: ro,nosuid,nodev,noexec,
                            udp,proto=udp,noatime,nodiratime,rsize=1024,
                            dsize=1024)
      --smbtype SMBTYPE     Can be smbfs (default) or cifs
      --smbuser SMBUSER     smb username (default: guest)
      --smbpass SMBPASS     smb password (default: none)
      -a, --automount       Auto-mount any open nfs/smb shares
      -m MOUNTPOINT, --mountpoint MOUNTPOINT
                            Mountpoint to mount shares (default: ./)
      -p MOUNTPREFIX, --mountprefix MOUNTPREFIX
                            Prefix for mountpoint directory name (default:
                            sharesniffer)
      -v, --verbose         Increase output verbosity
      --debug               Debug message output
      -q, --quiet           Run quiet and just print out any possible mount
                            points for crawling
      -V, --version         Prints version and exits


    ReverseAPK - Quickly Analyze And Reverse Engineer Android Packages


    Quickly analyze and reverse engineer Android applications.

    FEATURES:
    • Displays all extracted files for easy reference
    • Automatically decompile APK files to Java and Smali format
    • Analyze AndroidManifest.xml for common vulnerabilities and behavior
    • Static source code analysis for common vulnerabilities and behavior
      • Device info
      • Intents
      • Command execution
      • SQLite references
      • Logging references
      • Content providers
      • Broadcast receivers
      • Service references
      • File references
      • Crypto references
      • Hardcoded secrets
      • URLs
      • Network connections
      • SSL references
      • WebView references

    INSTALL:
    ./install

    USAGE:
    reverse-apk <apk_name>


    Empire GUI - Empire Client Application


    The Empire Multiuser GUI is a graphical interface to the Empire post-exploitation Framework. It was written in Electron and utilizes websockets (SocketIO) on the backend to support multiuser interaction. The main goal of this project is to enable red teams, or any other color team, to work together on engagements in a more seamless and integrated way than using Empire as a command line tool.
    Read more about the Empire Framework

    This is a BETA release and does not have all the functionality of the full Empire Framework. The goal is to get community involvement early on to help fix bugs before adding in many of the bells and whistles. The main interaction with Agents at this point is solely through a shell prompt. The next release will have Module support, etc.

    Features
    • Multiplatform Support (macOS, Windows, Linux)
    • Traffic over HTTPS
    • User Authentication
    • Multiuser Support
    • Agent Shell Interaction

    Installation
    1. Checkout this repo to a folder on your system
    2. Install NodeJS (NPM) here
    3. Start your Empire Server
      1. Install the Empire Framework
      2. Switch to the 3.0-Beta branch git checkout 3.0-Beta
      3. Setup your listeners and generate stagers (as this is not yet supported in the GUI)
      4. Start the server with your password ./empire --server --shared_password ILikePasswords --port 1337
    4. Run the following commands from your EmpireGUI directory
      1. npm install
      2. npm start
    5. Login to the Empire!

    Screenshot



    Otseca - Security Auditing Tool To Search And Dump System Configuration


    Otseca is an open source security auditing tool that searches and dumps system configuration. It allows you to generate reports in HTML or RAW-HTML format.
    For more information, see wiki.

    How To Use
    It's simple:
    # Clone this repository
    git clone https://github.com/trimstray/otseca

    # Go into the repository
    cd otseca

    # Install
    ./setup.sh install

    # Run the app
    otseca
    • symlink to bin/otseca is placed in /usr/local/bin
    • man page is placed in /usr/local/man/man8

    Requirements
    This tool works with:
    • GNU/Linux or BSD (tested on Debian, CentOS and FreeBSD)
    • Bash (tested on 4.4.19)
    You will also need root access.

    Reports
    Otseca generates reports in html (js, css and other) or raw-html (pure html) formats.
    The default path for reports is the {project}/data/output directory.
    HTML reports consist of the following blocks:




    BurpBounty - A Extension Of Burp Suite That Improve An Active And Passive Scanner


    This extension allows you, in a quick and simple way, to improve Burp Suite's active and passive scanner by means of personalized rules through a very intuitive graphical interface. Through advanced pattern searching and improvement of the payloads to send, we can create our own issue profiles for both the active and the passive scanner. This extension requires Burp Suite Pro.

    - Usage:

    1. Config section
    • Profile Manager: manage the profiles; enable, disable, or remove any of them.
    • Select Profile: choose any profile, modify it, and save it.
    • Profiles reload: reload the profiles directory, for example, after adding a new external profile to the directory.
    • Profile Directory: choose the profiles directory path.

    2. Payloads
    • You can add as many payloads as you want.
    • Each payload in this section will be sent to each entry point (insertion points provided by the Burp API).
    • You can choose multiple Encoders. For example, if you want to encode the string alert(1) several times (in descending order):
      1. Plain text: alert(1)
      2. HTML-encode all characters: &#x61;&#x6c;&#x65;&#x72;&#x74;&#x28;&#x31;&#x29;
      3. URL-encode all characters: %26%23%78%36%31%3b%26%23%78%36%63%3b%26%23%78%36%35%3b%26%23%78%37%32%3b%26%23%78%37%34%3b%26%23%78%32%38%3b%26%23%78%33%31%3b%26%23%78%32%39%3b
      4. Base64-encode: JTI2JTIzJTc4JTM2JTMxJTNiJTI2JTIzJTc4JTM2JTYzJTNiJTI2JTIzJTc4JTM2JTM1JTNiJTI2JTIzJTc4JTM3JTMyJTNiJTI2JTIzJTc4JTM3JTM0JTNiJTI2JTIzJTc4JTMyJTM4JTNiJTI2JTIzJTc4JTMzJTMxJTNiJTI2JTIzJTc4JTMyJTM5JTNi
    • If you choose the "URL-Encode these characters" option, you can specify all the characters that you want URL-encoded.
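The three-stage encoder chain above can be reproduced in a few lines of Python (a sketch of the encoding order, not Burp Bounty's implementation):

```python
import base64

def html_encode_all(s):
    """HTML-encode every character as a hex entity, e.g. 'a' -> '&#x61;'."""
    return "".join("&#x%02x;" % ord(c) for c in s)

def url_encode_all(s):
    """URL-encode every character, e.g. '&' -> '%26'."""
    return "".join("%%%02x" % ord(c) for c in s)

# Applied in descending order, as in the example list above.
payload = "alert(1)"
stage1 = html_encode_all(payload)
stage2 = url_encode_all(stage1)
stage3 = base64.b64encode(stage2.encode()).decode()

print(stage1)       # &#x61;&#x6c;&#x65;&#x72;&#x74;&#x28;&#x31;&#x29;
print(stage2[:12])  # %26%23%78%36
print(stage3[:12])  # JTI2JTIzJTc4
```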

    3. Grep - Match
    • For each payload response, each string, regex or payload (depending on what you choose) will be searched for with the specified Grep Options.
    • Grep Type:
      • Simple String: search for a simple string or strings
      • Regex: search for a regular expression
      • Payload: search for the payloads sent
      • Payload without encode: if you encoded the payload and want to search for the original payload, choose this
    • Grep Options:
      • Negative match: match if the string, regex or payload is not present in the response
      • Case sensitive: only match with case sensitivity
      • Not in cookie: match if a given cookie attribute is not present
      • Content type: you can specify one or multiple content types (separated by commas) in which to search for the string, regex or payload. For example: text/plain, text/html, ...
      • Response Code: you can specify one or multiple HTTP response codes (separated by commas) in which to search for the string, regex or payload. For example: 300, 302, 400, ...

    4. Write an Issue
    • In this section you can specify the issue that will be shown if the condition matches the specified options.
    • Issue Name
    • Severity
    • Confidence
    • And others details like description, background, etc.

    - Examples
    The vulnerabilities identified so far, from which you can make personalized improvements, are:

    1- Active Scan
    • XSS reflected and Stored
    • SQL Injection error based
    • XXE
    • Command injection
    • Open Redirect
    • Local File Inclusion
    • Remote File Inclusion
    • Path Traversal
    • LDAP Injection
    • ORM Injection
    • XML Injection
    • SSI Injection
    • XPath Injection
    • etc

    2- Passive Scan
    • Security Headers
    • Cookies attributes
    • Software versions
    • Error strings
    • In general any string or regular expression.

    For example videos please visit our youtube channel:
