Channel: KitPloit - PenTest Tools!

Re2Pcap - Create PCAP file from raw HTTP request or response in seconds


Re2Pcap is an abbreviation for Request2Pcap and Response2Pcap. Community users can quickly create PCAP files using Re2Pcap and test them against Snort rules.
Re2Pcap allows you to quickly create a PCAP file from a raw HTTP request such as the one shown below
POST /admin/tools/iplogging.cgi HTTP/1.1
Host: 192.168.13.31:80
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0
Accept: text/plain, */*; q=0.01
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://192.168.13.31:80/admin/tools/iplogging.html
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Content-Length: 63
Cookie: token=1e9c07e135a15e40b3290c320245ca9a
Connection: close

tcpdumpParams=tcpdump -z reboot -G 2 -i eth0&stateRequest=start

Usage
git clone https://github.com/Cisco-Talos/Re2Pcap.git
cd Re2Pcap/
docker build -t re2pcap .
docker run --rm --cap-add NET_ADMIN -p 5000:5000 re2pcap
OR
docker run --rm --cap-add NET_ADMIN -p 5000:5000 --name re2pcap amitraut/re2pcap
Open localhost:5000 in your web browser to access Re2Pcap, or use the Re2Pcap-cmd script to interact with the Re2Pcap container and save the PCAP to your current working directory

Requirements
  • Docker
  • HTTP Raw Request / Response
  • Web Browser (for best results, please use Chromium based web browsers)

Advantages
  • Easy setup. No complex multi-VM setup required
  • Re2Pcap runs on an Alpine Linux based Docker image that weighs less than 100 MB :D
  • Allows you to dump simulated raw HTTP requests and responses into PCAP

Dockerfile
FROM alpine

# Get required dependencies and setup for Re2Pcap
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN apk update && apk add python3 tcpdump tcpreplay
RUN pip3 install --upgrade pip
RUN pip3 install pexpect flask requests httpretty requests-toolbelt

COPY Re2Pcap/ /Re2Pcap
RUN cd Re2Pcap && chmod +x Re2Pcap.py

WORKDIR /Re2Pcap
EXPOSE 5000/tcp

# Run application at start of new container
CMD ["/usr/bin/python3", "Re2Pcap.py"]

Walkthrough
  • Video walkthrough shows pcap creation for Sierra Wireless AirLink ES450 ACEManager iplogging.cgi command injection vulnerability using Re2Pcap web interface


Re2Pcap Workflow


As shown in the image above, Re2Pcap is an Alpine Linux based Python 3 application with a Flask web interface.
Re2Pcap parses the input data as a raw HTTP request or response and actually performs the client/server interaction while capturing packets. After the interaction, Re2Pcap presents the captured packets as a PCAP file.
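The parsing step can be sketched with Python's standard library (a simplified illustration, not Re2Pcap's actual code): feeding the raw bytes to BaseHTTPRequestHandler recovers the method, path, headers, and body that are then replayed.

```python
from io import BytesIO
from http.server import BaseHTTPRequestHandler

class RawHTTPRequest(BaseHTTPRequestHandler):
    """Parse a raw HTTP request from bytes instead of a live socket."""
    def __init__(self, raw: bytes):
        self.rfile = BytesIO(raw)
        self.raw_requestline = self.rfile.readline()
        self.error_code = self.error_message = None
        self.parse_request()  # fills in command, path, headers

    def send_error(self, code, message=None, explain=None):
        # record parse errors instead of writing to a socket
        self.error_code, self.error_message = code, message

raw = (b"POST /admin/tools/iplogging.cgi HTTP/1.1\r\n"
       b"Host: 192.168.13.31:80\r\n"
       b"Content-Type: application/x-www-form-urlencoded\r\n"
       b"Content-Length: 63\r\n\r\n"
       b"tcpdumpParams=tcpdump -z reboot -G 2 -i eth0&stateRequest=start")

req = RawHTTPRequest(raw)
print(req.command, req.path)   # POST /admin/tools/iplogging.cgi
print(req.headers["Host"])     # 192.168.13.31:80
body = req.rfile.read(int(req.headers["Content-Length"]))
```

From here a tool like Re2Pcap can replay the request against a stub server while tcpdump captures the exchange.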

Recommendations
  • Please use Linux as your host operating system, as Re2Pcap is well tested on Linux
  • If creating a PCAP for Host: somedomain:5000 (i.e. port 5000), change the Flask application to run on another port by modifying the app.run call in Re2Pcap.py; otherwise the PCAP will contain the Flask application's response

Limitations
  • If a raw HTTP request has no Accept-Encoding: header, Accept-Encoding: identity is added to the request
    • There is a known issue for this in python-requests. The following is the closing note for that issue:
    That's really fairly terrible. Accept-Encoding: identity is always valid, the RFCs say so. It should be utterly harmless to send it along. Otherwise, removing this requires us to replace httplib. That's a substantial bit of work. =(
  • The following are the source and destination IPs in PCAPs from Re2Pcap:
    • Source IP: 10.10.10.1
    • Destination IP: 172.17.0.2 (the Re2Pcap container's IP address)
    Please use the tcprewrite -D option to modify the destination IP as needed. You may also use tcpprep and tcprewrite to set other IPs as endpoints. Due to inconsistent results with tcprewrite, an alternative method was used to set different SRC/DST IPs
  • Specifying HTTP/1.1 302 FOUND as the response will generate a PCAP with the maximum possible retries to reach the resource specified in the Location: header. Please export the first HTTP stream using Wireshark if you do not want the additional noise of the other streams




SEcraper - Search Engine Scraper Tool With BASH Script.


Search engine scraper tool with BASH script.

Dependency
  • curl (cli)

Available search engine
  • Ask.com
  • Search.yahoo.com
  • Bing.com

Installation
git clone https://github.com/zerobyte-id/SEcraper.git
cd SEcraper/

Run
bash secraper.bash "QUERY"
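Under the hood, a scraper like this boils down to URL-encoding the query and requesting each engine's results endpoint. A rough Python sketch of the query-building step (the endpoint parameters are assumptions; the real script drives them with curl):

```python
from urllib.parse import quote_plus

# Hypothetical result endpoints modeled on the engines SEcraper supports.
ENGINES = {
    "ask": "https://www.ask.com/web?q={q}&page={page}",
    "yahoo": "https://search.yahoo.com/search?p={q}&b={offset}",
    "bing": "https://www.bing.com/search?q={q}&first={offset}",
}

def build_urls(query: str, page: int = 1):
    """Return one encoded results URL per engine for the given page."""
    q = quote_plus(query)
    offset = (page - 1) * 10 + 1  # engines typically offset by result index
    return {
        "ask": ENGINES["ask"].format(q=q, page=page),
        "yahoo": ENGINES["yahoo"].format(q=q, offset=offset),
        "bing": ENGINES["bing"].format(q=q, offset=offset),
    }

urls = build_urls('site:example.com "login"')
print(urls["bing"])
```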


Acunetix v13 - Web Application Security Scanner


Acunetix, the pioneer in automated web application security software, has announced the release of Acunetix Version 13. The new release comes with an improved user interface and introduces innovations such as the SmartScan engine, malware detection functionality, comprehensive network scanning, proof-of-exploit, incremental scanning, and more. This release further strengthens the leading position of Acunetix on the web security market.

Check out what’s new in Acunetix v13. This brief presentation highlights the following features:
  • Full integration with a network scanner for comprehensive vulnerability management
  • Malware scanning using Windows Defender or ClamAV
  • The revolutionary SmartScan engine – find up to 80% of vulnerabilities in the first 20% of the scan
  • Incremental scans – you may scan only the parts of your web applications that changed
  • Improved user experience thanks to a new user interface – better-looking, more intuitive, with more options

Acunetix v13 introduces even more new and improved features, for example, proof of exploit, direct support for Angular, Vue, and React in DeepScan, more integration capabilities including GitLab, Bugzilla, and Mantis, and many more. Give it a try!


FockCache - Minimalized Test Cache Poisoning


FockCache - Minimalized Test Cache Poisoning

Detail For Cache Poisoning : https://portswigger.net/research/practical-web-cache-poisoning

FockCache
FockCache attempts cache poisoning by sending X-Forwarded-Host and X-Forwarded-Scheme headers to web pages.
After a successful result, it gives you a poisoned URL.
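The probe logic can be sketched as follows (hypothetical helper names; FockCache itself is written in Go): plant a unique canary via the header, then re-fetch the page without the header and check whether the cache still serves the canary.

```python
import secrets

# Headers FockCache tries; each gets a unique canary value.
HEADERS_TO_TRY = ["X-Forwarded-Host", "X-Forwarded-Scheme"]

def make_probe(header: str):
    """Build a unique canary value so reflections are unambiguous."""
    canary = "fock-" + secrets.token_hex(4)
    return {header: canary}, canary

def is_poisoned(cached_body: str, canary: str) -> bool:
    """If a plain re-fetch (no header sent) still contains the canary,
    the cache stored the poisoned response."""
    return canary in cached_body

probe_headers, canary = make_probe("X-Forwarded-Host")
# Simulated cached page that reflected the header into a resource URL:
page = f'<script src="https://{canary}/static/app.js"></script>'
print(is_poisoned(page, canary))  # True
```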

To be added soon:
1 - Page Param Checker
2 - Recursive Checking

Installation
1 - Install with installer.sh
chmod +x installer.sh
./installer.sh
2 - Install manual
go get github.com/briandowns/spinner
go get github.com/christophwitzko/go-curl
go run main.go --hostname victim.host
or
go build -o FockCache main.go

Run
./FockCache --hostname victim.host


InjuredAndroid - A Vulnerable Android Application That Shows Simple Examples Of Vulnerabilities In A CTF Style


A vulnerable Android application with ctf examples based on bug bounty findings, exploitation concepts, and pure creativity.

Setup for a physical device
  1. Download injuredandroid.apk from Github
  2. Enable USB debugging on your Android test phone.
  3. Connect your phone and your pc with a usb cable.
  4. Install via adb. adb install injuredandroid.apk. Note: You need to use the absolute path to the .apk file or be in the same directory.

Setup for an Android Emulator using Android Studio
  1. Download the apk file.
  2. Start the emulator from Android Studio (I recommend downloading an emulator with Google APIs so root adb can be enabled).
  3. Drag and drop the .apk file on the emulator and injuredandroid.apk will install.

Tips and CTF Overview
Decompiling the Android app is highly recommended.
  • XSSTEST is just for fun and to raise awareness on how WebViews can be made vulnerable to XSS.
  • The login flags just need the flag submitted.
  • The flags without a submit that demonstrate concepts will automatically register in the "Flags Overview" Activity.
  • The last two flags don't register because there currently isn't a remote verification method (I plan to change this in a future update). This was done to prevent using previous flag methods to skip the exploitation techniques.
  • There is one flag with a Pentesterlab 1-month gift key. The key is stored in a self-destructing note: after it's read, do not close the browser tab before copying the URL.
  • The exclamatory buttons on the bottom right will give users up to three tips for each flag.
Good luck and have fun! :D


Netdata - Real-time Performance Monitoring


Netdata is distributed, real-time, performance and health monitoring for systems and applications. It is a highly-optimized monitoring agent you install on all your systems and containers.
Netdata provides unparalleled insights, in real-time, into everything happening on the systems it runs on (including web servers, databases, applications), using highly interactive web dashboards. It can run autonomously, without any third-party components, or it can be integrated into existing monitoring toolchains (Prometheus, Graphite, OpenTSDB, Kafka, Grafana, and more).
Netdata is fast and efficient, designed to permanently run on all systems (physical& virtual servers, containers, IoT devices), without disrupting their core function.
Netdata is free, open-source software. It currently runs on Linux, FreeBSD, and macOS, along with other systems derived from them, and on container platforms such as Kubernetes and Docker.
Netdata is not hosted by the CNCF but is the 3rd most starred open-source project in the Cloud Native Computing Foundation (CNCF) landscape.
Want to see Netdata live? Check out any of our live demos.

User base
Netdata is used by hundreds of thousands of users all over the world. Check our GitHub watchers list. You will find people working for Amazon, Atos, Baidu, Cisco Systems, Citrix, Deutsche Telekom, DigitalOcean, Elastic, EPAM Systems, Ericsson, Google, Groupon, Hortonworks, HP, Huawei, IBM, Microsoft, NewRelic, Nvidia, Red Hat, SAP, Selectel, TicketMaster, Vimeo, and many more!


Pytm - A Pythonic Framework For Threat Modeling


Define your system in Python using the elements and properties described in the pytm framework. Based on your definition, pytm can generate a Data Flow Diagram (DFD), a Sequence Diagram, and, most important of all, threats to your system.

Requirements
  • Linux/MacOS
  • Python 3.x
  • Graphviz package
  • Java (OpenJDK 10 or 11)
  • plantuml.jar

Usage
tm.py [-h] [--debug] [--dfd] [--report REPORT] [--exclude EXCLUDE] [--seq] [--list] [--describe DESCRIBE]

optional arguments:
  -h, --help           show this help message and exit
  --debug              print debug messages
  --dfd                output DFD (default)
  --report REPORT      output report using the named template file (sample template file is under docs/template.md)
  --exclude EXCLUDE    specify threat IDs to be ignored
  --seq                output sequential diagram
  --list               list all available threats
  --describe DESCRIBE  describe the properties available for a given element
Currently available elements are: TM, Element, Server, ExternalEntity, Datastore, Actor, Process, SetOfProcesses, Dataflow, Boundary and Lambda.
The available properties of an element can be listed by using --describe followed by the name of an element:
(pytm) ➜  pytm git:(master) ✗ ./tm.py --describe Element
Element
  OS
  check
  definesConnectionTimeout
  description
  dfd
  handlesResources
  implementsAuthenticationScheme
  implementsNonce
  inBoundary
  inScope
  isAdmin
  isHardened
  name
  onAWS
For the security practitioner, you may add new threats to the threatlib/threats.json file. Each threat is a JSON object, for example:
{
"SID":"INP01",
"target": ["Lambda","Process"],
"description": "Buffer Overflow via Environment Variables",
"details": "This attack pattern involves causing a buffer overflow through manipulation of environment variables. Once the attacker finds that they can modify an environment variable, they may try to overflow associated buffers. This attack leverages implicit trust often placed in environment variables.",
"Likelihood Of Attack": "High",
"severity": "High",
"condition": "target.usesEnvironmentVariables is True and target.sanitizesInput is False and target.checksInputBounds is False",
"prerequisites": "The application uses environment variables. An environment variable exposed to the user is vulnerable to a buffer overflow. The vulnerable environment variable uses untrusted data. Tainted data used in the environment variables is not properly validated. For instance, boundary checking is not done before copying the input data to a buffer.",
"mitigations": "Do not expose environment variables to the user. Do not use untrusted data in your environment variables. Use a language or compiler that performs automatic bounds checking. There are tools such as Sharefuzz [R.10.3], an environment variable fuzzer for Unix that supports loading a shared library. You can use Sharefuzz to determine if you are exposing an environment variable vulnerable to buffer overflow.",
"example": "Attack Example: Buffer Overflow in $HOME. A buffer overflow in sccw allows local users to gain root access via the $HOME environment variable. Attack Example: Buffer Overflow in TERM. A buffer overflow in the rlogin program involves its consumption of the TERM environment variable.",
"references": "https://capec.mitre.org/data/definitions/10.html, CVE-1999-0906, CVE-1999-0046, http://cwe.mitre.org/data/definitions/120.html, http://cwe.mitre.org/data/definitions/119.html, http://cwe.mitre.org/data/definitions/680.html"
}
CAVEAT
The threats.json file contains strings that are run through eval(). Make sure the file has correct permissions, or you risk an attacker changing the strings and causing you to run code on their behalf. The logic lives in the "condition", where members of "target" can be logically evaluated. Returning true means the rule generates a finding; otherwise it does not.
Diagrams are output as Dot and PlantUML.
The following is a sample tm.py file that describes a simple application where a User logs into the application and posts comments on the app. The app server stores those comments in the database. There is an AWS Lambda that periodically cleans the database.
#!/usr/bin/env python3

from pytm.pytm import TM, Server, Datastore, Dataflow, Boundary, Actor, Lambda

tm = TM("my test tm")
tm.description = "another test tm"

User_Web = Boundary("User/Web")
Web_DB = Boundary("Web/DB")

user = Actor("User")
user.inBoundary = User_Web

web = Server("Web Server")
web.OS = "CloudOS"
web.isHardened = True

db = Datastore("SQL Database (*)")
db.OS = "CentOS"
db.isHardened = False
db.inBoundary = Web_DB
db.isSql = True
db.inScope = False

my_lambda = Lambda("cleanDBevery6hours")
my_lambda.hasAccessControl = True
my_lambda.inBoundary = Web_DB

my_lambda_to_db = Dataflow(my_lambda, db, "(λ)Periodically cleans DB")
my_lambda_to_db.protocol = "SQL"
my_lambda_to_db.dstPort = 3306

user_to_web = Dataflow(user, web, "User enters comments (*)")
user_to_web.protocol = "HTTP"
user_to_web.dstPort = 80
user_to_web.data = 'Comments in HTML or Markdown'
user_to_web.order = 1

web_to_user = Dataflow(web, user, "Comments saved (*)")
web_to_user.protocol = "HTTP"
web_to_user.data = 'Ack of saving or error message, in JSON'
web_to_user.order = 2

web_to_db = Dataflow(web, db, "Insert query with comments")
web_to_db.protocol = "MySQL"
web_to_db.dstPort = 3306
web_to_db.data = 'MySQL insert statement, all literals'
web_to_db.order = 3

db_to_web = Dataflow(db, web, "Comments contents")
db_to_web.protocol = "MySQL"
db_to_web.data = 'Results of insert op'
db_to_web.order = 4

tm.process()
When the --dfd argument is passed to the above tm.py file, it writes Dot output to stdout, which is fed to Graphviz's dot to generate the Data Flow Diagram:
tm.py --dfd | dot -Tpng -o sample.png
The --seq argument similarly emits PlantUML (rendered with plantuml.jar) for a Sequence Diagram.
The diagrams and findings can be included in the template to create a final report. The templating format used in the report template is very simple:

# Threat Model Sample
***

## System Description

{tm.description}

## Dataflow Diagram

![Level 0 DFD](dfd.png)

## Dataflows

Name|From|To |Data|Protocol|Port
----|----|---|----|--------|----
{dataflows:repeat:{{item.name}}|{{item.source.name}}|{{item.sink.name}}|{{item.data}}|{{item.protocol}}|{{item.dstPort}}
}

## Findings

{findings:repeat:* {{item.description}} on element "{{item.target}}"
}
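To see how a threat's condition string can fire, here is a minimal sketch of eval()-based matching against a mock element (an illustration of the mechanism the CAVEAT above warns about, not pytm's internal code):

```python
from types import SimpleNamespace

# The INP01 condition from threatlib/threats.json, evaluated against a
# mock target element standing in for a pytm Lambda/Process.
condition = ("target.usesEnvironmentVariables is True and "
             "target.sanitizesInput is False and "
             "target.checksInputBounds is False")

target = SimpleNamespace(usesEnvironmentVariables=True,
                         sanitizesInput=False,
                         checksInputBounds=False)

# eval() runs the condition with "target" in scope; True means a finding.
finding = eval(condition, {"target": target})
print(finding)  # True
```

This is exactly why the file's permissions matter: anyone who can edit a "condition" string controls code that your run of tm.py will execute.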

Currently supported threats
INP01 - Buffer Overflow via Environment Variables
INP02 - Overflow Buffers
INP03 - Server Side Include (SSI) Injection
CR01 - Session Sidejacking
INP04 - HTTP Request Splitting
CR02 - Cross Site Tracing
INP05 - Command Line Execution through SQL Injection
INP06 - SQL Injection through SOAP Parameter Tampering
SC01 - JSON Hijacking (aka JavaScript Hijacking)
LB01 - API Manipulation
AA01 - Authentication Abuse/ByPass
DS01 - Excavation
DE01 - Interception
DE02 - Double Encoding
API01 - Exploit Test APIs
AC01 - Privilege Abuse
INP07 - Buffer Manipulation
AC02 - Shared Data Manipulation
DO01 - Flooding
HA01 - Path Traversal
AC03 - Subverting Environment Variable Values
DO02 - Excessive Allocation
DS02 - Try All Common Switches
INP08 - Format String Injection
INP09 - LDAP Injection
INP10 - Parameter Injection
INP11 - Relative Path Traversal
INP12 - Client-side Injection-induced Buffer Overflow
AC04 - XML Schema Poisoning
DO03 - XML Ping of the Death
AC05 - Content Spoofing
INP13 - Command Delimiters
INP14 - Input Data Manipulation
DE03 - Sniffing Attacks
CR03 - Dictionary-based Password Attack
API02 - Exploit Script-Based APIs
HA02 - White Box Reverse Engineering
DS03 - Footprinting
AC06 - Using Malicious Files
HA03 - Web Application Fingerprinting
SC02 - XSS Targeting Non-Script Elements
AC07 - Exploiting Incorrectly Configured Access Control Security Levels
INP15 - IMAP/SMTP Command Injection
HA04 - Reverse Engineering
SC03 - Embedding Scripts within Scripts
INP16 - PHP Remote File Inclusion
AA02 - Principal Spoof
CR04 - Session Credential Falsification through Forging
DO04 - XML Entity Expansion
DS04 - XSS Targeting Error Pages
SC04 - XSS Using Alternate Syntax
CR05 - Encryption Brute Forcing
AC08 - Manipulate Registry Information
DS05 - Lifting Sensitive Data Embedded in Cache


IPv6Tools - A Robust Modular Framework That Enables The Ability To Visually Audit An IPv6 Enabled Network

The IPv6Tools framework is a robust set of modules and plugins that allow a user to audit an IPv6 enabled network. The built-in modules support enumeration of IPv6 features such as ICMPv6 and Multicast Listener Discovery (MLD). In addition, the framework also supports enumeration of Upper Layer Protocols (ULP) such as multicast DNS (mDNS) and Link-Local Multicast Name Resolution (LLMNR). Users can easily expand the capability of the framework by creating plugins and modules in the Python language.

Requirements
  • python 2.7
  • pip
  • npm [development only]

Installation

Standard
[Optional] Use a virtualenv for installation: virtualenv venv && source venv/bin/activate
  1. git clone http://github.com/apg-intel/ipv6tools.git
  2. sudo pip install -r requirements.txt

Development
  1. git clone http://github.com/apg-intel/ipv6tools.git
  2. git checkout dev
  3. npm run setup

Usage

Standard
  1. sudo python app.py
  2. Navigate to http://localhost:8080 in a web browser

Development
  1. Run $ npm run serve
  2. In a separate terminal, run npm run dev
  3. Navigate to http://localhost:8081 in a web browser

CLI
TODO

Modules
Modules are classes that allow interaction with individual nodes or all nodes. These show up as a right click option on each node, or as a button below the graph.

Included Modules
Included in the project are a couple of modules to help validate your network, as well as use as examples for your own modules.
  • poisonLLMNR - Link-Local Multicast Name Resolution is the successor of NBT-NS, which allows local nodes to resolve names and IP addresses. Enabling this module poisons LLMNR queries to all nodes on the local link.
  • CVE-2016-1879 - This CVE is a vulnerability in SCTP that affects FreeBSD 9.3, 10.1 and 10.2. Enabling this module will launch a crafted ICMPv6 packet and potentially cause a DoS (assertion failure or NULL pointer dereference and kernel panic) on a single node.

Custom Modules
All modules are located in /modules and are automatically loaded when starting the server. Included in /modules is a file called template.py. This file contains the class that all modules must extend in order to display correctly and communicate with the webpage.
Use this template to build a custom module
from template import Template

class IPv6Module(Template):

    def __init__(self, socketio, namespace):
        super(IPv6Module, self).__init__(socketio, namespace)
        self.modname = "CVE-2016-1879"
        self.menu_text = "FreeBSD IPv6 DoS"
        self.actions = [
            {
                "title": "FreeBSD IPv6 DoS",  # name that's displayed on the buttons/menu
                "action": "action",           # method name to call
                "target": True                # set to True to display it in the right-click menu
            }
        ]

    def action(self, target=None):
        # send a log msg
        self.socket_log('Running DoS on ' + target['ip'])

        # do stuff, etc

        # merge results with main result set
        listOfDicts = [{'ip': '::1', 'device_name': 'test'}]
        self.module_merge(listOfDicts)

Known Issues
  • Untested on large networks
  • Any stack traces mentioning dnet or dumbnet - follow the instructions below.
  • Some operating systems may require the libpcap headers. See notes below.

Installing libdnet
git clone https://github.com/dugsong/libdnet.git
cd libdnet
./configure && make
sudo make install
cd python
python setup.py install

libpcap headers in Ubuntu
sudo apt install libpcap-dev



XSS-Freak - An XSS Scanner Fully Written In Python3 From Scratch


XSS-Freak is an XSS scanner fully written in Python 3 from scratch. It is one of its kind, since it crawls the website for all possible links and directories to expand its attack scope. It then searches them for input tags and launches a bunch of XSS payloads. If an input is not sanitized and is vulnerable to XSS attacks, the tool will discover it in seconds.

Requirements:
  • High Internet Connection Speed.
  • A New PC that is capable of handling a high amount of threads concurrently.
  • Luck Of course.

How Does it Work:
First, you supply a target website to scan and a list containing different XSS payloads (preferably a small number of effective, rarely detected payloads). The tool then crawls the main website (index page) for possible links and directories, crawls any found directories for additional links that weren't found in the initial crawl, and adds them to its attack scope. Next, it crawls all links found both in the initial scan and in the directories for HTML inputs, and adds all found HTML inputs to its attack scope. Finally, it attacks all HTML inputs with the XSS payloads from the user-provided list. If an HTML input is not sanitized and filtered properly, the script will instantly detect it and print out the vulnerable parameter.
Note: the script is too powerful for old computers.
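The crawl-and-collect step described above can be sketched with Python's standard library (an illustration of the idea, not XSS-Freak's actual code):

```python
from html.parser import HTMLParser

class LinkAndInputFinder(HTMLParser):
    """Collect hrefs (crawl targets) and <input> names (attack surface)."""
    def __init__(self):
        super().__init__()
        self.links, self.inputs = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "input":
            self.inputs.append(attrs.get("name"))

page = '''<a href="/search.php">Search</a>
<form action="/search.php"><input name="q" type="text"></form>'''

finder = LinkAndInputFinder()
finder.feed(page)
print(finder.links)   # ['/search.php']
print(finder.inputs)  # ['q']
```

Each discovered link would be fetched and fed through the same parser, and each collected input name would then receive the payloads from the user's list.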

Advantages:
  • Supports multithreading for efficiency and faster processing.
  • One of its kind.
  • Ability to crawl entire sites, not just a specific webpage.
  • Versatile.
Disadvantages:
  • Not supported on phones, due to the high hardware demands.
  • Requires a high-speed internet connection to work well; otherwise you will get errors or scans will take too much time.
  • Requires medium-to-high-end hardware, since it manages a high number of threads; on old hardware the script may cause the computer to lag or crash, so take care.
HOPE YOU LIKE IT.


Agente - Distributed Simple And Robust Release Management And Monitoring System


Distributed simple and robust release management and monitoring system.
This project is ongoing work.

Road map
  • Core system
  • First worker agent
  • Management dashboard
  • Jenkins and similar CI tool extensions
  • Management dashboard
  • First master agent
  • All relevant third-party system integrations (version control, CI, database, queuing etc.)

Requirements
  • Go > 1.11
  • Redis or RabbitMQ
  • PostgreSQL

Docker Environment
For PostgreSQL
docker run --name agente_PostgreSQL -e POSTGRES_PASSWORD=123456 -e POSTGRES_USER=agente -p 5432:5432 -d postgres

docker exec agente_PostgreSQL psql --username=agente -c 'create database agente_dev;'
For RabbitMQ
docker run --hostname my-rabbit --name agente_RabbitMQ -e RABBITMQ_DEFAULT_USER=local -e RABBITMQ_DEFAULT_PASS=local -p 5672:5672 -d rabbitmq:3-management

Development
git clone -b develop https://github.com/streetbyters/agente

go mod vendor

# Development Mode
go run ./cmd -mode dev -migrate -reset
go run ./cmd -mode dev

# Test Mode
go run ./cmd -mode test -migrate -reset
go run ./cmd -mode test

Build
We will release firstly Agente for Linux environment.
See detail


KawaiiDeauther - Jam All Wifi Clients/Routers


Kawaii Deauther is a pentest toolkit whose goal is to jam WiFi clients/routers and spam many fake APs for testing purposes.

Dependencies
  • macchanger
  • mdk3
  • nmcli

Installation
Dependencies will be automatically installed.
$ git clone https://github.com/aryanrtm/KawaiiDeauther
$ cd KawaiiDeauther && sudo ./install.sh
$ sudo KawaiiDeauther.sh

Demo

Tested on
Operating System | Version
Linux Mint | 19.2 Tina

Features
  • Takedown with SSID
  • Takedown all channels
  • Spam many fake AP

Contact
  • Twitter: @4wsec_


Hashcracker - Python Hash Cracker



Supported hashing algorithms: SHA512, SHA256, SHA384, SHA1, MD5
Features: auto detection of hashing algorithm based on length (not recommended), bruteforce, password list

Arguments:
type: hash algorithm (must be one of the supported hashing algorithms mentioned above or AUTO if you want to use automatic algorithm detection)
hash: can be either the hashed password, or a text file containing a list of hashes to crack (hashlist must be activated if hash is a text file containing multiple hashes)
mode: list or bruteforce
pwlist: list of passwords to compare against a single hash or a list of hashes
range: bruteforce string length range (default: 8-11)
hashlist: no parameters required for this argument, if hashlist is used, then hash should be a text file with more than 1 hash
chars: string of characters to pick from to generate random strings for bruteforce (default value is: abcdefghijklmnopqrstuvwxyzABCDEFGHJIKLMNOPQRSTUVWXYZ0123456789)
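The length-based auto detection and list mode can be sketched in a few lines of Python (an illustration, not the tool's actual code). Auto detection works because each supported algorithm has a distinct hex-digest length; it is "not recommended" because other algorithms can share a length (e.g. MD5 and NTLM are both 32 hex chars).

```python
import hashlib

# Hex-digest length -> supported algorithm
LENGTH_TO_ALGO = {32: "md5", 40: "sha1", 64: "sha256",
                  96: "sha384", 128: "sha512"}

def crack(target_hash: str, wordlist):
    """List mode: hash each candidate and compare to the target."""
    algo = LENGTH_TO_ALGO[len(target_hash)]
    for word in wordlist:
        if hashlib.new(algo, word.encode()).hexdigest() == target_hash:
            return word
    return None

target = hashlib.sha256(b"hunter2").hexdigest()  # 64 chars -> sha256
print(crack(target, ["letmein", "hunter2", "password"]))  # hunter2
```

Bruteforce mode is the same comparison loop, but the candidates are generated from the chars set over the given length range instead of read from a list.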
Examples:
Cracking a single hash with a password list:
hashcracker.py SHA256 11a1162b984fef626ecc27c659a8b0eead5248ca867a6a87bea72f8a8706109d -mode list -pwlist passwordlist.txt

Cracking a single hash with bruteforce:
hashcracker.py SHA256 11a1162b984fef626ecc27c659a8b0eead5248ca867a6a87bea72f8a8706109d -mode bruteforce -range 6 11 -chars abcdefghijklmnopqrstuvwxyz0123456789$#@

Cracking a list of hashes with a password list:
hashcracker.py MD5 list_of_hashes.txt -mode list -pwlist passwordlist.txt -hashlist

Cracking a list of hashes with bruteforce:
hashcracker.py MD5 list_of_hashes.txt -mode bruteforce -hashlist -range 6 11 -chars ABCDEFGHJIKLMNOPQRSTUVWXYZ0123456789



OpenRelayMagic - Tool To Find SMTP Servers Vulnerable To Open Relay


╔═╗┌─┐┌─┐┌┐┌╦═╗┌─┐┬  ┌─┐┬ ┬╔╦╗┌─┐┌─┐┬┌─┐
║ ║├─┘├┤ │││╠╦╝├┤ │ ├─┤└┬┘║║║├─┤│ ┬││
╚═╝┴ └─┘┘└┘╩╚═└─┘┴─┘┴ ┴ ┴ ╩ ╩┴ ┴└─┘┴└─┘
Tool to test for vulnerable open relays on SMTP servers

Features
  • Check a single target or a list of domains
  • Ports 587 and 465 implemented
  • Multithreaded
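The core open-relay test is asking the server to accept mail from one external domain to another: a 250/251 reply to RCPT TO for a recipient the server is not responsible for indicates an open relay. A hedged Python sketch of that check (hostnames and helper names are placeholders, not the tool's code):

```python
import smtplib

def rcpt_accepted(code: int) -> bool:
    """250/251 mean the server accepted a recipient it should not relay
    for; 550/554 are the expected rejections from a closed relay."""
    return code in (250, 251)

def check_open_relay(host: str, port: int = 25) -> bool:
    """Probe one SMTP server (sketch; never completes a message)."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo()
        smtp.mail("probe@external-a.example")
        code, _ = smtp.rcpt("victim@external-b.example")
        return rcpt_accepted(code)

# Interpreting typical RCPT reply codes without touching the network:
print(rcpt_accepted(250), rcpt_accepted(554))  # True False
```

For ports 587 and 465, the same exchange runs after STARTTLS or over an implicit-TLS connection (smtplib.SMTP_SSL), respectively.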

Aduket - Straight-forward HTTP Client Testing, Assertions Included


Straight-forward HTTP client testing, assertions included!
Simple httptest.Server wrapper with a little request-recorder spice on it. No special DSL, no complex API to learn. Just create a server, fire your request like a Hadouken, then assert it.

TODO
  • Add example usages
  • Add docs
  • Add response headers to NewServer
  • Add request header assertions
  • Add multiple request assertion logic
  • Extract Request().Body to requestRecorder.Body binding logic to CustomBinder
  • Add NewServerWithTimeout for testing API timeouts
  • http.RoundTripper interface can be implemented to mock arbitrary URLs
  • A Builder can be written to NewServer for ease of use


CTFTOOL - Interactive CTF Exploration Tool


An Interactive CTF Exploration Tool
This is ctftool, an interactive command line tool to experiment with CTF, a little-known protocol used on Windows to implement Text Services. This might be useful for studying Windows internals, debugging complex issues with Text Input Processors and analyzing Windows security.
It is possible to write simple scripts with ctftool for automating interaction with CTF clients or servers, or perform simple fuzzing.


Background
There is a blog post that accompanies the release of this tool available here.
https://googleprojectzero.blogspot.com/2019/08/down-rabbit-hole.html

Usage
ctftool has been tested on Windows 7, Windows 8 and Windows 10. Both 32-bit and x64 versions are supported, but x64 has been tested more extensively.
There is online help for most commands, simply type help to see a list of commands, and help <command> to see detailed help for a particular command.
$ ./ctftool.exe
An interactive ctf exploration tool by @taviso.
Type "help" for available commands.
Most commands require a connection, see "help connect".
ctf> help
Type `help <command>` for help with a specific command.
Any line beginning with # is considered a comment.

help - List available commands.
exit - Exit the shell.
connect - Connect to CTF ALPC Port.
info - Query server information.
scan - Enumerate connected clients.
callstub - Ask a client to invoke a function.
createstub - Ask a client to instantiate CLSID.
hijack - Attempt to hijack an ALPC server path.
sendinput - Send keystrokes to thread.
setarg - Marshal a parameter.
getarg - Unmarshal a parameter.
wait - Wait for a process and set it as the default thread.
thread - Set the default thread.
sleep - Sleep for specified milliseconds.
forget - Forget all known stubs.
stack - Print the last leaked stack ptr.
marshal - Send command with marshalled parameters.
proxy - Send command with proxy parameters.
call - Send command without appended data.
window - Create and register a message window.
patch - Patch a marshalled parameter.
module - Print the base address of a module.
module64 - Print the base address of a 64bit module.
editarg - Change the type of a marshalled parameter.
symbol - Lookup a symbol offset from ImageBase.
set - Change or dump various ctftool parameters.
show - Show the value of special variables you can use.
lock - Lock the workstation, switch to Winlogon desktop.
repeat - Repeat a command multiple times.
run - Run a command.
script - Source a script file.
print - Print a string.
consent - Invoke the UAC consent dialog.
reg - Lookup a DWORD in the registry.
gadget - Find the offset of a pattern in a file.
section - Lookup property of PE section.
Most commands require a connection, see "help connect".
ctf>
The first thing you will want to do is connect to a session, and see which clients are connected.
ctf> connect
The ctf server port is located at \BaseNamedObjects\msctf.serverDefault1
NtAlpcConnectPort("\BaseNamedObjects\msctf.serverDefault1") => 0
Connected to CTF server@\BaseNamedObjects\msctf.serverDefault1, Handle 00000264
ctf> scan
Client 0, Tid 3400 (Flags 0x08, Hwnd 00000D48, Pid 8696, explorer.exe)
Client 1, Tid 7692 (Flags 0x08, Hwnd 00001E0C, Pid 8696, explorer.exe)
Client 2, Tid 9424 (Flags 0x0c, Hwnd 000024D0, Pid 9344, SearchUI.exe)
Client 3, Tid 12068 (Flags 0x08, Hwnd 00002F24, Pid 12156, PROCEXP64.exe)
Client 4, Tid 9740 (Flags 0000, Hwnd 0000260C, Pid 3840, ctfmon.exe)
You can then experiment by sending and receiving commands to the server, or any of the connected clients.

Building
If you don't want to build it yourself, check out the releases tab
I used GNU make and Visual Studio 2019 to develop ctftool. Only 32-bit builds are supported, as this allows the tool to run on x86 and x64 Windows.
If all the dependencies are installed, just typing make in a developer command prompt should be enough.
I use the "Build Tools" variant of Visual Studio, and the only components I have selected are MSVC, MSBuild, CMake and the SDK.
This project uses submodules for some of the dependencies, be sure that you're using a command like this to fetch all the required code.
git submodule update --init --recursive

Exploit
The examples only work on Windows 10 x64. All platforms and versions since Windows XP are affected, but no PoC is currently implemented.
This tool was used to discover many critical security problems with the CTF protocol that have existed for decades.
If you just want to test an exploit on Windows 10 x64 1903, run or double-click ctftool.exe and enter this command:
An interactive ctf exploration tool by @taviso.
Type "help" for available commands.
Most commands require a connection, see "help connect".
ctf> script .\scripts\ctf-consent-system.ctf
This will wait for the UAC dialog to appear, compromise it and start a shell.
In fact, the exploit code is split into two stages that you can use independently. For example, you might want to compromise a process belonging to a user on a different session using the optional parameters to connect.
Most CTF clients can be compromised, as the kernel forces applications that draw windows to load the vulnerable library.
Simply connect to a session, select a client to compromise (use the scan and thread commands, or just wait), then:
ctf> script .\scripts\ctf-exploit-common-win10.ctf

Exploitation Notes
Building a CFG jump chain that worked on the majority of CTF clients was quite challenging. The final exploit has two primary components: an arbitrary write primitive, and setting up our registers to call LoadLibrary().
You can use dumpbin /headers /loadconfig to dump the whitelisted branch targets.

Arbitrary Write
I need an arbitrary write gadget to create objects in a predictable location. The best usable gadget I was able to find was an arbitrary dword decrement in msvcrt!_init_time.
This means that rather than just setting the values we want, we have to keep decrementing until the LSB reaches the value we want. This is a lot of work, but we never have to do more than (2^8 - 1) * len decrements.
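As a back-of-the-envelope illustration (hypothetical code, not taken from ctftool), the cost of rewriting a buffer with a "decrement until the low byte matches" primitive can be modelled like this:

```python
# Hypothetical model (not ctftool code) of a write primitive built from a
# decrement gadget: each byte is decremented until it wraps around (mod 256)
# to the target value.

def decrements_needed(current: int, target: int) -> int:
    """Single-byte decrements required to turn `current` into `target`."""
    return (current - target) % 256

def total_decrements(current_bytes: bytes, target_bytes: bytes) -> int:
    """Total decrements to rewrite a buffer byte by byte."""
    return sum(decrements_needed(c, t)
               for c, t in zip(current_bytes, target_bytes))
```

In the worst case each byte costs 2^8 - 1 = 255 decrements, which is where the (2^8 - 1) * len bound comes from.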


Using this primitive, I build an object like this in some unused slack space in the kernel32 .data section. It needs to be part of an image so that I can predict where it will be mapped, as image randomization is per-boot on Windows.


There were (of course) lots of arbitrary write gadgets; the problem was regaining control of execution after the write. This proved quite challenging, and that's the reason I was stuck with a dword decrement instead of something simpler.
MSCTF catches all exceptions, so the challenge was finding an arbitrary write that didn't mess up the stack so that SEH survived, or crashed really quickly without doing any damage.
The msvcrt!_init_time gadget was the best I could find, within a few instructions it dereferences NULL without corrupting any more memory. This means we can repeat it ad infinitum.

Redirecting Execution
I found two useful gadgets for adjusting registers. The first was:
combase!CStdProxyBuffer_CF_AddRef:
mov rcx,qword ptr [rcx-38h]
mov rax,qword ptr [rcx]
mov rax,qword ptr [rax+8]
jmp qword ptr [combase!__guard_dispatch_icall_fptr]
And the second was:
MSCTF!CCompartmentEventSink::OnChange:
mov rax,qword ptr [rcx+30h]
mov rcx,qword ptr [rcx+38h]
jmp qword ptr [MSCTF!_guard_dispatch_icall_fptr]
By combining these two gadgets with the object we formed with our write gadget, we can redirect execution to kernel32!LoadLibraryA by bouncing between them.
This was complicated, but the jump sequence works like this:


If you're interested, I recommend watching it in a debugger. Note that you will need to use the command sxd av and sxd bpe or the debugger will stop for every write!

Edit Session Attacks
Apart from memory corruption, a major vulnerability class exposed by CTF are edit session attacks. Normally, an unprivileged process (for example, low integrity) would not be permitted to send input or read data from a high privileged process. This security boundary is called UIPI, User Interface Privilege Isolation.
CTF breaks these assumptions, and allows unprivileged processes to send input to privileged processes.
There are some requirements for this attack to work; as far as I'm aware, it will only work if you have a display language installed that uses an OoP TIP (out-of-process text input processor). Users with input languages that use IMEs (Chinese, Japanese, Korean, and so on) and users with a11y tools fall into this category.
Example attacks include...
  • Sending commands to an elevated command window.
  • Reading passwords out of dialogs or the login screen.
  • Escaping IL/AppContainer sandboxes by sending input to unsandboxed windows.
There is an example script in the scripts directory that will send input to a notepad window to demonstrate how edit sessions work.


Monitor Hijacking
Because there is no authentication involved between clients and servers in the CTF protocol, an attacker with the necessary privileges to write to \BaseNamedObjects can create the CTF ALPC port and pretend to be the monitor.
This allows any and all restrictions enforced by the monitor to be bypassed.
If you want to experiment with this attack, try the hijack command in ctftool.
An interactive ctf exploration tool by @taviso.
Type "help" for available commands.
ctf> hijack Default 1
NtAlpcCreatePort("\BaseNamedObjects\msctf.serverDefault1") => 0 00000218
NtAlpcSendWaitReceivePort("\BaseNamedObjects\msctf.serverDefault1") => 0 00000218
000000: 18 00 30 00 0a 20 00 00 00 11 00 00 44 11 00 00 ..0.. ......D...
000010: a4 86 00 00 b7 66 b8 00 00 11 00 00 44 11 00 00 .....f......D...
000020: e7 12 01 00 0c 00 00 00 80 01 02 00 20 10 d6 05 ............ ...
A message received
ProcessID: 4352, SearchUI.exe
ThreadId: 4420
WindowID: 00020180
NtAlpcSendWaitReceivePort("\BaseNamedObjects\msctf.serverDefault1") => 0 00000218
000000: 18 00 30 00 0a 20 00 00 ac 0f 00 00 0c 03 00 00 ..0.. ..........
000010: ec 79 00 00 fa 66 b8 00 ac 0f 00 00 0c 03 00 00 .y...f..........
000020: 12 04 01 00 08 00 00 00 10 01 01 00 00 00 00 00 ................
A message received
ProcessID: 4012, explorer.exe
ThreadId: 780
WindowID: 00010110
NtAlpcSendWaitReceivePort("\BaseNamedObjects\msctf.serverDefault1") => 0 00000218
000000: 18 00 30 00 0a 20 00 00 ac 0f 00 00 0c 03 00 00 ..0.. ..........
000010: fc 8a 00 00 2a 67 b8 00 ac 0f 00 00 0c 03 00 00 ....*g..........
000020: 12 04 01 00 08 00 00 00 10 01 01 00 58 00 00 00 ............X...
A message received
ProcessID: 4012, explorer.exe
ThreadId: 780
...

Cross Session Attacks
There is no session isolation in the CTF protocol, any process can connect to any CTF server. For example, a Terminal Services user can interact with the processes of any other user, even the Administrator.
The connect command in ctftool supports connecting to non-default sessions if you want to experiment with this attack.
An interactive ctf exploration tool by @taviso.
Type "help" for available commands.
Most commands require a connection, see "help connect".
ctf> help connect
Connect to CTF ALPC Port.

Usage: connect [DESKTOPNAME SESSIONID]
Without any parameters, connect to the ctf monitor for the current
desktop and session. All subsequent commands will use this connection
for communicating with the ctf monitor.

If a connection is already open, the existing connection is closed first.

If DESKTOPNAME and SESSIONID are specified, a connection to the ctf monitor
for another desktop and session is opened, if it exists.
If the specified port does not exist, wait until it does exist. This is
so that you can wait for a session that hasn't started
yet in a script.
Examples
Connect to the monitor for current desktop
ctf> connect
Connect to a specific desktop and session.
ctf> connect Default 1
Most commands require a connection, see "help connect".

Status
At the time of writing, it is unknown how Microsoft will change the CTF protocol in response to the numerous design flaws this tool helped expose.
For that reason, consider this tool to be in proof-of-concept state.

Supported Versions and Platforms
All versions of Windows since Windows XP use CTF, on all supported platforms.
While not part of the base system until XP, versions as early as Windows 98 and NT4 would use CTF if you installed Microsoft Office.
ctftool supports Windows 7 and later on x86 and x64, but earlier versions and other platforms could be supported, and contributions would be appreciated.

Acronym
Microsoft doesn't document what CTF stands for; it's not explained in any of the Text Services documentation, SDK samples, symbol names, header files, or anywhere else. My theory is it's from CTextFramework, what you might name the class in Hungarian notation.
There are some websites that claim ctfmon has something to do with Clear Type Fonts or the Azure Collaborative Translation Framework. They're mistaken.
Update: Jake Nelson finds evidence for "Common Text Framework"

Authors
Tavis Ormandy taviso@gmail.com

License
All original code is Apache 2.0, See LICENSE file for details.
The following components are imported third party projects.
  • pe-parse, by Andrew Ruef et al.
    • pe-parse is used to implement a GetProcAddress() for 64-bit modules from a 32-bit process. This is used in the symbol command, and allows the same binary to work on x64 and x86.
  • wineditline, by Paolo Tosco.
    • wineditline is used to implement user friendly command-line input and history editing.
  • dynamorio, by Derek Bruening et al.
    • I borrowed some of the prototypes and type definitions from DR.
  • ntdll.h, by Ladislav Zezula.
    • Ladislav collected some structure definitions and prototypes from various WDK, DDK and SDK releases into one convenient file.



BurpSuite Random User-Agents - Burp Suite Extension For Generate A Random User-Agents


A Burp Suite extension to help pentesters to generate a random user-agent. This extension has been developed by M'hamed (@m4ll0k) Outaadi.

Installation
Download the jar file from the releases tab or compile the Java code:
$ git clone https://github.com/m4ll0k/BurpSuite-Random_UserAgent.git random-useragents
$ cd random-useragents/src/main/java
$ javac burp/*.java
$ jar cf random-useragents.jar burp/*.class
Installation video


Nray - Distributed Port Scanner


Nray is a free, platform and architecture independent port and application layer scanner. Apart from regular targets (lists of hosts/networks), it supports dynamic target selection based on sources like certificate transparency logs or LDAP. Furthermore, nray allows running in a distributed manner to speed up scans and to perform scans from different vantage points. Event-based results allow further processing of information during the scan, e.g. using tools like jq or full-blown data analysis platforms like elasticsearch or Splunk.
This is the main repository where nray is developed. Downloads are here. If you are looking for user documentation, have a look at the project homepage. For information related to developing and contributing to nray, continue reading.
Nray is written in pure Go and its versioning follows the semantic versioning model. The development follows Vincent Driessen's "A successful Git branching model", therefore we try to keep the master branch stable and in line with releases, whereas development happens on the development branch as well as branches derived from there.

Building
Care was taken to mostly stay in line with Go's build system, meaning that the project can be built with a plain go build. Nray is written in pure Go and care was taken to select only dependencies that also fulfill this requirement, therefore a standard Go installation (plus git) is enough to build nray on and for any supported platform.

With makefile
Nevertheless, there is a makefile that is supposed to be used for building production versions (make release) - it ensures that no C dependencies are linked in and symbols are stripped from binaries to save space. Also, binaries for most common operating systems are created automatically. A call to make will build a local development version, tailored to your current OS and architecture with C libraries and Go's race detector linked in.

Without makefile
Simply run go build - in case cross compiling is desired, the GOOS and GOARCH parameters control the target OS and architecture. For nodes, it is possible to inject the server location and port directly into the binary: go build -ldflags "-X main.server=10.0.0.1 -X main.port=8601". To get smaller binaries, strip unnecessary symbols by adding -ldflags="-s -w" when calling go build. If you need to rebuild the protobuf schemas (this is not required unless you change the wire protocol!), run make create-schemas (which requires the protobuf compiler on your system).

Contributing and Development
Just grab the code and fix stuff that annoys you or hack in new awesome features! Every contribution is welcome and the goal is to make nray an awesome project for users and contributors!
Your code should pass standard checks performed by go vet and go lint. I recommend using Visual Studio Code with its Go support enabled, it is a really good IDE that brings such issues up early. Nray is always developed against the latest Go release, so if you are having trouble building nray, check if you have the latest go version installed.


Fuzzowski - The Network Protocol Fuzzer That We Will Want To Use

The idea is to be the Network Protocol Fuzzer that we will want to use.
The aim of this tool is to assist during the whole process of fuzzing a network protocol, allowing you to define the communications, helping to identify the "suspects" of crashing a service, and much more.

Last Changes
[16/12/2019]
  • Data Generation modules fully recoded (Primitives, Blocks, Requests)
    • Improved Strings fuzzing libraries, allowing also for custom lists, files and callback commands
    • Variable data type, which takes a variable set by the session, the user or a Response
  • Session fully recoded. Now it is based on TestCases, which contains all the information needed to perform the request, check the response, store data such as errors received, etc.
  • Responses added. Now you can define responses with s_response(). This allows checking the response from the server, setting variables and even performing additional tests on the response to check if something is wrong
  • Monitors now automatically mark TestCases as suspect if they fail
  • Added the IPP (Internet Printing Protocol) Fuzzer that we used to find several vulnerabilities in different printer brands during our printers research project (https://www.youtube.com/watch?v=3X-ZnlyGuWc&t=7s)

Features
  • Based on Sulley Fuzzer for data generation [https://github.com/OpenRCE/sulley]
  • Actually, forked BooFuzz (which is a fork of Sulley) [https://github.com/jtpereyda/boofuzz ]
  • Python3
  • Not random (finite number of possibilities)
  • Requires you to “create the packets” with types (Spike fuzzer style)
  • Also allows creating "Raw" packets from parameters, with injection points (quite useful for fuzzing simple protocols)
  • Has a nice console to pause, review and retest any suspect (prompt_toolkit ftw)
  • Allows skipping parameters that cause errors, automatically or with the console
  • Nice print formats for suspect packets (to know exactly what was fuzzed)
  • It saves PoCs as python scripts for you when you mark a test case as a crash
  • Monitor modules to gather information of the target, detecting odd behaviours and marking suspects
  • Restarter modules that will restart the target if the connection is lost (e.g. powering off and on a smart plug)

Protocols implemented
  • LPD (Line Printing Daemon): Fully implemented
  • IPP (Internet Printing Protocol): Partially implemented
  • BACnet (Building Automation and Control networks Protocol): Partially implemented
  • Modbus (ICS communication protocol): Partially implemented

Installation
virtualenv venv -p python3
source venv/bin/activate
pip install -r requirements.txt

Help
usage: python -m fuzzowski [-h] [-p {tcp,udp,ssl}] [-b BIND] [-st SEND_TIMEOUT]
[-rt RECV_TIMEOUT] [--sleep-time SLEEP_TIME] [-nc] [-tn]
[-nr] [-nrf] [-cr]
[--threshold-request CRASH_THRESHOLD_REQUEST]
[--threshold-element CRASH_THRESHOLD_ELEMENT]
[--ignore-aborted] [--ignore-reset] [--error-fuzz-issues]
[-c CALLBACK | --file FILENAME] -f
{cops,dhcp,ipp,lpd,netconf,telnet_cli,tftp,raw}
[-r FUZZ_REQUESTS [FUZZ_REQUESTS ...]]
[--restart module_name [args ...]]
[--restart-sleep RESTART_SLEEP_TIME]
[--monitors {IPPMon} [{IPPMon} ...]] [--path PATH]
[--document_url DOCUMENT_URL]
host port

[Fuzzowski ASCII-art logo: "Fuzzowski Network Fuzzer, 🄯 Fuzzers, inc."]

positional arguments:
host Destination Host
port Destination Port

optional arguments:
-h, --help show this help message and exit

Connection Options:
-p {tcp,udp,ssl}, --protocol {tcp,udp,ssl}
Protocol ( Default tcp)
-b BIND, --bind BIND Bind to port
-st SEND_TIMEOUT, --send_timeout SEND_TIMEOUT
Set send() timeout (Default 5s)
-rt RECV_TIMEOUT, --recv_timeout RECV_TIMEOUT
Set recv() timeout (Default 5s)
--sleep-time SLEEP_TIME
Sleep time between each test (Default 0)
-nc, --new-conns Open a new connection after each packet of the same test
-tn, --transmit-next-node
Transmit the next node in the graph of the fuzzed node

RECV() Options:
-nr, --no-recv Do not recv() in the socket after each send
-nrf, --no-recv-fuzz Do not recv() in the socket after sending a fuzzed request
-cr, --check-recv Check that data has been received in recv()

Crashes Options:
--threshold-request CRASH_THRESHOLD_REQUEST
Set the number of allowed crashes in a Request before skipping it (Default 9999)
--threshold-element CRASH_THRESHOLD_ELEMENT
Set the number of allowed crashes in a Primitive before skipping it (Default 3)
--ignore-aborted Ignore ECONNABORTED errors
--ignore-reset Ignore ECONNRESET errors
--error-fuzz-issues Log as error when there is any connection issue in the fuzzed node

Fuzz Options:
-c CALLBACK, --callback CALLBACK
Set a callback address to fuzz with callback generator instead of normal mutations
--file FILENAME Use contents of a file for fuzz mutations

Fuzzers:
-f {cops,dhcp,ipp,lpd,netconf,telnet_cli,tftp,raw}, --fuzz {cops,dhcp,ipp,lpd,netconf,telnet_cli,tftp,raw}
Available Protocols
-r FUZZ_REQUESTS [FUZZ_REQUESTS ...], --requests FUZZ_REQUESTS [FUZZ_REQUESTS ...]
Requests of the protocol to fuzz, default All
dhcp: [opt82]
ipp: [http_headers, get_printer_attribs, print_uri_message, send_uri, get_jobs, get_job_attribs]
lpd: [long_queue, short_queue, ctrl_file, data_file, remove_job]
telnet_cli: [commands]
tftp: [read]
raw: ['\x01string\n' '\x02request2\x00' ...]

Restart options:
--restart module_name [args ...]
Restarter Modules:
run: '<executable> [<argument> ...]' (Pass command and arguments within quotes, as only one argument)
smartplug: It will turn off and on the Smart Plug
teckin: <PLUG_IP>
--restart-sleep RESTART_SLEEP_TIME
Set sleep seconds after a crash before continue (Default 5)

Monitor options:
--monitors {IPPMon} [{IPPMon} ...], -m {IPPMon} [{IPPMon} ...]
Monitor Modules:
IPPMon: Sends a get-attributes IPP message to the target

Other Options:
--path PATH Set path when fuzzing HTTP based protocols (Default /)
--document_url DOCUMENT_URL
Set Document URL for print_uri

Examples
Fuzz the get_printer_attribs IPP operation with default options:
python -m fuzzowski printer1 631 -f ipp -r get_printer_attribs --restart smartplug


Use the raw feature of IPP to fuzz the finger protocol:
python -m fuzzowski printer 79 -f raw -r '{{root}}\n'


Use the raw feature of IPP to fuzz the finger protocol, but instead of using the predefined mutations, use a file:
python -m fuzzowski printer 79 -f raw -r '{{root}}\n' --file 'path/to/my/fuzzlist'


Manul - A Coverage-Guided Parallel Fuzzer For Open-Source And Blackbox Binaries On Windows, Linux And MacOS


Manul is a coverage-guided parallel fuzzer for open-source and black-box binaries on Windows, Linux and macOS (beta) written in pure Python.

Quick Start
pip3 install psutil
git clone https://github.com/mxmssh/manul
cd manul
mkdir in
mkdir out
echo "AAAAAA" > in/test
python3 manul.py -i in -o out -n 4 "linux/test_afl @@"

Installing Radamsa
sudo apt-get install gcc make git wget
git clone https://gitlab.com/akihe/radamsa.git && cd radamsa && make && sudo make install
There is no need to install radamsa on Windows, Manul is distributed with radamsa native library on this platform.

List of Public CVEs
CVE IDs                                          Product   Finder
CVE-2019-9631, CVE-2019-7310, CVE-2019-9959      Poppler   Maksim Shudrak
CVE-2018-17019, CVE-2018-16807, CVE-2019-12175   Bro/Zeek  Maksim Shudrak
If you managed to find a new bug using Manul, please contact me and I will add you to the list.

Dependencies
  1. psutil
  2. Python 2.7+ (will be deprecated after 1 Jan. 2020) or Python 3.7+ (preferred)

Coverage-guided fuzzing
Currently, Manul supports two types of instrumentation: AFL-based (afl-gcc, afl-clang and afl-clang-fast) and DBI.

Coverage-guided fuzzing (AFL instrumentation mode)
Instrument your target with afl-gcc or afl-clang-fast and Address Sanitizer (recommended for better results). For example:
CC=afl-gcc CXX=afl-g++ CFLAGS=-fsanitize=address CXXFLAGS=-fsanitize=address cmake <path_to_your_target>
make -j 8
USE_ASAN=1 CC=afl-clang-fast CXX=afl-clang-fast++ cmake <path_to_your_target>
make -j 8
See these instructions for more details.

Coverage-guided fuzzing (DBI mode)
You don't need to instrument your target in this mode but you need to download the latest version of DynamoRIO framework for Windows or Linux. The working version of Intel PIN is provided with Manul. You can find it in the dbi_clients_src/pin/pin-3.6-97554-g31f0a167d-gcc-linux folder.
Manul is distributed with x86/x64 precompiled clients for Linux and Windows. You can find them in the following folders:
linux/dbi_32|dbi_64/afl-pin.so (Intel PIN client)
linux/dbi_32|dbi_64/libbinafl.so (DynamoRIO client)
win/dbi_32|dbi_64/binafl.dll
Unfortunately, DynamoRIO is not officially supported on OS X. Intel PIN client on OS X is not yet ported.

Using DynamoRIO to fuzz black-box binaries
You can find DynamoRIO release packages at DynamoRIO download page. The supported version of DynamoRIO is 7.0.0-RC1 (see the next section if you need the latest version of DynamoRIO).
You have to uncomment the following lines in the manul.config file and provide the correct paths to the DynamoRIO launcher and client.
# Choose DBI framework to provide coverage back to Manul ("dynamorio" or "pin"). Example dbi = dynamorio
dbi = dynamorio
# If dbi parameter is not None the path to dbi engine launcher and dbi client should be specified.
dbi_root = /home/max/DynamoRIO/bin64/drrun
dbi_client_root = /home/max/manul/linux/dbi_64/libbinafl.so
dbi_client_libs = None
IMPORTANT NOTE: You should use 32-bit launcher and 32-bit client to fuzz 32-bit binaries and 64-bit launcher and 64-bit client for 64-bit binaries!

Compiling DynamoRIO client library
If you want to use the latest version of DynamoRIO you need to compile instrumentation library from source code (see example below). The source code of instrumentation library can be found in dbi_clients_src located in the Manul main folder. On Windows, the compilation command (cmake) is the same as on Linux.
64-bit Linux

cd dbi_clients_src
wget https://github.com/DynamoRIO/dynamorio/releases/download/cronbuild-7.91.18124/DynamoRIO-x86_64-Linux-7.91.18124-0.tar.gz
tar xvf DynamoRIO-x86_64-Linux-7.91.18124-0.tar.gz
mkdir client_64
cd client_64
cmake ../dr_cov/ -DDynamoRIO_DIR=/home/max/manul/dbi_clients_src/DynamoRIO-x86_64-Linux-7.91.18124-0/cmake
make
If you need to compile 32-bit library, you should download DynamoRIO-i386-Linux-*.tar.gz archive instead of x86_64 and specify CFLAGS=-m32 CXXFLAGS=-m32 before cmake command.

Using Intel PIN to fuzz black-box binaries on Linux
TBD

Command-Line Arguments
The most frequently used options can be provided via the command line. More options are supported via the configuration file (manul.config).
Example: python3 manul.py -i corpus -o out_dir -n 40 "target @@"

positional arguments:
target_binary The target binary and options to be executed (don't forget to include quotes e.g. "target e @@").

optional arguments:
-h, --help show this help message and exit
-n NFUZZERS Number of parallel fuzzers
-s Run dumb fuzzing (no code instrumentation)
-c CONFIG Path to config file with additional options (see Configuration File Options section below)
-r Restore previous session

Required parameters:
-i INPUT Path to directory with initial corpus
-o OUTPUT Path to output directory

Configuration File Options
Manul is distributed with a default manul.config file where the user can find all supported options and usage examples. Options should be specified in the format <option_name> = <value>. The symbol # can be used to comment out a line.

Dictionary
dict = /home/max/dictionaries/test.dict. AFL mutation strategy allows user to specify a list of custom tokens that can be inserted at random places in the fuzzed file. Manul supports this functionality via this option (absolute paths preferred).

Mutator weights
mutator_weights=afl:7,radamsa:2,my_mutator:1. Mutator weights allow the user to tell Manul how many mutations per 10 executions should be performed by a certain mutator. In this example, the AFL mutator will be executed in 7/10 mutations, Radamsa in 2/10 and some custom my_mutator will get 1/10. If you want to disable a certain mutator, the weight should be assigned to 0 (e.g. mutator_weights=afl:0,radamsa:1,my_mutator:9).
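To illustrate how a weight string like the above could drive mutator scheduling, here is a hypothetical sketch (the parsing and selection code is an assumption for illustration, not Manul's actual implementation):

```python
# Hypothetical sketch: turn "afl:7,radamsa:2,my_mutator:1" into a weighted
# random choice of which mutator handles the next mutation.
import random

def parse_weights(spec: str) -> dict:
    """'afl:7,radamsa:2,my_mutator:1' -> {'afl': 7, 'radamsa': 2, 'my_mutator': 1}"""
    pairs = (item.split(":") for item in spec.split(","))
    return {name: int(weight) for name, weight in pairs}

def pick_mutator(weights: dict, rng=random) -> str:
    # A weight of 0 disables a mutator entirely.
    names = [n for n, w in weights.items() if w > 0]
    w = [weights[n] for n in names]
    return rng.choices(names, weights=w, k=1)[0]
```

Over many executions, a mutator with weight 7 out of a total of 10 handles roughly 7/10 of the mutations.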

Deterministic Seed (Radamsa Option)
deterministic_seed = False|True. By providing True, Radamsa mutations become deterministic, so that each run of Manul leads to the same outputs.

Print Summary per Thread
print_per_thread = False|True. By enabling this option, Manul will print a summary for each thread being executed instead of the total summary.

Disable Volatile Paths
disable_volatile_bytes = False|True By enabling this option, Manul will not blacklist volatile paths.

AFL's forkserver (only UNIX)
forkserver_on = False|True Enable or disable AFL's forkserver.

DBI Options
dbi = dynamorio|pin. This option tells Manul which DBI framework will be used to instrument the target.
dbi_root = <path>. This option tells Manul where to find the DBI framework's main launcher.
dbi_client_root = <path>. This option tells Manul where to find the DBI client that performs instrumentation.
dbi_client_libs = name_#1,name_#2|None. This option can be used to specify a list of libraries that need to be instrumented along with the main target (e.g. you have an executable that loads the target library where you want to find bugs).

Timeout
timeout = 10. Time to wait before killing the target and sending the next test case.

init_wait
init_wait = 1. This option can be used to set up a timeout required for the target to initialize.

Netslave and Netmaster Options
The options net_config_master and net_config_slave are used to distribute Manul instances over network. You have to perform the following 3 steps to run distributed fuzzing.
  1. Create a file with a list of hosts in the following format: IP:port where your slaves will be executed.
  2. Start all Manul slave instances on remote machines (with all required options and path to target binary) and enable the following option: net_config_slave = 0.0.0.0:1337. Manul will launch the instance and will wait for incoming connection from master instance on port 1337.
  3. Start the master instance and provide the file with a list of slave instances created on Step 1 using net_config_master = file_name.

Debug Mode
debug = False|True - print debug info.
logging_enable = False|True - save debug info in the log.

Logo
manul_logo = False|True - print Manul logo at the beginning.

Disable Stats
no_stats = False|True - save statistics.

Bitmap Synchronization Frequency (5000 recommended for DBI mode)
sync_freq = 10000. Allows the user to change the coverage bitmap synchronization frequency. This option tells Manul how often it should synchronize coverage between parallel fuzzing instances. A lower value decreases performance but increases coordination between instances.

Custom Path to Save Output
#custom_path = test_path - this option allows saving the test case in a custom folder (if the target wants to load it from some predefined place).

Command Line Fuzzing (experimental)
cmd_fuzzing = True|False. If this option is enabled, Manul will provide the input to the target via the command line instead of saving it in a file.

Ignore Signals
user_signals = 6,2,1|None. The user can tell Manul which signals from the target should be ignored (not considered as a crash).

Network Fuzzing (experimental)
target_ip_port = 127.0.0.1:7715|None - used to specify the target IP and port. target_protocol = tcp|udp - used to specify the protocol to send input to the target over the network. net_sleep_between_cases = 0.0. This option can be used to define a delay between test cases being sent to the target.
Currently, network fuzzing is an experimental feature (see issues for more details).

Adding Custom Mutator
A custom mutator can be added in the following three steps:
Step 1. Create a Python (.py) file and give it some name (e.g. example_mutator.py).
Step 2. Create two functions, def init(fuzzer_id) and def mutate(data). See example_mutator for more details. Manul will call the init function during fuzzing initialization and mutate for each file being provided to the target.
Step 3. Enable mutator by specifying its name using mutator_weights in manul.config. E.g. mutator_weights=afl:2,radamsa:0,example_mutator:8.
NOTE: AFL and Radamsa mutators should always be specified. If you want to disable AFL and/or Radamsa just assign 0 weights to them.
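Following the init(fuzzer_id)/mutate(data) contract described above, a custom mutator file could look like this (the bit-flip mutation body is illustrative only, not code from the Manul repository):

```python
# example_mutator.py -- minimal shape of a custom mutator following the
# init(fuzzer_id)/mutate(data) contract. The mutation itself (a single
# random bit flip) is a placeholder for your own logic.
import random

_rng = None

def init(fuzzer_id):
    # Called once during fuzzing initialization; seed a per-instance RNG
    # so each fuzzer thread mutates reproducibly.
    global _rng
    _rng = random.Random(fuzzer_id)

def mutate(data):
    # Called for every test case; must return the mutated bytes.
    data = bytearray(data)
    if data:
        i = _rng.randrange(len(data))
        data[i] ^= 1 << _rng.randrange(8)  # flip one random bit
    return bytes(data)
```

It would then be enabled with, e.g., mutator_weights=afl:2,radamsa:0,example_mutator:8.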

Technical Details
TBD

Status Screen



Syborg - Recursive DNS Subdomain Enumerator With Dead-End Avoidance System


Syborg is a Recursive DNS Domain Enumerator which is neither active nor completely passive. This tool simply constructs a domain name and queries it with a specified DNS Server.
Syborg has a Dead-end Avoidance system inspired from @Tomnomnom's ettu.
When you run subdomain enumeration with some of the tools, most of them passively query public records like VirusTotal, crt.sh or Censys. This enumeration technique is really fast and helps to find a lot of domains in much less time.
However, there are some domains that may not be mentioned in these public records. In order to find those domains, Syborg interacts with the nameservers and recursively brute-forces subdomains from the DNS until its queue is empty.

Image Credits: Carbon
As mentioned on ettu's page, I quote:
Ordinarily if there are no records to return for a DNS name you might expect an NXDOMAIN error:
▶ host four.tomnomnom.uk
Host four.tomnomnom.uk not found: 3(NXDOMAIN)
You may have noticed that sometimes you get an empty response instead though:
▶ host three.tomnomnom.uk
The difference in the latter case is often that another name - one that has your queried name as a suffix - exists and has records to return
▶ host one.two.three.tomnomnom.uk
one.two.three.tomnomnom.uk has address 46.101.59.42
This difference in response can be used to help avoid dead-ends in recursive DNS brute-forcing by not recursing in the former situation:
▶ echo -e "www\none\ntwo\nthree" | ettu tomnomnom.uk
one.two.three.tomnomnom.uk
Syborg incorporates all of these functionalities with simple concurrency and recursion.
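The dead-end avoidance idea above can be sketched as follows (a simplified, hypothetical model, not Syborg's actual code; `resolve` is a stand-in for a real DNS lookup that distinguishes NXDOMAIN from an empty NOERROR answer):

```python
# Sketch of recursive brute-forcing with dead-end avoidance: prune on
# NXDOMAIN, recurse on empty NOERROR answers or real records.
# `resolve(name)` must return "NXDOMAIN", "EMPTY" (NOERROR with no
# records), or a list of records.

def brute(domain, words, resolve, found=None):
    found = found if found is not None else []
    for word in words:
        name = f"{word}.{domain}"
        answer = resolve(name)
        if answer == "NXDOMAIN":
            continue                   # dead end: nothing exists below here
        if isinstance(answer, list):
            found.append(name)         # real records returned
        # "EMPTY" or records: something exists beneath, so keep recursing
        brute(name, words, resolve, found)
    return found
```

With a resolver behaving like the tomnomnom.uk example above, only the three.tomnomnom.uk branch is explored and one.two.three.tomnomnom.uk is found without recursing into the dead ends.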

Requirements:
Python 3.x (Recommended)
Python 2.x (Not tested)

Installation:
Clone the repo using the git clone command as follows:
git clone https://github.com/MilindPurswani/Syborg.git
Resolve the Dependencies:
pip3 install -r requirements.txt

Usage:
python3 syborg.py yahoo.com 
More information regarding usage can be found in Syborg's Creative Usage Guidelines. Do check it out!
At times, it is also possible that Syborg will hit high CPU usage, and that can cost you a lot if you are trying to use this tool on your VPS. Therefore, to limit that, use another utility called cpulimit:
cpulimit -l 50 -p $(pgrep python3)
cpulimit can be installed as follows:
sudo apt install cpulimit

Special Thanks <3:
  1. @nahamsec for his invaluable contribution to the community through his live streams. Check out his twitch channel https://twitch.tv/nahamsec
  2. @tomnomnom for making such awesome tools and sharing with everyone. Be sure to check out his twitch https://www.twitch.tv/tomnomnomuk
  3. @GP89 for the FileQueue lib that resolved high memory consumption problem with Syborg.
  4. Patrik Hudak for his awesome teachings and tools like dnsgen.

