Channel: KitPloit - PenTest Tools!

SimplyEmail - Email Recon Made Fast And Easy


SimplyEmail is based on the work of theHarvester and is, in part, a port of its functionality. It expands on what was used to build theHarvester, incorporating that work while letting users easily build new modules for the framework.

MAJOR CALLOUTS:


Scrape EVERYTHING - Simply
Current Platforms Supported:
  • Kali Linux 2.0
  • Kali Linux 1.0
  • Debian (deb8u3)
A few small benefits:
  • Easy for you to write modules (all you need is one required class option and you're up and running; see the sketch after this list)
  • Use the built-in parsers for the rawest results
  • A multiprocessing queue for modules and a result queue for easy handling of email data
  • Simple integration of theHarvester modules and new ones to come
  • The ability to change major settings quickly without diving into the code
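
To illustrate the module idea, here is a minimal sketch of what a scraping module could look like. The class and method names are hypothetical and do not reflect SimplyEmail's actual base class; it only shows the pattern of a class that takes the target domain and hands a list of parsed emails back to the framework's result queue.

# Hypothetical sketch of a SimplyEmail-style module (illustrative names, not the real API)
import re
import urllib.request

class ExampleScrapeModule(object):
    def __init__(self, domain, verbose=False):
        self.name = "Example HTML Scrape (hypothetical)"
        self.domain = domain
        self.verbose = verbose

    def search(self):
        # Fetch one search page and pull anything that looks like user@domain
        url = "https://www.bing.com/search?q=%40" + self.domain
        html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
        pattern = r"[a-zA-Z0-9._%+-]+@" + re.escape(self.domain)
        return sorted(set(re.findall(pattern, html)))
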
API Based Searches:
  • When API-based searches become available, there is no need to add them on the command line
  • API keys are automatically pulled from SimplyEmail.ini, which activates the corresponding module for use

Get Started on Debian
Please RUN the simple Setup Bash script!
# sh Setup.sh
or
# ./Setup.sh

Get Started in Kali
Please RUN the simple Setup Bash script! NOTE: At the moment the upstream Debian python-futures packages contain bugs within configparser / python-magic. This has been reported to Kali and Debian. configparser bug in apt-get python-futures: https://bugs.kali.org/view.php?id=3245 SimplyEmail bug reported: https://github.com/killswitch-GUI/SimplyEmail/issues/11
FIX KALI BUG: 
# apt-get remove configparser
# apt-get remove python-magic
or
# apt-get remove python-futures

Normal Setup
# ./Setup.sh

Get Started on Mac OS X (at your own risk)
Install brew:
https://coolestguidesontheplanet.com/installing-homebrew-on-os-x-el-capitan-10-11-package-manager-for-unix-apps/

$ sudo easy_install pip
$ sudo brew install libmagic
$ pip install python-magic
$ ./Setup.sh

Standard Help
 ============================================================
Current Version: v1.4.2 | Website: CyberSyndicates.com
============================================================
Twitter: @real_slacker007 | Twitter: @Killswitch_gui
============================================================
------------------------------------------------------------
[SimplyEmail ASCII art banner]

------------------------------------------------------------
usage: SimplyEmail.py [-all] [-e company.com] [-l] [-t html / flickr / google]
[-s] [-n] [-verify] [-v] [--json json-emails.txt]

Email enumeration is a important phase of so many operation that a pen-tester
or Red Teamer goes through. There are tons of applications that do this but I
wanted a simple yet effective way to get what Recon-Ng gets and theHarvester
gets. (You may want to run -h)

optional arguments:
-all Use all non API methods to obtain Emails
-e company.com Set required email addr user, ex ale@email.com
-l List the current Modules Loaded
-t html / flickr / google
Test individual module (For Linting)
-s Set this to enable 'No-Scope' of the email parsing
-n Set this to enable Name Generation
-verify Set this to enable SMTP server email verify
-v Set this switch for verbose output of modules
--json json-emails.txt
Set this switch for json output to specfic file

Run SimplyEmail
Let's say your target is cybersyndicates.com
./SimplyEmail.py -all -e cybersyndicates.com

or in verbose
./SimplyEmail.py -all -v -e cybersyndicates.com

or in verbose and no "Scope"
./SimplyEmail.py -all -v -e cybersyndicates.com -s

or with email verification
./SimplyEmail.py -all -v -verify -e cybersyndicates.com

or with email verification & Name Creation
./SimplyEmail.py -all -v -verify -n -e cybersyndicates.com

or json automation
./SimplyEmail.py -all -e cybersyndicates.com --json cs-json.txt
This will run all modules that have an API key placed in the SimplyEmail.ini file, as well as all non-API-based modules.

List SimplyEmail Modules
root@kali:~/Tools/SimplyEmail# ./SimplyEmail.py -l
============================================================
Current Version: v0.7 | Website: CyberSyndicates.com
============================================================
Twitter: @real_slacker007 | Twitter: @Killswitch_gui
============================================================
------------------------------------------------------------
[SimplyEmail ASCII art banner]

------------------------------------------------------------
[*] Available Modules are:

1) Modules/HtmlScrape.py
2) Modules/PasteBinSearch.py
3) Modules/ExaleadSearch.py
4) Modules/SearchPGP.py
5) Modules/ExaleadXLSXSearch.py
6) Modules/ExaleadDOCXSearch.py
7) Modules/OnionStagram.py
8) Modules/GooglePDFSearch.py
9) Modules/RedditPostSearch.py
10) Modules/AskSearch.py
11) Modules/EmailHunter.py
12) Modules/WhoisAPISearch.py
13) Modules/Whoisolgy.py
14) Modules/GoogleDocxSearch.py
15) Modules/GitHubUserSearch.py
16) Modules/YahooSearch.py
17) Modules/GitHubCodeSearch.py
18) Modules/ExaleadPDFSearch.py
19) Modules/GoogleSearch.py
20) Modules/FlickrSearch.py
21) Modules/GoogleDocSearch.py
22) Modules/CanaryBinSearch.py
23) Modules/ExaleadDOCSearch.py
24) Modules/GoogleXLSXSearch.py
25) Modules/GitHubGistSearch.py

API Modules and Searches
API-based searches can be painful and hard to configure. A main goal of SimplyEmail is to integrate them easily without compromising the ease of using the tool. Using the configuration file, you can simply add your corresponding API key and get up and running. Modules are automatically identified as API-based searches; SimplyEmail checks whether the corresponding keys are present and, if they are, runs the module.

Canar.io API Search
Canario is a service that allows you to search for potentially leaked data that has been exposed on the Internet. Passwords, e-mail addresses, hostnames, and other data have been indexed to allow for easy searching.
Simply register for a key at https://canar.io/register/ and place it in the [APIKeys] section of SimplyEmail.ini; the module will then run when the -all flag or the -t option is used.
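
As a rough illustration of how key-based activation can work (a sketch, not SimplyEmail's internals; the option name below is a placeholder), a module only runs when its key is present in the [APIKeys] section:

# Sketch: activate an API module only if its key exists in SimplyEmail.ini
# "CanarioAPIKey" is a placeholder option name, not necessarily the real one.
import configparser

config = configparser.ConfigParser()
config.read("SimplyEmail.ini")
api_key = config.get("APIKeys", "CanarioAPIKey", fallback="").strip()
if api_key:
    print("[*] Canar.io key found - module will run")
else:
    print("[-] No Canar.io key set - module skipped")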

Name Generation
Sometimes SimplyEmail will only find the standard email addresses, or just a few emails. In this case email creation may be your saving grace. Name generation lets you not only scrape names from different sites but also auto-detect the address format with reasonable accuracy.

LinkedIn Name Generation
Using Bing and work from PhishBait, I was able to implement LinkedIn name lookups based on the company name.

Connect6.com Name Generation
Connect6 is also a great source for names, though the right source page can be a bit flaky to find. Using an AutoUrl function I built, SimplyEmail attempts to find the correct URL for you; if that fails, it provides a few more URLs to pick from.
 ============================================================
Current Version: v1.1 | Website: CyberSyndicates.com
============================================================
Twitter: @real_slacker007 | Twitter: @Killswitch_gui
============================================================
[*] Now Starting Connect6 Scrape:
[*] SimplyEmail has attempted to find correct URL for Connect6:
URL detected: www.connect6.com/Vfffffff,%20LLC/c
[>] Is this URL correct?: n
Potential URL: www.connect6.com/Vffffffff,%20LLC/c
Potential URL: www.connect6.com/fffffff/p/181016043240247014147078237069133079124017210127108009097255039209172025193089206212192166241042174198072085028234035215132077249038065254013074
Potential URL: www.connect6.com/Cfffff/p/034097047081090085111147210185030172009078049169022098212236211095220195001177030045187199131226210223245205084079141193247011181189036140240023
Potential URL: www.connect6.com/Jfffffff/p/102092136035048036136024218227078226242230121102078233031208236153124239181008089103120004217018
Potential URL: www.connect6.com/Adam-Salerno/p/021252074213080142144144173151186084054192089124012168233122054057047043085086050013217026242085213002224084036030244077024184140161144046156080
[!] GoogleDork This: site:connect6.com "Vfffff.com"
[-] Commands Supported: (B) ack - (R) etry
[>] Please Provid a URL: b

Verifying Emails via target SMTP server:
More often than not you will have at least a few invalid emails gathered from recon. SimplyEmail now supports the ability to verify whether an email is valid (see the sketch below).
  • Looks up MX records
  • Sorts based on priority
  • Checks if SMTP server will respond other than 250
  • If the server is suitable, checks for 250 codes
  • Outputs a (.txt) file with verified emails.
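
The flow above boils down to a few standard calls. A minimal sketch (not SimplyEmail's code; it assumes the dnspython package is installed and that outbound port 25 is reachable):

# Sketch of MX-based email verification (pip install dnspython; use dns.resolver.query on dnspython < 2.0)
import smtplib
import dns.resolver

def verify_email(address):
    domain = address.split("@")[1]
    # Look up MX records and sort by priority (lowest preference first)
    records = sorted(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
    mx_host = str(records[0].exchange).rstrip(".")
    server = smtplib.SMTP(mx_host, 25, timeout=10)
    server.helo("example.com")           # placeholder HELO name
    server.mail("probe@example.com")     # placeholder MAIL FROM
    code, _ = server.rcpt(address)       # a 250 reply suggests the mailbox exists
    server.quit()
    return code == 250

print(verify_email("user@example.com"))
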
============================================================
Current Version: v1.0 | Website: CyberSyndicates.com
============================================================
Twitter: @real_slacker007 | Twitter: @Killswitch_gui
============================================================
[*] Email reconnaissance has been completed:

Email verification will allow you to use common methods
to attempt to enumerate if the email is valid.
This grabs the MX records, sorts and attempts to check
if the SMTP server sends a code other than 250 for known bad addresses

[>] Would you like to verify email(s)?: y
[*] Attempting to resolve MX records!
[*] MX Host: gmail-smtp-in.l.google.com.
[*] Checking for valid email: alwathiqlegaltranslation@gmail.com
[!] Email seems valid: alwathiqlegaltranslation@gmail.co

Understanding Reporting Options:
One of the most frustrating aspects of pen-testing is tooling's inability to report findings in an easily readable way, whether that data is provided to a customer or simply used to report on the source of the data.
So I'm making it my goal for my tools to take that work off your back and make it as simple as possible! Let's cover the different reports generated.

Text Output:
With this option, results are generated and appended to a running text file called Email_List.txt. This makes it easy to find past searches or export to the tool of your choice. Example:
  ----------------------------------
Email Recon: 11/11/2015 05:13:32
----------------------------------
bo@mandiant.com
in@mandiant.com
sc@mandiant.com
je@mandiant.com
su@mandiant.com
----------------------------------
Email Recon: 11/11/2015 05:15:42
----------------------------------
bo@mandiant.com
in@mandiant.com
sc@mandiant.com
je@mandiant.com
su@mandiant.com

JSON Output
Using the --json test.txt flag will allow you to output a standard JSON text file for automation needs. This can currently be used with the email-scraping portion only; name generation and email verification output may come later. These helpers will soon be in the SQL DB and API for more streamlined automation. Example output:
{
"current_version": "v1.4.1",
"data_of_collection": "26/06/2016",
"domain_of_collection": "---SNIP---",
"email_collection_count": 220,
"emails": [
{
"collection_data": "26/06/2016",
"collection_time": "18:47:42",
"email": "---SNIP---",
"module_name": "Searching PGP"
},
---SNIP---
{
"collection_data": "26/06/2016",
"collection_time": "18:51:46",
"email": "---SNIP---",
"module_name": "Exalead PDF Search for Emails"
}
],
"time_of_collection": "18:53:04",
"tool_of_collection": "SimplyEmail"
}
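
Because the structure is stable, the file is easy to consume from other tooling. A small sketch (the filename is whatever you passed to --json):

# Sketch: read SimplyEmail's --json output and group emails by module
import json
from collections import defaultdict

with open("cs-json.txt") as fh:
    report = json.load(fh)

by_module = defaultdict(list)
for entry in report["emails"]:
    by_module[entry["module_name"]].append(entry["email"])

print("Domain:", report["domain_of_collection"])
print("Total:", report["email_collection_count"])
for module, emails in sorted(by_module.items()):
    print("%-35s %d" % (module, len(emails)))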

HTML Output:
As I mentioned before, a powerful feature I wanted to integrate was the ability to produce a visually appealing, rich report for the user, and potentially something that could be part of the data provided to a client. Please let me know if you have suggestions!

Email Source:
[Screenshot: HTML report, email source section]

Email Section:
  • HTML report now shows alerts for Canary search results. [Screenshot: HTML report, email section]
Current Email Evasion Techniques
The following obfuscation formats will be built into the parser soon (see the sketch after this list):
  • shinichiro.hamaji at gmail.com
  • shinichiro.hamaji AT gmail.com
  • simohayha.bobo at gmail.com
  • "jeffreytgilbert" => "gmail.com"
  • felix021 # gmail.com
  • hirokidaichi[at]gmail.com
  • hirokidaichi[@]gmail.com
  • hirokidaichi[#]gmail.com
  • xaicron{ at }gmail.com
  • xaicron{at}gmail.com
  • xaicron{@}gmail.com
  • xaicron(@)gmail.com
  • xaicron + gmail.com
  • xaicron ++ gmail.com
  • xaicron ## gmail.com
  • bekt17[@]gmail.com
  • billy3321 -AT- gmail.com
  • billy3321[AT]gmail.com
  • ybenjo.repose [[[at]]] gmail.com
  • sudhindra.r.rao (at) gmail.com
  • sudhindra.r.rao nospam gmail.com
  • shinichiro.hamaji (.) gmail.com
  • shinichiro.hamaji--at--gmail.com
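
A parser for these variants mostly comes down to normalizing the "at" substitutions back to @ before applying the usual email regex. A rough sketch covering a subset of the patterns above (an illustration, not the parser SimplyEmail will ship):

# Sketch: normalize a few common "user at domain" obfuscations back to user@domain
import re

AT_VARIANTS = re.compile(
    r"\s*(?:\[\s*(?:at|@|#)\s*\]|\{\s*(?:at|@)\s*\}|\(\s*at\s*\)|\(@\)|"
    r"--at--|-AT-|\bat\b|\bAT\b|nospam|##|\+{1,2})\s*"
)

def deobfuscate(text):
    candidate = AT_VARIANTS.sub("@", text)
    match = re.search(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", candidate)
    return match.group(0) if match else None

for sample in ("hirokidaichi[at]gmail.com", "xaicron{ at }gmail.com",
               "billy3321 -AT- gmail.com", "shinichiro.hamaji at gmail.com"):
    print(sample, "->", deobfuscate(sample))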

TODO:
Modules Under Dev:
-----------------------------
( ) StartPage Search (can help with captcha issues)
( ) Searching SEC Data
( ) PwnBin Search
( ) Past Data Dumps
( ) psbdmp API Based and non Alert

Framework Under Dev:
-----------------------------
( ) New Parsers to clean results
( ) Fix import errors with Glob
( ) Add in "[@]something.com" to search Regex and engines
( ) Add Threading/Multi to GitHub Search
( ) Add Source of collection to HTML Output

Current Issues:
-----------------------------
( ) PDF miner Text Extraction Error
( ) Verify Emails function and only one name list raises errors



Twiga - A Tool That Enumerates Android Devices For Information Useful In Understanding Its Internals And For Exploit Development


A tool that enumerates Android devices for information useful in understanding their internals and for exploit development. It supports Android 4.2 to Android 7.1.1.

Requirements
  • The most current ADB must be in your path and fully functional
  • The report name must not have any whitespace
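
Not Twiga's own code, but a minimal sketch of the kind of ADB-driven enumeration it performs (assumes adb is on the PATH and a single device is attached):

# Sketch: pull a few properties useful for exploit development via adb
import subprocess

def adb_shell(command):
    return subprocess.check_output(["adb", "shell", command]).decode().strip()

for prop in ("ro.build.version.release",        # Android version
             "ro.build.version.sdk",            # API level
             "ro.product.model",
             "ro.build.version.security_patch"):
    print("%-35s %s" % (prop, adb_shell("getprop " + prop)))

print("SELinux:", adb_shell("getenforce"))      # enforcement status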

Limitations
  • Some information and files cannot be pulled on higher SDK versions due to strict SELinux policies and Android hardening.
  • It can only run on one device at a time for now

To Do
  • Support for enumeration on a rooted device
  • Support enumeration on multiple devices at a time
  • Generate a PDF report of the enumeration data



Pythem - Penetration Testing Framework


pythem is a multi-purpose pentest framework written in Python. It has been developed to be used by security researchers and security professionals. The tool is intended to be used only for acts within the law; I am not liable for any undue or unlawful act practiced with this tool. For more information, read the license. It only runs on GNU/Linux OS.

Installation

Step-by-step


Quick-Start
git clone https://github.com/m4n3dw0lf/pythem
cd pythem
chmod +x install
./install
Run with:
pythem

Examples

Exploit Development with pythem

Commands Reference

Index

Core

Network, Man-in-the-middle and Denial of service (DOS)

Exploit development and Reverse Engineering

Brute Force

Utils

RastLeak - Tool To Automatic Leak Information Using Hacking With Engine Searches


Tool to automatically find leaked information using search engine hacking.

How to install
Install requirements with:
pip install -r requirements.txt

How to use:
python rastleak.py

Usage:
$ python rastleak.py -h
usage: rastleak.py [-h] -d DOMAIN -o OPTION -n SEARCH -e EXT [-f EXPORT]

This script searchs files indexed in the main searches of a domain to detect a possible leak information

optional arguments:
-h, --help show this help message and exit
-d DOMAIN, --domain DOMAIN
The domain which it wants to search
-o OPTION, --option OPTION
Indicate the option of search
1.Searching leak information into the target
2.Searching leak information outside target
-n SEARCH, --search SEARCH
Indicate the number of the search which you want to do
-e EXT, --ext EXT Indicate the option of display:
1-Searching the domains where these files are found
2-Searching ofimatic files

-f EXPORT, --export EXPORT
Indicate the type of format to export results.
1.json (by default)
2.xlsx
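
RastLeak's approach is essentially automated dorking. The query shapes below are an assumption based on the options above, not RastLeak's actual code; they sketch the kind of searches involved:

# Sketch: build search-engine dorks for indexed office/PDF files of a domain
DOMAIN = "example.com"
EXTENSIONS = ("pdf", "doc", "docx", "xls", "xlsx", "ppt", "pptx")

# Option 1: files indexed on the target itself
inside = ["site:%s filetype:%s" % (DOMAIN, ext) for ext in EXTENSIONS]

# Option 2: files mentioning the target but hosted elsewhere
outside = ['-site:%s "%s" filetype:%s' % (DOMAIN, DOMAIN, ext) for ext in EXTENSIONS]

for query in inside + outside:
    print(query)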


Dracnmap v2.2 - Exploit Network and Gathering Information with Nmap


Dracnmap is an open-source program which is used to exploit the network and gather information with the help of Nmap. The nmap command comes with lots of options, which can make the utility robust but difficult to follow for new users. Hence Dracnmap is designed to perform fast scanning utilizing the Nmap Scripting Engine, so that Nmap can perform various automatic scanning techniques with advanced commands.

Screenshots



Changelog
  • v2.2 - add multi task in dracnmap when scan
  • v2.2 - the output file will be in root / on folder dracnmap
  • v2.1 - Fixed bug ( typo and double function )
  • v2.0 - Changed a banner
  • v2.0 - added auth-category (34 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added broadcast-category (44 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added brute-category (71 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added exploit-category (44 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added fuzzer-category (4 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added malware-category (10 OPTIONAL) into nmap script engine Advanced
  • v2.0 - added vuln-category (89 OPTIONAL) into nmap script engine Advanced
  • v2.0 - Delete future bruteforce with nse script & Changed to Nmap Script Engine Advanced with sub optional
  • v1.3 - Add 70 Bruteforce with nse script :))
  • v1.2 - Add dracnmap for dracos
  • v1.2 - Fixed some functions
  • v1.1 - Collecting Valid Email Accounts with nse Script ( WEB SERVICE )
  • v1.1 - Add Gathering information from WHOIS ( MENU WEB SERVICE )
  • v1.1 - Add Geolocation of IP address with nse script ( MENU WEB SERVICE )
  • v1.0 - Release Dracnmap

Getting Started
  1. git clone https://github.com/Screetsec/Dracnmap.git
  2. cd Dracnmap
  3. chmod +x Dracnmap.sh
  4. sudo ./Dracnmap.sh or sudo su ./Dracnmap.sh

 Requirements
  • A Linux operating system. We recommend Kali Linux 2 or Kali 2016.1 rolling, Cyborg, Parrot, Dracos, BackTrack, Backbox, or another Linux operating system
  • Must install nmap
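
Under the hood this boils down to composing nmap command lines with NSE script categories. A sketch of the kind of command Dracnmap's menus build (the target is a placeholder; assumes nmap is installed):

# Sketch: run an NSE category scan and save XML output, as Dracnmap's menus do
import subprocess

target = "192.168.1.10"     # placeholder target
category = "vuln"           # e.g. auth, broadcast, brute, exploit, malware, vuln

subprocess.call([
    "nmap", "-sV",
    "--script", category,   # run every NSE script in the chosen category
    "-oX", "dracnmap-%s-%s.xml" % (category, target),
    target,
])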

Tutorial



JKS Private Key Cracker - Cracking passwords of private key entries in a JKS file


The Java Key Store (JKS) is the Java way of storing one or several cryptographic private and public keys for asymmetric cryptography in a file. While there are various key store formats, Java and Android still default to the JKS file format. JKS is one of the file formats for Java key stores, but JKS is confusingly used as the acronym for the general Java key store API as well.
This project includes information regarding the security mechanisms of the JKS file format and how the password protection of the private key can be cracked. Due to the unusual design of JKS, the developed implementation can ignore the key store password and crack the private key password directly. Because it ignores the key store password, this implementation can attack every JKS configuration, which is not the case with most other tools.
By exploiting a weakness of the Password Based Encryption scheme for the private key in JKS, passwords can be cracked very efficiently. Until now, no public tool was available that exploited this weakness. This technique was implemented in hashcat to amplify the efficiency of the algorithm with higher cracking speeds on GPUs.
To get the theory part, please refer to the POC||GTFO article "15:12 Nail in the Java Key Store Coffin" in issue 0x15, included in this repository (pocorgtfo15.pdf) or available on various mirrors like this beautiful one: https://unpack.debug.su/pocorgtfo/
Before you ask: JCEKS or BKS or any other Key Store format is not supported (yet).

How you should crack JKS files
The answer is: build your own cracking hardware for it ;) . But let's be a little more practical, so the answer is to use your GPU:
[hashcat v3.6.0 ASCII art banner]
* BLAKE2 * BLOCKCHAIN2 * DPAPI * CHACHA20 * JAVA KEYSTORE * ETHEREUM WALLET *
All you need to do is run the following command:
java -jar JksPrivkPrepare.jar your_JKS_file.jks > hash.txt
If your hash.txt ends up being empty, there is either no private key in the JKS file or you specified a non-JKS file.
Then feed the hash.txt file to hashcat (version 3.6.0 and above), for example like this:
$ ./hashcat -m 15500 -a 3 -1 '?u|' -w 3 hash.txt ?1?1?1?1?1?1?1?1?1
hashcat (v3.6.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2026/8107 MB allocatable, 20MCU

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable optimizers:
* Zero-Byte
* Precompute-Init
* Not-Iterated
* Appended-Salt
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 75c

$jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test:POC||GTFO

Session..........: hashcat
Status...........: Cracked
Hash.Type........: JKS Java Key Store Private Keys (SHA1)
Hash.Target......: $jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test
Time.Started.....: Tue May 30 17:41:58 2017 (8 mins, 25 secs)
Time.Estimated...: Tue May 30 17:50:23 2017 (0 secs)
Guess.Mask.......: ?1?1?1?1?1?1?1?1?1 [9]
Guess.Charset....: -1 ?u|, -2 Undefined, -3 Undefined, -4 Undefined
Guess.Queue......: 1/1 (100.00%)
Speed.Dev.#1.....: 7946.6 MH/s (39.48ms)
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 4014116700160/7625597484987 (52.64%)
Rejected.........: 0/4014116700160 (0.00%)
Restore.Point....: 5505024000/10460353203 (52.63%)
Candidates.#1....: NNVGFSRFO -> Z|ZFVDUFO
HWMon.Dev.#1.....: Temp: 75c Fan: 89% Util:100% Core:1936MHz Mem:4513MHz Bus:1

Started: Tue May 30 17:41:56 2017
Stopped: Tue May 30 17:50:24 2017
So from this repository you basically only need the JksPrivkPrepare.jar to run a cracking session.

Other things in this repository
  • test_run.sh: A little test script that you should be able to run after a couple of minutes to see this project in action. It includes comments on how to setup the dependencies for this project.
  • benchmarking: tests that show why you should use this technique and not others. Please read the "Nail in the JKS coffin" article.
  • example_jks: generate example JKS files
  • fingerprint_creation: Every plaintext private key in PKCS#8 has its own "fingerprint" that we expect when we guess the correct password. These fingerprints are necessary to make sure we are able to detect when we guessed the correct password. Please read the "Nail in the JKS coffin" article. This folder has the code to generate these fingerprints; it's a little bit hacky, but I don't expect that it will be necessary to add any other fingerprints ever.
  • JksPrivkPrepare: The source code showing how the JKS files are read and how the hash we need to give to hashcat is calculated.
  • jksprivk_crack.py: A proof of concept implementation that can be used instead of hashcat. Obviously this is much slower than hashcat, but it can outperform John the Ripper (JtR) in certain cases. Please read the "Nail in the JKS coffin" article.
  • jksprivk_decrypt.py: A little helper script that can be used to extract a private key once the password was correctly guessed.
  • run_example_jks.sh: A script that runs JksPrivkPrepare.jar and jksprivk_crack.py on all example JKS files in the example_jks folder. Make sure you run the generate_examples.py in example_jks script before.

Related work and further links
A big shout-out to Casey Marshall, who wrote the JKS.java class, which is used in a modified version in this project:
/* JKS.java -- implementation of the "JKS" key store.
Copyright (C) 2003 Casey Marshall <rsdio@metastatic.org>

Permission to use, copy, modify, distribute, and sell this software and
its documentation for any purpose is hereby granted without fee,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation. No representations are made about the
suitability of this software for any purpose. It is provided "as is"
without express or implied warranty.

This program was derived by reverse-engineering Sun's own
implementation, using only the public API that is available in the 1.4.1
JDK. Hence nothing in this program is, or is derived from, anything
copyrighted by Sun Microsystems. While the "Binary Evaluation License
Agreement" that the JDK is licensed under contains blanket statements
that forbid reverse-engineering (among other things), it is my position
that US copyright law does not and cannot forbid reverse-engineering of
software to produce a compatible implementation. There are, in fact,
numerous clauses in copyright law that specifically allow
reverse-engineering, and therefore I believe it is outside of Sun's
power to enforce restrictions on reverse-engineering of their software,
and it is irresponsible for them to claim they can. */
Various other pieces of information are mentioned in the article as well.
Neighborly greetings go out to atom, vollkorn, cem, doegox, corkami, xonox and rexploit for supporting this research in one form or another!


SSH MITM - SSH Man-In-The-Middle Tool


This penetration testing tool allows an auditor to intercept SSH connections. A patch applied to the OpenSSH v7.5p1 source code causes it to act as a proxy between the victim and their intended SSH server; all plaintext passwords and sessions are logged to disk.
Of course, the victim's SSH client will complain that the server's key has changed. But because 99.99999% of the time this is caused by a legitimate action (OS re-install, configuration change, etc), many/most users will disregard the warning and continue on.
NOTE: Only run the modified sshd_mitm in a VM or container! Ad-hoc edits were made to the OpenSSH sources in critical regions, with no regard to their security implications. It's not hard to imagine that these edits introduce serious vulnerabilities.

Change Log
  • v1.0: May 16, 2017: Initial revision.
  • v1.1: July 6, 2017: Removed root privilege dependencies, added automatic installer, added Kali Linux support, added JoesAwesomeSSHMITMVictimFinder.py script to find potential targets on a LAN.

To Do
The following list tracks areas to improve:
  • Support SFTP MITM'ing.
  • Print hostname, username, and password at the top of session logs.
  • Add port forwarding support.
  • Regex substitute the output of ssh-keygen when a user tries to check the host key hash. >:]
  • Create wrapper script that detects when user is trying to use key authentication only, and de-spoof them automatically.

Initial Setup
As root, run the install.sh script. This will install prerequisites from the repositories, download the OpenSSH archive, verify its signature, compile it, and initialize a non-privileged environment to execute within.

Finding Targets
The JoesAwesomeSSHMITMVictimFinder.py script makes finding targets on a LAN very easy. It will ARP spoof a block of IPs and sniff for SSH traffic for a short period of time before moving on to the next block. Any ongoing SSH connections originating from devices on the LAN are reported.
By default, JoesAwesomeSSHMITMVictimFinder.py will ARP spoof and sniff only 5 IPs at a time for 20 seconds before moving onto the next block of 5. These parameters can be tuned, though a trade-off exists: the more IPs that are spoofed at a time, the greater the chance you will catch an ongoing SSH connection, but also the greater the strain you will put on your puny network interface. Under too high of a load, your interface will start dropping frames, causing a denial-of-service and greatly raising suspicions (this is bad). The defaults shouldn't cause problems in most cases, though it'll take longer to find targets. The block size can be safely raised on low-utilization networks.

Example:
# ./JoesAwesomeSSHMITMVictimFinder.py --interface enp0s3 --ignore-ips 10.11.12.50,10.11.12.53
Found local address 10.11.12.141 and adding to ignore list.
Using network CIDR 10.11.12.141/24.
Found default gateway: 10.11.12.1
IP blocks of size 5 will be spoofed for 20 seconds each.
The following IPs will be skipped: 10.11.12.50 10.11.12.53 10.11.12.141


Local clients:
* 10.11.12.70 -> 174.129.77.155:22
* 10.11.12.43 -> 10.11.99.2:22
The above output shows that two devices on the LAN have created SSH connections (10.11.12.43 and 10.11.12.70); these can be targeted for a man-in-the-middle attack. Note, however, that in order to potentially intercept credentials, you'll have to wait for them to initiate new connections. Impatient pentesters may opt to forcefully close existing SSH sessions, prompting users to create new ones immediately...

Running The Attack
1.) Once you've completed the initial setup and found a list of potential victims (see above), execute run.sh as root. This will start sshd_mitm, enable IP forwarding, and set up SSH packet interception through iptables.
2.) ARP spoof the target(s) (Protip: do NOT spoof all the things! Your puny network interface won't likely be able to handle an entire network's traffic all at once. Only spoof a couple IPs at a time):
arpspoof -r -t 192.168.x.1 192.168.x.5
Alternatively, you can use the ettercap tool:
ettercap -i enp0s3 -T -M arp /192.168.x.1// /192.168.x.5,192.168.x.6//
3.) Monitor auth.log. Intercepted passwords will appear here:
sudo tail -f /var/log/auth.log
4.) Once a session is established, a full log of all input & output can be found in /home/ssh-mitm/session_*.txt.

Sample Results
Upon success, /var/log/auth.log will have lines that log the password, like this:
May 16 23:14:01 showmeyourmoves sshd_mitm[16798]: INTERCEPTED PASSWORD: hostname: [10.199.30.x]; username: [jdog]; password: [supercalifragilistic] [preauth]
Furthermore, the victim's entire SSH session can be found in /home/ssh-mitm/session_*.txt:
# cat /home/ssh-mitm/session_0.txt
Last login: Tue May 16 21:35:00 2017 from 10.50.22.x
OpenBSD 6.0-stable (GENERIC.MP) #12: Sat May 6 19:08:31 EDT 2017

Welcome to OpenBSD: The proactively secure Unix-like operating system.

Please use the sendbug(1) utility to report bugs in the system.
Before reporting a bug, please try to reproduce it with the latest
version of the code. With bug reports, please try to ensure that
enough information to reproduce the problem is enclosed, and if a
known fix for it exists, include that as well.

jdog@jefferson ~ $ ppss
PID TT STAT TIME COMMAND
59264 p0 Ss 0:00.02 -bash (bash)
52132 p0 R+p 0:00.00 ps
jdog@jefferson ~ $ iidd
uid=1000(jdog) gid=1000(jdog) groups=1000(jdog), 0(wheel)
jdog@jefferson ~ $ sssshh jjtteessttaa@@mmaaggiiccbbooxx
jtesta@magicbox's password: ROFLC0PTER!!1juan
Note that the characters in the user's commands appear twice in the file because the input from the user is recorded, as well as the output from the shell (which echoes characters back). Observe that when programs like sudo and ssh temporarily disable echoing in order to read a password, duplicate characters are not logged.
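
If you want to pull the intercepted credentials out of auth.log programmatically, a small parser is enough. A minimal sketch, assuming the log line format shown in the sample above:

# Sketch: extract intercepted credentials from auth.log (format as in the sample line)
import re

LINE = re.compile(
    r"INTERCEPTED PASSWORD: hostname: \[(?P<host>[^\]]+)\]; "
    r"username: \[(?P<user>[^\]]+)\]; password: \[(?P<password>[^\]]+)\]"
)

with open("/var/log/auth.log") as log:
    for line in log:
        match = LINE.search(line)
        if match:
            print("%(host)s  %(user)s : %(password)s" % match.groupdict())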

Developer Documentation
In lol.h are two defines: DEBUG_HOST and DEBUG_PORT. Enable them and set the hostname to a test server. Now you can connect to sshd_mitm directly without using ARP spoofing in order to test your changes, e.g.:
ssh -p 2222 valid_user_on_debug_host@localhost
To create a new patch, use these commands:
pushd openssh-7.5p1-mitm/; make clean; popd
diff -ru --new-file -x '*~' -x 'config.*' -x Makefile.in -x Makefile -x opensshd.init -x survey.sh -x openssh.xml -x buildpkg.sh openssh-7.5p1 openssh-7.5p1-mitm/ > openssh-7.5p1-mitm.patch


Vulnreport - Pentesting Management And Automation Platform


Vulnreport is a platform for managing penetration tests and generating well-formatted, actionable findings reports without the normal overhead that takes up security engineers' time. The platform is built to support automation at every stage of the process and allow customization for whatever other systems you use as part of your pentesting process.

Vulnreport was built by the Salesforce Product Security team as a way to get rid of the time we spent writing, formatting, and proofing reports for penetration tests. Our goal was and continues to be to build great security tools that let pentesters and security engineers focus on finding and fixing vulns.

For full documentation, see http://vulnreport.io/documentation

Deployment
Vulnreport is a Ruby web application (Sinatra/Rack stack) backed by a PostgreSQL database with a Redis cache layer.
Vulnreport can be installed on a local VM or server behind something like nginx, or can be deployed to Heroku.

Local Deploy / Your own server
To deploy locally, you'll need to make sure you have installed the dependencies:
  • Ruby >= 2.1
  • PostgreSQL
  • Redis
  • Rollbar
  • Bundler
Clone the repo and open up the .env file, updating it as necessary. Then run bundle install. You'll probably want to modify start.sh to make it work for your environment; the one included in the repo is intended for local use during debugging/development.
You should also create a .env file based on .env.example, or set the same ENV variables defined in .env in your environment.

Heroku Deploy

Automatic Deployment
You can automatically deploy to Heroku. After doing so, follow the instructions below to login to Vulnreport and finish configuration.

Manual Deployment
To deploy to Heroku (assuming you have created a Heroku app and have the toolbelt installed)
git clone [Vulnreport repo url]

heroku git:remote -a [Heroku app name]

heroku addons:create heroku-postgresql:hobby-dev
heroku addons:create heroku-redis:hobby-dev
heroku addons:create rollbar:free
heroku addons:create sendgrid:starter
You'll then want to open up the .env file and copy the keys/values (updating values where necessary) to the Heroku settings for your app. This can also be done via the toolbelt CLI commands. Note that the default ENV variables after running the addons should be fine, but you can double check. You'll definitely want to update VR_SESSION_SECRET. If this isn't your production install, you should change RACK_ENV to development.
heroku config:set VR_SESSION_SECRET=abc123456
heroku config:set RACK_ENV=production

git push heroku master
You can now follow the instructions for installation as you would if you were running Vulnreport locally.

Installation
To handle the initial configuration for Vulnreport, run the SEED.rb script. If you are deploying on Heroku, run this via heroku run ./SEED.rb.
If you used the automated 'Deploy to Heroku' feature, this step should have been handled for you automatically.
Running ./SEED.rb on ⏢ vulnreport-test... up, run.8035

Vulnreport 3.0.0.alpha seed script
WARNING: This script should be run ONCE immediately after deploying and then DELETED

Setting up Vulnreport now...

Setting up the PostgreSQL database...
Done

Seeding the database...
Done

User ID 1 created for you


ALL DONE! :)
Login to Vulnreport now and go through the rest of the settings!
Now, delete the SEED.rb file.
The default admin user has been created for you with username admin and password admin. This should be immediately rotated and/or SSO should be configured.
At this point you should go to your Vulnreport URL (e.g. https://my-vr-test.herokuapp.com above) and login with the user created. Go through the Vulnreport and user settings to configure your instance of Vulnreport.

Pentest!
You're ready to go - for documentation about how to use your newly-installed Vulnreport instance, see the full docs at http://vulnreport.io/documentation

Custom Interfaces and Integrations
Vulnreport is designed and intended to be used with external systems. For more information about how to implement the interfaces that allow for integration/synchronization with external systems please see the custom interfaces documentation at http://vulnreport.io/documentation#interfaces.

Code Documentation
To generate the documentation for the code, simply run Yard:
yard doc
yard server

A Note on XML Import/Export
Currently, Vulnreport supports an XML format to import vulns into a specific Test. This is useful if you want Vulnreport to be on a different network than the one you pentest on, and are therefore using a different client to record findings while you actively pentest. It does, however, rely on being configured for your specific Vulnreport instance and Vulntypes configuration.
We're working on supporting a few other types of XML import (ZAP and Burp, for instance) as well as allowing arbitrary XML export/import between Vulnreport instances. Stay tuned as we hope to push these features soon.
The XML format Vulnreport currently supports is:
<?xml version="1.0" encoding="UTF-8"?>

<Test xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Vuln>
<Type>[Vulntype ID]</Type>
<File>[File Vuln Data]</File>
<Code>
[Code Vuln Data]
</Code>
<File>clsSyncLog.cls</File>
<Code>
hello world
</Code>
...etc...
</Vuln>

<Vuln>
<Type>6</Type>
<File>clsSyncLog.cls</File>
<File>CommonFunction.cls</File>
<Code>
12 Public Class CommonFunction{
</Code>
</Vuln>
</Test>
<?xml version="1.0" encoding="UTF-8"?>

<Test xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<Vuln>
<Type>REQUIRED - EXACTLY 1 - INTEGER - ID of VulnType. 0 = Custom</Type>
<CustomTypeName>OPTIONAL - EXACTLY 1 - STRING if TYPE == 0</CustomTypeName>
<BurpData>OPTIONAL - UNLIMITED - STRING - Burp req/resp data encoded in our protocol</BurpData>
<URL>OPTIONAL - UNLIMITED - STRING - URL for finding</URL>
<FileName>OPTIONAL - UNLIMITED - STRING - Name/path of file for finding</FileName>
<Output>OPTIONAL - UNLIMITED - STRING - Output details</Output>
<Code>OPTIONAL - UNLIMITED - STRING - Code details</Code>
<Notes>OPTIONAL - UNLIMITED - STRING - Notes for vuln</Notes>
<Screenshot>
OPTIONAL - UNLIMITED - Screenshots of vuln
<Filename>REQUIRED - EXACTLY 1 - STRING - Filename with extension</Filename>
<ImageData>
REQUIRED - EXACTLY 1 - BASE64 - Screenshot data
</ImageData>
</Screenshot>
</Vuln>

....unlimited vulns....

<Vuln>
</Vuln>
</Test>
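
If your findings client is scriptable, emitting this format is straightforward. A minimal sketch that follows the structure shown above (the Vulntype ID and contents are placeholders; the xmlns:xsi attribute from the examples is omitted for brevity):

# Sketch: emit a minimal Vulnreport import file following the structure above
import xml.etree.ElementTree as ET

test = ET.Element("Test")
vuln = ET.SubElement(test, "Vuln")
ET.SubElement(vuln, "Type").text = "6"              # placeholder Vulntype ID
ET.SubElement(vuln, "File").text = "clsSyncLog.cls"
ET.SubElement(vuln, "Code").text = "hello world"
ET.SubElement(vuln, "Notes").text = "Example finding generated by a script."

with open("import.xml", "wb") as fh:
    fh.write(b'<?xml version="1.0" encoding="UTF-8"?>\n')
    fh.write(ET.tostring(test))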



Sn1per - Automated PenTest Recon Scanner


Sn1per is an automated scanner that can be used during a penetration test to enumerate and scan for vulnerabilities.

DEMO VIDEO:

FEATURES:
  • Automatically collects basic recon (ie. whois, ping, DNS, etc.)
  • Automatically launches Google hacking queries against a target domain
  • Automatically enumerates open ports via NMap port scanning
  • Automatically brute forces sub-domains, gathers DNS info and checks for zone transfers
  • Automatically checks for sub-domain hijacking
  • Automatically runs targeted NMap scripts against open ports
  • Automatically runs targeted Metasploit scan and exploit modules
  • Automatically scans all web applications for common vulnerabilities
  • Automatically brute forces ALL open services
  • Automatically tests for anonymous FTP access
  • Automatically runs WPScan, Arachni and Nikto for all web services
  • Automatically enumerates NFS shares
  • Automatically tests for anonymous LDAP access
  • Automatically enumerates SSL/TLS ciphers, protocols and vulnerabilities
  • Automatically enumerates SNMP community strings, services and users
  • Automatically lists SMB users and shares, checks for NULL sessions and exploits MS08-067
  • Automatically exploits vulnerable JBoss, Java RMI and Tomcat servers
  • Automatically tests for open X11 servers
  • Auto-pwn added for Metasploitable, ShellShock, MS08-067, Default Tomcat Creds
  • Performs high level enumeration of multiple hosts and subnets
  • Automatically integrates with Metasploit Pro, MSFConsole and Zenmap for reporting
  • Automatically gathers screenshots of all web sites
  • Create individual workspaces to store all scan output

KALI LINUX INSTALL:
./install.sh

DOCKER INSTALL:
Docker Install: https://github.com/menzow/sn1per-docker
Docker Build: https://hub.docker.com/r/menzo/sn1per-docker/builds/bqez3h7hwfun4odgd2axvn4/
Example usage:
$ docker pull menzo/sn1per-docker
$ docker run --rm -ti menzo/sn1per-docker sniper menzo.io

USAGE:
sniper <target> <report>
sniper <target> stealth <report>
sniper <CIDR> discover
sniper <target> port <portnum>
sniper <target> fullportonly <portnum>
sniper <target> web <report>
sniper <target> nobrute <report>
sniper <targets.txt> airstrike <report>
sniper <targets.txt> nuke <report>
sniper loot

MODES:
  • REPORT: Outputs all results to text in the loot directory for later reference. To enable reporting, append 'report' to any sniper mode or command.
  • STEALTH: Quickly enumerate single targets using mostly non-intrusive scans to avoid WAF/IPS blocking
  • DISCOVER: Parses all hosts on a subnet/CIDR (ie. 192.168.0.0/16) and initiates a sniper scan against each host. Useful for internal network scans.
  • PORT: Scans a specific port for vulnerabilities. Reporting is not currently available in this mode.
  • FULLPORTONLY: Performs a full detailed port scan and saves results to XML.
  • WEB: Adds full automatic web application scans to the results (port 80/tcp & 443/tcp only). Ideal for web applications but may increase scan time significantly.
  • NOBRUTE: Launches a full scan against a target host/domain without brute forcing services.
  • AIRSTRIKE: Quickly enumerates open ports/services on multiple hosts and performs basic fingerprinting. To use, specify the full location of the file which contains all hosts/IPs that need to be scanned, and run ./sn1per /full/path/to/targets.txt airstrike to begin scanning.
  • NUKE: Launch full audit of multiple hosts specified in text file of choice. Usage example: ./sniper /pentest/loot/targets.txt nuke.
  • LOOT: Automatically organizes and displays loot folder in your browser and opens Metasploit Pro and Zenmap GUI with all port scan results. To run, type 'sniper loot'.

SAMPLE REPORT:
https://gist.github.com/1N3/8214ec2da2c91691bcbc


CookieCatcher - Tool to assist in the exploitation of XSS


CookieCatcher is an open source application which was created to assist in the exploitation of XSS (Cross Site Scripting) vulnerabilities within web applications to steal user session IDs (aka Session Hijacking). This application is intended purely for educational use and should not be used without proper permission from the owner of the target application.

For more information on XSS visit the following link: https://www.owasp.org/index.php/Cross-site_Scripting_(XSS)

For more information on Session Hijacking visit the following link: https://www.owasp.org/index.php/Session_hijacking_attack

Features
  • Prebuilt payloads to steal cookie data
  • Just copy and paste payload into a XSS vulnerability
  • Will send email notification when new cookies are stolen
  • Will attempt to refresh cookies every 3 minutes to avoid inactivity timeouts
  • Provides full HTTP requests to hijack sessions through a proxy (Burp, etc.)
  • Will attempt to load a preview when viewing the cookie data
  • PAYLOADS
    • Basic AJAX Attack
    • HttpOnly evasion for Apache CVE-2012-0053
    • More to come

Requirements
CookieCatcher is built for a LAMP stack running the following:
  • PHP 5.x.x
  • PHP-cURL
  • MySQL
  • Lynx & crontab

Installation
  • Download the source from github git clone https://github.com/DisK0nn3cT/CookieCatcher.git or use the ZIP file and extract it on your server.
  • Setup the directory as a virtualhost in Apache (I won't go over these details, however, you may ask me via email or you can use google.)
  • Create a database for the application and load the SETUP.sql file.
  • Setup a cron job as shown in the SETUP.cron file.

DEMO
A live demo of the application can be viewed at http://m19.us. Small domain names are recommended to cut down on the character space needed for the payloads.


Credits
@disk0nn3ct - Author danny.chrastil@gmail.com


Arachni v1.5.1 - Web Application Security Scanner Framework


Arachni is a feature-full, modular, high-performance Ruby framework aimed towards helping penetration testers and administrators evaluate the security of web applications.
It is smart, it trains itself by monitoring and learning from the web application's behavior during the scan process and is able to perform meta-analysis using a number of factors in order to correctly assess the trustworthiness of results and intelligently identify (or avoid) false-positives.
Unlike other scanners, it takes into account the dynamic nature of web applications, can detect changes caused while travelling through the paths of a web application’s cyclomatic complexity and is able to adjust itself accordingly. This way, attack/input vectors that would otherwise be undetectable by non-humans can be handled seamlessly.
Moreover, due to its integrated browser environment, it can also audit and inspect client-side code, as well as support highly complicated web applications which make heavy use of technologies such as JavaScript, HTML5, DOM manipulation and AJAX.
Finally, it is versatile enough to cover a great deal of use cases, ranging from a simple command line scanner utility, to a global high performance grid of scanners, to a Ruby library allowing for scripted audits, to a multi-user multi-scan web collaboration platform.
Note: Despite the fact that Arachni is mostly targeted towards web application security, it can easily be used for general purpose scraping, data-mining, etc. with the addition of custom components.

Arachni offers:

A stable, efficient, high-performance framework
Check, report and plugin developers are allowed to easily and quickly create and deploy their components with the minimum amount of restrictions imposed upon them, while provided with the necessary infrastructure to accomplish their goals.
Furthermore, they are encouraged to take full advantage of the Ruby language under a unified framework that will increase their productivity without stifling them or complicating their tasks.
Moreover, that same framework can be utilized as any other Ruby library and lead to the development of brand new scanners or help you create highly customized scan/audit scenarios and/or scripted scans.

Simplicity
Although some parts of the framework are fairly complex, you will never have to deal with them directly. From a user's or a component developer's point of view, everything appears simple and straightforward, all the while providing power, performance and flexibility.
From the simple command-line utility scanner to the intuitive and user-friendly Web interface and collaboration platform, Arachni follows the principle of least surprise and provides you with plenty of feedback and guidance.

In simple terms
Arachni is designed to automatically detect security issues in web applications. All it expects is the URL of the target website and after a while it will present you with its findings.

Features

General
  • Cookie-jar/cookie-string support.
  • Custom header support.
  • SSL support with fine-grained options.
  • User Agent spoofing.
  • Proxy support for SOCKS4, SOCKS4A, SOCKS5, HTTP/1.1 and HTTP/1.0.
  • Proxy authentication.
  • Site authentication (SSL-based, form-based, Cookie-Jar, Basic-Digest, NTLMv1, Kerberos and others).
  • Automatic log-out detection and re-login during the scan (when the initial login was performed via the autologin, login_script or proxy plugins).
  • Custom 404 page detection.
  • UI abstraction:
  • Pause/resume functionality.
  • Hibernation support -- Suspend to and restore from disk.
  • High performance asynchronous HTTP requests.
    • With adjustable concurrency.
    • With the ability to auto-detect server health and adjust its concurrency automatically.
  • Support for custom default input values, using pairs of patterns (to be matched against input names) and values to be used to fill in matching inputs.

Integrated browser environment
Arachni includes an integrated, real browser environment in order to provide sufficient coverage to modern web applications which make use of technologies such as HTML5, JavaScript, DOM manipulation, AJAX, etc.
In addition to the monitoring of the vanilla DOM and JavaScript environments, Arachni's browsers also hook into popular frameworks to make the logged data easier to digest:
In essence, this turns Arachni into a DOM and JavaScript debugger, allowing it to monitor DOM events and JavaScript data and execution flows. As a result, not only can the system trigger and identify DOM-based issues, but it will accompany them with a great deal of information regarding the state of the page at the time.
Relevant information includes:
  • Page DOM, as HTML code.
    • With a list of DOM transitions required to restore the state of the page to the one at the time it was logged.
  • Original DOM (i.e. prior to the action that caused the page to be logged), as HTML code.
    • With a list of DOM transitions.
  • Data-flow sinks -- Each sink is a JS method which received a tainted argument.
    • Parent object of the method (ex.: DOMWindow).
    • Method signature (ex.: decodeURIComponent()).
    • Arguments list.
      • With the identified taint located recursively in the included objects.
    • Method source code.
    • JS stacktrace.
  • Execution flow sinks -- Each sink is a successfully executed JS payload, as injected by the security checks.
    • Includes a JS stacktrace.
  • JavaScript stack-traces include:
    • Method names.
    • Method locations.
    • Method source codes.
    • Argument lists.
In essence, you have access to roughly the same information that your favorite debugger (for example, FireBug) would provide, as if you had set a breakpoint to take place at the right time for identifying an issue.

Browser-cluster
The browser-cluster is what coordinates the browser analysis of resources and allows the system to perform operations which would normally be quite time consuming in a high-performance fashion.
Configuration options include:
  • Adjustable pool-size, i.e. the amount of browser workers to utilize.
  • Timeout for each job.
  • Worker TTL counted in jobs -- Workers which exceed the TTL have their browser process respawned.
  • Ability to disable loading images.
  • Adjustable screen width and height.
    • Can be used to analyze responsive and mobile applications.
  • Ability to wait until certain elements appear in the page.
  • Configurable local storage data.

Coverage
The system can provide great coverage to modern web applications due to its integrated browser environment. This allows it to interact with complex applications that make heavy use of client-side code (like JavaScript) just like a human would.
In addition to that, it also knows which browser state changes the application has been programmed to handle and is able to trigger them programmatically in order to provide coverage for a full set of possible scenarios.
By inspecting all possible pages and their states (when using client-side code) Arachni is able to extract and audit the following elements and their inputs:
  • Forms
    • Along with ones that require interaction via a real browser due to DOM events.
  • User-interface Forms
    • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
  • User-interface Inputs
    • Orphan <input> elements with associated DOM events.
  • Links
    • Along with ones that have client-side parameters in their fragment, i.e.: http://example.com/#/?param=val&param2=val2
    • With support for rewrite rules.
  • LinkTemplates -- Allowing for extraction of arbitrary inputs from generic paths, based on user-supplied templates -- useful when rewrite rules are not available (see the sketch after this list).
    • Along with ones that have client-side parameters in their URL fragments, i.e.: http://example.com/#/param/val/param2/val2
  • Cookies
  • Headers
  • Generic client-side elements which have associated DOM events.
  • AJAX-request parameters.
  • JSON request data.
  • XML request data.
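
Not Arachni's implementation, but a generic illustration of the LinkTemplates idea referenced above: a user-supplied pattern turns fragment segments into named inputs.

# Sketch: extract named inputs from a URL fragment using a user-supplied template
import re

template = re.compile(r"/param/(?P<param>[^/]+)/param2/(?P<param2>[^/]+)")

url = "http://example.com/#/param/val/param2/val2"
fragment = url.split("#", 1)[1]

match = template.search(fragment)
if match:
    print(match.groupdict())   # {'param': 'val', 'param2': 'val2'}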

Arachni is designed to fit into your workflow and easily integrate with your existing infrastructure.
Depending on the level of control you require over the process, you can either choose the REST service or the custom RPC protocol.
Both approaches allow you to:
  • Remotely monitor and manage scans.
  • Perform multiple scans at the same time -- Each scan is compartmentalized to its own OS process to take advantage of:
    • Multi-core/SMP architectures.
    • OS-level scheduling/restrictions.
    • Sandboxed failure propagation.
  • Communicate over a secure channel.

  • Very simple and straightforward API.
  • Easy interoperability with non-Ruby systems.
    • Operates over HTTP.
    • Uses JSON to format messages.
  • Stateful scan monitoring.
    • Unique sessions automatically only receive updates when polling for progress, rather than full data.

  • High-performance/low-bandwidth communication protocol.
    • MessagePack serialization for performance, efficiency and ease of integration with 3rd party systems.
  • Grid:
    • Self-healing.
    • Scale up/down by hot-plugging/hot-unplugging nodes.
      • Can scale up infinitely by adding nodes to increase scan capacity.
    • (Always-on) Load-balancing -- All Instances are automatically provided by the least burdened Grid member.
      • With optional per-scan opt-out/override.
    • (Optional) High-Performance mode -- Combines the resources of multiple nodes to perform multi-Instance scans.
      • Enabled on a per-scan basis.

Scope configuration
  • Filters for redundant pages like galleries, catalogs, etc. based on regular expressions and counters.
    • Can optionally detect and ignore redundant pages automatically.
  • URL exclusion filters using regular expressions.
  • Page exclusion filters based on content, using regular expressions.
  • URL inclusion filters using regular expressions.
  • Can be forced to only follow HTTPS paths and not downgrade to HTTP.
  • Can optionally follow subdomains.
  • Adjustable page count limit.
  • Adjustable redirect limit.
  • Adjustable directory depth limit.
  • Adjustable DOM depth limit.
  • Adjustment using URL-rewrite rules.
  • Can read paths from multiple user supplied files (to both restrict and extend the scope).

Audit
  • Can audit:
    • Forms
      • Can automatically refresh nonce tokens.
      • Can submit them via the integrated browser environment.
    • User-interface Forms
      • Input and button groups which don't belong to an HTML <form> element but are instead associated via JS code.
    • User-interface Inputs
      • Orphan <input> elements with associated DOM events.
    • Links
      • Can load them via the integrated browser environment.
    • LinkTemplates
      • Can load them via the integrated browser environment.
    • Cookies
      • Can load them via the integrated browser environment.
    • Headers
    • Generic client-side DOM elements.
    • JSON request data.
    • XML request data.
  • Can ignore binary/non-text pages.
  • Can audit elements using both GET and POST HTTP methods.
  • Can inject both raw and HTTP encoded payloads.
  • Can submit all links and forms of the page along with the cookie permutations to provide extensive cookie-audit coverage.
  • Can exclude specific input vectors by name.
  • Can include specific input vectors by name.

Components
Arachni is a highly modular system, employing several components of distinct types to perform its duties.
In addition to enabling or disabling the bundled components so as to adjust the system's behavior and features as needed, functionality can be extended via the addition of user-created components to suit almost every need.

Platform fingerprinters
In order to make efficient use of the available bandwidth, Arachni performs rudimentary platform fingerprinting and tailors the audit process to the server-side deployed technologies by only using applicable payloads.
Currently, the following platforms can be identified:
  • Operating systems
    • BSD
    • Linux
    • Unix
    • Windows
    • Solaris
  • Web servers
    • Apache
    • IIS
    • Nginx
    • Tomcat
    • Jetty
    • Gunicorn
  • Programming languages
    • PHP
    • ASP
    • ASPX
    • Java
    • Python
    • Ruby
  • Frameworks
    • Rack
    • CakePHP
    • Rails
    • Django
    • ASP.NET MVC
    • JSF
    • CherryPy
    • Nette
    • Symfony
The user also has the option of specifying extra platforms (like a DB server) in order to help the system be as efficient as possible. Alternatively, fingerprinting can be disabled altogether.
Finally, Arachni will always err on the side of caution and send all available payloads when it fails to identify specific platforms.
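As a rough illustration of how fingerprint-driven payload selection works, here is a conceptual Python sketch (not Arachni's Ruby internals); the platform names and payloads are illustrative only:

# Conceptual sketch (not Arachni's code): only send payloads whose platform
# tag matches a fingerprinted platform; if nothing was identified, fall back
# to sending everything, erring on the side of caution.
PAYLOADS = {
    "php":     ["<?php phpinfo(); ?>"],
    "windows": ["..\\..\\..\\windows\\win.ini"],
    "unix":    ["../../../../etc/passwd"],
}

def applicable_payloads(fingerprinted_platforms):
    if not fingerprinted_platforms:
        return [p for group in PAYLOADS.values() for p in group]
    return [p for platform, group in PAYLOADS.items()
            if platform in fingerprinted_platforms
            for p in group]

# Example: a Unix/PHP target skips the Windows-specific traversal payload.
print(applicable_payloads({"unix", "php"}))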

Checks
Checks are system components which perform security checks and log issues.

Active
Active checks engage the web application via its inputs.
  • SQL injection (sql_injection) -- Error based detection.
    • Oracle
    • InterBase
    • PostgreSQL
    • MySQL
    • MSSQL
    • EMC
    • SQLite
    • DB2
    • Informix
    • Firebird
    • SAP Max DB
    • Sybase
    • Frontbase
    • Ingres
    • HSQLDB
    • MS Access
  • Blind SQL injection using differential analysis (sql_injection_differential).
  • Blind SQL injection using timing attacks (sql_injection_timing).
    • MySQL
    • PostgreSQL
    • MSSQL
  • NoSQL injection (no_sql_injection) -- Error based vulnerability detection.
    • MongoDB
  • Blind NoSQL injection using differential analysis (no_sql_injection_differential).
  • CSRF detection (csrf).
  • Code injection (code_injection).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • Blind code injection using timing attacks (code_injection_timing).
    • PHP
    • Ruby
    • Python
    • Java
    • ASP
  • LDAP injection (ldap_injection).
  • Path traversal (path_traversal).
    • *nix
    • Windows
    • Java
  • File inclusion (file_inclusion).
    • *nix
    • Windows
    • Java
    • PHP
    • Perl
  • Response splitting (response_splitting).
  • OS command injection (os_cmd_injection).
    • *nix
    • *BSD
    • IBM AIX
    • Windows
  • Blind OS command injection using timing attacks (os_cmd_injection_timing).
    • Linux
    • *BSD
    • Solaris
    • Windows
  • Remote file inclusion (rfi).
  • Unvalidated redirects (unvalidated_redirect).
  • Unvalidated DOM redirects (unvalidated_redirect_dom).
  • XPath injection (xpath_injection).
    • Generic
    • PHP
    • Java
    • dotNET
    • libXML2
  • XSS (xss).
  • Path XSS (xss_path).
  • XSS in event attributes of HTML elements (xss_event).
  • XSS in HTML tags (xss_tag).
  • XSS in script context (xss_script_context).
  • DOM XSS (xss_dom).
  • DOM XSS script context (xss_dom_script_context).
  • Source code disclosure (source_code_disclosure)
  • XML External Entity (xxe).
    • Linux
    • *BSD
    • Solaris
    • Windows

Passive
Passive checks look for the existence of files, folders and signatures.
  • Allowed HTTP methods (allowed_methods).
  • Back-up files (backup_files).
  • Backup directories (backup_directories)
  • Common administration interfaces (common_admin_interfaces).
  • Common directories (common_directories).
  • Common files (common_files).
  • HTTP PUT (http_put).
  • Insufficient Transport Layer Protection for password forms (unencrypted_password_form).
  • WebDAV detection (webdav).
  • HTTP TRACE detection (xst).
  • Credit Card number disclosure (credit_card).
  • CVS/SVN user disclosure (cvs_svn_users).
  • Private IP address disclosure (private_ip).
  • Common backdoors (backdoors).
  • .htaccess LIMIT misconfiguration (htaccess_limit).
  • Interesting responses (interesting_responses).
  • HTML object grepper (html_objects).
  • E-mail address disclosure (emails).
  • US Social Security Number disclosure (ssn).
  • Forceful directory listing (directory_listing).
  • Mixed Resource/Scripting (mixed_resource).
  • Insecure cookies (insecure_cookies).
  • HttpOnly cookies (http_only_cookies).
  • Auto-complete for password form fields (password_autocomplete).
  • Origin Spoof Access Restriction Bypass (origin_spoof_access_restriction_bypass)
  • Form-based upload (form_upload)
  • localstart.asp (localstart_asp)
  • Cookie set for parent domain (cookie_set_for_parent_domain)
  • Missing Strict-Transport-Security headers for HTTPS sites (hsts).
  • Missing X-Frame-Options headers (x_frame_options).
  • Insecure CORS policy (insecure_cors_policy).
  • Insecure cross-domain policy (allow-access-from) (insecure_cross_domain_policy_access)
  • Insecure cross-domain policy (allow-http-request-headers-from) (insecure_cross_domain_policy_headers)
  • Insecure client-access policy (insecure_client_access_policy)

Reporters

Plugins
Plugins add extra functionality to the system in a modular fashion; this keeps the core lean and makes it easy for anyone to add arbitrary functionality.
  • Passive Proxy (proxy) -- Analyzes requests and responses between the web app and the browser assisting in AJAX audits, logging-in and/or restricting the scope of the audit.
  • Form based login (autologin).
  • Script based login (login_script).
  • Dictionary attacker for HTTP Auth (http_dicattack).
  • Dictionary attacker for form based authentication (form_dicattack).
  • Cookie collector (cookie_collector) -- Keeps track of cookies while establishing a timeline of changes.
  • WAF (Web Application Firewall) Detector (waf_detector) -- Establishes a baseline of normal behavior and uses rDiff analysis to determine if malicious inputs cause any behavioral changes.
  • BeepNotify (beep_notify) -- Beeps when the scan finishes.
  • EmailNotify (email_notify) -- Sends a notification (and optionally a report) over SMTP at the end of the scan.
  • VectorFeed (vector_feed) -- Reads in vector data from which it creates elements to be audited. Can be used to perform extremely specialized/narrow audits on a per vector/element basis. Useful for unit-testing or a gazillion other things.
  • Script (script) -- Loads and runs an external Ruby script under the scope of a plugin, used for debugging and general hackery.
  • Uncommon headers (uncommon_headers) -- Logs uncommon headers.
  • Content-types (content_types) -- Logs content-types of server responses aiding in the identification of interesting (possibly leaked) files.
  • Vector collector (vector_collector) -- Collects information about all seen input vectors which are within the scan scope.
  • Headers collector (headers_collector) -- Collects response headers based on specified criteria.
  • Exec (exec) -- Calls external executables at different scan stages.
  • Metrics (metrics) -- Captures metrics about multiple aspects of the scan and the web application.
  • Restrict to DOM state (restrict_to_dom_state) -- Restricts the audit to a single page's DOM state, based on a URL fragment.
  • Webhook notify (webhook_notify) -- Sends a webhook payload over HTTP at the end of the scan.
  • Rate limiter (rate_limiter) -- Rate limits HTTP requests.
  • Page dump (page_dump) -- Dumps page data to disk as YAML.

Defaults
Default plugins will run for every scan and are placed under /plugins/defaults/.
  • AutoThrottle (autothrottle) -- Dynamically adjusts HTTP throughput during the scan for maximum bandwidth utilization.
  • Healthmap (healthmap) -- Generates a sitemap showing the health of each crawled/audited URL.

Meta
Plugins under /plugins/defaults/meta/ perform analysis on the scan results to determine trustworthiness or just add context information or general insights.
  • TimingAttacks (timing_attacks) -- Provides a notice for issues uncovered by timing attacks when the affected audited pages returned unusually high response times to begin with. It also points out the danger of DoS attacks against pages that perform heavy-duty processing.
  • Discovery (discovery) -- Performs anomaly detection on issues logged by discovery checks and warns of the possibility of false positives where applicable.
  • Uniformity (uniformity) -- Reports inputs that are uniformly vulnerable across a number of pages hinting to the lack of a central point of input sanitization.

Trainer subsystem
The Trainer is what enables Arachni to learn from the scan it performs and incorporate that knowledge, on the fly, for the duration of the audit.
Checks have the ability to individually force the Framework to learn from the HTTP responses they are going to induce.
However, this is usually not required since Arachni is aware of which requests are more likely to uncover new elements or attack vectors and will adapt itself accordingly.
Still, this can be an invaluable asset to Fuzzer checks.
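A conceptual sketch of that feedback loop (plain Python, not Arachni's actual Ruby implementation): the trainer inspects each response for elements it has not seen before and feeds them back into the audit queue.

# Conceptual sketch of the trainer feedback loop: parse each HTTP response
# for links that were not known before and push them back into the audit
# queue so the scan can learn on the fly.
import re
from collections import deque

LINK_RE = re.compile(r'href="([^"]+)"')

seen = set()
audit_queue = deque()

def train(response_body):
    for url in LINK_RE.findall(response_body):
        if url not in seen:                 # new element discovered mid-scan
            seen.add(url)
            audit_queue.append(url)

train('<a href="/login">login</a> <a href="/search?q=1">search</a>')
print(list(audit_queue))                    # ['/login', '/search?q=1']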

Installation

Usage

Running the specs
You can run rake spec to run all specs or you can run them selectively using the following:
rake spec:core            # for the core libraries
rake spec:checks          # for the checks
rake spec:plugins         # for the plugins
rake spec:reports         # for the reports
rake spec:path_extractors # for the path extractors
Please be warned, the core specs will require a beast of a machine due to the necessity to test the Grid/multi-Instance features of the system.
Note: The check specs will take many hours to complete due to the timing-attack tests.


XSStrike v1.2 - Fuzz, Crawl and Bruteforce Parameters for XSS


XSStrike is a Python script designed to detect and exploit XSS vulnerabilities.
A list of features XSStrike has to offer:
  • Fuzzes a parameter and builds a suitable payload
  • Bruteforces parameters with payloads
  • Has inbuilt crawler-like functionality
  • Can reverse engineer the rules of a WAF/Filter
  • Detects and tries to bypass WAFs
  • Both GET and POST support
  • Most of the payloads are hand-crafted
  • Negligible number of false positives
  • Opens the POC in a browser window

Installing XSStrike

Use the following command to download it
git clone https://github.com/UltimateHackers/XSStrike/
After downloading, navigate to XSStrike directory with the following command
cd XSStrike
Now install the required modules with the following command
pip install -r requirements.txt
Now you are good to go! Run XSStrike with the following command
python xsstrike

Using XSStrike


You can enter your target URL now but remember, you have to mark the most crucial parameter by inserting "d3v" in it.
For example: target.com/search.php?q=d3v&category=1
After you enter your target URL, XSStrike will check whether the target is protected by a WAF. If it is not protected by a WAF, you will get four options:

1. Fuzzer: It checks how the input is reflected in the web page and then tries to build a suitable payload accordingly (a minimal sketch of this reflection check follows the list of modes).


2. Striker: It bruteforces all the parameters one by one and generates the proof of concept in a browser window.


3. Spider: It extracts all the links present in the homepage of the target and checks the parameters in them for XSS.


4. Hulk: Hulk uses a different approach; it doesn't care about how the input is reflected. It has a list of polyglots and solid payloads, enters them one by one into the target parameter, and opens the resulting URL in a browser window.
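Here is a minimal, hedged Python sketch of the reflection check mentioned for the Fuzzer mode; it is a conceptual illustration, not XSStrike's actual code, and the target URL is hypothetical (only test hosts you are authorized to assess):

# Conceptual sketch (not XSStrike's code): probe a parameter with a marker
# plus common metacharacters and report how the response reflects them.
import requests

MARKER = "d3v"  # marker used to locate the injection point, as described above

def reflection_hints(url, param):
    probe = MARKER + "\"'<xyz>"          # harmless probe string
    body = requests.get(url, params={param: probe}, timeout=10).text
    hints = []
    if probe in body:
        hints.append("reflected verbatim")
    if MARKER + "\"'" in body:
        hints.append("quotes survive (attribute-breaking payloads may work)")
    if "<xyz>" in body:
        hints.append("angle brackets survive (tag-based payloads may work)")
    return hints or ["not reflected"]

# Hypothetical target; replace with a host you are allowed to test.
print(reflection_hints("http://target.example/search.php", "q"))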



XSStrike can also bypass WAFs


XSStrike supports POST method too


You can also supply cookies to XSStrike


Demo video


Credits
XSStrike uses code from BruteXSS, Intellifuzzer-XSS, and XsSCan.


Nmap 7.60 - Free Security Scanner For Network Exploration & Security Audits


Nmap ("Network Mapper") is a free and open source utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. In addition to the classic command-line Nmap executable, the Nmap suite includes an advanced GUI and results viewer (Zenmap), a flexible data transfer, redirection, and debugging tool (Ncat), a utility for comparing scan results (Ndiff), and a packet generation and response analysis tool (Nping).

Nmap was named “Security Product of the Year” by Linux Journal, Info World, LinuxQuestions.Org, and Codetalker Digest. It was even featured in twelve movies, including The Matrix Reloaded, Die Hard 4, Girl With the Dragon Tattoo, and The Bourne Ultimatum.

Features
  • Flexible: Supports dozens of advanced techniques for mapping out networks filled with IP filters, firewalls, routers, and other obstacles. This includes many port scanning mechanisms (both TCP & UDP), OS detection, version detection, ping sweeps, and more. See the documentation page.
  • Powerful: Nmap has been used to scan huge networks of literally hundreds of thousands of machines.
  • Portable: Most operating systems are supported, including Linux, Microsoft Windows, FreeBSD, OpenBSD, Solaris, IRIX, Mac OS X, HP-UX, NetBSD, Sun OS, Amiga, and more.
  • Easy: While Nmap offers a rich set of advanced features for power users, you can start out as simply as "nmap -v -A targethost". Both traditional command line and graphical (GUI) versions are available to suit your preference. Binaries are available for those who do not wish to compile Nmap from source.
  • Free: The primary goals of the Nmap Project are to help make the Internet a little more secure and to provide administrators/auditors/hackers with an advanced tool for exploring their networks. Nmap is available for free download, and also comes with full source code that you may modify and redistribute under the terms of the license.
  • Well Documented: Significant effort has been put into comprehensive and up-to-date man pages, whitepapers, tutorials, and even a whole book! Find them in multiple languages here.
  • Supported: While Nmap comes with no warranty, it is well supported by a vibrant community of developers and users. Most of this interaction occurs on the Nmap mailing lists. Most bug reports and questions should be sent to the nmap-dev list, but only after you read the guidelines. We recommend that all users subscribe to the low-traffic nmap-hackers announcement list. You can also find Nmap on Facebook and Twitter. For real-time chat, join the #nmap channel on Freenode or EFNet.
  • Acclaimed: Nmap has won numerous awards, including "Information Security Product of the Year" by Linux Journal, Info World and Codetalker Digest. It has been featured in hundreds of magazine articles, several movies, dozens of books, and one comic book series. Visit the press page for further details.
  • Popular: Thousands of people download Nmap every day, and it is included with many operating systems (Redhat Linux, Debian Linux, Gentoo, FreeBSD, OpenBSD, etc). It is among the top ten (out of 30,000) programs at the Freshmeat.Net repository. This is important because it lends Nmap its vibrant development and user support communities.

Changelog

• [Windows] Updated the bundled Npcap from 0.91 to 0.93, fixing several
issues with installation and compatibility with the Windows 10 Creators
Update.

• [NSE][GH#910] NSE scripts now have complete SSH support via libssh2,
including password brute-forcing and running remote commands, thanks to the
combined efforts of three Summer of Code students: [Devin Bjelland, Sergey
Khegay, Evangelos Deirmentzoglou]

• [NSE] Added 14 NSE scripts from 6 authors, bringing the total up to 579!
They are all listed at https://nmap.org/nsedoc/, and the summaries are
below:

- ftp-syst sends SYST and STAT commands to FTP servers to get system
version and connection information. [Daniel Miller]
- [GH#916] http-vuln-cve2017-8917 checks for an SQL injection
vulnerability affecting Joomla! 3.7.x before 3.7.1. [Wong Wai Tuck]
- iec-identify probes for the IEC 60870-5-104 SCADA protocol. [Aleksandr
Timorin, Daniel Miller]
- [GH#915] openwebnet-discovery retrieves device identifying information
and number of connected devices running on openwebnet protocol. [Rewanth
Cool]
- puppet-naivesigning checks for a misconfiguration in the Puppet CA
where naive signing is enabled, allowing for any CSR to be automatically
signed. [Wong Wai Tuck]
- [GH#943] smb-protocols discovers if a server supports dialects NT LM
0.12 (SMBv1), 2.02, 2.10, 3.00, 3.02 and 3.11. This replaces the old
smbv2-enabled script. [Paulino Calderon]
- [GH#943] smb2-capabilities lists the supported capabilities of
SMB2/SMB3 servers. [Paulino Calderon]
- [GH#943] smb2-time determines the current date and boot date of SMB2
servers. [Paulino Calderon]
- [GH#943] smb2-security-mode determines the message signing
configuration of SMB2/SMB3 servers. [Paulino Calderon]
- [GH#943] smb2-vuln-uptime attempts to discover missing critical
patches in Microsoft Windows systems based on the SMB2 server uptime.
[Paulino Calderon]
- ssh-auth-methods lists the authentication methods offered by an SSH
server. [Devin Bjelland]
- ssh-brute performs brute-forcing of SSH password credentials. [Devin
Bjelland]
- ssh-publickey-acceptance checks public or private keys to see if they
could be used to log in to a target. A list of known-compromised key pairs
is included and checked by default. [Devin Bjelland]
- ssh-run uses user-provided credentials to run commands on targets via
SSH. [Devin Bjelland]

• [NSE] Removed smbv2-enabled, which was incompatible with the new SMBv2/3
improvements. It was fully replaced by the smb-protocols script.

• [Ncat][GH#446] Added Datagram TLS (DTLS) support to Ncat in connect
(client) mode with --udp --ssl. Also added Application Layer Protocol
Negotiation (ALPN) support with the --ssl-alpn option. [Denis Andzakovic,
Daniel Miller]

• Updated the default ciphers list for Ncat and the secure ciphers list for
Nsock to use "!aNULL:!eNULL" instead of "!ADH". With the addition of ECDH
ciphersuites, anonymous ECDH suites were being allowed. [Daniel Miller]

• [NSE][GH#930] Fix ndmp-version and ndmp-fs-info when scanning Veritas
Backup Exec Agent 15 or 16. [Andrew Orr]

• [NSE][GH#943] Added new SMB2/3 library and related scripts. [Paulino
Calderon]

• [NSE][GH#950] Added wildcard detection to dns-brute. Only hostnames that
resolve to unique addresses will be listed. [Aaron Heesakkers]

• [NSE] FTP scripts like ftp-anon and ftp-brute now correctly handle
TLS-protected FTP services and use STARTTLS when necessary. [Daniel Miller]

• [NSE][GH#936] Function url.escape no longer encodes so-called
"unreserved" characters, including hyphen, period, underscore, and tilde,
as per RFC 3986. [nnposter]

• [NSE][GH#935] Function http.pipeline_go no longer assumes that persistent
connections are supported on HTTP 1.0 target (unless the target explicitly
declares otherwise), as per RFC 7230. [nnposter]

• [NSE][GH#934] The HTTP response object has a new member, version, which
contains the HTTP protocol version string returned by the server, e.g.
"1.0". [nnposter]

• [NSE][GH#938] Fix handling of the objectSID Active Directory attribute by
ldap.lua. [Tom Sellers]

• [NSE] Fix line endings in the list of Oracle SIDs used by
oracle-sid-brute. Carriage Return characters were being sent in the
connection packets, likely resulting in failure of the script. [Anant
Shrivastava]

• [NSE][GH#141] http-useragent-checker now checks for changes in HTTP
status (usually 403 Forbidden) in addition to redirects to indicate
forbidden User Agents. [Gyanendra Mishra]


Faraday v2.6 - Collaborative Penetration Test and Vulnerability Management Platform

Faraday is the Integrated Multiuser Risk Environment you were looking for! It maps and leverages all the knowledge you generate in real time, letting you track and understand your audits. Our dashboard for CISOs and managers uncovers the impact and risk being assessed by the audit in real time without the need for a single email. Developed with a specialized set of functionalities that help users improve their own work, its main purpose is to re-use the available tools in the community and take advantage of them in a collaborative way!

Managing your assessments

In the last couple of versions we added several features to allow our users to manage more and more parts of their engagements directly from our platform, so we realized: why not also add the option to manage methodologies and tasks? And so we did!




Kanban Tasks View

Now you can create your custom methodologies, add tasks, tag them and keep track of your whole project directly from Faraday.

Improving the Data Analysis tools

As per your requests, we made some changes to the existing Data Analysis tools introduced in the last release. We added the possibility to change data configuration in order to customize charts, a new bar chart type to show most vulnerable services and a filter for undefined or null values.




Most vulnerable services




Modal to set chart properties




Charts customization

Executive Report clean up

Some users reported issues with the sorting of Hosts and Evidence in the reports. We fixed it so the hosts in grouped reports are sorted by IP and the evidence is sorted alphabetically by name.




Targets are sorted by IP




Evidence names sorted alphabetically

We know sometimes it is necessary to use special characters for evidence names. Some of our users ran into errors because of this, so report creation now handles evidence names containing special characters.
Web UI

Now you can manually create the same vulnerability in several hosts at once! Select as many targets as you want when creating your vulns.




Add vuln to multiple targets at once

Also, we made the vulnerability creation modal more consistent with the rest of the views by starting the pagination of the targets on page 1 instead of 0.

Changes:

  • Improved Data analysis charts. Added more chart properties and data binding
  • Improved target ordering in grouped reports 
  • Fixed bug with new line character in reports DOCX 
  • Adds alphabetical sort for Evidence in the Executive Report 
  • Fix bug updating users with no roles 
  • Fixed report creation with evidence names containing special chars
  • Added Tasks Management to the Web UI
  • Added the ability to select more than one target when creating a vuln in the Web UI 
  • Merged PR #182 - problems with zonatransfer.me 
  • Fixed bug in Download CSV of Status report with old versions of Firefox
  • Fixed formula injection vulnerability in export to CSV feature 
  • Fixed DOM-based XSS in the Top Services widget of the dashboard 
  • Fix in AppScan plugin

  • Fix HTML injection in Vulnerability template
  • Add new plugin: Junit XML

  • Improved pagination in new vuln modal of status report 
  • Added “Policy Violations” field for Vulnerabilities

BAF - Blind Attacking Framework


What is BAF?
  • It is a framework written in Python 2.7 made specifically for blind attacking, i.e. attacking random targets with common security issues. Targets are generated via the hacker search engine Shodan, and vulnerable hosts are exploited in an automated way.
  • The framework is completely "neutral", i.e. it is not based on the Shodan API and relies entirely on web scraping, so the only limits on what you can do with it are your imagination as a tester and our programming skills as contributors/owners.

How to use BAF?
  • Fire up a terminal and run sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade
  • Install the required Python modules: requests, httplib, urllib, time, bs4 (BeautifulSoup), colored, selenium, sys
  • python BAF_0.1.0.py
  • Enter your Shodan account username and password
  • Choose 1, let it do its job, press y, close the previous tab, press y, close the previous tabs, etc., until only the vulnerable cams remain
  • Choose 2 and enter what you want to search for (e.g. NSA). When it is done, refer to the targets text file; it will contain the targets as ip:port
  • That's all, for now :)
  • DON'T close a loading web page
  • Beta versions will open an automated browser for better understanding, but you can close the webcam tabs freely

Screenshots





Mercure - A Tool For Security Managers Who Want To Train Their Colleague To Phishing

Mercure is a tool for security managers who want to teach their colleagues about phishing.

What Mercure can do:
  • Create email templates
  • Create target lists
  • Create landing pages
  • Handle attachments
  • Let you keep track in the Campaign dashboard
  • Track email reads, landing page visits and attachment execution.
  • Harvest credentials

What Mercure will do:
  • Display more graphs (we like graphs!)
  • Provide a REST API
  • Allow for multi-message campaigns (aka scenarios)
  • Check browser plugins
  • User training

Docker Quickstart

Requirements
  • docker

Available configuration
Each entry lists the environment variable, whether it is required, its description and an example value:
  • SECRET_KEY (Required) -- Django secret key -- e.g. a random string
  • URL (Required) -- Mercure URL -- e.g. https://mercure.example.com
  • EMAIL_HOST (Required) -- SMTP server -- e.g. mail.example.com
  • EMAIL_PORT (Optional) -- SMTP port -- e.g. 587
  • EMAIL_HOST_USER (Optional) -- SMTP user -- e.g. phishing@example.com
  • EMAIL_HOST_PASSWORD (Optional) -- SMTP password -- e.g. P@SSWORD
  • DEBUG (Optional) -- Run in debug mode -- e.g. True
  • SENTRY_DSN (Optional) -- Send debug info to sentry.io -- e.g. https://23xxx:38xxx@sentry.io/1234
  • AXES_LOCK_OUT_AT_FAILURE (Optional) -- Ban on brute-force login -- e.g. True
  • AXES_COOLOFF_TIME (Optional) -- Ban duration on brute-force login, in hours -- e.g. 0.8333
  • DONT_SERVES_STATIC_FILE (Optional) -- Don't serve static files with Django -- e.g. True
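As a minimal sketch of how such environment-driven settings are typically read in a Django settings module (a hypothetical snippet, not Mercure's actual settings.py):

# Hypothetical sketch: required variables fail fast if missing, optional
# ones fall back to sensible defaults.
import os

SECRET_KEY = os.environ["SECRET_KEY"]              # required
URL = os.environ["URL"]                            # required
EMAIL_HOST = os.environ["EMAIL_HOST"]              # required
EMAIL_PORT = int(os.environ.get("EMAIL_PORT", 587))
EMAIL_HOST_USER = os.environ.get("EMAIL_HOST_USER", "")
EMAIL_HOST_PASSWORD = os.environ.get("EMAIL_HOST_PASSWORD", "")
DEBUG = os.environ.get("DEBUG", "False") == "True"
SENTRY_DSN = os.environ.get("SENTRY_DSN")          # None disables Sentry reporting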

Sample deployment
# create container
docker run \
-d \
--name=mercure \
-e SECRET_KEY=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 200 | head -n 1) \
-e URL=https://mercure.example.com \
-e EMAIL_HOST=mail.example.com \
-e EMAIL_PORT=587 \
-e EMAIL_HOST_USER=phishing@example.com \
-e EMAIL_HOST_PASSWORD=P@SSWORD \
synhackfr/mercure

# create super user
docker exec -it mercure python manage.py createsuperuser

Git Quickstart

Requirements
  • python3
  • pip

Deployment
git clone git@bitbucket.org:synhack/mercure.git && cd mercure
pip install -r requirements.txt
./manage.py makemigrations
./manage.py migrate
./manage.py collectstatic
./manage.py createsuperuser
./manage.py runserver

How to use mercure
Mercure can be considered as divided into 4 categories:
  • Targets
  • Email Templates
  • Attachments and landing page
  • Campaigns
Targets, Email Templates and Campaign are the minimum required to run a basic phishing campaign.
  1. First, add your targets
    You need to fill in the Mercure name and the target email. The target's first and last name are optional, but can be useful for the landing page.
  2. Then, fill in the email template.
    You need to fill in the Mercure name, the subject, the sender and the email content. To improve the email quality, fill in both the HTML content and the text content. To get information about opened emails, check "Add open email tracker". The "Variables" category can help you here.
    Attachments and landing pages are optional; we will see them later.
  3. Finally, launch the campaign
    You need to fill in the Mercure name and select the email template and the target group. You can also select the SMTP credentials, SSL usage or URL minimizing.
  4. Optional: add a landing page
    You need to fill in the Mercure name and the domain to use. You can use "Import from URL" to copy an existing website.
    You have to fill in the page content with text and HTML content by clicking "Source".
  5. Optional: add an attachment
    You need to fill in the Mercure name, the file name which appears in the email, and the file itself. You also have to indicate whether the file is buildable, for example if you need to compute a file.
    To execute the build, you need to create a zip archive which contains a build script (named 'generator.sh') and a buildable file (a minimal packaging sketch follows).
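A minimal sketch of that packaging step, assuming a hypothetical payload file name (only 'generator.sh' is mandated by the instructions above):

# Minimal sketch: package the build script and a buildable file into the zip
# archive described above. Both files must already exist in the current
# directory; 'payload.docm' is a hypothetical name.
import zipfile

def make_attachment_archive(archive_path="attachment.zip",
                            payload_path="payload.docm"):
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("generator.sh")      # build script required by Mercure
        zf.write(payload_path)        # the buildable file to process

make_attachment_archive()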

Universal Radio Hacker - Investigate Wireless Protocols Like A Boss

The Universal Radio Hacker is software for investigating unknown wireless protocols. Features include
  • hardware interfaces for common Software Defined Radios
  • easy demodulation of signals
  • assigning participants to keep an overview of your data
  • customizable decodings to crack even sophisticated encodings like CC1101 data whitening
  • assign labels to reveal the logic of the protocol
  • fuzzing component to find security leaks
  • modulation support to inject the data back into the system
Check out the wiki for more information and supported devices.

Video


Installation
Universal Radio Hacker can be installed via pip or using the package manager of your distribution (if included). Furthermore, you can install urh from source or run it without installation directly from source.

Dependencies
  • Python 3.4+
  • numpy / psutil / zmq
  • PyQt5
  • C++ Compiler
Optional
  • librtlsdr (for native RTL-SDR device backend)
  • libhackrf (for native HackRF device backend)
  • libairspy (for native AirSPy device backend)
  • liblimesdr (for native LimeSDR device backend)
  • libuhd (for native USRP device backend)
  • rfcat (for RfCat plugin to send e.g. with YardStick One)
  • gnuradio / gnuradio-osmosdr (for GNU Radio device backends)

Installation examples

Arch Linux
yaourt -S urh

Ubuntu/Debian
If you want to use native device backends, make sure you install the -dev package for your desired SDRs, that is:
  • AirSpy: libairspy-dev
  • HackRF: libhackrf-dev
  • RTL-SDR: librtlsdr-dev
  • USRP: libuhd-dev
If your device does not have a -dev package, e.g. LimeSDR, you need to manually create a symlink to the .so, like this:
sudo ln -s /usr/lib/x86_64-linux-gnu/libLimeSuite.so.17.02.2 /usr/lib/x86_64-linux-gnu/libLimeSuite.so
before installing URH, using:
sudo apt-get update
sudo apt-get install python3-numpy python3-psutil python3-zmq python3-pyqt5 g++ libpython3-dev python3-pip
sudo pip3 install urh

Gentoo/Pentoo
emerge -av urh

Fedora 25+
dnf install urh

Windows
If you run Python 3.4 on Windows you need to install Visual C++ Build Tools 2015 first.
It is recommended to use Python 3.5 or later on Windows, so no C++ compiler needs to be installed.
  1. Install Python 3 for Windows.
  • Make sure you tick the Add Python to PATH checkbox on the first page of the Python installer.
  • Choose a 64 Bit Python version for native device support.
  2. In a terminal, type: pip install urh.
  3. Type urh in a terminal or search for urh in the search bar to start the application.

Mac OS X
  1. Install Python 3 for Mac OS X. If you experience issues with preinstalled Python, make sure you update to a recent version using the given link.
  2. (Optional) Install desired native libs e.g. brew install librtlsdr for corresponding native device support.
  3. In a terminal, type: pip3 install urh.
  4. Type urh in a terminal to get it started.

Update your installation
If you installed URH via pip you can keep it up to date with
pip3 install --upgrade urh
If this doesn't work, you can try:
python3 -m pip install --upgrade urh

Running from source
If you like to live on bleeding edge, you can run URH from source.

Without installation
To execute the Universal Radio Hacker without installation, just run:
git clone https://github.com/jopohl/urh/
cd urh/src/urh
./main.py
Note that the C++ extensions will be built before first use.

Installing from source
To install from source you need to have python-setuptools installed. You can get it e.g. with pip install setuptools. Once the setuptools are installed use:
git clone https://github.com/jopohl/urh/
cd urh
python setup.py install
And start the application by typing urh in a terminal.

External decodings
See wiki for a list of external decodings provided by our community! Thanks for that!

Screenshots

Get the data out of raw signals

Keep an overview even on complex protocols

Record and send signals


WiFi Bruteforcer - Android application to brute force WiFi passwords (No Root Required)


WARNING: This project is still under development; installing the app may misconfigure the Wi-Fi settings of your Android OS, and a system restore may be necessary to fix it.

Android application to brute force WiFi passwords without requiring a rooted device.

lscript - This script will make your life easier, and of course faster


This is a script that automates many Wi-Fi penetration and hacking procedures.

Features

  • Enabling/disabling interfaces faster
  • Changing MAC faster
  • Anonymizing yourself faster
  • Viewing your public IP faster
  • Viewing your MAC faster

TOOLS
You can install whichever tool(s) you want from within lscript! 
Fluxion by Deltaxflux
WifiTe by derv82
Wifiphisher by Dan McInerney
Zatacker by LawrenceThePentester
Morpheus by Pedro ubuntu [ r00t-3xp10it ]
Osrframework by i3visio
Hakku by 4shadoww
Trity by Toxic-ig
Cupp by Muris Kurgas
Dracnmap by Edo -maland-
Fern Wifi Cracker by Savio-code
Kichthemout by Nikolaos Kamarinakis & David Schütz
BeeLogger by Alisson Moretto - 4w4k3
Ghost-Phisher by Savio-code
Mdk3-master by Musket Developer
Anonsurf by Und3rf10w
The Eye by EgeBalci
Airgeddon by v1s1t0r1sh3r3
Xerxes by zanyarjamal
Ezsploit by rand0m1ze
Katana framework by PowerScript
4nonimizer by Hackplayers
Sslstrip2 by LeonardoNve
Dns2proxy by LeonardoNve
Pupy by n1nj4sec
Zirikatu by pasahitz
TheFatRat by Sceetsec
Angry IP Scanner by Anton Keks
Sniper by 1N3
ReconDog by UltimateHackers
RED HAWK by Tuhinshubhra
Routersploit by Reverse shell
CHAOS by Tiagorlampert
Winpayloads by Ncc group
Wifi password scripts
  • Handshake (WPA/WPA2)
  • Find WPS pin (WPA/WPA2)
  • WEP hacking (WEP)
Others
  • Email spoofing
  • Metasploit automation (create payloads, listeners, save listeners for later, etc.)
  • Auto EternalBlue exploiting (check on ks) -> hidden shortcuts

How to install
(make sure you are a root user)
Be careful. If you download it as a .zip file, it will not run. Make sure to follow these simple instructions.
cd
git clone https://github.com/arismelachroinos/lscript.git
cd lscript
chmod +x install.sh
./install.sh

How to run it
(make sure you are a root user)
open terminal
type "l"
press enter
(Not even "lazy"!! Just "l"! The less you type, the better!)

How to uninstall
cd /root/lscript
./uninstall.sh
rm -r /root/lscript

How to update
Run the script
Type "update"

Things to keep in mind
1) You should be a root user to run the script.
2) You should contact me if something doesn't work (write it on the "Issues" tab at the top).
3) You should contact me if you want a feature to be added (write it on the "Issues" tab at the top).

Video


Screenshots






CyberChef - The Cyber Swiss Army Knife [A Web App For Encryption, Encoding, Compression And Data Analysis]


The Cyber Swiss Army Knife
CyberChef is a simple, intuitive web app for carrying out all manner of "cyber" operations within a web browser. These operations include simple encoding like XOR or Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509 parsing, changing character encodings, and much more.
The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years. Every effort has been made to structure the code in a readable and extendable format, however it should be noted that the analyst is not a professional developer.

Live demo
CyberChef is still under active development. As a result, it shouldn't be considered a finished product. There is still testing and bug fixing to do, new features to be added and additional documentation to write. Please contribute!
Cryptographic operations in CyberChef should not be relied upon to provide security in any situation. No guarantee is offered for their correctness.


How it works
There are four main areas in CyberChef:
  1. The input box in the top right, where you can paste, type or drag the data you want to operate on.
  2. The output box in the bottom right, where the outcome of your processing will be displayed.
  3. The operations list on the far left, where you can find all the operations that CyberChef is capable of in categorised lists, or by searching.
  4. The recipe area in the middle, where you can drag the operations that you want to use and specify arguments and options.
You can use as many operations as you like in simple or complex ways. Some examples are as follows:
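For instance, a recipe might decode Base64 and then XOR the result with a single-byte key. As a rough illustration of that chaining idea outside the browser, here is a plain Python sketch (an analogy only, not CyberChef code):

# Rough analogy of a CyberChef "recipe": each operation feeds its output
# into the next. Plain Python, not CyberChef code.
import base64

def from_base64(data: bytes) -> bytes:
    return base64.b64decode(data)

def xor_single_byte(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# A recipe is just an ordered list of operations with their arguments.
recipe = [
    (from_base64, ()),
    (xor_single_byte, (0x2a,)),   # hypothetical single-byte key
]

def bake(data: bytes, recipe):
    for operation, args in recipe:
        data = operation(data, *args)
    return data

# Build a sample input so the example is self-checking.
sample = base64.b64encode(xor_single_byte(b"attack at dawn", 0x2a))
print(bake(sample, recipe))       # -> b'attack at dawn'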

Features
  • Drag and drop
    • Operations can be dragged in and out of the recipe list, or reorganised.
    • Files can be dragged over the input box to load them directly.
  • Auto Bake
    • Whenever you modify the input or the recipe, CyberChef will automatically “bake” for you and produce the output immediately.
    • This can be turned off and operated manually if it is affecting performance (if the input is very large, for instance).
    • If any bake takes longer than 200 milliseconds, auto bake will be switched off automatically to prevent further performance issues.
  • Breakpoints
    • You can set breakpoints on any operation in your recipe to pause execution before running it.
    • You can also step through the recipe one operation at a time to see what the data looks like at each stage.
  • Save and load recipes
    • If you come up with an awesome recipe that you know you’ll want to use again, just click save and add it to your local storage. It'll be waiting for you next time you visit CyberChef.
    • You can also copy a URL which includes your recipe and input which can be shared with others.
  • Search
    • If you know the name of the operation you want or a word associated with it, start typing it into the search field and any matching operations will immediately be shown.
  • Highlighting
  • Save to file and load from file
    • You can save the output to a file at any time or load a file by dragging and dropping it into the input field (note that files larger than about 500kb may cause your browser to hang or even crash due to the way that browsers handle large amounts of textual data).
  • CyberChef is entirely client-side
    • It should be noted that none of your input or recipe configuration is ever sent to the CyberChef web server - all processing is carried out within your browser, on your own computer.
    • Due to this feature, CyberChef can be compiled into a single HTML file. You can download this file and drop it into a virtual machine, share it with other people, or use it independently on your desktop.

Browser support
CyberChef is built to support
  • Google Chrome 40+
  • Mozilla Firefox 35+
  • Microsoft Edge 14+
