Channel: KitPloit - PenTest Tools!

wig - WebApp Information Gatherer


wig is a web application information gathering tool, which can identify numerous Content Management Systems and other administrative applications.
The application fingerprinting is based on checksums and string matching of known files for different versions of CMSes. This results in a score being calculated for each detected CMS and its versions. Each detected CMS is displayed along with its most probable version(s). The score calculation is based on weights and the number of "hits" for a given checksum.
wig also tries to guess the operating system on the server based on the 'server' and 'x-powered-by' headers. A database containing known header values for different operating systems is included in wig, which allows wig to guess Microsoft Windows versions and Linux distribution and version.
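The header-based OS guess described above can be sketched as follows. This is a simplified illustration, not wig's actual code, and the header-to-OS mappings below are hypothetical examples of the kind of entries such a database holds:

```python
# Hypothetical header database: lowercased header values mapped to OS guesses.
KNOWN_HEADERS = {
    "server": {
        "microsoft-iis/7.5": "Microsoft Windows Server 2008 R2",
        "apache/2.4.7 (ubuntu)": "Ubuntu 14.04",
    },
    "x-powered-by": {
        "asp.net": "Microsoft Windows",
    },
}

def guess_os(headers):
    """Return an OS guess from 'server'/'x-powered-by' values, or None.

    Header names in the input dict are assumed to be lowercased."""
    for name, known in KNOWN_HEADERS.items():
        value = headers.get(name, "").lower()
        if value in known:
            return known[value]
    return None
```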

Requirements
wig is built with Python 3, and is therefore not compatible with Python 2.

Installation
wig can be run from the command line or installed with distutils.

Command line
$ python3 wig.py example.com

Usage in script
Install with
$ python3 setup.py install
and then wig can be imported from any location as such:
>>> from wig.wig import wig
>>> w = wig(url='example.com')
>>> w.run()
>>> results = w.get_results()

How it works
The default behavior of wig is to identify a CMS and exit after version detection of that CMS. This is done to limit the amount of traffic sent to the target server. This behavior can be overridden by setting the '-a' flag, in which case wig will test all the known fingerprints. As some configurations of applications do not use the default location for files and resources, it is possible to have wig fetch all the static resources it encounters during its scan. This is done with the '-c' option. The '-m' option tests all fingerprints against all fetched URLs, which is helpful if the default location has been changed.
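The checksum-and-weight scoring described above can be sketched as follows. This is an assumed, simplified model of the approach, not wig's actual implementation:

```python
# Minimal sketch of weighted fingerprint scoring: every known fingerprint
# whose checksum matches a fetched file adds its weight to a
# (cms, version) bucket; the highest totals are the most probable versions.
import hashlib
from collections import defaultdict

def score_fingerprints(fetched, fingerprints):
    """fetched: {url: content bytes}.
    fingerprints: iterable of (url, md5, cms, version, weight) tuples.
    Returns a dict mapping (cms, version) to its accumulated score."""
    scores = defaultdict(int)
    for url, md5, cms, version, weight in fingerprints:
        content = fetched.get(url)
        if content is not None and hashlib.md5(content).hexdigest() == md5:
            scores[(cms, version)] += weight
    return dict(scores)
```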

Help Screen
usage: wig.py [-h] [-l INPUT_FILE] [-q] [-n STOP_AFTER] [-a] [-m] [-u] [-d]
[-t THREADS] [--no_cache_load] [--no_cache_save] [-N]
[--verbosity] [--proxy PROXY] [-w OUTPUT_FILE]
[url]

WebApp Information Gatherer

positional arguments:
url The url to scan e.g. http://example.com

optional arguments:
-h, --help show this help message and exit
-l INPUT_FILE File with urls, one per line.
-q Set wig to not prompt for user input during run
-n STOP_AFTER Stop after this amount of CMSs have been detected. Default:
1
-a Do not stop after the first CMS is detected
-m Try harder to find a match without making more requests
-u User-agent to use in the requests
-d Disable the search for subdomains
-t THREADS Number of threads to use
--no_cache_load Do not load cached responses
--no_cache_save Do not save the cache for later use
-N Shortcut for --no_cache_load and --no_cache_save
--verbosity, -v Increase verbosity. Use multiple times for more info
--proxy PROXY Tunnel through a proxy (format: localhost:8080)
-w OUTPUT_FILE File to dump results into (JSON)

Example of run:
$ python3 wig.py example.com

wig - WebApp Information Gatherer


Redirected to http://www.example.com
Continue? [Y|n]:
Scanning http://www.example.com...
_____________________________________________________ SITE INFO _____________________________________________________
IP Title
256.256.256.256 PAGE_TITLE

______________________________________________________ VERSION ______________________________________________________
Name Versions Type
Drupal 7.38 CMS
nginx Platform
amazons3 Platform
Varnish Platform
IIS 7.5 Platform
ASP.NET 4.0.30319 Platform
jQuery 1.4.4 JavaScript
Microsoft Windows Server 2008 R2 OS

_____________________________________________________ SUBDOMAINS ____________________________________________________
Name Page Title IP
http://m.example.com:80 Mobile Page 256.256.256.257
https://m.example.com:443 Secure Mobile Page 256.256.256.258

____________________________________________________ INTERESTING ____________________________________________________
URL Note Type
/test/ Test directory Interesting
/login/ Login Page Interesting

_______________________________________________ PLATFORM OBSERVATIONS _______________________________________________
Platform URL Type
ASP.NET 2.0.50727 /old.aspx Observation
ASP.NET 4.0.30319 /login/ Observation
IIS 6.0 http://www.example.com/templates/file.css Observation
IIS 7.0 https://www.example.com/login/ Observation
IIS 7.5 http://www.example.com Observation

_______________________________________________________ TOOLS _______________________________________________________
Name Link Software
droopescan https://github.com/droope/droopescan Drupal
CMSmap https://github.com/Dionach/CMSmap Drupal

__________________________________________________ VULNERABILITIES __________________________________________________
Affected #Vulns Link
Drupal 7.38 5 http://cvedetails.com/version/185744

_____________________________________________________________________________________________________________________
Time: 11.3 sec Urls: 310 Fingerprints: 37580



KRACK Detector - Detect and prevent KRACK attacks in your network


KRACK Detector is a Python script to detect possible KRACK attacks against client devices on your network. The script is meant to be run on the Access Point rather than the client devices. It listens on the Wi-Fi interface and waits for duplicate message 3 of the 4-way handshake. It then disconnects the suspected device, preventing it from sending any further sensitive data to the Access Point.
KRACK Detector currently supports Linux Access Points with hostapd. It uses Python 2 for compatibility with older operating systems. No external Python packages are required.
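The detection idea described above can be sketched as follows. This is an assumed, simplified model of the heuristic, not the script's actual code:

```python
# Sketch of the core KRACK heuristic: message 3 of the 4-way handshake
# carries a replay counter. Seeing the same (client, replay counter) pair
# twice indicates a retransmission that may be a KRACK replay, so the
# client is flagged (and, in the real tool, disconnected).
seen_msg3 = {}  # client MAC -> set of replay counters already observed

def check_msg3(client_mac, replay_counter):
    """Return True if this message 3 is a duplicate for the client."""
    counters = seen_msg3.setdefault(client_mac, set())
    if replay_counter in counters:
        return True  # duplicate -> suspected KRACK attack
    counters.add(replay_counter)
    return False
```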

Usage
Run as root and pass the Wi-Fi interface as a single argument. It is important to use the actual Wi-Fi interface and not any bridge interface it connects to.
python krack_detect.py wlan0
If you do not wish to disconnect suspected devices, use the -n flag
python krack_detect.py -n wlan0

Known Issues
Message 3 of the 4-way handshake might be retransmitted even if no attack is performed. In such a case the client device will be disconnected from the Wi-Fi network. Some client devices will take some time to re-authenticate, losing the Wi-Fi connection for a few seconds.


Linux Soft Exploit Suggester - Search Exploitable Software On Linux


linux-soft-exploit-suggester finds exploits for all vulnerable software on a system, helping with privilege escalation. It focuses on software packages instead of kernel vulnerabilities.

> python linux-soft-exploit-suggester.py -h

| _ __ _ _ | _ _ | _ | __ __ __ _ __ | _ _
|·| || |\/ (_ | ||_ |- /_)\/| \|| |·|- (_ | || )| )/_)(_ |- /_)|
||| ||_|/\ __)|_|| |_ \_ /\|_/||_|||_ __)|_||_/ |_/ \_ __) |_ \_ |
| _/ _/

linux-soft-exploit-suggester:
Search for Exploitable Software from package list.

optional arguments:
-h, --help Show this help message and exit
-f FILE, --file FILE Package list file
--clean Use clean package list, if used 'dpkg-query -W'
--duplicates Show duplicate exploits
--db DB Exploits csv file [default: file.csv]
--update Download latest version of exploits db
-d debian|redhat, --distro debian|redhat
Linux flavor, debian or redhat [default: debian]
--dos Include DoS exploits
--intense Include intense package name search,
when software name doesn't match package name (experimental)
-l 1-5, --level 1-5 Software version search variation [default: 1]
level 1: Same version
level 2: Micro and Patch version
level 3: Minor version
level 4: Major version
level 5: All versions
--type TYPE Exploit type; local, remote, webapps, dos.
e.g. --type local
--type remote
--filter FILTER Filter exploits by string
e.g. --filter "escalation"

usage examples:
Get Package List:
debian/ubuntu: dpkg -l > package_list
redhat/centos: rpm -qa > package_list

Update exploit database:
python linux-soft-exploit-suggester.py --update

Basic usage:
python linux-soft-exploit-suggester.py --file package_list

Specify exploit db:
python linux-soft-exploit-suggester.py --file package_list --db file.cve

Use Redhat/Centos format file:
python linux-soft-exploit-suggester.py --file package_list --distro redhat

Search exploit for major version:
python linux-soft-exploit-suggester.py --file package_list --level 4

Filter by remote exploits:
python linux-soft-exploit-suggester.py --file package_list --type remote

Search specific words in exploit title:
python linux-soft-exploit-suggester.py --file package_list --filter Overflow

Advanced usage:
python linux-soft-exploit-suggester.py --file package_list --level 3 --type local --filter escalation
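The --level option's version-variation idea can be sketched as follows. This is one plausible reading of the levels listed in the help screen (level n lets the last n-1 dotted components differ), not the tool's actual matching code:

```python
def version_matches(installed, affected, level):
    """Compare an installed version against an exploit's affected version
    at the given search level (1-5). Hypothetical interpretation:
    level 1 = exact match, each higher level ignores one more trailing
    version component, level 5 = any version matches."""
    if level >= 5:
        return True  # level 5: all versions
    inst, aff = installed.split("."), affected.split(".")
    if level == 1:
        return inst == aff  # level 1: same version
    # Levels 2-4: keep only the leading components for comparison.
    keep = max(max(len(inst), len(aff)) - (level - 1), 0)
    return inst[:keep] == aff[:keep]
```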

Output
> python linux-soft-exploit-suggester.py --file packages --db file.csv

| _ __ _ _ | _ _ | _ | __ __ __ _ __ | _ _
|·| || |\/ (_ | ||_ |- /_)\/| \|| |·|- (_ | || )| )/_)(_ |- /_)|
||| ||_|/\ __)|_|| |_ \_ /\|_/||_|||_ __)|_||_/ |_/ \_ __) |_ \_ |
| _/ _/

[+] DNSTracer 1.9 - Buffer Overflow - local
From: dnstracer 1.9
File: /usr/share/exploitdb/platforms/linux/local/42424.py
Url: https://www.exploit-db.com/exploits/42424
[+] GNU Wget < 1.18 - Arbitrary File Upload / Remote Code Execution - remote
From: wget 1.17.1
File: /usr/share/exploitdb/platforms/linux/remote/40064.txt
Url: https://www.exploit-db.com/exploits/40064
[+] GNU Screen 4.5.0 - Privilege Escalation (PoC) - local
From: screen 4.3.1
File: /usr/share/exploitdb/platforms/linux/local/41152.txt
Url: https://www.exploit-db.com/exploits/41152
[+] Ghostscript 9.21 - Type Confusion Arbitrary Command Execution (Metasploit) - local
From: ghostscript 9.21
File: /usr/share/exploitdb/platforms/linux/local/41955.rb
Url: https://www.exploit-db.com/exploits/41955
[+] KeepNote 0.7.8 - Command Execution - local
From: keepnote 0.7.8
File: /usr/share/exploitdb/platforms/multiple/local/40440.py
Url: https://www.exploit-db.com/exploits/40440
[+] MAWK 1.3.3-17 - Local Buffer Overflow - local
From: mawk 1.3.3
File: /usr/share/exploitdb/platforms/linux/local/42357.py
Url: https://www.exploit-db.com/exploits/42357
[+] Sudo 1.8.20 - 'get_process_ttyname()' Privilege Escalation - local
From: sudo 1.8.20
File: /usr/share/exploitdb/platforms/linux/local/42183.c
Url: https://www.exploit-db.com/exploits/42183

...

Generate package list

Debian
dpkg -l > package_list

Red Hat
rpm -qa > package_list

TIP. Packages from running processes and SETUID binaries

Running packages
> for i in $(ps auex|sed -e ':l;s/  / /g;t l'|cut -d' ' -f11|grep -v '\['|grep '/'|sort -u); \
do \
dpkg -l | grep "^ii `dpkg -S $i 2>&1|cut -d':' -f1`" |tee -a potentials; \
done

SETUID Binaries
> for i in $(find / -perm -4000 -o -perm -2000 -type f 2>/dev/null); \
do \
dpkg -l | grep "^ii `dpkg -S $i 2>&1|cut -d':' -f1`"|tee -a potentials; \
done

Eliminate duplicates and Run
> sort -u potentials > potentials_no_duplicates
> python linux-soft-exploit-suggester.py --file potentials_no_duplicates --level 2 --type local

| _ __ _ _ | _ _ | _ | __ __ __ _ __ | _ _
|·| || |\/ (_ | ||_ |- /_)\/| \|| |·|- (_ | || )| )/_)(_ |- /_)|
||| ||_|/\ __)|_|| |_ \_ /\|_/||_|||_ __)|_||_/ |_/ \_ __) |_ \_ |
| _/ _/

[+] Sudo 1.8.20 - 'get_process_ttyname()' Privilege Escalation - local
From: sudo 1.8.20
File: /usr/share/exploitdb/platforms/linux/local/42183.c
Url: https://www.exploit-db.com/exploits/42183
[+] Fuse 2.9.3-15 - Privilege Escalation - local
From: fuse 2.9.7
File: /usr/share/exploitdb/platforms/linux/local/37089.txt
Url: https://www.exploit-db.com/exploits/37089


CrunchRAT - HTTPS-based Remote Administration Tool (RAT)


CrunchRAT currently supports the following features:
  • File upload
  • File download
  • Command execution
It is currently single-threaded (only one task at a time), but multi-threading (or multi-tasking) is in the works. Additional features will be added at a later date.

Server
The server-side of the RAT uses PHP and MySQL. The server-side of the RAT has been tested and works on the following:
  • Ubuntu 15.10 (Desktop or Server edition)
  • Ubuntu 16.04 (Desktop or Server edition)
Once the latest RAT code has been downloaded, there will be three directories:
  • Client - Contains implant code (ignore for this section)
  • Server - Contains server code
  • Setup - Contains setup files

Dependencies Setup
  1. Within the Setup directory, there are two dependency setup shell scripts. If you are using Ubuntu 15.10, run sh 15_10_dependencies.sh; if you're using Ubuntu 16.04, run sh 16_04_dependencies.sh. Note: This needs to be run as root. Failure to run with root privileges will result in an error.
  2. When asked for a new MySQL root password, please choose one that is complex. This information is needed at a later step.

HTTPS Setup
  1. CrunchRAT uses a self-signed certificate to securely communicate between the server and implant. Run the https_setup.sh shell script within the Setup directory to automate the HTTPS setup. Note: This needs to be run as root. Failure to run with root privileges will result in an error. When asked to fill out the certificate information (Country Name, etc.), please fill out all information. Snort rules already exist to alert on the dummy OpenSSL certificates. Don't be that guy that gets flagged by not filling out this information.

Database Setup
  1. Run the database_setup.sh shell script within the Setup directory to setup the MySQL database.
  2. CrunchRAT creates a default RAT account with the admin:changeme credentials. Please log into the web end of the RAT and change the default password: go to Account Management --> Change Password to change it to something more complex. Additional RAT users can be provisioned using Account Management --> Add Users.

Miscellaneous Setup
  1. Copy all files from the Server directory to the webroot.
  2. You will want to create a downloads directory as well. Note: It is absolutely critical that you don't put this folder in the webroot. I typically create this directory in the /home/<USERNAME> directory. You will want to make sure that www-data can access this directory with the following command sudo chown www-data:www-data downloads. This directory will store all of the files downloaded from the infected system(s).
  3. In the webroot, open the config/config.php file. This is the main RAT configuration file. Make sure that you update all of the variables (downloadsPath, dbUser, dbPass, etc) to match your environment.

Client
CrunchRAT is written in C# for simplicity. The C# binary does not have a persistence mechanism in place, but a C++ stager is currently in the works.
Targeted Framework: .NET Framework 3.5 (enabled by default on Windows 7 systems)
  1. Create a new console project in Visual Studio
  2. Copy implant.cs code from Client directory and add it to the project.
  3. Change Output Type to Windows Application (this will hide the command window) (Project --> Properties --> Output Type).
  4. Make sure Target Framework is .NET Framework 3.5.
  5. In the actual code there is a variable called c2. Change this variable to the IP address or domain name of the C2 server.
  6. Compile and your implant executable is ready to run.


Evil-Droid - Framework to Create, Generate & Embed APK Payloads


Evil-Droid is a framework that creates, generates and embeds APK payloads to penetrate Android platforms.

Screenshot:


Dependencies :
1 - metasploit-framework
2 - xterm
3 - Zenity
4 - Aapt
5 - Apktool
6 - Zipalign

Download/Config/Usage:
1 - Download the tool from github
git clone https://github.com/M4sc3r4n0/Evil-Droid.git
2 - Set script execution permission
cd Evil-Droid
chmod +x evil-droid
3- Run Evil-Droid Framework :
./evil-droid
see the options below

Video tutorial:


pcc - PHP Secure Configuration Checker


Check current PHP configuration for potential security flaws.
Simply access this file from your webserver or run on CLI.

Author
This software was written by Ben Fuhrmannek, SektionEins GmbH, in an effort to automate php.ini checks and spend more time on cheerful tasks.

Idea
  • one single file for easy distribution
  • simple tests for each security related ini entry
  • a few other tests - not too complicated though
  • compatible with PHP >= 5.4, or if possible >= 5.0
  • NO complicated/overengineered code, e.g. no classes/interfaces, test-frameworks, libraries, ... -> It is supposed to be obvious on first glance - even for novices - how this tool works and what it does!
  • NO (or very few) dependencies

Usage / Installation
  • CLI: Simply call php phpconfigcheck.php. That's it. Add -a to see hidden results as well, -h for HTML output and -j for JSON output.
  • WEB: Copy this script to any directory accessible by your webserver, e.g. your document root. See also 'Safeguards' below.
    The output in non-CLI mode is HTML by default. This behaviour can be changed by setting the environment variable PCC_OUTPUT_TYPE=text or PCC_OUTPUT_TYPE=json.
    Some test cases are hidden by default, specifically skipped, ok and unknown/untested. To show all results, use phpconfigcheck.php?showall=1. This does not apply to JSON output, which returns all results by default.
    To control the output format in WEB mode use phpconfigcheck.php?format=..., where the value of format may be one of text, html or json. For example: phpconfigcheck.php?format=text. The format parameter takes precedence over PCC_OUTPUT_TYPE.
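The output-format precedence described above can be sketched as follows (illustrated in Python, though pcc itself is a PHP script):

```python
def output_format(get_params, env):
    """Pick the output format for WEB mode: the 'format' query parameter
    wins over the PCC_OUTPUT_TYPE environment variable, and HTML is the
    default. Unknown values fall back to HTML."""
    fmt = get_params.get("format") or env.get("PCC_OUTPUT_TYPE") or "html"
    return fmt if fmt in ("text", "html", "json") else "html"
```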

Safeguards
Most of the time it is a good idea to keep security related issues such as your PHP configuration to yourself. The following safeguards have been implemented:
  • mtime check: This script stops working in non-CLI mode after two days. Re-arming the check can be done by touch phpconfigcheck.php or by copying the script to your server again (e.g. via SCP). This check can be disabled by setting the environment variable: PCC_DISABLE_MTIME=1, e.g. SetEnv PCC_DISABLE_MTIME 1 in apache's .htaccess.
  • source IP check: By default only localhost (127.0.0.1 and ::1) can access this script. Other hosts may be added by setting PCC_ALLOW_IP to your IP address or a wildcard pattern, e.g. SetEnv PCC_ALLOW_IP 10.0.0.* in .htaccess. You may also choose to access your webserver via SSH port forwarding, e.g. ssh -D or ssh -L.
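The mtime safeguard described above can be sketched as follows (illustrated in Python; the real check lives inside the PHP script itself):

```python
# Sketch of the mtime safeguard: in non-CLI mode the script refuses to
# run once its own file is older than two days, unless the check is
# explicitly disabled via PCC_DISABLE_MTIME=1. Re-arming is just
# touching the file (which updates its mtime).
import os
import time

def mtime_check(path, now=None, max_age=2 * 86400):
    """Return True if the script at 'path' is still 'armed' (fresh enough)."""
    if os.environ.get("PCC_DISABLE_MTIME") == "1":
        return True
    now = time.time() if now is None else now
    return now - os.path.getmtime(path) <= max_age
```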

Troubleshooting
  • disabled functions: This script needs a few functions to work properly, such as ini_get() and stat(). If one of these functions is blacklisted (or not whitelisted), then execution will fail or produce invalid output. In these cases it is possible to temporarily put Suhosin in simulation mode and omit disable_functions. To be on the safe side, a relaxed security configuration can be done with .htaccess in a separate directory. Also, this script may be called from the command line with your webserver's configuration, e.g. php -n -c /etc/.../php.ini phpconfigcheck.php.
  • CLI: Older PHP versions don't know about the SAPI name 'cli' and use CGI-style output even on the CLI. Workaround: PCC_OUTPUT_TYPE=text /opt/php/php-5.1.6/bin/php phpconfigcheck.php

WARNING
This tool will only support you in setting up a secure PHP environment. Nothing else. Your setup, software or any related configuration may still be vulnerable, even if this tool's output suggests otherwise.


Cromos - Download and Inject code into Google Chrome extensions


Cromos is a tool for downloading legitimate extensions from the Chrome Web Store and injecting code into the background of the application. Cromos can also create executable files to force installation (via PowerShell, for example) and upload files to Dropbox to host the malicious files.
  • Download extension
  • Injections
  • Upload files on dropbox
  • Windows infection

Group Policy Object (GPO)
Chrome allows you to add extensions using a Windows Group Policy Object (GPO). If you need to force installation on multiple machines, just follow the steps in the Chrome Deployment Guide and then modify the original extension. With a few modifications you can publish your extension in the Chrome Web Store, which requires a $5 fee.

Support
If you choose to generate a batch file to force installation, the PowerShell script that will be downloaded is compatible with Windows 7, 8 and 10 with PowerShell versions >= 3.0.

Demo
This is a demonstration of the tool at work. In this example I'm downloading a popular Google extension called G Suite Training from the Chrome Web Store and injecting a keylogger module.

Installation
$ cd $HOME/
$ git clone https://github.com/fbctf/cromos
$ sudo chmod -R 777 cromos/
$ cd cromos && python setup.py

Usage

Downloading the extension
Usage: python cromos.py --extension {id}

Downloading the extension and loading module
Usage: python cromos.py --extension {id} --load {currency/keylogger}

Build a batch file and upload the files in dropbox
Usage: python cromos.py --extension {id} --build {bat} --token {dropboxToken}

Modules
You can also inject predefined modules into the background, such as a keylogger or a virtual currency miner.
  • modules/keylogger - Captures all the passwords typed in an infected browser, whether over HTTPS or not. All you need is, for example, a PHP server to receive the GET requests; the parameters are email, password, cookies and userAgent.
  • modules/currency - Allows you to mine virtual coins using the Coinhive API; you just need to have an account.


Parrot Security 3.9 - Security GNU/Linux Distribution Designed with Cloud Pentesting and IoT Security in Mind


Security GNU/Linux distribution designed with cloud pentesting and IoT security in mind.

It includes a full portable laboratory for security and digital forensics experts, but it also includes all you need to develop your own software or protect your privacy with anonymity and crypto tools.

Details

Security

Parrot Security includes a full arsenal of security oriented tools to perform penetration tests, security audits and more. With a Parrot usb drive in your pocket you will always be sure to have all you need with you.

Privacy

Parrot includes by default TOR, I2P, anonsurf, gpg, tccf, zulucrypt, veracrypt, truecrypt, luks and many other technologies designed to defend your privacy and your identity.

Development

If you need a comfortable environment with updated frameworks and useful libraries already installed, Parrot will amaze you as it includes a full development-oriented environment with some powerful editors and IDEs pre-installed and many other tools installable from our repository.

Features

System Specs
  • Debian GNU/Linux 9 (stretch)
  • Custom hardened Linux 4.8 kernel
  • Rolling release updates
  • Powerful worldwide mirror servers
  • High hardware compatibility
  • Community-driven development
  • free(libre) and open source project

Cryptography

Parrot includes many cryptographic programs which are extremely useful when it comes to protecting your confidential data and defending your privacy.

Parrot includes several cryptographic front-ends to work with both symmetric and asymmetric encryption; in fact it natively supports volume encryption with LUKS, TrueCrypt and VeraCrypt, including hidden TrueCrypt/VeraCrypt volumes with nested-algorithm support.

The whole system can be installed inside an encrypted partition to protect your computer in case of theft.

Another swiss army knife for your privacy is GPG, the GNU Privacy Guard, an extremely powerful PGP implementation that lets you create a private/public key pair to apply digital signatures to your messages and to allow other people to send you encrypted messages that only your private key can decrypt. It can also handle multiple identities and subkeys, and its power resides in its ring of trust: PGP users can sign each other's keys so that other people know whether a digital identity is valid or not.

Even our software repository is digitally signed with GPG: the system automatically verifies whether an update was altered or compromised, and refuses to upgrade or install new software if our digital signature is not found or not valid.

Privacy

Your privacy is the most valuable thing you have in your digital life, and the whole Parrot Team is exaggeratedly paranoid when it comes to users' privacy; in fact our system doesn't contain tracking systems, and it is hardened in depth to protect users from prying eyes.

Parrot has developed and implemented several tricks and tools to achieve this goal. AnonSurf is one of the most important examples: it is a program designed to start TOR and route all the internet traffic made by the system through the TOR network. We have also modified the system to make it use DNS servers different from those offered by your internet provider.

Parrot also includes torbrowser, torchat and other anonymous services, like I2P, a powerful alternative to TOR.

Programming

The main goal of an environment designed by hackers for hackers is the possibility to change it, adapt it, transform it and use it as a development platform to create new things. This is why Parrot comes out of the box with several tools for developers, such as compilers, disassemblers, IDEs, comfortable editors and powerful frameworks.

Parrot includes QtCreator as its main C, C++ and Qt framework. Another very useful tool is Geany, a lightweight and simple IDE which supports a huge number of programming languages. We also include Atom, the open source editor developed by GitHub, and many compilers and interpreters with their most important libraries pre-installed and ready to use.


And of course many other editors, development tools and libraries are available through our software repository, where we keep all the development tools updated to their most cutting-edge yet reliable versions.

Changelog

Parrot 3.9 is now ready, and it includes some important new features that were introduced to make the system more secure and reliable.
The most important feature is the new sandbox system, introduced to protect many applications from 0day attacks out of the box. The sandbox is based on firejail, a suid program which is very easy to configure and customize to protect many critical applications in a quick and effective way (if an application does not work as expected, customize the corresponding firejail profile to be more permissive).

enum4linux - Tool for Enumerating Information from Windows and Samba Systems


A Linux alternative to enum.exe for enumerating data from Windows and Samba hosts.

Key features
  • RID cycling (When RestrictAnonymous is set to 1 on Windows 2000)
  • User listing (When RestrictAnonymous is set to 0 on Windows 2000)
  • Listing of group membership information
  • Share enumeration
  • Detecting if host is in a workgroup or a domain
  • Identifying the remote operating system
  • Password policy retrieval (using polenum)

Overview

Enum4linux is a tool for enumerating information from Windows and Samba systems. It attempts to offer similar functionality to enum.exe formerly available from www.bindview.com.
It is written in Perl and is basically a wrapper around the Samba tools smbclient, rpcclient, net and nmblookup.
Tool usage can be found below, followed by examples; previous versions of the tool can be found at the bottom of the page.

Dependencies

You will need to have the Samba package installed as this script is basically just a wrapper around rpcclient, net, nmblookup and smbclient.

Usage
$ enum4linux.pl -h
enum4linux v0.8.2 (https://labs.portcullis.co.uk/application/enum4linux/)
Copyright (C) 2006 Mark Lowe (mrl@portcullis-security.com)

Simple wrapper around the tools in the samba package to provide similar functionality
to enum (http://www.bindview.com/Services/RAZOR/Utilities/Windows/enum_readme.cfm).
Some additional features such as RID cycling have also been added for convenience.

This is an ALPHA release only.  Some of the options supported by the original "enum"
aren't implemented in this release.

Usage: /usr/local/bin/enum4linux.pl [options] ip

Options are (like "enum"):
-U             get userlist
-M             get machine list*
-N             get namelist dump (different from -U|-M)*
-S             get sharelist
-P             get password policy information*
-G             get group and member list
-L             get LSA policy information*
-D             dictionary crack, needs -u and -f*
-d             be detailed, applies to -U and -S
-u username    specify username to use (default "")
-p password    specify password to use (default "")
-f filename    specify dictfile to use (wants -D)*

* = Not implemented in this release.

Additional options:
-a             Do all simple enumeration (-U -S -G -r -o -n)
-h             Display this help message and exit
-r             enumerate users via RID cycling
-R range       RID ranges to enumerate (default: 500-550,1000-1050, implies -r)
-s filename    brute force guessing for share names
-k username    User(s) that exists on remote system (default: administrator,guest,krbtgt,domain admins,root,bin,none)
Used to get sid with "lookupsid known_username"
Use commas to try several users: "-k admin,user1,user2"
-o             Get OS information
-i             Get printer information
-w workgroup   Specify workgroup manually (usually found automatically)
-n             Do an nmblookup (similar to nbtstat)
-v             Verbose.  Shows full commands being run (net, rpcclient, etc.)

RID cycling should extract a list of users from Windows (or Samba) hosts which have
RestrictAnonymous set to 1 (Windows NT and 2000), or "Network access: Allow
anonymous SID/Name translation" enabled (XP, 2003).

If no usernames are known, good names to try against Windows systems are:
- administrator
- guest
- none
- helpassistant
- aspnet

The following might work against samba systems:
- root
- nobody
- sys

NB: Samba servers often seem to have RIDs in the range 3000-3050.
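The RID cycling process above can be sketched as follows. This is an assumed illustration of how the underlying rpcclient calls are driven (the domain SID having been obtained first, e.g. via lsaquery or by looking up a known username); the commands are only built here, not executed:

```python
def rid_cycle_commands(host, domain_sid, rid_ranges="500-550,1000-1050"):
    """Yield one rpcclient 'lookupsids' command per RID in the given
    ranges, using a null session. Resolving SID-RID back to a name
    reveals usernames even when direct user listing is blocked."""
    for part in rid_ranges.split(","):
        lo, hi = (int(x) for x in part.split("-"))
        for rid in range(lo, hi + 1):
            yield ("rpcclient -U ''%'' {h} -c 'lookupsids {s}-{r}'"
                   .format(h=host, s=domain_sid, r=rid))
```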

Examples

Below are examples which demonstrate most of the features of enum4linux. Output has been edited for brevity in most cases.

Verbose mode
Before we delve into the features of enum4linux, it’s worth pointing out that verbose mode shows you the underlying commands being run by enum4linux (rpcclient, smbclient, etc.). This is useful if you want to use the underlying commands manually but can’t figure out the syntax. Note the lines beginning with [V] in the output below:
$ enum4linux.pl -v 192.168.2.55
[V] Dependent program "nmblookup" found in /usr/bin/nmblookup
[V] Dependent program "net" found in /usr/bin/net
[V] Dependent program "rpcclient" found in /usr/bin/rpcclient
[V] Dependent program "smbclient" found in /usr/bin/smbclient
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:18:51 2008

----- Enumerating Workgroup/Domain on 192.168.2.55 ------
[V] Attempting to get domain name with command: nmblookup -A '192.168.2.55'
[+] Got domain/workgroup name: WORKGROUP

----- Getting domain SID for 192.168.2.55 -----
[V] Attempting to get domain SID with command: rpcclient -U''%'' 192.168.2.55 -c 'lsaquery' 2>&1
Domain Name: WORKGROUP
Domain Sid: S-0-0
[+] Host is part of a workgroup (not a domain)

----- Session Check on 192.168.2.55 -----
[V] Attempting to make null session using command: smbclient //'192.168.2.55'/ipc$ -U''%'' -c 'help' 2>&1
[+] Server 192.168.2.55 allows sessions using username '', password ''

The “Do Everything” option
As you read through the following section you’ll probably think that there are a lot of options you need to remember. If you just want enum4linux to try to enumerate all the information it can from a remote host, just use the -a option:
$ enum4linux.pl -a 192.168.2.55
NB: This won’t do dictionary-based share name guessing, but does pretty much everything else.

Obtain list of usernames (RestrictAnonymous = 0)
This feature is similar to enum.exe -U IP. It returns a complete list of usernames if the server allows it. On Windows 2000 the RestrictAnonymous registry setting must be set to 0 for this feature to work. The user list is shown twice in two different formats because two different underlying commands are used to retrieve the data.
$ enum4linux.pl -U 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Thu Mar 27 16:02:50 2008

----- Users on 192.168.2.55 -----
index: 0x1 RID: 0x1f4 acb: 0x210 Account: Administrator Name: Desc: Built-in account for administering the computer/domain
index: 0x2 RID: 0x3ee acb: 0x10 Account: basic Name: basic Desc:
index: 0x3 RID: 0x3ed acb: 0x10 Account: blah Name: Desc:
index: 0x4 RID: 0x1f5 acb: 0x215 Account: Guest Name: Desc: Built-in account for guest access to the computer/domain
index: 0x5 RID: 0x3e9 acb: 0x214 Account: IUSR_PORTCULLIS Name: Internet Guest Account Desc: Built-in account for anonymous access to Internet Information Services
index: 0x6 RID: 0x3ea acb: 0x214 Account: IWAM_PORTCULLIS Name: Launch IIS Process Account Desc: Built-in account for Internet Information Services to start out of process applications
index: 0x7 RID: 0x3ec acb: 0x10 Account: mark Name: Desc:
index: 0x8 RID: 0x3e8 acb: 0x214 Account: TsInternetUser Name: TsInternetUser Desc: This user account is used by Terminal Services.

user:[Administrator] rid:[0x1f4]
user:[basic] rid:[0x3ee]
user:[blah] rid:[0x3ed]
user:[Guest] rid:[0x1f5]
user:[IUSR_PORTCULLIS] rid:[0x3e9]
user:[IWAM_PORTCULLIS] rid:[0x3ea]
user:[mark] rid:[0x3ec]
user:[TsInternetUser] rid:[0x3e8]

Obtain a list of usernames (using authentication)
If you’ve managed to obtain a username and password for the host, you can use it to retrieve a complete list of users regardless of RestrictAnonymous settings. In the example below we use the administrator account, but any account will do:
$ enum4linux.pl -u administrator -p password -U 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 13:19:35 2008

----- Users on 192.168.2.55 -----
index: 0x1 RID: 0x1f4 acb: 0x210 Account: Administrator Name: Desc: Built-in account for administering the computer/domain
index: 0x2 RID: 0x3ee acb: 0x10 Account: basic Name: basic Desc:
index: 0x3 RID: 0x3ed acb: 0x10 Account: blah Name: Desc:
index: 0x4 RID: 0x1f5 acb: 0x215 Account: Guest Name: Desc: Built-in account for guest access to the computer/domain
index: 0x5 RID: 0x3e9 acb: 0x214 Account: IUSR_PORTCULLIS Name: Internet Guest Account Desc: Built-in account for anonymous access to Internet Information Services
index: 0x6 RID: 0x3ea acb: 0x214 Account: IWAM_PORTCULLIS Name: Launch IIS Process Account Desc: Built-in account for Internet Information Services to start out of process applications
index: 0x7 RID: 0x3ec acb: 0x10 Account: mark Name: Desc:
index: 0x8 RID: 0x3e8 acb: 0x214 Account: TsInternetUser Name: TsInternetUser Desc: This user account is used by Terminal Services.

user:[Administrator] rid:[0x1f4]
user:[basic] rid:[0x3ee]
user:[blah] rid:[0x3ed]
user:[Guest] rid:[0x1f5]
user:[IUSR_PORTCULLIS] rid:[0x3e9]
user:[IWAM_PORTCULLIS] rid:[0x3ea]
user:[mark] rid:[0x3ec]
user:[TsInternetUser] rid:[0x3e8]

Obtaining a List of Usernames via RID Cycling (RestrictAnonymous = 1)
To obtain the usernames corresponding to a default range of RIDs (500-550,1000-1050) use the -r option:
$ enum4linux.pl -r 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:27:21 2008

----- Target information -----
Target ........... 192.168.2.55
RID Range ........ 500-550,1000-1050
Known Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none

----- Users on 192.168.2.55 via RID cycling (RIDS: 500-550,1000-1050) -----
[I] Assuming that user "administrator" exists
[+] Got SID: S-1-5-21-1801674531-1482476501-725345543 using username '', password ''
S-1-5-21-1801674531-1482476501-725345543-500 W2KSQL\Administrator (Local User)
S-1-5-21-1801674531-1482476501-725345543-501 W2KSQL\Guest (Local User)
S-1-5-21-1801674531-1482476501-725345543-513 W2KSQL\None (Domain Group)
S-1-5-21-1801674531-1482476501-725345543-1000 W2KSQL\TsInternetUser (Local User)
S-1-5-21-1801674531-1482476501-725345543-1001 W2KSQL\IUSR_PORTCULLIS (Local User)
S-1-5-21-1801674531-1482476501-725345543-1002 W2KSQL\IWAM_PORTCULLIS (Local User)
S-1-5-21-1801674531-1482476501-725345543-1004 W2KSQL\mark (Local User)
S-1-5-21-1801674531-1482476501-725345543-1005 W2KSQL\blah (Local User)
S-1-5-21-1801674531-1482476501-725345543-1006 W2KSQL\basic (Local User)
You can specify a custom range of RIDs using the -R option. This implies -r, so you don’t have to specify the -r option:
$ enum4linux.pl -R 500-520 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:27:53 2008

----- Target information -----
Target ........... 192.168.2.55
RID Range ........ 500-520
Known Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none

----- Users on 192.168.2.55 via RID cycling (RIDS: 500-520) -----
[I] Assuming that user "administrator" exists
[+] Got SID: S-1-5-21-1801674531-1482476501-725345543 using username '', password ''
S-1-5-21-1801674531-1482476501-725345543-500 W2KSQL\Administrator (Local User)
S-1-5-21-1801674531-1482476501-725345543-501 W2KSQL\Guest (Local User)
S-1-5-21-1801674531-1482476501-725345543-513 W2KSQL\None (Domain Group)
Before RID cycling can start, enum4linux needs to get the SID from the remote host. It does this by requesting the SID of a known username / group (pretty much the same thing every other RID-cycling tool does). You can see in the above output a list of known usernames. These are tried in turn, until enum4linux finds the SID of the remote host.
If you’re very unlucky, this list won’t be good enough and you won’t be able to get the SID. In this case, use the -k option to specify a different known username:
$ enum4linux.pl -k anotheruser -R 500-520 192.168.2.55
You can specify a list using commas:
$ enum4linux.pl -k user1,user2,user3 -R 500-520 192.168.2.55
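The SID arithmetic behind RID cycling is plain string handling: strip the RID from the SID of a known account to get the machine SID, then append candidate RIDs. A minimal sketch (the SID is taken from the sample output above; the actual rpcclient lookups are omitted):

```shell
#!/bin/sh
# The machine SID is a known account's SID with its trailing RID removed;
# each candidate passed to "rpcclient -c lookupsids" is base + "-" + RID.
full='S-1-5-21-1801674531-1482476501-725345543-500'   # SID of a known user
base="${full%-*}"                                     # strip the trailing RID
for rid in 500 501 513; do
    echo "${base}-${rid}"
done
```

This is only the string-manipulation step; enum4linux feeds each generated SID to rpcclient’s lookupsids command to resolve it to a name.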

Group membership
If the remote host allows it, you can get a list of groups and their members using the -G option (as in enum.exe):
$ enum4linux.pl -G 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 13:54:48 2008

----- Groups on 192.168.2.55 -----
[+] Getting builtin groups:
group:[Administrators] rid:[0x220]
group:[Backup Operators] rid:[0x227]
group:[Guests] rid:[0x222]
group:[Power Users] rid:[0x223]
group:[Replicator] rid:[0x228]
group:[Users] rid:[0x221]

[+] Getting builtin group memberships:
Group 'Guests' (RID: 546) has members:
W2KSQL\Guest
W2KSQL\TsInternetUser
W2KSQL\IUSR_PORTCULLIS
W2KSQL\IWAM_PORTCULLIS
Group 'Users' (RID: 545) has members:
NT AUTHORITY\INTERACTIVE
NT AUTHORITY\Authenticated Users
W2KSQL\mark
W2KSQL\blah
W2KSQL\basic
Group 'Replicator' (RID: 552) has members:
Group 'Power Users' (RID: 547) has members:
Group 'Administrators' (RID: 544) has members:
W2KSQL\Administrator
W2KSQL\mark
W2KSQL\blah
Group 'Backup Operators' (RID: 551) has members:

[+] Getting local groups:

[+] Getting local group memberships:

[+] Getting domain groups:
group:[None] rid:[0x201]

[+] Getting domain group memberships:
Group 'None' (RID: 513) has members:
W2KSQL\Administrator
W2KSQL\Guest
W2KSQL\TsInternetUser
W2KSQL\IUSR_PORTCULLIS
W2KSQL\IWAM_PORTCULLIS
W2KSQL\mark
W2KSQL\blah
W2KSQL\basic
As with the -U option for user enumeration, you can also specify -u user -p pass to provide login credentials if required. Any user account will do, you don’t have to be an admin.

Check if host is part of a domain or workgroup
Enum4linux uses rpcclient’s lsaquery command to ask for a host’s Domain SID. If we get a proper SID we can infer that it is part of a domain. If we get the answer S-0-0 we can infer the host is part of a workgroup. This is done by default, so no command line options are required:
$ enum4linux.pl 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Thu Mar 27 16:02:50 2008

----- Getting domain SID for 192.168.2.55 -----
Domain Name: WORKGROUP
Domain Sid: S-0-0
[+] Host is part of a workgroup (not a domain)
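The inference itself is just a string check on lsaquery’s output; a sketch with the sample output above inlined (no live rpcclient call):

```shell
#!/bin/sh
# Decide workgroup vs. domain from rpcclient lsaquery output.
out='Domain Name: WORKGROUP
Domain Sid: S-0-0'
case "$out" in
    *'Domain Sid: S-0-0'*) verdict='workgroup' ;;  # null SID => workgroup
    *'Domain Sid: S-1-'*)  verdict='domain'    ;;  # real SID => domain member
    *)                     verdict='unknown'   ;;
esac
echo "$verdict"
```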

Getting nbtstat Information
The -n option causes enum4linux to run nmblookup and do some extra parsing on its output to provide human-readable information about the remote host.
$ enum4linux.pl -n 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:21:13 2008

----- Nbtstat Information for 192.168.2.55 -----
Looking up status of 192.168.2.55
W2KSQL <00> - B Workstation Service
W2KSQL <20> - B File Server Service
WORKGROUP <00> - B Domain/Workgroup Name
INet~Services <1c> - B IIS
WORKGROUP <1e> - B Browser Service Elections
W2KSQL <03> - B Messenger Service
IS~W2KSQL <00> - B IIS
ADMINISTRATOR <03> - B Messenger Service

MAC Address = 00-0C-29-A4-12-6C

Listing Windows shares
If the server allows it, you can obtain a complete list of shares with the -S option. This uses smbclient under the bonnet, which also seems to grab the browse list.
Enum4linux will also attempt to connect to each share with the supplied credentials (null session usually, but you could use -u user -p pass to use something else). It will report whether it could connect to the share and whether it was possible to get a directory listing.
$ enum4linux.pl -S 192.168.2.55
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:28:28 2008

----- Enumerating Workgroup/Domain on 192.168.2.55 ------
[+] Got domain/workgroup name: WORKGROUP

----- Share Enumeration on 192.168.2.55 -----
Domain=[WORKGROUP] OS=[Windows 5.0] Server=[Windows 2000 LAN Manager]

Sharename Type Comment
--------- ---- -------
IPC$ IPC Remote IPC
ADMIN$ Disk Remote Admin
C$ Disk Default share
session request to 192.168.2.55 failed (Called name not present)
session request to 192 failed (Called name not present)

Server Comment
--------- -------
W2KSQL
WEBVULNB
WINORACLE

Workgroup Master
--------- -------
PTT SBS
WORKGROUP WEBVULNB

----- Attempting to map to shares on 192.168.2.55 -----
//192.168.2.55/IPC$ Mapping: OK Listing: DENIED
//192.168.2.55/ADMIN$ Mapping: DENIED, Listing: N/A
//192.168.2.55/C$ Mapping: DENIED, Listing: N/A
Some hosts don’t let you retrieve a share list. In these situations, it is still possible to perform a dictionary attack to guess share names. First we demonstrate the -S option failing:
$ enum4linux.pl -S 192.168.2.76
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:54:02 2008

----- Share Enumeration on 192.168.2.76 -----
[E] Can't list shares: NT_STATUS_ACCESS_DENIED

----- Attempting to map to shares on 192.168.2.76 -----
The output below shows the use of the -s option with a dictionary file to guess the names of some shares:
$ enum4linux.pl -s share-list.txt 192.168.2.76
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:54:20 2008

----- Session Check on 192.168.2.76 -----
[+] Server 192.168.2.76 allows sessions using username '', password ''

----- Brute Force Share Enumeration on 192.168.2.76 -----
c$ EXISTS
e$ EXISTS
admin$ EXISTS
ipc$ EXISTS, Allows access using username: '', password: ''

Getting OS information
The -o option gets OS information using smbclient. Certain versions of Windows (e.g. 2003) even return service pack information.
$ enum4linux.pl -o 192.168.2.76
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:55:11 2008

----- OS information on 192.168.2.76 -----
[+] Got OS info for 192.168.2.76 from smbclient: Domain=[PTT] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
[E] Can't get OS info with srvinfo: NT_STATUS_ACCESS_DENIED

Printer information
You can get some information about printers known to the remote device with the -i option. I don’t know why you’d want to do this. I only implemented it because I could.

$ enum4linux.pl -i 192.168.2.69
Starting enum4linux v0.8.2 ( https://labs.portcullis.co.uk/application/enum4linux/ ) on Fri Mar 28 11:55:32 2008

----- Getting printer info for 192.168.2.69 -----
flags:[0x800000]
name:[\\192.168.2.69\SharedFax]
description:[\\192.168.2.69\SharedFax,Microsoft Shared Fax Driver,]
comment:[]


EvilURL - A Unicode Domain Phishing Generator for IDN Homograph Attacks


A Unicode domain phishing generator for IDN homograph attacks.

CLONE
git clone https://github.com/UndeadSec/EvilURL.git

RUNNING
cd EvilURL
python evilurl.py

PREREQUISITES
  • python 2.7

TESTED ON
Kali Linux - ROLLING EDITION


Paskto - Passive Web Scanner


Paskto will passively scan the web using the Common Crawl internet index either by downloading the indexes on request or parsing data from your local system. URLs are then processed through Nikto and known URL lists to identify interesting content. Hash signatures are also used to identify known default content for some IoT devices or web applications.

  Options

-d, --dir-input directory Directory with common crawl index files with .gz extension. Ex: -d "/tmp/cc/"
-v, --ia-dir-input directory Directory with internet archive index files with .gz extension. Ex: -v "/tmp/ia/"
-o, --output-file file Save test results to file. Ex: -o /tmp/results.csv
-u, --update-db Build/Update Paskto DB from Nikto databases.
-n, --use-nikto Use Nikto DBs. Default: true
-e, --use-extras Use EXTRAS DB. Default: true
-s, --scan domain name Domain to scan. Ex: -s "www.google.ca" or -s "*.google.ca"
-i, --cc-index index Common Crawl index for scan. Ex: -i "CC-MAIN-2017-34-index"
-a, --save-all-urls file Save CSV List of all URLS. Ex: -a /tmp/all_urls.csv
-h, --help Print this usage guide.

Examples

Scan domain, save results and URLs:
$ node paskto.js -s "www.msn.com" -o /tmp/rest-results.csv -a /tmp/all-urls.csv
Scan domain with CC wildcards:
$ node paskto.js -s "*.msn.com" -o /tmp/rest-results.csv -a /tmp/all-urls.csv
Scan domain, only save URLs:
$ node paskto.js -s "www.msn.com" -a /tmp/all-urls.csv
Scan dir with indexes:
$ node paskto.js -d "/tmp/CC-MAIN-2017-39-index/" -o /tmp/rest-results.csv -a /tmp/all-urls.csv

Create Custom Digest signatures
A quick way to create new digest signatures for default content is to use WARCPinch, a Chrome extension I hacked together based on WARCreate, except that it creates digest signatures as well as WARC files. (It also adds highlight-and-right-click functionality, which is useful for highlighting any identifying text to use as the name of the signature.)


docker-onion-nmap - Scan .onion hidden services with nmap using Tor, proxychains and dnsmasq in a minimal alpine Docker container


Use nmap to scan hidden "onion" services on the Tor network. Minimal image based on alpine, using proxychains to wrap nmap. Tor and dnsmasq are run as daemons via s6, and proxychains wraps nmap to use the Tor SOCKS proxy on port 9050. Tor is also configured via DNSPort to anonymously resolve DNS requests on port 9053. dnsmasq is configured with localhost:9053 as an authoritative DNS server. Proxychains is configured to proxy DNS through the local resolver, so all DNS requests will go through Tor and applications can resolve .onion addresses.

Example:
$ docker run --rm -it milesrichardson/onion-nmap -p 80,443 facebookcorewwwi.onion
[tor_wait] Wait for Tor to boot... (might take a while)
[tor_wait] Done. Tor booted.
[nmap onion] nmap -p 80,443 facebookcorewwwi.onion
[proxychains] config file found: /etc/proxychains.conf
[proxychains] preloading /usr/lib/libproxychains4.so
[proxychains] DLL init: proxychains-ng 4.12

Starting Nmap 7.60 ( https://nmap.org ) at 2017-10-23 16:17 UTC
[proxychains] Dynamic chain ... 127.0.0.1:9050 ... facebookcorewwwi.onion:443 ... OK
[proxychains] Dynamic chain ... 127.0.0.1:9050 ... facebookcorewwwi.onion:80 ... OK
Nmap scan report for facebookcorewwwi.onion (224.0.0.1)
Host is up (2.7s latency).

PORT STATE SERVICE
80/tcp open http
443/tcp open https

Nmap done: 1 IP address (1 host up) scanned in 3.58 seconds

How it works:
When the container boots, it launches Tor and dnsmasq as daemons. The tor_wait script then waits for the Tor SOCKS proxy to be up before executing your command.

Arguments:
By default, args to docker run are passed to /bin/nmap, which calls nmap with the args -sT -PN -n "$@" that are necessary for it to work over Tor (see explainshell.com).
For example, this:
docker run --rm -it milesrichardson/onion-nmap -p 80,443 facebookcorewwwi.onion
will be executed as:
proxychains4 -f /etc/proxychains.conf /usr/bin/nmap -sT -PN -n -p 80,443 facebookcorewwwi.onion
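The wrapper behavior described above can be sketched as a one-line shell script. This is a reconstruction of what the image’s /bin/nmap does, not the image’s exact file; the /tmp path is only for demonstration:

```shell
#!/bin/sh
# Recreate a wrapper like the image's /bin/nmap: every invocation is forced
# through proxychains, with the Tor-safe flags -sT -PN -n prepended.
cat > /tmp/nmap-wrapper <<'EOF'
#!/bin/sh
exec proxychains4 -f /etc/proxychains.conf /usr/bin/nmap -sT -PN -n "$@"
EOF
chmod +x /tmp/nmap-wrapper
cat /tmp/nmap-wrapper
```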
In addition to the custom script for nmap, custom wrapper scripts for curl and nc exist to wrap them in proxychains, at /bin/curl and /bin/nc. To call them, simply specify curl or nc as the first argument to docker run. For example:
docker run --rm -it milesrichardson/onion-nmap nc -z 80 facebookcorewwwi.onion
will be executed as:
proxychains4 -f /etc/proxychains.conf /usr/bin/nc -z 80 facebookcorewwwi.onion
and
docker run --rm -it milesrichardson/onion-nmap curl -I https://facebookcorewwwi.onion
will be executed as:
proxychains4 -f /etc/proxychains.conf /usr/bin/curl -I https://facebookcorewwwi.onion
If you want to call any other command, including the original /usr/bin/nmap or /usr/bin/nc or /usr/bin/curl you can specify it as the first argument to docker run, e.g.:
docker run --rm -it milesrichardson/onion-nmap /usr/bin/curl -x socks4h://localhost:9050 https://facebookcorewwwi.onion

Environment variables:
There is only one environment variable: DEBUG_LEVEL. If you set it to anything other than 0, more debugging info will be printed (specifically, the attempted connections to Tor while waiting for it to boot). Example:
$ docker run -e DEBUG_LEVEL=1 --rm -it milesrichardson/onion-nmap -p 80,443 facebookcorewwwi.onion
[tor_wait] Wait for Tor to boot... (might take a while)
[tor_wait retry 0] Check socket is open on localhost:9050...
[tor_wait retry 0] Socket OPEN on localhost:9050
[tor_wait retry 0] Check SOCKS proxy is up on localhost:9050 (timeout 2 )...
[tor_wait retry 0] SOCKS proxy DOWN on localhost:9050, try again...
[tor_wait retry 1] Check socket is open on localhost:9050...
[tor_wait retry 1] Socket OPEN on localhost:9050
[tor_wait retry 1] Check SOCKS proxy is up on localhost:9050 (timeout 4 )...
[tor_wait retry 1] SOCKS proxy DOWN on localhost:9050, try again...
[tor_wait retry 2] Check socket is open on localhost:9050...
[tor_wait retry 2] Socket OPEN on localhost:9050
[tor_wait retry 2] Check SOCKS proxy is up on localhost:9050 (timeout 6 )...
[tor_wait retry 2] SOCKS proxy UP on localhost:9050
[tor_wait] Done. Tor booted.
[nmap onion] nmap -p 80,443 facebookcorewwwi.onion
[proxychains] config file found: /etc/proxychains.conf
[proxychains] preloading /usr/lib/libproxychains4.so
[proxychains] DLL init: proxychains-ng 4.12

Starting Nmap 7.60 ( https://nmap.org ) at 2017-10-23 16:34 UTC
[proxychains] Dynamic chain ... 127.0.0.1:9050 ... facebookcorewwwi.onion:443 ... OK
[proxychains] Dynamic chain ... 127.0.0.1:9050 ... facebookcorewwwi.onion:80 ... OK
Nmap scan report for facebookcorewwwi.onion (224.0.0.1)
Host is up (2.8s latency).

PORT STATE SERVICE
80/tcp open http
443/tcp open https

Nmap done: 1 IP address (1 host up) scanned in 4.05 seconds


TrevorC2 - Command and Control via Legitimate Behavior over HTTP


TrevorC2 is a client/server model for masking command and control through a normally browsable website. Detection becomes much harder because time intervals vary and no POST requests are used for data exfiltration.

There are two components to TrevorC2 - the client and the server. The client can be configured to be used with anything. In this example it's coded in Python but can easily be ported to C#, PowerShell, or whatever you want. Currently the trevorc2_client.py supports Windows, MacOS, and Linux. You can always byte compile the Windows one to get an executable, but preference would be to use Windows without having to drop an executable as a stager.

The way that the server works is by tucking away a parameter embedded in the cloned page’s source. This is completely configurable, and it’s recommended you configure everything to be unique in order to evade detection. Here is the workflow:
1. trevor2_server.py - edit the file first and customize what website you want to clone, etc. The server will clone a website of your choosing and stand up a server. This server is browsable by anyone and looks like a legitimate website. Contained within the source is a parameter (again, configurable) which contains the instructions for the client. Once a client connects, it searches for that parameter, then uses it to execute commands.
2. trevor2_client.py - all you need in any port of the client is the ability to call out to a website, parse some basic data, execute a command, and put the results in a base64-encoded query string parameter back to the site. That's it, not hard.
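The client-side encoding step in point 2 can be sketched in shell. The command output, host, and parameter name ("guid") here are all made-up placeholders, not TrevorC2’s actual defaults:

```shell
#!/bin/sh
# Encode command output into a base64 query-string parameter, as the client
# workflow above describes. Values below are illustrative only.
result='hostname-output'                 # stand-in for real command output
b64=$(printf '%s' "$result" | base64)    # base64-encode, no trailing newline
url="https://c2.example.com/?guid=${b64}"
echo "$url"
```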

Installation
pip install -r requirements.txt

Usage
First edit the trevor2_server.py - change the configuration options and site to clone.
python trevor2_server.py

Next, edit the trevor2_client.py - change the configuration and system you want it to communicate back to.
python trevor2_client.py



Dex-Oracle - A pattern based Dalvik deobfuscator which uses limited execution to improve semantic analysis

A pattern based Dalvik deobfuscator which uses limited execution to improve semantic analysis. Also, the inspiration for another Android deobfuscator: Simplify.

Before

After

sha1: a68d5d2da7550d35f7dbefc21b7deebe3f4005f3
md5: 2dd2eeeda08ac8c15be8a9f2d01adbe8

Installation

Step 1. Install Smali / Baksmali
Since you're an elite Android reverser, I'm sure you already have Smali and Baksmali on your path. If for some strange reason it's not already installed, this should get you started, but please examine it carefully before running:
mkdir -p ~/bin && cd ~/bin
curl --location -O https://bitbucket.org/JesusFreke/smali/downloads/smali-2.1.2.jar && mv smali-*.jar smali.jar
curl --location -O https://bitbucket.org/JesusFreke/smali/downloads/baksmali-2.1.2.jar && mv baksmali-*.jar baksmali.jar
curl --location -O https://bitbucket.org/JesusFreke/smali/downloads/smali
curl --location -O https://bitbucket.org/JesusFreke/smali/downloads/baksmali
chmod +x ./smali ./baksmali
export PATH=$PATH:$PWD

Step 2. Install Android SDK / ADB
Make sure adb is on your path.

Step 3. Install the Gem
gem install dex-oracle
Or, if you prefer to build from source:
git clone https://github.com/CalebFenton/dex-oracle.git
cd dex-oracle
gem install bundler
bundle install

Step 4. Connect a Device or Emulator
You must have either an emulator running or a device plugged in for Oracle to work.
Oracle needs to execute methods on a live Android system. This can be either a device or an emulator (preferred). If it's a device, make sure you don't mind running potentially hostile code on it.
If you'd like to use an emulator, and already have the Android SDK installed, you can create and start emulator images with:
android avd

Usage
Usage: dex-oracle [opts] <APK / DEX / Smali Directory>
-h, --help Display this screen
-s ANDROID_SERIAL, Device ID for driver execution, default=""
--specific-device
-t, --timeout N ADB command execution timeout in seconds, default="120"
-i, --include PATTERN Only optimize methods and classes matching the pattern, e.g. Ldune;->melange\(\)V
-e, --exclude PATTERN Exclude these types from optimization; including overrides
--disable-plugins STRING[,STRING]*
Disable plugins, e.g. stringdecryptor,unreflector
--list-plugins List available plugins
-v, --verbose Be verbose
-V, --vverbose Be very verbose
For example, to only deobfuscate methods in a class called Lcom/android/system/admin/CCOIoll; inside of an APK called obad.apk:
dex-oracle -i com/android/system/admin/CCOIoll obad.apk

How it Works
Oracle takes Android apps (APK), Dalvik executables (DEX), and Smali files as inputs. First, if the input is an APK or DEX, it is disassembled into Smali files. Then, the Smali files are passed to various plugins which perform analysis and modifications. Plugins search for patterns which can be transformed into something easier to read. In order to understand what the code is doing, some Dalvik methods are actually executed and the output is collected. This way, some method calls can be replaced with constants. After that, all of the Smali files are updated. Finally, if the input was an APK or a DEX file, the modified Smali files are recompiled and an updated APK or DEX is created.
Method execution is performed by the Driver. The input APK, DEX, or Smali is combined with the Driver into a single DEX using dexmerge and is pushed onto a device or emulator. Plugins can then use Driver which uses Java reflection to execute methods from the input DEX. The return values can be used to improve semantic analysis beyond mere pattern recognition. This is especially useful for many string decryption methods, which usually take an encrypted string or some byte array. One limitation is that execution is limited to static methods.

Hacking

Creating Your Own Plugin
There are three plugins which come with Oracle:
  1. Undexguard - removes certain types of Dexguard obfuscations
  2. Unreflector - removes some Java reflection
  3. String Decryptor - simple plugin which removes a common type of string encryption
If you encounter a new type of obfuscation, it may be possible to deobfuscate with Oracle. Look at the Smali and figure out if the code can either be:
  1. rearranged
  2. understood by executing some static methods
If either of these two are the case, you should try and write your own plugin. There are four steps to building your own plugin:
  1. identify Smali patterns
  2. figure out how to simplify the patterns
  3. figure out how to interact with driver and invoke methods
  4. figure out how to apply modifications directly
The included plugins should be a good guide for understanding steps #3 and #4. Driver is designed to help with step #2.
Of course, you're always welcome to share whatever obfuscation you come across and someone may eventually get to it.

Updating Driver
First, ensure dx is on your path. This is part of the Android SDK, but it's probably not on your path unless you're hardcore.
The driver folder is a Java project managed by Gradle. Import it into Eclipse, IntelliJ, etc. and make any changes you like. To finish updating the driver, run ./update_driver. This will rebuild the driver and convert the output JAR into a DEX.

Troubleshooting
If there's a problem executing driver code on your emulator or device, be sure to open monitor (part of the Android SDK) and check for any clues there. Even if the error doesn't make sense to you, it'll help if you post it along with the issue you'll create.
Not all Android platforms work well with dex-oracle. Some of them just crap out when trying to execute arbitrary DEX files. If you're having trouble with Segfaults or driver crashes, try using Android 4.4.2 API level 19 with ARM.
It's possible that a plugin sees a pattern it thinks is obfuscation but is actually some code it shouldn't execute. This seems unlikely because the obfuscation patterns are really unusual, but it is possible. If you're finding a particular plugin is causing problems and you're sure the app isn't protected by that particular obfuscator, i.e. the app is not DexGuarded but the DexGuard plugin is trying to execute stuff, just disable it.

More Information
  1. TetCon 2016 Android Deobfuscation Presentation
  2. Hacking with dex-oracle for Android Malware Deobfuscation


CredSniper - Phishing Framework which supports SSL and capture credentials with 2FA tokens


Easily launch a new phishing site fully presented with SSL and capture credentials along with 2FA tokens using CredSniper. The API provides secure access to the currently captured credentials which can be consumed by other applications using a randomly generated API token.

Benefits
  • Fully supported SSL via Let's Encrypt
  • Exact login form clones for realistic phishing
  • Any number of intermediate pages
    • (i.e. Gmail login, password and two-factor pages then a redirect)
  • Supports phishing 2FA tokens
  • API for integrating credentials into other applications
  • Easy to personalize using a templating framework

Basic Usage
usage: credsniper.py [-h] --module MODULE [--twofactor] [--port PORT] [--ssl] [--verbose] --final FINAL --hostname HOSTNAME
optional arguments:
-h, --help show this help message and exit
--module MODULE phishing module name - for example, "gmail"
--twofactor enable two-factor phishing
--port PORT listening port (default: 80/443)
--ssl use SSL via Let's Encrypt
--verbose enable verbose output
--final FINAL final url the user is redirected to after phishing is done
--hostname HOSTNAME hostname for SSL

Credentials
.cache : Temporarily store username/password when phishing 2FA
.sniped : Flat-file storage for captured credentials and other information

API End-point
  • View Credentials (GET) https://<phish site>/creds/view?api_token=<api token>
  • Mark Credential as Seen (GET) https://<phish site>/creds/seen/<cred_id>?api_token=<api token>
  • Update Configuration (POST) https://<phish site>/config
 {
'enable_2fa': true,
'module': 'gmail',
'api_token': 'some-random-string'
}

Modules
All modules can be loaded by passing the --module <name> command to CredSniper. These are loaded from a directory inside /modules. CredSniper is built using Python Flask and all the module HTML templates are rendered using Jinja2.
  • Gmail: The latest Gmail login cloned and customized to trigger/phish all forms of 2FA
    • modules/gmail/gmail.py: Main module loaded w/ --module gmail
    • modules/gmail/templates/error.html: Error page for 404's
    • modules/gmail/templates/login.html: Gmail Login Page
    • modules/gmail/templates/password.html: Gmail Password Page
    • modules/gmail/templates/authenticator.html: Google Authenticator 2FA page
    • modules/gmail/templates/sms.html: SMS 2FA page
    • modules/gmail/templates/touchscreen.html: Phone Prompt 2FA page

Installation

Ubuntu 16.04
You can install and run automatically with the following command:
$ git clone https://github.com/ustayready/CredSniper
$ cd CredSniper
~/CredSniper$ ./install.sh
Then, to run manually use the following commands:
~/$ cd CredSniper
~/CredSniper$ source bin/activate
(CredSniper) ~/CredSniper$ python credsniper.py --help
Note that Python 3 is required.

Screenshots

Gmail Module






fatcat - FAT Filesystems Explore, Extract, Repair, And Forensic Tool


This tool is designed to manipulate FAT filesystems, in order to explore, extract, repair, recover and forensic them. It currently supports FAT12, FAT16 and FAT32.
Tutorials & examples

Building and installing
You can build fatcat this way:
mkdir build
cd build
cmake ..
make
And then install it:
make install

Exploring

Using fatcat
Fatcat takes an image as argument:
fatcat disk.img [options]
You can specify an offset in the file with -O; this can be useful if there are multiple partitions on a block device, for instance:
fatcat disk.img -O 1048576 [options]
This will tell fatcat to begin at byte 1048576. Have a look at the partition tutorial.
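The offset passed to -O is just the partition's start sector multiplied by the sector size. A quick sanity check, assuming a typical first-partition start of sector 2048 and 512-byte sectors (as reported by fdisk -l):

```shell
#!/bin/sh
# Byte offset for fatcat -O = start sector * sector size.
# 2048 and 512 are assumed values for a common partition layout.
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"
```

With these assumed values the result is the 1048576 used in the example above.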

Listing
You can explore the FAT partition using -l option like this:
$ fatcat disk.img -l /
Listing path /
Cluster: 2
d 24/10/2013 12:06:00 some_directory/ c=4661
d 24/10/2013 12:06:02 other_directory/ c=4662
f 24/10/2013 12:06:40 picture.jpg c=4672 s=532480 (520K)
f 24/10/2013 12:06:06 hello.txt c=4671 s=13 (13B)
You can also provide a path like -l /some/directory.
Using -L, you can provide a cluster number instead of a path, which may sometimes be useful.
If you add -d, you will also see deleted files.
In the listing, the prefix is f or d to tell if the line concerns a file or a directory.
The c= indicates the cluster number; s= indicates the size in bytes (which should match the pretty-printed size just after it).
The h letter at the end indicates that the file is supposed to be hidden.
The d letter at the end indicates that the file was deleted.

Reading a file
You can read a file using -r; the file will be written to standard output:
$ fatcat disk.img -r /hello.txt
Hello world!
$ fatcat disk.img -r /picture.jpg > save.jpg
Using -R, you can provide a cluster number instead of a path, but the file size information will be lost and the output will be rounded up to a whole number of clusters, unless you provide the -s option to specify the file size to read.
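The chain-following behaviour behind -r/-R can be sketched as follows. This is an illustrative model, not fatcat's actual code: the FAT is represented as a mapping from cluster to next cluster, and the end-of-chain marker is the FAT32 value.

```python
# Illustrative sketch: reading a file by walking its FAT cluster chain.
# Values >= 0x0FFFFFF8 mark end-of-chain in FAT32.
END_OF_CHAIN = 0x0FFFFFF8

def read_chain(clusters, fat, first_cluster, size=None):
    """clusters: dict cluster -> raw bytes; fat: dict cluster -> next cluster."""
    data = b''
    cluster = first_cluster
    while cluster < END_OF_CHAIN:
        data += clusters[cluster]
        cluster = fat[cluster]
    # Without a known size (fatcat's -s), the result is rounded up
    # to whole clusters, exactly as described above for -R.
    return data if size is None else data[:size]

# Toy image: hello.txt lives in a single 512-byte cluster, number 4671.
clusters = {4671: b'Hello world!\n'.ljust(512, b'\x00')}
fat = {4671: 0x0FFFFFFF}  # end-of-chain
```

With the size from the directory entry (`size=13`) you get the exact file; without it, you get the full 512-byte cluster, mirroring the -R behaviour.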
You can use -x to extract the FAT filesystem directories to a directory:
fatcat disk.img -x output/
If you want to extract from a certain cluster, provide it with -c.
If you provide -d to extract, deleted files will be extracted too.

Undelete

Browsing deleted files & directories
As explained above, deleted files show up in listings when you provide -d:
$ fatcat disk.img -l / -d
f 24/10/2013 12:13:24 delete_me.txt c=5764 s=16 (16B) d
You can explore and spot a file or an interesting deleted directory.

Retrieving deleted file
To retrieve a deleted file, simply use -r to read it. Note that the produced file will be read contiguously from the original FAT system and may be broken.

Retrieving a deleted directory
To retrieve a deleted directory, note its cluster number and extract it like above:
# If your deleted directory cluster is 71829
fatcat disk.img -x output/ -c 71829
See also: undelete tutorial

Recover

Damaged file system
Assuming your disk has broken sectors, you may want to attempt recovery on it.
The first advice is to make a copy of your data using ddrescue, and save your disk to another one or into a sane file.
When sectors are broken, their bytes will be replaced with 0s in the ddrescue image.
A first step is to try exploring your image using -l as above, and to check -i to find out whether fatcat recognizes the disk as a FAT system.
Then, you can have a look at -2 to check whether the two file allocation tables differ and whether they look mergeable. It is very likely that they will be; in this case, you can try -m to merge the FAT tables. Don't forget to back them up first (see below).

Orphan files
When your filesystem is broken, there are lost files and lost directories that we call "orphaned", because you can't reach them from the normal tree.
fatcat provides an option to find those nodes: it performs an automated analysis of your system and explores the allocated sectors of your filesystem. This is done with -o.
You will get a list of directories and files, like this:
There is 2 orphaned elements:
Directory clusters 4592 to 4592: 2 elements, 49B
File clusters 4611 to 4611: ~512B
You can then use -L and -R directly to have a look into those files and directories:
$ fatcat disk.img -L 4592
Listing cluster 4592
Cluster: 4592
d 23/10/2013 17:45:06 ./ c=4592
d 23/10/2013 17:45:06 ../ c=0
f 23/10/2013 17:45:22 poor_orphan.txt c=4601 s=49 (49B)
Note that orphan files have an unknown size; this means that if you read one, you will get a file whose size is a multiple of the cluster size.
See also: orphaned files tutorial
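Conceptually, orphan detection boils down to finding clusters that are allocated in the FAT but not reachable from the root directory. The sketch below is a simplified model of that idea (fatcat actually walks the directory tree and groups clusters into files and directories):

```python
# Simplified model of orphan detection (-o): an orphan is a cluster that
# is allocated (non-zero FAT entry) but not reachable from the root.
def find_orphans(fat, reachable):
    """fat: dict cluster -> next-cluster value; reachable: set of clusters
    found by walking the directory tree from the root."""
    allocated = {c for c, nxt in fat.items() if nxt != 0}
    return sorted(allocated - reachable)

# Toy FAT: clusters 2, 4592 and 4611 are allocated, only 2 is reachable.
fat = {2: 0x0FFFFFFF, 4592: 0x0FFFFFFF, 4611: 0x0FFFFFFF, 7: 0}
```

Here `find_orphans(fat, {2})` reports clusters 4592 and 4611, matching the two orphaned elements in the example listing above.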

Hacking
You can use fatcat to hack your FAT filesystem.

Information
The -i flag will give you a lot of information about the filesystem:
fatcat disk.img -i
This will give you header data such as sector sizes, FAT sizes, the disk label, etc. It will also read the FAT table to estimate the usage of the disk.
You can also get information about a specific cluster by using -@:
fatcat disk.img -@ 1384
This will give you the cluster address (offset of the cluster in the filesystem) and the value of the next cluster in the two FAT tables.
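The cluster address reported by -@ follows the standard FAT layout: the data region starts right after the reserved sectors and the FAT copies, and cluster numbering starts at 2. A minimal sketch of that mapping (parameter names are illustrative, and the FAT12/16 root directory region is ignored for simplicity):

```python
# Sketch of how a cluster number maps to a byte offset in the image,
# per the standard FAT layout. Simplification: ignores the fixed root
# directory region that FAT12/FAT16 place before the data region.
def cluster_address(cluster, reserved_sectors, fat_count, sectors_per_fat,
                    bytes_per_sector=512, sectors_per_cluster=8):
    data_start = (reserved_sectors + fat_count * sectors_per_fat) * bytes_per_sector
    # The data region begins at cluster 2, hence the "- 2".
    return data_start + (cluster - 2) * sectors_per_cluster * bytes_per_sector
```

For example, with 32 reserved sectors and two FATs of 100 sectors each, cluster 2 sits at byte 118784, and each following cluster is 4096 bytes further on.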

Backing up & restoring the FAT
You can use -b to backup your FAT tables:
fatcat disk.img -b backup.fats
And use -p to write it back:
fatcat disk.img -p backup.fats

Writing to the FATs
You can write to the FAT tables with -w and -v:
fatcat disk.img -w 123 -v 124
This will write 124 as the value of the next cluster of cluster 123.
You can also choose the table with -t, 0 is both tables, 1 is the first and 2 the second.

Diff & merge the FATs
You can have a look at the diff of the two FATs by using -2:
# Watching the diff
$ fatcat disk.img -2
Comparing the FATs

FATs are exactly equals

# Writing 123 in the 500th cluster only in FAT1
$ fatcat disk.img -w 500 -v 123 -t 1
Writing next cluster of 500 from 0 to 123
Writing on FAT1

# Watching the diff
$ fatcat disk.img -2
Comparing the FATs
[000001f4] 1:0000007b 2:00000000

FATs differs
It seems mergeable
You can merge the two FATs using -m. For each entry that differs between the tables, if one value is zero and the other is not, the non-zero value is chosen:
$ fatcat disk.img -m
Begining the merge...
Merging cluster 500
Merge complete, 1 clusters merged
See also: fixing fat tutorial
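The merge rule described above is simple enough to sketch in a few lines. This is an illustrative model, not fatcat's code: entries that are non-zero in both tables but different cannot be merged automatically and are reported as conflicts.

```python
# Sketch of the FAT merge rule (-m): where the two tables differ and one
# entry is zero, keep the non-zero value; a non-zero/non-zero mismatch
# is a conflict that cannot be resolved automatically.
def merge_fats(fat1, fat2):
    merged, conflicts = [], []
    for i, (a, b) in enumerate(zip(fat1, fat2)):
        if a == b:
            merged.append(a)
        elif a == 0 or b == 0:
            merged.append(a or b)  # pick whichever is non-zero
        else:
            conflicts.append(i)
            merged.append(a)  # arbitrary choice; flagged as a conflict
    return merged, conflicts
```

Applied to the diff shown above (FAT1 has 0x7b at entry 0x1f4, FAT2 has 0), the non-zero 0x7b wins, which is what "It seems mergeable" announces.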

Directories fixing
Fatcat can fix directories having broken FAT chaining.
To do this, use -f. The whole filesystem tree will be walked, and directories that are unallocated in the FAT but that fatcat can still read will be fixed in the FAT.

Entries hacking
You can have information about an entry with -e:
fatcat disk.img -e /hello.txt
This will display the address of the entry (not the file itself), the cluster reference and the file size (if not a directory).
You can add the flag -c [cluster] to change the cluster of the entry and the flag -s [size] to change the entry size.
See also: fun with fat tutorial
You can use -k to search for a cluster reference.

Erasing unallocated files
You can erase unallocated sector data, either with zeroes using -z or with random data using -S.
Deleted files, for instance, will then become unreadable.


Mentalist - Graphical Tool For Custom Wordlist Generation


Mentalist is a graphical tool for custom wordlist generation. It utilizes common human paradigms for constructing passwords and can output the full wordlist as well as rules compatible with Hashcat and John the Ripper.

Install from Source

Prerequisites

Linux (APT package manager)
Check if Python 3 is installed by running
python3 --version
If it is not, run:
sudo apt-get update && sudo apt-get install python3.6
Additionally, you will need setuptools and Tk:
sudo apt-get install python3-setuptools python3-tk

OS X
There are varying ways of installing Python 3 on OS X, but the easiest is to install through Homebrew.
brew update && brew install python3

Windows
If using Windows, please refer to Installing Python 3 on Windows from the Hitchhiker's Guide. It is also extremely helpful to check the installer's option to add Python to your PATH.

Install Mentalist
Clone the Mentalist repository:
git clone https://github.com/sc0tfree/mentalist.git

Go into the directory:
cd mentalist

Run setup.py:
python3 setup.py install

Running Mentalist

You can now run mentalist from the shell with the command
mentalist

Future Work
  • Ability to scrape sites as an attribute in the Base Words node.
  • Add dictionaries and lists for more languages
  • Add UK post codes to Append/Prepend Nodes
  • Option to perform de-duplication of Base Words
  • Mentalist Chain file differencing


Faraday v2.7 - Collaborative Penetration Test and Vulnerability Management Platform

Faraday is the Integrated Multiuser Risk Environment you have always been looking for! It maps and leverages all the data you generate in real time, letting you track and understand your audits. Our dashboard for CISOs and managers uncovers the impact and risks being assessed by the audit in real time, without a single email. Developed with a specialized set of functionalities that helps users improve their own work, the main purpose is to re-use the available tools in the community and take advantage of them in a collaborative way!

As you have probably already heard, we are in the midst of a BIG upgrade in Faraday: replacing CouchDB with a much more robust database, PostgreSQL. This is taking a bit of time, as we are also redesigning parts of our database model to improve the performance of our app, but it will be totally worth the wait!

Even though we are busy laboring away with this upgrade, we have not forgotten about you, our users! We have just released a new version of Faraday with some bug fixes, new plugins, and a CSV importer!

What is new

Last modified and created timestamps

The hosts view now shows you the time of the most recent modification:

    Click on the host and you can see when it was created!


New feature: Import from CSV

Now you can import information from your CSV into Faraday and create any type of object in it! Hosts, Interfaces, Services, Vulnerabilities, Vulnerabilities Web and Tags can be created using our new CSV importer.

The CSV file needs to be formatted in a compatible way, all the information about this can be found HERE!



Changes and fixes

  • Added “Last modified” and “Created” in Hosts view.
  • Checks whether port 5985 is already in use and shows the corresponding error message.
  • Fixed bug when trying to run Faraday as second process and closing the terminal (&!).
  • Fixed bug where it asked for dependencies eternally when you have a different version than the one required.
  • Fixed small bug in the update_from_document method.
  • Fixed bug, makes the python library dependencies specific to the desired version.
  • Fixed GitHub language bar to reflect real code percentage.
  • Merge PR #195: Create gentoo_requirements_extras.txt (New Github wiki page).
  • Merge PR #225: Add references to found vulnerabilities in nmap plugin.
  • New plugin: Netsparker cloud.
  • New plugin: Lynis (Winner of Faraday Challenge 2017).
  • New Fplugin: changes the status of all vulnerabilities of a specific workspace to closed.
  • New Fplugin: combines the “create_interface” and “create_host” scripts into one (create_interface_and_host script).
  • New Fplugin: import_csv , now you can import Faraday objects from a CSV.


https://www.faradaysec.com/
https://github.com/infobyte/faraday
https://twitter.com/faradaysec
https://forum.faradaysec.com/
https://www.faradaysec.com/ideas

Cr3dOv3r - Know The Dangers Of Credential Reuse Attacks


Your best friend in credential reuse attacks.
You give Cr3dOv3r an email and it does two simple (but useful) jobs:
  • It searches for public leaks of the email and, if there are any, returns all available details about the leak (using the hacked-emails site API).
  • Then you give it the email's old or leaked password, and it checks these credentials against 16 websites (e.g. Facebook, Twitter, Google...) and tells you if the login succeeds on any of them!

Imagine this scenario:
  • You check a targeted email with this tool.
  • The tool finds it in a leak, so you open the leak link.
  • You get the leaked password after searching the leak.
  • You go back to the tool and enter this password to check whether the user reuses it on any website.
  • You can imagine the rest.

Screenshots



Usage
usage: Cr3d0v3r.py [-h] email

positional arguments:
email Email/username to check

optional arguments:
-h, --help show this help message and exit

Installing and requirements

To make the tool work at its best you must have :
  • Python 3.x.
  • Linux or Windows system.
  • The requirements mentioned in the next few lines.

Installing
For Windows (after downloading the ZIP and unzipping it):
cd Cr3dOv3r-master
python -m pip install -r win_requirements.txt
python Cr3dOv3r.py -h
For Linux:
git clone https://github.com/D4Vinci/Cr3dOv3r.git
chmod 777 -R Cr3dOv3r
cd Cr3dOv3r
pip3 install -r requirements.txt
python Cr3dOv3r.py -h
If you want to add a website to the tool, follow the instructions in the wiki

Contact

MHA - Mail Header Analyzer


Mail Header Analyzer is a tool written in Flask for parsing email headers and converting them to a human-readable format. It can also:
  • Identify hop delays.
  • Identify the source of the email.
  • Identify hop country.
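The hop-delay idea can be illustrated with the standard library: each Received: header carries the timestamp stamped by that hop (newest first in a raw message), so diffing consecutive timestamps yields the per-hop delay. This is a simplified sketch of the concept, not MHA's actual parsing code:

```python
# Sketch of hop-delay computation: parse the date from each Received:
# header and diff consecutive timestamps. Real headers need fuller
# parsing; these date strings are illustrative.
from email.utils import parsedate_to_datetime

received_dates = [  # as they appear in a raw message, newest hop first
    'Tue, 24 Oct 2017 12:00:05 +0000',
    'Tue, 24 Oct 2017 12:00:03 +0000',
    'Tue, 24 Oct 2017 12:00:00 +0000',
]
times = [parsedate_to_datetime(d) for d in reversed(received_dates)]
delays = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
```

For the sample dates above, the delays come out as 3 and 2 seconds; a large value at one hop points at a slow or misconfigured relay.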

MHA is an alternative for the following:
Name                       Developer   Issues
MessageHeader              Google      Not showing all the hops.
EmailHeaders               Mxtoolbox   Not accurate and slow.
Message Header Analyzer    Microsoft   Broken UI.

Installation
Install system dependencies:
sudo apt-get update
sudo apt-get install python-pip
sudo pip install virtualenv
Create a Python virtual environment and activate it:
virtualenv virt
source virt/bin/activate
Clone the GitHub repo:
git clone https://github.com/lnxg33k/MHA.git
Install Python dependencies:
cd MHA
pip install -r requirements.txt
Run the development server:
python server.py -d
You can change the bind address or port by specifying the appropriate options: python server.py -b 0.0.0.0 -p 8080
If everything went well, you can now visit http://localhost:8080.

Docker
A Dockerfile is provided if you wish to build a docker image.
docker build -t mha:latest .
You can then run a container with:
docker run -d -p 8080:8080 mha:latest


