Channel: KitPloit - PenTest Tools!

SYNwall - A Zero-Configuration (IoT) Firewall



Zero config (IoT) firewall.

SYNwall is a project built (for the time being) as a Linux kernel module that implements a transparent, no-config/no-maintenance firewall.


Basics

IoT devices are usually outside of central control, run on low-profile hardware in tough environmental conditions, and leave us no time to dedicate to maintaining their security. So we may not be able to patch our IoT infrastructure, and it will be very hard to maintain "firewall-like" access control.

The idea is to create a decentralized, one-way OneTimePassword (OTP) to enable NETWORK access to the device. All traffic not containing the OTP will be discarded. No prior knowledge about who needs access is required; we just need a Pre-Shared Key at deployment time. The protection is completely transparent to the application level, because it is implemented at the network protocol level (TCP and UDP).
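The mechanism can be sketched in userspace Python. This is only an illustrative model: the real module computes the OTP in kernel space with the Quark hash, and the exact input layout (PSK concatenated with the rounded time) is an assumption here, with SHA-256 standing in for Quark.

```python
import hashlib
import time

def rounded_time(now_ms, precision):
    """Drop the low `precision` bits of the clock so that two devices
    whose clocks differ by less than the window agree on the value."""
    return (now_ms >> precision) << precision

def compute_otp(psk, precision=10, now_ms=None):
    # Illustrative only: the real module uses the Quark hash in kernel
    # space; SHA-256 stands in for it, and the input layout is assumed.
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    t = rounded_time(now_ms, precision).to_bytes(8, "big")
    return hashlib.sha256(psk + t).digest()

psk = b"0" * 32  # the PSK must be at least 32 bytes
# Two peers whose clocks differ by less than the rounding window
# derive the same OTP, so the receiver can validate the packet:
assert compute_otp(psk, 10, 1_000_000) == compute_otp(psk, 10, 1_000_300)
```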


Install

This repository contains the Linux kernel module. It has been tested with 3.x, 4.x and 5.x kernels on X86_64, ARM, MIPS and AARCH64 architectures.

It requires the current kernel headers for compilation, which can usually be installed with the proper package manager. For example, on Debian-like distros:

sudo apt-get install linux-headers-$(uname -r)

Then, it should be enough to run the compilation:

make


Configuration

The module can be loaded in the usual way, with insmod or modprobe.

It has several parameters that allow you to customize the behaviour:

  • Pre-Shared Key used for the OneTimePassword

    psk:

The PSK must be a sequence of 32 to 1024 bytes. It will be part of the OTP, so its length influences the size of the OTP injected into the packet. Without this parameter, the module will not load.

  • Enable UDP

    enable_udp: 0

Enable/Disable the OTP for the UDP protocol. By default it is disabled; set to 1 to enable it. The OTP on UDP requires the module to be active on both communicating devices, since the OTP must be removed (by the module) before the packet is forwarded to the application level. If this is not the case, you may experience weird behaviors. The UDP connection tracking relies on the conntrack module, so you may have to insert it to use this functionality (this depends on the installation). An error will be displayed in the kernel log if so.

NOTE: by default, port 53 (DNS) and port 123 (NTP) are blacklisted for outgoing connections, so the OTP is not added. If you need to change this, look for the udp_blacklist[] array. I will add a parameter for this in the future.

  • Time precision parameter

    precision: 10

The OTP is also computed with the current device time. Since the clocks of the participating devices could differ, you can "round" the time to a specific value to allow for time skew. Default is 10.

The precision is expressed as a power of two (you may wonder why: it has been a decision made to increase performance and have low impact on low-end devices):

    ...
    9 -> 1 second
    10 -> 8 seconds
    ...

    Precision under 8 is probably not going to work.

If you increase the precision value to 11 or more, consider also increasing the MAX_TRASH value in SYNgate_netfilter.c

  • Disable the OTP for outgoing packets

    disable_out: 0

You may want to disable the OTP in outgoing packets by setting this to 1. In this case the module will just drop packets without the OTP, but it will not participate in the communication mesh with other SYNwall devices. It can be useful in case of issues with outgoing packets on uplink devices.

  • Enable DoS protection

    enable_antidos: 0

This option can be enabled by setting it to 1. If set, it limits the rate of OTP computations on the device (the allow_otp_ms variable, set to 1000 ms by default). In this case, only one OTP computation per second is allowed, preserving the CPU time of the device in case of a DoS attack.

  • Enable IP Spoofing protection

    enable_antispoof: 0

By default the IP is not part of the OTP. This could leave room for replay attacks. You can enable the antispoof protection to be fully safe, but it may break communication if NATs are in place between the devices.

  • Delay in starting up the module functionalities (ms)

    load_delay: 10000

You can decide to wait a while before activating the protection after the module loads. This can be useful to regain access to the device in case of issues after a reboot. The default is 10 seconds.

  • List of ports for port knocking failsafe

    portk: 0,0,0,0,0

If the device clock is going bananas, it could be difficult to get access. One way around this is the "delay" discussed above, but you can also set a "port knocking" sequence which can disable the module for a while. The list, if defined, must contain 5 TCP ports. If the module identifies a SYN packet on each of these ports within one second, it disables itself for the same time set as "load_delay". NOTE: if you actively use the sequence, remember to change it, since it can easily be sniffed!
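A failsafe knock client can be sketched as follows. This is an illustrative sketch; the host and port numbers are hypothetical and must match the portk= sequence the module was loaded with.

```python
import socket

def knock(host, ports, timeout=0.2):
    """Send a TCP SYN to each knock port in order.

    A plain connect() attempt is enough: the SYN is what SYNwall
    inspects, so it does not matter whether the connection completes.
    """
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect_ex((host, port))  # fire the SYN; ignore the result
        except OSError:
            pass
        finally:
            s.close()

# Ports must match the portk= sequence (hypothetical values below):
# knock("192.168.1.50", [12, 13, 14, 15, 16])
```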


Example of usage

WARNING: this is going to drop all the traffic to your device, so be sure to know how to access with another SYNwall device or by disabling it remotely (port knocking).

sudo insmod SYNwall.ko psk=123456789012345678901234567890123 precision=10 portk=12,13,14,15,16 load_delay=5000 enable_udp=1


Project Structure

SYNwall repository:

  • SYNwall_netfilter (.c and .h): Netfilter main package, with hooks and basic process functions
    • SYNauth (.c and .h): authentication functions, used to manage hashes and crypt stuff
    • SYNquark (.c and .h): Quark hashing implementation, directly based on the work done by Jean-Philippe Aumasson (@veorq) at https://github.com/veorq/Quark
  • SYNgate_netfilter (.c and .h): Netfilter package for SOCKS server module. It implements only the "outgoing" packet marking and is able to manage multiple PSK and Networks

SYNwall_distrib repository:

  • Ansible scripts for automatic distribution. See README.md there

SYNwall_ATAES132 repository:

  • PoC for secure EEPROM usage (PSK storage). See README.md there

SYNwall_docs repository:

  • Some docs, videos, DEMOs

Performance

Everything has been implemented to be usable on low-end devices with very low resources; the Quark crypto hash was chosen for exactly this reason. The overhead added by the OTP computation is almost invisible in regular usage, while a considerable CPU saving can be seen when a lot of traffic is sent to the device.

SYNgate

As a companion tool, the repository also contains the SYNgate module. SYNgate has been built with the same logic as the base SYNwall module, but it works with multiple networks and PSKs: you can define multiple sets of networks (with the related PSK and other options). The idea is to install it on a SOCKS server, to allow using it for different protocols and destinations. The SYNwall_VM repository contains some scripts to build such a system, with a SOCKS server and the module pre-installed.

SYNgate works only on outgoing traffic.

To compile the SYNgate module, just use:

make SYNGATE=1


SYNgate Configuration

The module can be loaded in the usual way, with insmod or modprobe.

It has several parameters that allow you to customize the behaviour. It is very similar to the SYNwall configuration, but with a different logic: parameters are comma-separated lists of values, and all of them have to be specified. The first value of one list corresponds to the first value of the others, and so on. Not all SYNwall parameters are available, just the ones that make sense (remember that SYNgate does not affect incoming traffic).

Only one parameter differs from the SYNwall config: dstnet_list.

  • Destination network

    dstnet_list: ip1/mask1[,ip2/mask2]...

List of networks in the IP/MASK format. Example: 192.168.1.0/24. If an IP is given (instead of a network address), the network will be computed. All the IPs belonging to a network will use the connection parameters (PSK, precision, etc.) specified in the other lists at the same array index.

  • Pre-Shared Key used for the OneTimePassword

psk_list: psk1[,psk2]...

Each PSK must be a sequence of 32 to 1024 bytes. It will be part of the OTP, so its length influences the size of the OTP injected into the packet. Without this parameter, the module will not load.

  • Enable UDP

    enable_udp_list: {0|1}[,{0|1}]...

Enable/Disable the OTP for the UDP protocol. Set to 0 to disable it or 1 to enable it. The OTP on UDP requires the module to be active on both communicating devices, since the OTP must be removed (by the module) before the packet is forwarded to the application level. If this is not the case, you may experience weird behaviors. The UDP connection tracking relies on the conntrack module, so you may have to insert it to use this functionality (this depends on the installation). An error will be displayed in the kernel log if so.

NOTE: by default, port 53 (DNS) and port 123 (NTP) are blacklisted for outgoing connections, so the OTP is not added. If you need to change this, look for the udp_blacklist[] array. I will add a parameter for this in the future.

  • Time precision parameter

    precision_list: {9|10|...}[,{9|10|...}]...

The OTP is also computed with the current device time. Since the clocks of the participating devices could differ, you can "round" the time to a specific value to allow for time skew.

The precision is expressed as a power of two (you may wonder why: it has been a decision made to increase performance and have low impact on low-end devices):

    ...
    9 -> 1 second
    10 -> 8 seconds
    ...

  • Enable IP Spoofing protection

    enable_antispoof_list: {0|1}[,{0|1}]...

Enable/Disable the antispoof protection: set to 0 to disable it or 1 to enable it. By default the IP is not part of the OTP on SYNwall, which could leave room for replay attacks. You can enable the antispoof protection to be fully safe, but it may break communication if NATs are in place between the devices.
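The positional pairing of the lists, and the network computation described for dstnet_list, can be sketched in Python. This is an illustrative model of the configuration logic, not the module's kernel code:

```python
import ipaddress

def build_configs(dstnet_list, psk_list, enable_udp_list, precision_list,
                  enable_antispoof_list):
    """Pair SYNgate's comma-separated lists by position: index 0 of each
    list describes the first destination network, index 1 the second."""
    lists = [dstnet_list, psk_list, enable_udp_list, precision_list,
             enable_antispoof_list]
    if len({len(l) for l in lists}) != 1:
        raise ValueError("all parameter lists must have the same length")
    return [
        {
            # strict=False folds a host IP into its network, mirroring
            # what happens when an IP is given instead of a network address
            "dstnet": ipaddress.ip_network(net, strict=False),
            "psk": psk, "enable_udp": udp,
            "precision": prec, "enable_antispoof": spoof,
        }
        for net, psk, udp, prec, spoof in zip(*lists)
    ]

configs = build_configs(
    ["10.1.1.37/24", "198.168.10.0/24"],   # host IP gets normalized
    ["d41d8cd98f00b204e9800998ecf8427e", "efebceec0de382839cd38bffcdc6bf0c"],
    [0, 0], [10, 9], [0, 1],
)
assert str(configs[0]["dstnet"]) == "10.1.1.0/24"
assert configs[1]["precision"] == 9
```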


Example of usage

Example with two networks directly from CLI:

sudo insmod SYNgate.ko dstnet_list=10.1.1.0/24,198.168.10.0/24 psk_list=d41d8cd98f00b204e9800998ecf8427e,efebceec0de382839cd38bffcdc6bf0c enable_udp_list=0,0 precision_list=10,9 enable_antispoof_list=0,1

or using a configuration file:

sudo cat /etc/modprobe.d/SYNgate.conf
# SYNgate config file
# Keep it safe!!
options SYNgate dstnet_list=10.1.1.0/24,198.168.10.0/24
options SYNgate psk_list=d41d8cd98f00b204e9800998ecf8427e,efebceec0de382839cd38bffcdc6bf0c
options SYNgate enable_udp_list=0,0
options SYNgate precision_list=10,9
options SYNgate enable_antispoof_list=0,1

License

GPL-3.0


Author Information

Sorint.Lab




Dwn - D(Ockerp)Wn - A Docker Pwn Tool Manager



dwn is a "docker-compose for hackers". Using a simple YAML "plan" format similar to docker-compose, image names, versions and volume/port mappings are defined to set up a tool for use.


features

With dwn you can:

  • Configure common pentest tools for use in a docker container
  • Have context aware volume mounts
  • Dynamically modify port bindings without container restarts
  • And more!

installation

Simply run pip3 install dwn.


usage

dwn is actually really simple. The primary concept is that of "plans", where information about a tool (such as name, version, mounts and binds) is defined. There are a few built-in plans already available, but you can also roll your own. Without arguments, just running dwn looks like this:

❯ dwn
Usage: dwn [OPTIONS] COMMAND [ARGS]...

__
___/ / _____
/ _ / |/|/ / _ \
\_,_/|__,__/_//_/
docker pwn tool manager
by @leonjza / @sensepost

Options:
--debug enable debug logging
--help Show this message and exit.

Commands:
check Check plans and Docker environment
network Work with networks
plans Work with plans
run Run a plan
show Show running plans
stop Stop a plan

To list the available plans, run dwn plans show.

❯ dwn plans show
dwn plans
┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ name ┃ path ┃
┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ sqlmap │ /tools/dwn/plans/sqlmap.yml │
│ wpscan │ /tools/dwn/plans/wpscan.yml │
│ gowitness-report │ /tools/dwn/plans/gowitness-report.yml │
│ msfconsole │ /tools/dwn/plans/msfconsole.yml │
│ gowitness │ /tools/dwn/plans/gowitness.yml │
│ nginx │ /tools/dwn/plans/nginx.yml │
│ cme │ /tools/dwn/plans/cme.yml │
│ netcat-reverse │ /tools/dwn/plans/netcat-reverse.yml │
│ semgrep-sec │ /tools/dwn/plans/semgrep-sec.yml │
│ semgrep-ci │ ~/.dwn/plans/semgrep-ci.yml │
│ neo4j │ ~/.dwn/plans/neo4j.yml │
└──────────────────┴───────────────────────────────────────┘
11 plans

To run a plan such as gowitness, screenshotting https://google.com, run dwn run gowitness --disable-db single https://www.google.com. This plan will exit when done, so you don't have to run dwn stop gowitness.

❯ dwn run gowitness --disable-db single https://www.google.com
(i) found plan for gowitness
(i) volume: ~/scratch -> /data
(i) streaming container logs
08 Feb 2021 10:46:18 INF preflight result statuscode=200 title=Google url=https://www.google.com

❯ ls screenshots
https-www.google.com.png

A plan such as netcat-reverse, however, will stay alive. You can connect to the plan's TTY after it is started to interact with any shells you may receive. Example usage would be:

❯ dwn run netcat-reverse
(i) found plan for netcat-reverse
(i) port: 4444<-4444
(i) container booted! attach & detach commands are:
(i) attach: docker attach dwn_wghz_netcat-reverse
(i) detach: ctrl + p, ctrl + q

Attaching to the plan (and executing nc -e somewhere else):

❯ docker attach dwn_wghz_netcat-reverse
connect to [::ffff:172.19.0.2]:4444 from dwn_wghz_netcat-reverse_net_4444_4444.dwn:46318 ([::ffff:172.19.0.3]:46318)

env | grep -i shell
SHELL=/bin/zsh

read escape sequence

You can get a running plan report too

❯ dwn show
running plan report
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━┓
┃ plan ┃ container(s) ┃ port(s) ┃ volume(s) ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━┩
│ netcat-reverse │ dwn_wghz_netcat-reverse_net_4444_4444 │ 4444<-4444 │ │
│ │ dwn_wghz_netcat-reverse │ │ │
└────────────────┴───────────────────────────────────────┴────────────┴───────────┘

And finally, stop a plan.

❯ dwn stop netcat-reverse -y
(i) stopping 2 containers for plan netcat-reverse

networking

dwn lets you dynamically map ports to plans without any container restarts. Networking commands live under the dwn network subcommand. Taking the nginx plan as an example, we can add a port mapping dynamically. First, start the nginx plan.

❯ dwn run nginx
(i) found plan for nginx
(i) volume: ~/scratch -> /usr/share/nginx/html
(i) port: 80<-8888
(i) container dwn_wghz_nginx started for plan nginx, detaching

Next, test the communication with cURL

❯ curl localhost:8888/poo.txt
haha, you touched it!

❯ curl localhost:9000/poo.txt
curl: (7) Failed to connect to localhost port 9000: Connection refused

Port 9000 is not open, so let's add a new port binding and test connectivity

❯ dwn network add nginx -i 80 -o 9000
(i) port binding for 9000->nginx:80 created

❯ curl localhost:9000/poo.txt
haha, you touched it!

updating plans

The dwn plans pull command can be used to update the images defined in plans. To only update a single plan, add the plan name after pull. Eg: dwn plans pull nginx.


writing plans

A dwn plans new command exists to quickly scaffold a new plan. While only a few options are needed to get a plan up and running, all of the options that exist in the Python Docker SDK for the run call are valid keys that can be used.
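As a sketch of that idea, a plan can be seen as a dictionary whose extra keys are forwarded verbatim to the Docker SDK's containers.run() call. The field names below are illustrative, not dwn's actual schema:

```python
# Hypothetical plan structure, loosely modeled on the nginx plan shown
# earlier; the field names are illustrative, not dwn's schema.
plan = {
    "name": "nginx",
    "image": "nginx",
    "version": "latest",
    "volumes": {".": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    "ports": {"80/tcp": 8888},
    "detach": True,  # any docker-py run() option would be valid here
}

def plan_to_run_kwargs(plan):
    """Split a plan into the image reference and the keyword arguments
    that would be forwarded to docker's client.containers.run()."""
    reserved = {"name", "image", "version"}
    image = "{}:{}".format(plan["image"], plan.get("version", "latest"))
    kwargs = {k: v for k, v in plan.items() if k not in reserved}
    return image, kwargs

image, kwargs = plan_to_run_kwargs(plan)
assert image == "nginx:latest"
assert kwargs["ports"] == {"80/tcp": 8888}
```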



Ronin - A Ruby Platform For Vulnerability Research And Exploit Development



Ronin is a Ruby platform for vulnerability research and exploit development. Ronin allows for the rapid development and distribution of code, Exploits, Payloads, Scanners, etc, via Repositories.


Console

Ronin provides users with a powerful Ruby Console, pre-loaded with convenience methods. In the Console one can work with data and automate complex tasks with greater ease than on the command line.

>> File.read('data').base64_decode

Database

Ronin ships with a preconfigured Database that one can interact with from Ruby, without having to write any SQL.

>> HostName.tld('eu').urls.with_query_param('id')

Repositories

Ronin provides a Repository system, allowing users to organize and share miscellaneous Data, Code, Exploits, Payloads, Scanners, etc.

$ ronin install git://github.com/user/myexploits.git

Libraries

Ronin provides libraries with additional functionality, such as Exploitation and Scanning:

$ gem install ronin-exploits

Features
  • Supports installing/updating/uninstalling of Repositories.
  • Provides a Database using DataMapper with:
    • {Ronin::Author}
    • {Ronin::License}
    • {Ronin::Arch}
    • {Ronin::OS}
    • {Ronin::Software}
    • {Ronin::Vendor}
    • {Ronin::Address}
      • {Ronin::MACAddress}
      • {Ronin::IPAddress}
      • {Ronin::HostName}
    • {Ronin::Port}
      • {Ronin::TCPPort}
      • {Ronin::UDPPort}
    • {Ronin::Service}
    • {Ronin::OpenPort}
    • {Ronin::OSGuess}
    • {Ronin::UserName}
    • {Ronin::URL}
    • {Ronin::EmailAddress}
    • {Ronin::Credential}
      • {Ronin::ServiceCredential}
      • {Ronin::WebCredential}
    • {Ronin::Organization}
    • {Ronin::Campaign}
    • {Ronin::Target}
  • Caches exploits, payloads, scanners, etc stored within Repositories into the Database.
  • Convenience methods provided by ronin-support.
  • Provides a customized Ruby Console using Ripl with:
    • Syntax highlighting.
    • Tab completion.
    • Auto indentation.
    • Pretty Printing (pp).
    • print_info, print_error, print_warning and print_debug output helper methods with color-output.
    • Inline commands (!nmap -v -sT victim.com)
  • Provides an extensible command-line interface.

Synopsis

Start the Ronin console:

$ ronin

Run a Ruby script in Ronin:

$ ronin exec script.rb

View available commands:

$ ronin help

View a man-page for a command:

$ ronin help wordlist

Install a Repository:

$ ronin install svn://example.com/path/to/repo

List installed Repositories:

$ ronin repos

Update all installed Repositories:

$ ronin update

Update a specific Repository:

$ ronin update repo-name

Uninstall a specific Repository:

$ ronin uninstall repo-name

List available Databases:

$ ronin database

Add a new Database:

$ ronin database --add team --uri mysql://user:pass@vpn.example.com/db

Remove a Database:

$ ronin database --remove team

Requirements

Install
$ gem install ronin

Development
  1. Fork It!
  2. Clone It!
  3. cd ronin
  4. bundle install
  5. git checkout -b my_feature
  6. Code It!
  7. bundle exec rake spec
  8. git push origin my_feature

License

Copyright (c) 2006-2021 Hal Brodigan (postmodern.mod3 at gmail.com)

This file is part of ronin.

Ronin is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

Ronin is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Ronin. If not, see https://www.gnu.org/licenses/.



Traitor - Automatic Linux Privesc Via Exploitation Of Low-Hanging Fruit E.G. GTFOBin



Automatically exploit low-hanging fruit to pop a root shell. Linux privilege escalation made easy!

Traitor packages up a bunch of methods to exploit local misconfigurations and vulnerabilities (including most of GTFOBins) in order to pop a root shell.

It'll exploit most sudo privileges listed in GTFOBins to pop a root shell, as well as exploiting issues like a writable docker.sock. More routes to root will be added over time too.


Usage

Run with no arguments to find potential vulnerabilities/misconfigurations which could allow privilege escalation. Add the -p flag if the current user's password is known; the password will be requested if it is needed to analyse sudo permissions etc.

traitor -p

Run with the -a/--any flag to find potential vulnerabilities, attempting to exploit each, stopping if a root shell is gained. Again, add the -p flag if the current user password is known.

traitor -a -p

Run with the -e/--exploit flag to attempt to exploit a specific vulnerability and gain a root shell.

traitor -p -e docker:writable-socket

Supported Platforms

Traitor will run on all Unix-like systems, though certain exploits will only function on certain systems.


Getting Traitor

Grab a binary from the releases page, or use go:

CGO_ENABLED=0 go get -u github.com/liamg/traitor/cmd/traitor

If the machine you're attempting privesc on cannot reach GitHub to download the binary, and you have no way to upload the binary to the machine over SCP/FTP etc., then you can try base64-encoding the binary on your machine, echoing the encoded string into a pipe to base64 -d > /tmp/traitor on the target machine, and remembering to chmod +x it once it arrives.
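The transfer relies only on base64 surviving a copy/paste or echo. A quick round-trip check of the idea (illustrative; the fake ELF bytes stand in for the real binary):

```python
import base64

def encode_for_transfer(path):
    """Base64-encode a binary so the string can be pasted/echoed on the
    target and decoded there with `base64 -d > /tmp/traitor`."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round-trip check of the technique: what is decoded on the target is
# byte-identical to the original binary.
blob = b"\x7fELF...fake binary contents"
encoded = base64.b64encode(blob).decode("ascii")
assert base64.b64decode(encoded) == blob
```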



Adfsbrute - A Script To Test Credentials Against Active Directory Federation Services (ADFS), Allowing Password Spraying Or Bruteforce Attacks



A script to test credentials against Active Directory Federation Services (ADFS), calculating the ADFS url of an organization and allowing password spraying or bruteforce attacks.

The main idea is to carry out password spraying attacks with a random, high delay between each test, using a list of proxies or Tor to make detection by the Blue Team more difficult. Brute-force attacks are also possible, as is testing credentials in the format username:password (for example from Pwndb). Tested logins are stored in a log file to avoid testing them twice.
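The spraying loop described can be sketched as follows. This is a skeleton, not the script's actual code: try_login is a hypothetical placeholder for the real ADFS request, and the parameter names merely mirror the script's options.

```python
import random
import time

def spray(credentials, try_login, tested_log,
          min_delay=30, max_delay=60, stop_on_success=False):
    """Password-spraying skeleton: random delay between attempts and a
    skip-set of already-tested logins, as the script describes.

    try_login(user, password) -> bool is a placeholder for the actual
    ADFS request; the random sleep is what spreads attempts out.
    """
    random.shuffle(credentials)          # -r randomizes the order
    found = []
    for user, password in credentials:
        key = "{}:{}".format(user, password)
        if key in tested_log:            # never test a login twice
            continue
        if try_login(user, password):
            found.append((user, password))
            if stop_on_success:          # -s stops at the first hit
                break
        tested_log.add(key)
        time.sleep(random.uniform(min_delay, max_delay))
    return found

# Dry run with a fake login check and no real delay:
tested = set()
hits = spray([("a@x.com", "Company123"), ("b@x.com", "Company123")],
             lambda u, p: u.startswith("b"), tested,
             min_delay=0, max_delay=0)
assert hits == [("b@x.com", "Company123")]
```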


Usage
./adfsbrute.py -t TARGET [-u USER] [-U USER_LIST] [-p PASSWORD] [-P PASSWORD_LIST] [-UP USERPASSWORD_LIST]
[-m MIN_TIME] [-M MAX_TIME] [-tp TOR_PASSWORD] [-pl PROXY_LIST] [-n NUMBER_OF_REQUESTS_PER_IP]
[-s STOP_ON_SUCCESS] [-r RANDOM_COMBINATIONS] [-d DEBUG] [-l LOG_FILE]

The parameters for the attacks are:

* -t: Target domain. Example: test.com

* -u: Single username. Example: agarcia@domain.com

* -U: File with a list of usernames. Example: users.txt

* -p: Single password. Example: Company123

* -P: File with a list of passwords. Example: passwords.txt

* -UP: File with a list of credentials in the format "username:password". Example: userpass.txt

* -m: Minimum value of random seconds to wait between each test. Default: 30

* -M: Maximum value of random seconds to wait between each test. Default: 60

* -tp: Tor password (change IP addresses using Tor)

* -pl: Use a proxy list (change IP addresses using a list of proxy IPs)

* -n: Number of requests before changing IP address (used with -tp or -pl). Default: 1

* -s: Stop on success, when one correct credential is found. Default: False

* -r: Randomize the combination of users and passwords. Default: True

* -d: Show debug messages. Default: True

* -l: Log file location with already tested credentials. Default: tested.txt

Examples

Password spraying with password "Company123", tor password is "test123" and changing the IP every 3 requests:

python3 adfsbrute.py -t company.com -U users.txt -p Company123 -tp test123 -n 3


 

Password spraying with password "Company123", tor password is "test123", changing the IP for every request, random delay time between 10 and 20 seconds and do not randomize the order of users:

python3 adfsbrute.py -t company.com -U users.txt -p Company123 -tp test123 -m 10 -M 20 -r False



Finding ADFS url:

python3 adfsbrute.py -t company.com



Using Tor

To use Tor to change the IP for every request, you must hash a password:

tor --hash-password test123

In the file /etc/tor/torrc, uncomment the ControlPort and HashedControlPassword variables, and add the hash to the latter:

ControlPort 9051
HashedControlPassword 16:7F314CAB402A81F860B3EE449B743AEC0DED9F27FA41831737E2F08F87

Restart the tor service and use this password as an argument for the script ("-tp test123" or "--tor_password test123"):

service tor restart
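Changing the exit IP between batches of requests boils down to the standard Tor control-port exchange (AUTHENTICATE, then SIGNAL NEWNYM). A minimal sketch of that exchange, assuming the ControlPort configuration above (this is illustrative, not the script's own code):

```python
import socket

def auth_cmd(password):
    """Build the control-port AUTHENTICATE line for the clear password
    that was hashed into torrc's HashedControlPassword."""
    return 'AUTHENTICATE "{}"\r\n'.format(password).encode()

def tor_newnym(password, host="127.0.0.1", port=9051):
    """Ask the Tor control port for a fresh circuit, and so a new exit
    IP, which is the effect the -tp/-n options rely on."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(auth_cmd(password))
        if not s.recv(1024).startswith(b"250"):
            raise RuntimeError("Tor control authentication failed")
        s.sendall(b"SIGNAL NEWNYM\r\n")
        if not s.recv(1024).startswith(b"250"):
            raise RuntimeError("NEWNYM rejected")

# tor_newnym("test123")  # needs a running Tor with ControlPort 9051
```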

Note

This script is intended for use in security audits. DO NOT use it without proper authorization from the company owning the ADFS, or you will lock out accounts.



MoveKit - Cobalt Strike Kit For Lateral Movement



Movekit is an extension of Cobalt Strike's built-in lateral movement, leveraging the execute_assembly function with the SharpMove and SharpRDP .NET assemblies. The aggressor script handles payload creation by reading the template files for a specific execution type.

IMPORTANT: to use the script, a user only needs to load the MoveKit.cna aggressor script, which will load all the other necessary scripts with it. Additionally, depending on the actions taken, the SharpMove and SharpRDP assemblies will need to be compiled and placed into the Assemblies directory. Finally, some of the file moving requires dynamic compilation, which requires Mono.


When loading the aggressor script, a selector named Move is added to the menu bar, with multiple options a user can select. First, users can execute a command on a remote system through WMI, DCOM, Task Scheduler, RDP, or SCM. Second, there is the Command execution mechanism, which uses download cradles to grab and execute files. Third, the File method drops a file on the system and executes it. There is also Write File Only, which does not do any execution and only moves data. Finally, there are Default settings to make using the GUI faster; they are also used with the beacon commands. The default settings are used for anything that can accept a default.

The beacon commands read the default settings and take a few command-line arguments. A beacon command example: <exec-type> <target> <listener> <filename>

move-msbuild 192.168.1.1 http move.csproj

Additionally, the custom pre-built beacon command is a little different. Command example: move-pre-custom-file <target> <local-file> <remote-filename>

move-pre-custom-file computer001.local /root/payload.exe legit.exe

The location field is the trickiest part of the project. It is used when selecting WMI file movement; if SMB is selected it is not used (so it can be left empty). Location takes three kinds of value. First, if the location is a URL, then the payload created will be hosted by Cobalt Strike's web server. The beacon host where the assembly is executed will make a web request to the URL and grab the file, which will be used in an event sub on the target host to write the file. Second, if the location is a Windows directory, then the created file will be uploaded to the beacon host, and the assembly will read it from the file system and store it in the event sub to write to the remote host. Finally, if the location field is a Linux path, or the word local, then the payload will be dynamically compiled into the assembly being executed. However, if the file is above the 1 MB size limit, an error will be shown.

For all file methods the payload will be created through the aggressor script. However, if a payload is already created, users can select the Custom (Prebuilt) option to move and execute it.

The kit contains different file movement techniques, execution triggers, and payload types.

File movement is the method used for getting a file to a remote host. File movement types:

  • SMB to flat file
  • WMI to flat file
  • WMI to Registry Key Value
  • WMI to Custom WMI Class property

A command trigger is the method used for executing a specific command on a remote host. Command trigger types:

  • WMI
  • SCM
  • RDP
  • DCOM (Multiple)
  • Scheduled Tasks
  • Modify Scheduled Task (Existing Task has action updated, executes task and resets action)
  • Modify Service binpath (Existing Service has binpath updated, service is started and reset back to original state)

Shellcode only execution:

  • Excel 4.0 DCOM
  • WMI Event Subscription (coming soon)

Hijacks:

  • Service DLL Hijack (coming soon)
  • DCOM Server Hijack (coming soon)

Dependencies
  • Mono (MCS) for compiling .NET assemblies (Used with dynamic payload creation, InstallUtil, and Custom-NonPreBuilt). Also when FileWrite Assembly is used.

Gotchas:
  • Sometimes execute_assembly will be called before file movement, if this happens you can execute the payload by unchecking the Auto check box
  • The kit does not automatically clean up files, it is left up to the operator

Note: it is recommended not to use the default templates with the project.

To replace a template, you must meet two requirements. First, the template must be named after the technique (example: msbuild.csproj). Second, the source code must contain the string $$PAYLOAD$$ where the base64-encoded shellcode will go, and must be able to convert a base64 string to a byte array. Example for C#:

string strSC = "$$PAYLOAD$$";
byte[] sc = Convert.FromBase64String(strSC);

A change was added that allows the defaults to update the 'Find and Replace' string and the shellcode format in the 'Update Defaults' dialog. By default these are $$PAYLOAD$$ and base64.
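The templating step described above can be sketched as a simple find-and-replace, using the defaults mentioned ($$PAYLOAD$$ marker, base64 format). This is an illustrative model of what the aggressor script does at payload creation time, not its actual code:

```python
import base64

def render_template(template_source, shellcode, marker="$$PAYLOAD$$"):
    """Fill a MoveKit-style template: the marker is replaced with the
    base64-encoded shellcode (marker and format are the configurable
    defaults mentioned in the README)."""
    if marker not in template_source:
        raise ValueError("template is missing the payload marker")
    encoded = base64.b64encode(shellcode).decode("ascii")
    return template_source.replace(marker, encoded)

template = 'string strSC = "$$PAYLOAD$$";'
rendered = render_template(template, b"\x90\x90\xcc")  # toy shellcode
assert "$$PAYLOAD$$" not in rendered
assert rendered == 'string strSC = "kJDM";'
```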


Operational considerations
  • If using task scheduler scheduled tasks will be created and deleted
  • If using SCM services will be created and deleted
  • If using the AMSI bypass it will only work for WSH not PowerShell
  • If using the AMSI bypass it will modify the registry by either updating or creating a registry key then setting it back to its original value or deleting
  • It uses Cobalt Strike's execute-assembly function so it will inject into a sacrificial process like other post ex jobs
  • Files will be dropped on disk if using any of the File or Command methods
  • The bundled templates should not be used as-is, since they are all public
  • None of the techniques are new; they are all pretty well known

Credits

Some of the code, templates and inspiration comes from other people and projects.

There are probably bugs somewhere; they tend to come up from time to time. Just raise them and I'll fix them.



Swissknife - Scriptable VSCode Extension To Generate Or Manipulate Data. Stop Pasting Sensitive Data In Webpages



The developer's swissknife. Do conversions and generations right out of VS Code. Extendable with user scripts.

Available in the Visual Studio Marketplace


Currently available scripts
  • Base64 decode
  • Base64 encode
  • Binary To Text
  • Bip39 Mnemonic
  • CSV to Markdown
  • Count characters
  • Count words
  • Crypto currency value
  • Date to Timestamp
  • Eliptic Curve Key Pair
  • Generate Password
  • HTML Encode (All)
  • Hex decode
  • Hex encode
  • Hex to RGB
  • Identify hash
  • JWT Decode
  • Join lines
  • Lorem Ipsum
  • Markdown to HTML
  • Md5 hash
  • New Swissknife Script (JS)
  • New Swissknife Script (TS)
  • Password strength
  • RGB To Hex
  • RSA Key pair
  • Random String
  • Request to fetch
  • SHA1 hash
  • SHA256 hash
  • SHA512 hash
  • Self Signed Certificate
  • Start Local HTTP Server
  • Start Local HTTPS Server
  • Stop HTTP Server
  • Text To Binary
  • Text to String
  • Timestamp to Date
  • To Camel Case
  • To Lower Case
  • To Morse code
  • To Upper Case
  • UUIDv4
  • Unicode decode
  • Unicode encode (js format)
  • Unix/Linux Permission To Human Readable
  • Url Decode
  • Url Encode
  • Url Encode (All Characters)
  • Url Shorten
  • Url Unshorten (url expand)

Usage

You can invoke the dedicated command palette with ctrl+shift+9 on Windows or cmd+shift+9 on Mac (when focusing the editor).

The conversions will only use the selected text by default. If no text is selected, the entire content of the editor will be used. Multi-selection is supported: the script runs for each selection individually.

MacBook Touch Bar support: You can also invoke the swissknife extension directly from the MacBook's Touch Bar.



Scripts Details

Crypto currency value

Uses the API from Cryptonator. You can specify conversions directly from the text like:

1btc to eur  

For a list of supported currencies check here


Identify Hash

The outcome of the operation may return multiple values, as hashes from different algorithms can share the same output format. Still, the candidates are ordered from most to least likely.


HTTP(S) Server

The servers log all requests received into the "Output" window of VS Code (you can show it via View -> Output in the menu). Then, in the dropdown on the right of that window (which usually shows "Tasks"), filter by "Swissknife Server".


Privacy Note

One of the main purposes of this extension is to stop pasting data into, or trusting data generated by, random websites. For privacy, the extension avoids external web requests and does not log data. There are, however, some operations where external requests are needed:

  • Crypto Currency Value - Does a request to the cryptonator api to get the available cryptocurrencies and a request to get the current price for a specific pair. The amount being converted is not sent, this is calculated on the local machine.

  • Url Unshorten - This one really needs to request the short URL so it can read the redirect (full) URL. Keep in mind that the full URL is never visited; the extension does not follow the redirect.

  • URL Shortening - The shortening feature uses https://tinyurl.com to register a new short URL.


Writing Scripts

Swissknife automatically loads all scripts in its user scripting folder. You can find this folder by opening your command palette and typing "Open swissknife users script folder" (it will be suggested as you type). This is the folder where you create your custom scripts.

To start a new script you can also use a command provided by the extension: open the swissknife picker and type "New swissknife script".


Script Reloading

Scripts are loaded into the extension when VS Code initializes, so when you create a custom script you'll need to reload the scripts. To make development easier, the extension has a command "Reload Swissknife Scripts" that you can call from the VS Code command palette (not to be confused with swissknife's script launcher).

Remember that every time you change a script in the user script folder you need to reload scripts.


Starting Template

You can choose the TS or JS version according to what you're more comfortable with. TS is more involved, as you need to transpile it to JS, so we'll go with JavaScript. This is the base structure of a script:

Object.defineProperty(exports, "__esModule", { value: true });

exports.doSomething = async (text, context) => {
  return new Promise((resolve, reject) => {
    resolve(text.replace(/a/g, "b"));
  });
}

const scripts = [
  {
    title: "My Script",
    detail: "This script does something",
    cb: (context) => context.replaceRoutine(exports.doSomething)
  },
]

exports.default = scripts;

This is the basic template for creating scripts. In this file we created a script called "My Script". You can have as many scripts as you want per file; it's just a matter of organization :) As you can see at the end, the structure of a script consists of 3 properties: title, detail and cb. The first two are self-explanatory. cb is the code that will be called when your script runs. By default, swissknife gives you a few methods to help you get started, through the variable 'context'. The method doSomething simply replaces a's with b's.


Context

In context you have some nice methods to help you out, and you should use them whenever possible.

  • insertRoutine(cb) - Inserts the resolved content at the cursor position in the editor. It calls cb with context as a parameter. cb is expected to be async.
  • informationRoutine(cb) - Creates a notification with the resolved content. It calls cb with the selected text in the editor (all text if no selection) and context as parameters. cb is expected to be async.
  • replaceRoutine(cb) - Replaces the selected text in the editor with the resolved content from cb (if no text is selected it replaces all text). It calls cb with the selected text in the editor (all text if no selection) and context as parameters. cb is expected to be async.
  • vscode - This variable holds the vscode API.
  • modules - An array of all JS modules inside the script (and lib) folders. You can use them to call methods from the native scripts, to reuse code logic. Ex: context.modules.passwords.generateSecureCharCode()

The use of these methods is optional. If you find it easier to work directly with the vscode API, you can do that too, as the following example shows.

More Examples
Object.defineProperty(exports, "__esModule", { value: true });

const scripts = [
  {
    title: "My Script2",
    detail: "This script does something",
    cb: (context) => {
      console.log(context)
      const editor = context.vscode.window.activeTextEditor;
      editor.edit((edit) => {
        edit.insert(editor.selection.active, "Doing stuff")
      });
    }
  },
]

exports.default = scripts;

The best way to see more examples is to check the native scripts bundled with the extension.


Future Plans
  • Create unit tests, especially for the scripts
  • Start doing proper error handling
  • Create a place for user-contributed scripts


Defeat-Defender - Powerful Batch Script To Dismantle Complete Windows Defender Protection And Even Bypass Tamper Protection



Powerful batch file to disable Windows Defender, Firewall and SmartScreen, and execute the payload.

Usage :
  1. Edit Defeat-Defender.bat at this line https://github.com/swagkarna/Defeat-Defender/blob/93823acffa270fa707970c0e0121190dbc3eae89/Defeat-Defender.bat#L72 and replace the URL with the direct URL of your payload
  2. Run the script "run.vbs". It will ask for admin permission. If permission is granted, the script will work silently, without console windows.

After it gets admin permission, it will:
  1. Disable PUA protection
  2. Disable automatic sample submission
  3. Disable Windows Firewall
  4. Disable Windows SmartScreen (permanently)
  5. Disable quick scan
  6. Add the exe file to the exclusions in Defender settings
  7. Disable ransomware protection

Virus Total Result :



Bypasssing Windows-Defender Techniques :

Windows recently introduced a new feature called "Tamper Protection", which prevents disabling real-time protection and modifying Defender registry keys using PowerShell or cmd. Normally, if you need to disable real-time protection you have to do it manually. But we will disable real-time protection using NSudo without triggering Windows Defender.


After Running Defeat-Defender Script




Tested on Windows Version 20H2


Behind The Scenes :

When the batch file is executed, it asks for admin permissions. After getting admin privileges, it starts disabling Windows Defender real-time protection, the firewall and SmartScreen, then downloads the backdoor from the server and places it in the startup folder. The backdoor is executed once downloaded, and will run whenever the system starts.


Check out this article :

https://secnhack.in/create-fud-fully-undetectable-payload-for-windows-10/




PentestBro - Combines Subdomain Scans, Whois, Port Scanning, Banner Grabbing And Web Enumeration Into One Tool



Experimental tool for Windows. PentestBro combines subdomain scans, whois, port scanning, banner grabbing and web enumeration into one tool. It uses the subdomain list from SecLists, nmap service probes for banner grabbing, and a list of paths for web enumeration.


Example scan of "www.ccc.de":

Scanned subdomain, IPs and ports



Grabbed banner for each IP and port



whois of all IP ranges




IRTriage - Incident Response Triage - Windows Evidence Collection For Forensic Analysis



Scripted collection of system information valuable to a Forensic Analyst. IRTriage will automatically "Run As ADMINISTRATOR" in all Windows versions except WinXP.

The original source was Triage-ir v0.851, an AutoIt script written by Michael Ahrendt. Unfortunately, Michael's last changes were posted on 9th November 2012.

I let Michael know that I have forked his project, and I am pleased to announce that he gave me his blessing to fork his source code. Long live Open Source!


What if having a full disk image is not an option during an incident?

Imagine that you are investigating a dozen or more possibly infected or compromised systems. Can you spend 2-8 hours making a forensic copy of the hard drives of each of those computers? In such situations, fast forensics ("triage") is the solution: instead of copying everything, collecting some key files can solve the issue.

IRTriage will collect:

  • system information
  • network information
  • registry hives
  • disk information, and
  • memory dumps.

One of the powerful capabilities of IRTriage is collecting information from "Volume Shadow Copy" which can defeat many anti-forensics techniques.

IRTriage itself is just an AutoIt script that depends on other tools such as:

  • Win32|64dd (free from MoonSols) or FDpro (HBGary's commercial product)
  • Sysinternals Suite
  • The Sleuth Kit
  • Regripper
  • NirSoft => MFTDump and WinPrefetchView
  • md5deep and sha1deep
  • CSVFileView
  • 7zip
  • and some windows built-in commands.

In case of an incident, you want to make minimal changes to the evidence machine, so I suggest copying IRTriage to a USB drive. The only caveat: if you plan to dump memory, the USB drive must be larger than the physical RAM installed in the computer.

Once you launch the GUI application you can select what information you would like to collect; each category is in a separate tab. All the collected information will be dumped into a new folder labeled [hostname-date-time].

NEWS: Changes from triage-ir v0.851

  • Renamed project to IRTriage
  • Versioning has changed to v2.[YY.MM.DD] for easier identification of last changes.
  • Updated the project to currently available tools.
  • Fixed the "commands executed" logging errors
  • Changed "Incident Log.txt" to "IncidentLog.csv" (TAB delimited)
  • Changed Compile time tools folder to ".\Compile\Tools" (Local to script)
  • Fixed ini file open dialog to open in local script directory

Version 2016.02.24 IRTriage is now truly compatible with the following versions of Windows:

  • Windows Workstations "WIN_10", "WIN_81", "WIN_8", "WIN_7", "WIN_VISTA", "WIN_XP", "WIN_XPe",
  • Windows Servers: "WIN_2016", "WIN_2012R2", "WIN_2012", "WIN_2008R2", "WIN_2008", "WIN_2003".

Version 2016.02.26 Started to add new functions:

*Processes()
- tcpvcon -anc -accepteula > Process2PortMap.csv
- tasklist /SVC /FO CSV > Processe2exeMap.csv
- wmic /output:ProcessesCmd.csv process get Caption,Commandline,Processid,ParentProcessId,SessionId /format:csv

*SystemInfo()
- wmic /output:InstallList.csv product get /format:csv
- wmic /output:InstallHotfix.csv qfe get caption,csname,description,hotfixid,installedby,installedon /format:csv

*Prefetch
**WinPrefetchView /Folder Prefetch /stab Prefetch.csv

*Options()
- mftdump.exe /l /m ComputerName /o ComputerName-MFT_Dump.csv $MFTcopy

TriageGUI()
- CSVFileView.exe IncidentLog.csv ;Added Checkbox to view IncidentLog after Acquisition
- cmd.exe ;Added Checkbox to open IRTriage commandline after Acquisition

Version 2016.03.08

  • added a custom compiled version of ReactOS's "cmd.exe" based on v0.4.0
  • +it can now use Linux equivalent commands:
    • clear = cls
    • cp = copy
    • df = free
    • env = set
    • ln = mklink
    • ls = dir
    • mv = move
    • pwd = cd, chdir
    • rm = delete, del, erase
    • sleep = pause
    • uname = ver, version
    • vmstat = memory, mem

Version 2016.03.08

  • Started to clean up the code, trying to make it easier to modularize.
  • Added the option at compile time to use HBGary's FDpro (Commercial) or Moonsol's (Free) memory acquisition software.
    • If you have HBGary's FDpro, place it under the .\Compile\Tools folder in place of the "zero byte" size file; it is easy to switch back to MoonSols' memory acquisition software by replacing FDpro.exe with a "less than 100 byte" sized file :-)

Version 2016.03.10

  • Continued cleanup of the code; removed unused function CommandROSLOG()
  • Added $MFT parse to CSV
  • Added ability to view IncidentLog.csv after acquisition completes.

Version 2016.03.11

  • Updated cmd.exe
  • Added ability to open IRTriage's cmd.exe after acquisition completed.

Version 2016.03.14

  • Added Prefetch parse to CSV

Version 2016.03.24

  • Added IRTriage Update in tools menu (Update buttons mixed up)

Version 2016.03.28

  • Fixed IRTriage Update (Yes=Download Update, No=Display Update Info, Cancel=Cancel Update)

Version 2016.03.29

Version 2016.03.30

  • Fixed Volume Shadow Copy Functions
  • Minor Update to cmd.exe ver 4.1

Future updates/features will be based on this report: "On-scene Triage open source forensic tool chests: are they effective?"



Android-PIN-Bruteforce - Unlock An Android Phone (Or Device) By Bruteforcing The Lockscreen PIN



Unlock an Android phone (or device) by bruteforcing the lockscreen PIN.

Turn your Kali Nethunter phone into a bruteforce PIN cracker for Android devices!

How it works

It uses a USB OTG cable to connect the locked phone to the Nethunter device. It emulates a keyboard, automatically tries PINs, and waits after trying too many wrong guesses.

[Nethunter phone] <--> [USB cable] <--> [USB OTG adaptor] <--> [Locked Android phone]

The USB HID Gadget driver provides emulation of USB Human Interface Devices (HID). This enables an Android Nethunter device to emulate keyboard input to the locked phone. It's just like plugging a keyboard into the locked phone and pressing keys.

With a Samsung S5, trying all possible 4-digit PINs takes just over 16.6 hours, but with the optimised PIN list it should take much less time.
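As a rough sanity check of that figure (the 10,000 possible 4-digit PINs and the 16.6-hour total are from this README; the ~6 seconds per attempt is a back-of-envelope assumption derived from them, not a measured value):

```shell
# Back-of-envelope estimate (assumption: ~6 s per PIN attempt on average,
# including the phone's lockout backoff) over all 10,000 4-digit PINs.
total_pins=10000
seconds_per_attempt=6
total_seconds=$(( total_pins * seconds_per_attempt ))
echo "$total_seconds seconds ~= $(( total_seconds / 3600 )) hours"
```

This is why an optimised, frequency-ordered PIN list matters: most phones fall to a PIN near the top of the list, long before the full 16+ hours elapse.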

You will need
  • A locked Android phone
  • A Nethunter phone (or any rooted Android with HID kernel support)
  • USB OTG (On The Go) cable/adapter (USB male Micro-B to female USB A), and a standard charging cable (USB male Micro-B to male A).
  • That's all!

Benefits
  • Turn your NetHunter phone into an Android PIN cracking machine
  • Unlike other methods, you do not need ADB or USB debugging enabled on the locked phone
  • The locked Android phone does not need to be rooted
  • You don't need to buy special hardware, e.g. Rubber Ducky, Teensy, Cellebrite, XPIN Clip, etc.
  • You can easily modify the backoff time to crack other types of devices
  • It works!

Features
  • Crack PINs of any length from 1 to 10 digits
  • Use config files to support different phones
  • Optimised PIN lists for 3,4,5, and 6 digit PINs
  • Bypasses phone pop-ups including the Low Power warning
  • Detects when the phone is unplugged or powered off, and waits while retrying every 5 seconds
  • Configurable delays of N seconds after every X PIN attempts
  • Log file

Installation

TBC


Executing the script

If you installed the script to /sdcard/, you can execute it with the following command.

bash ./android-pin-bruteforce

Note that Android mounts /sdcard with the noexec flag, which is why the script is invoked with bash rather than executed directly. You can verify this with mount.
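To see why direct execution fails, look for noexec in the mount flags. A sketch (the mount line below is a hypothetical sample, not live output from a device; on a real phone you would inspect the output of mount itself):

```shell
# A /sdcard mount entry typically looks like this (sample line):
line='/dev/fuse /sdcard fuse rw,nosuid,nodev,noexec,relatime 0 0'

# If noexec is set, ./android-pin-bruteforce fails with "Permission denied",
# but running it through the interpreter (bash ./android-pin-bruteforce) works.
case "$line" in
  *noexec*) echo "noexec set: use 'bash ./android-pin-bruteforce'" ;;
  *)        echo "exec allowed" ;;
esac
```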


Usage

Android-PIN-Bruteforce (0.1) is used to unlock an Android phone (or device) by bruteforcing the lockscreen PIN.
Find more information at: https://github.com/urbanadventurer/Android-PIN-Bruteforce

Commands:
crack Begin cracking PINs
resume Resume from a chosen PIN
rewind Crack PINs in reverse from a chosen PIN
diag Display diagnostic information
version Display version information and exit

Options:
-f, --from PIN Resume from this PIN
-a, --attempts Starting from NUM incorrect attempts
-m, --mask REGEX Use a mask for known digits in the PIN
-t, --type TYPE Select PIN or PATTERN cracking
-l, --length NUM Crack PINs of NUM length
-c, --config FILE Specify configuration file to load
-p, --pinlist FILE Specify a custom PIN list
-d, --dry-run Dry run for testing. Doesn't send any keys.
-v, --verbose Output verbose logs

Usage:
android-pin-bruteforce <command> [options]

Supported Android Phones/Devices

This has been successfully tested with various phones including the Samsung S5, S7, Motorola G4 Plus and G5 Plus.

It can unlock Android versions 6.0.1 through 10.0. Whether the bruteforce attack works doesn't depend on the Android version in use; it depends on how the device vendor developed their own lockscreen.

Check the Phone Database for more details https://github.com/urbanadventurer/Android-PIN-Bruteforce/wiki/Phone-Database


PIN Lists

Optimised PIN lists are used by default unless the user selects a custom PIN list.


Cracking PINs of different lengths

Use the --length commandline option.

Use this command to crack a 3-digit PIN: ./android-pin-bruteforce crack --length 3

Use this command to crack a 6-digit PIN: ./android-pin-bruteforce crack --length 6


Where did the optimised PIN lists come from?

The optimised PIN lists were generated by extracting numeric passwords from database leaks then sorting by frequency. All PINs that did not appear in the password leaks were appended to the list.

The optimised PIN lists were generated from Ga$$Pacc DB Leak (21GB decompressed, 688M Accounts, 243 Databases, 138920 numeric passwords).
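The generation process described above can be sketched in shell. This is an illustration under assumptions: the file names and the miniature sample "leak" are hypothetical stand-ins for the real multi-gigabyte dump, and this is not the project's actual generation script.

```shell
# Hypothetical miniature password dump:
printf '%s\n' 1234 1234 0000 2580 1234 0000 secret hunter2 > passwords.txt

# 1. Keep only 4-digit numeric passwords, 2. count occurrences and sort by
# frequency, 3. append every 4-digit PIN that never appeared in the leak.
grep -Ex '[0-9]{4}' passwords.txt | sort | uniq -c | sort -rn | awk '{print $2}' > pinlist.txt
seq -w 0 9999 | grep -vxFf pinlist.txt >> pinlist.txt

head -n 3 pinlist.txt    # most frequent PINs first: 1234, 0000, 2580
wc -l < pinlist.txt      # all 10000 possible PINs are covered
```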


The 4 digit PIN list

The reason that the 4 digit PIN list is used from a different source is because it gives better results than the generated list from Ga$$Pacc DB Leak.

optimised-pin-length-4.txt is an optimised list of all possible 4 digit PINs, sorted by order of likelihood. It can be found with the filename pinlist.txt at https://github.com/mandatoryprogrammer/droidbrute

This list is used with permission from Justin Engler (Senior Security Engineer, iSEC Partners) & Paul Vines, and was used in their Defcon talk, Electromechanical PIN Cracking with Robotic Reconfigurable Button Basher (and C3BO).


Cracking with Masks

Masks use regular expressions with the standard grep extended format.

./android-pin-bruteforce crack --mask "...[45]" --dry-run

  • To try all years from 1900 to 1999, use a mask of 19..
  • To try PINs that have a 1 in the first digit, and a 1 in the last digit, use a mask of 1..1
  • To try PINs that end in 4 or 5, use ...[45]
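Since masks are plain extended regexes, you can preview which candidates a mask keeps before starting a crack. A sketch (the sample PINs are made up; grep's -x flag is used here to match the whole 4-digit line, mirroring a full PIN):

```shell
# Which of these sample PINs does the mask "...[45]" keep?
# Any 3 characters followed by a 4 or a 5 in the last position.
printf '%s\n' 1234 1945 0000 9815 | grep -Ex '...[45]'
```

Here 1234, 1945 and 9815 survive (they end in 4 or 5) while 0000 is filtered out.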

Configuration for different phones

Device manufacturers create their own lock screens that are different to the default or stock Android. To find out what keys your phone needs, plug a keyboard into the phone and try out different combinations.

Load a different configuration file, with the --config FILE commandline parameter.

Example: ./android-pin-bruteforce --config ./config.samsung.s5 crack

You can also edit the config file by customising the timing and keys sent.

The following configuration variables can be used to support a different phone's lockscreen.

# Timing
## DELAY_BETWEEN_KEYS is the period of time in seconds to wait after each key is sent
DELAY_BETWEEN_KEYS=0.25

## The PROGRESSIVE_COOLDOWN_ARRAY variables act as multi-dimensional array to customise the progressive cooldown
## PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________ is the attempt number
## PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN is how many attempts to try before cooling down
## PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____ is the cooldown in seconds

PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________=(1 11 41)
PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN=(5 1 1)
PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____=(30 30 60)

## SEND_KEYS_DISMISS_POPUPS_N_SECONDS_BEFORE_COOLDOWN_END defines how many seconds before the end of the cooldown period, keys will be sent
# set to 0 to disable
SEND_KEYS_DISMISS_POPUPS_N_SECONDS_BEFORE_COOLDOWN_END=5
## SEND_KEYS_DISMISS_POPUPS_AT_COOLDOWN_END configures the keys that are sent to dismiss messages and popups before the end of the cooldown period
SEND_KEYS_DISMISS_POPUPS_AT_COOLDOWN_END="enter enter enter"

## KEYS_BEFORE_EACH_PIN configures the keys that are sent to prompt the lock screen to appear. This is sent before each PIN.
## By default it sends "escape enter", but some phones will respond to other keys.

# Examples:
# KEYS_BEFORE_EACH_PIN="ctrl_escape enter"
# KEYS_BEFORE_EACH_PIN="escape space"
KEYS_BEFORE_EACH_PIN="escape enter"

## KEYS_STAY_AWAKE_DURING_COOLDOWN the keys that are sent during the cooldown period to keep the phone awake
KEYS_STAY_AWAKE_DURING_COOLDOWN="enter"

## SEND_KEYS_STAY_AWAKE_DURING_COOLDOWN_EVERY_N_SECONDS how often the keys are sent, in seconds
SEND_KEYS_STAY_AWAKE_DURING_COOLDOWN_EVERY_N_SECONDS=5

## DELAY_BEFORE_STARTING is the period of time in seconds to wait before the bruteforce begins
DELAY_BEFORE_STARTING=2
## KEYS_BEFORE_STARTING configures the keys that are sent before the bruteforce begins
KEYS_BEFORE_STARTING="enter"

Popups

We send keys before the end of the cooldown period, or optionally during the cooldown period. This is to keep the lockscreen app active and to dismiss any popups about the number of incorrect PIN attempts or a low battery warning.


Test sending keys from the NetHunter phone

Test sending keys from the terminal

Use ssh from your laptop to the NetHunter phone, and use this command to test sending keys:

In this example, the enter key is sent.

echo "enter" | /system/xbin/hid-keyboard /dev/hidg0 keyboard

In this example, ctrl-escape is sent.

echo "left-ctrl escape" | /system/xbin/hid-keyboard /dev/hidg0 keyboard

Note: Sending combinations of keys in config file variables is different. Currently only ctrl_escape is supported.

In this example, keys a, b, c are sent.

echo a b c | /system/xbin/hid-keyboard /dev/hidg0 keyboard


Test sending keys from an app

This Android app is a virtual USB Keyboard that you can use to test sending keys.

https://store.nethunter.com/en/packages/remote.hid.keyboard.client/


How to send special keys

Use this list for the following variables:

  • KEYS_BEFORE_EACH_PIN
  • KEYS_STAY_AWAKE_DURING_COOLDOWN
  • KEYS_BEFORE_STARTING

To send special keys use the following labels. This list can be found in the hid_gadget_test source code.

Key label     Key label
left-ctrl     f6
right-ctrl    f7
left-shift    f8
right-shift   f9
left-alt      f10
right-alt     f11
left-meta     f12
right-meta    insert
return        home
esc           pageup
bckspc        del
tab           end
spacebar      pagedown
caps-lock     right
f1            left
f2            down
f3            kp-enter
f4            up
f5            num-lock

To send more than one key at the same time, use the following list:

  • ctrl_escape (This sends left-ctrl and escape)

If you need more key combinations please open a new issue in the GitHub issues list.


Customising the Progressive Cooldown

The following section of the config file controls the progressive cooldown.

## The PROGRESSIVE_COOLDOWN_ARRAY variables act as multi-dimensional array to customise the progressive cooldown
## PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________ is the attempt number
## PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN is how many attempts to try before cooling down
## PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____ is the cooldown in seconds

PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________=(1 11 41)
PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN=(5 1 1)
PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____=(30 30 60)

The array is the same as this table.

attempt number   attempts until cooldown   cooldown (s)
1                5                         30
11               1                         30
41               1                         60
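A minimal sketch of how those three parallel arrays can be read together: for a given attempt number, the active tier is the last one whose starting attempt number has been reached. This is my own bash illustration of the scheme, not code taken from the script.

```shell
PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________=(1 11 41)
PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN=(5 1 1)
PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____=(30 30 60)

# Pick the highest tier whose starting attempt number has been reached.
tier_for_attempt() {
  local attempt=$1 tier=0 i
  for i in "${!PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________[@]}"; do
    if [ "$attempt" -ge "${PROGRESSIVE_ARRAY_ATTEMPT_COUNT__________[$i]}" ]; then
      tier=$i
    fi
  done
  echo "attempt $attempt: cooldown ${PROGRESSIVE_ARRAY_COOLDOWN_IN_SECONDS____[$tier]}s after every ${PROGRESSIVE_ARRAY_ATTEMPTS_UNTIL_COOLDOWN[$tier]} attempt(s)"
}

tier_for_attempt 3     # first tier: 30 s cooldown after every 5 attempts
tier_for_attempt 15    # second tier: 30 s cooldown after every attempt
tier_for_attempt 50    # third tier: 60 s cooldown after every attempt
```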

Why can't you use a laptop?

This works from an Android phone because its USB port can operate in device (peripheral) mode and present itself as a keyboard, while the USB ports on a laptop are host-only and cannot.


How Android emulates a keyboard

Keys are sent using /system/xbin/hid-keyboard. To test this and send the key 1, you can use echo 1 | /system/xbin/hid-keyboard /dev/hidg0 keyboard

In Kali Nethunter, /system/xbin/hid-keyboard is a compiled copy of hid_gadget_test.c. This is a small program for testing the HID gadget driver that is included in the Linux Kernel. The source code for this file can be found at https://www.kernel.org/doc/html/latest/usb/gadget_hid.html and https://github.com/aagallag/hid_gadget_test.


Troubleshooting


If it is not bruteforcing PINs

Check the orientation of the cables

The Nethunter phone should have a regular USB cable attached, while the locked phone should have an OTG adaptor attached.

The OTG cable should be connected to the locked Android phone. The regular USB cable should be connected to the Nethunter phone.

Refer to the graphic on how to connect the phones.


Check it is emulating a keyboard

You can verify that the NetHunter phone is successfully emulating a keyboard by connecting it to a computer using a regular charging/data USB cable. Open a text editor like Notepad while it is cracking and you should see it entering PINs into the text editor.

Note that you will not need an OTG cable for this.


Try restarting the phones

Try powering off the phones and even taking out the batteries if that is possible.


Try new cables

Try using new cables/adaptors as you may have a faulty cable/adaptor.


If it doesn't unlock the phone with a correct PIN

You might be sending keys too fast for the phone to process. Increase the DELAY_BETWEEN_KEYS variable in the config file.

If you don't see 4 dots come up on the phone's screen then maybe it is not receiving 4 keys.

Managing Power Consumption

If your phone runs out of power too soon, follow these steps:

  • Make sure both phones are fully charged to 100% before you begin
  • Reduce the screen brightness on both the victim phone and NetHunter phone if possible
  • Place both phones into Airplane mode, however you may want to enable WiFi to access the NetHunter phone via SSH.
  • The locked phone will power the NetHunter phone, because it appears as a keyboard accessory
  • Use a USB OTG cable with a Y splitter for an external power supply, to allow charging of the NetHunter phone while cracking
  • Take breaks to charge your devices. Pause the script with CTRL-Z and resume with the fg shell command.
  • Avoid the SEND_KEYS_STAY_AWAKE_DURING_COOLDOWN_EVERY_N_SECONDS configuration option. This will cause the locked phone to use more battery to keep the screen powered. Instead use the SEND_KEYS_DISMISS_POPUPS_N_SECONDS_BEFORE_COOLDOWN_END option (Default).

Check the Diagnostics Report

Use the diag command to display diagnostic information.

bash ./android-pin-bruteforce diag

If you receive this message when the USB cable is plugged in then try taking the battery out of the locked Android phone and power cycling it.

[FAIL] HID USB device not ready. Return code from /system/xbin/hid-keyboard was 5.


How the usb-devices command works

The diagnostics command uses the usb-devices script, but only as part of determining whether the USB cables are incorrectly connected. It can be downloaded from https://github.com/gregkh/usbutils/blob/master/usb-devices


Use verbose output

Use the --verbose option to check the configuration is as expected. This is especially useful when you are modifying the configuration.


Use the dry-run

Use the --dry-run option to check how it operates without sending any keys to a device. This is especially useful when you are modifying the configuration or during development.

Dry run will:

  • Not send any keys
  • Will continue instead of aborting if the KEYBOARD_DEVICE or HID_KEYBOARD is missing.

HID USB Mode

Try this command in a shell on the NetHunter phone: /system/bin/setprop sys.usb.config hid


Known Issues

  • This cannot detect when the correct PIN is guessed and the phone unlocks.
  • Your phones may run out of battery before the correct PIN is found.
  • Don't trust phone configuration files from unknown sources without reviewing them first. The configuration files are shell scripts and could include malicious commands.

Roadmap

  • [DONE] Works
  • [DONE] Detects USB HID failures
  • [DONE] Improve Usage and commandline options/config files
  • [DONE] Add bruteforce for n digit PINs
  • [DONE] Mask for known digits
  • [DONE] Crack PIN list in reverse (to find which recent PIN unlocked the device)
  • [DONE] Implement configurable lockscreen prompt
  • [DONE] Implement cooldown change after 10 attempts
  • [WORKING] Find/test more devices to bruteforce
  • Add progress bar
  • Add ETA
  • ASCII art
  • Nicer GUI for NetHunter
  • Implement for iPhone
  • Detect when a phone is unlocked (Use Nethunter camera as a sensor?)
  • Crack Android Patterns (try common patterns first)

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.


Authors and acknowledgment

Developed by Andrew Horton (@urbanadventurer).

The following people have been very helpful:
  • Vlad Filatov (@v1adf): Testing many phones for the Wiki Phone Database

Motivation

My original motivation to develop this was to unlock a Samsung S5 Android phone. It had belonged to someone who had passed away, and their family needed access to the data on it. As I didn't have a USB Rubber Ducky or any other hardware handy, I tried using a variety of methods, and eventually realised I had to develop something new.


Credit

The optimised PIN list is from Justin Engler (@justinengler), Senior Security Engineer at iSEC Partners, & Paul Vines, and was used in their Defcon talk, Electromechanical PIN Cracking with Robotic Reconfigurable Button Basher (and C3BO).


Graphics

Designed by Andrew Horton and gratefully using these free vector packs:


Comparison with other projects and methods to unlock a locked Android phone


What makes this project unique?

I've been asked what makes this project unique when there are other open-source Android PIN cracking projects.

Android-PIN-Bruteforce is unique because it cracks the PIN on Android phones from a NetHunter phone and it doesn't need the locked phone to be pre-hacked.

It works:

  • Without having to buy special hardware, such as a Rubber Ducky, Cellebrite, or XPIN Clip.
  • Without ADB or root access (the phone doesn't have to be pre-hacked).
Project                                                 ADB/USB Debugging   Requires root   Requires $ hardware     Commercial
Android-PIN-Bruteforce                                  No                  No              Nethunter phone         No
github.com/PentesterES/AndroidPINCrack                  Yes                 Yes             No                      No
github.com/ByteRockstar1996/Cracking-Android-Pin-Lock   Yes                 Yes             No                      No
github.com/sch3m4/androidpatternlock                    Yes                 Yes             No                      No
github.com/georgenicolaou/androidlockcracker            Yes                 Yes             No                      No
github.com/MGF15/P-Decode                               Yes                 Yes             No                      No
github.com/BitesFor/ABL                                 Yes                 Yes             No                      No
github.com/wuseman/WBRUTER                              Yes                 No              No                      No
github.com/Gh005t/Android-BruteForce                    Yes                 No              No                      No
github.com/mandatoryprogrammer/droidbrute               No                  No              Rubber Ducky $          No
github.com/hak5darren/USB-Rubber-Ducky                  No                  No              Rubber Ducky $          Yes
github.com/bbrother/stm32f4androidbruteforce            No                  No              STM32F4 dev board $     No
hdb-team.com/product/hdbox/                             No                  No              HDBOX $$                Yes
xpinclip.com                                            No                  No              XPINClip $$             Yes
cellebrite.com/en/ufed/                                 No                  No              Cellebrite UFED $$$     Yes

Some of these projects/products are really awesome but they achieve a different goal to Android-PIN-Bruteforce.

If a project requires a gestures.key or password.key, I've listed it as requiring root. If a project requires a custom bootloader, I've listed that as requiring both ADB and root. If you would like your project listed in this table then please open a new issue. There are links to each of these projects in the Related Projects & Further Reading section.

Regular phone users

  • Try the top 20 PINs from the DataGenetics PIN analysis that apparently unlocks 26.83% of phones.
  • Use an SMS lock-screen bypass app (requires app install before phone is locked)
  • Use Samsung Find My Mobile (requires you set it up before phone is locked)
  • Crash the Lock Screen UI (Android 5.0 and 5.1)
  • Use the Google Forgot pattern, Forgot PIN, or Forgot password (Android 4.4 KitKat and earlier)
  • Factory Reset (you lose all your data)

Users who have already replaced their Android ROM

These methods apply if the phone has already been rooted, has USB debugging enabled, or has adb enabled:

  • Flash the Pattern Password Disable ZIP using a custom recovery (Requires TWRP, CMW, Xrec, etc.)
  • Delete /data/system/gesture.key or password.key (requires root and adb on locked device)
  • Crack /data/system/gesture.key and password.key (requires root and adb on locked device)
  • Update sqlite3 database settings.db (requires root and adb on locked device)

Forensic Investigators

These methods can be expensive and are usually only used by specialised phone forensic investigators.

In order of difficulty and expense:

  • Taking advantage of USB debugging being enabled (Oxygen Forensic Suite)
  • Bruteforce with keyboard emulation (Android-PIN-Bruteforce, RubberDucky attack, XPIN Clip, HDBox)
  • JTAG (Interface with TAPs (Test Access Ports) on the device board)
  • In-System Programming (ISP) (Involves directly connecting to pins on flash memory chips on the device board)
  • Chip Off (Desolder and remove flash memory chips from the device)
  • Clock Glitching / Voltage Fault Injection (Hardware CPU timing attacks to bypass PIN restrictions)
  • Bootloader exploits (Zero-day exploits that attack the bootloader. GrayKey from Grayshift and Cellebrite)

JTAG, ISP, and Chip Off techniques are less useful now because most devices are encrypted. I don't know of any practical attacks on phone PINs that use clock glitching; if you know of a product that uses this technique, please let me know so I can include it.

Security Professionals and Technical Phone Users

Use the USB HID Keyboard Bruteforce with some dedicated hardware.

  • A RubberDucky and Darren Kitchen's Hak5 brute-force script
  • Write a script for a USB Teensy
  • Buy expensive forensic hardware
  • Or you can use Android-PIN-Bruteforce with your NetHunter phone!

Attempts to use Duck Hunter, an otherwise awesome project, to emulate a RubberDucky payload for Android PIN cracking did not work: it crashed the phone, probably because of the payload length.


Related Projects & Further Reading

USB HID Hardware without NetHunter

hak5 12x17: Hack Any 4-digit Android PIN in 16 hours with a USB Rubber Ducky https://archive.org/details/hak5_12x17

Hak5: USB Rubber Ducky https://shop.hak5.org/products/usb-rubber-ducky-deluxe

USB-Rubber-Ducky Payloads https://github.com/hak5darren/USB-Rubber-Ducky/wiki/Payloads

Teensy https://www.pjrc.com/teensy/

Brute Forcing An Android Phone with a STM32F4Discovery Development Board https://github.com/bbrother/stm32f4androidbruteforce and https://hackaday.com/2013/11/10/brute-forcing-an-android-phone/

Automated brute force attack against the Mac EFI PIN (Using a Teensy) https://orvtech.com/atacar-efi-pin-macbook-pro-en.html and https://hackaday.io/project/2196-efi-bruteforcer

Droidbrute: An Android PIN cracking USB rubber ducky payload made efficient with a statistically generated wordlist. https://github.com/mandatoryprogrammer/droidbrute

Discussion forum about the hak5 episode, and Android Brute Force 4-digit pin https://forums.hak5.org/topic/28165-payload-android-brute-force-4-digit-pin/


NetHunter HID keyboard attacks

NetHunter HID Keyboard Attacks https://www.kali.org/docs/nethunter/nethunter-hid-attacks/


Linux Kernel HID support

Human Interface Devices (HID) https://www.kernel.org/doc/html/latest/hid/index.html#

Linux USB HID gadget driver and hid-keyboard program https://www.kernel.org/doc/html/latest/usb/gadget_hid.html and https://github.com/aagallag/hid_gadget_test

The usb-devices script https://github.com/gregkh/usbutils/blob/master/usb-devices


Cracking Android PIN and Pattern files

AndroidPINCrack - bruteforce the Android Passcode given the hash and salt (requires root on the phone) https://github.com/PentesterES/AndroidPINCrack

Android Pattern Lock Cracker - bruteforce the Android Pattern given an SHA1 hash (requires root on the phone) https://github.com/sch3m4/androidpatternlock


General Recovery Methods

[Android][Guide]Hacking And Bypassing Android Password/Pattern/Face/PI https://forum.xda-developers.com/showthread.php?t=2620456

Android BruteForce using ADB & Shell Scripting https://github.com/Gh005t/Android-BruteForce


Forensic Methods and Hardware

PATCtech Digital Forensics: Getting Past the Android Passcode http://patc.com/online/a/Portals/965/Android%20Passcode.pdf

XPIN Clip https://xpinclip.com/

HDBox from HDB Team https://hdb-team.com/product/hdbox/

Cellebrite UFED https://www.cellebrite.com/en/ufed/

GrayKey from Grayshift https://www.grayshift.com/graykey/


PIN Analysis

Electromechanical PIN Cracking with Robotic Reconfigurable Button Basher (and C3BO) https://www.defcon.org/html/defcon-21/dc-21-speakers.html#Engler

DataGenetics PIN analysis https://datagenetics.com/blog/september32012/index.html



Sish - HTTP(S)/WS(S)/TCP Tunnels To Localhost Using Only SSH



An open source serveo/ngrok alternative.


Deploy

Builds are made automatically for each commit to the repo and are pushed to Dockerhub. Builds are tagged using a commit sha, branch name, tag, and latest if released on main. You can find a list here. Each release builds separate sish binaries that can be downloaded from here for various OS/archs. Feel free to either use the automated binaries or to build your own. If you submit a PR, images are not built by default and will require a retag from a maintainer to be built.

  1. Pull the Docker image

    • docker pull antoniomika/sish:latest
  2. Run the image

    • docker run -itd --name sish \
      -v ~/sish/ssl:/ssl \
      -v ~/sish/keys:/keys \
      -v ~/sish/pubkeys:/pubkeys \
      --net=host antoniomika/sish:latest \
      --ssh-address=:22 \
      --http-address=:80 \
      --https-address=:443 \
      --https=true \
      --https-certificate-directory=/ssl \
      --authentication-keys-directory=/pubkeys \
      --private-key-location=/keys/ssh_key \
      --bind-random-ports=false
  3. SSH to your host to communicate with sish

    • ssh -p 2222 -R 80:localhost:8080 ssi.sh

Docker Compose

You can also use Docker Compose to set up your sish instance. This includes taking care of SSL via Let's Encrypt for you. This uses the adferrand/dnsrobocert container to handle issuing wildcard certificates over DNS. For more information on how to use this, head to that link above. Generally, you can deploy your service like so:

docker-compose -f deploy/docker-compose.yml up -d

The domain and DNS auth info in deploy/docker-compose.yml and deploy/le-config.yml should be updated to reflect your needs. You will also need to create a symlink that points to your domain's Let's Encrypt certificates like:

ln -s /etc/letsencrypt/live/<your domain>/fullchain.pem deploy/ssl/<your domain>.crt
ln -s /etc/letsencrypt/live/<your domain>/privkey.pem deploy/ssl/<your domain>.key

Careful: the symlinks need to point to /etc/letsencrypt, not a relative path. The symlinks will not resolve on the host filesystem, but they will resolve inside of the sish container because it mounts the letsencrypt files in /etc/letsencrypt, not ./letsencrypt.
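You can reproduce the dangling-symlink situation locally to see why the absolute target matters (paths below are placeholders, not the author's deployment):

```shell
# Create a symlink whose absolute target does not exist on the host.
mkdir -p /tmp/sish-demo/ssl
ln -sf /etc/letsencrypt/live/nonexistent-demo-domain/fullchain.pem \
    /tmp/sish-demo/ssl/demo-domain.crt

# readlink shows the stored target regardless of whether it resolves:
readlink /tmp/sish-demo/ssl/demo-domain.crt

# On the host the link dangles; inside a container that mounts
# /etc/letsencrypt at the same path, the same link would resolve.
test -e /tmp/sish-demo/ssl/demo-domain.crt && echo "resolves" || echo "dangling"
```

The same link file, bind-mounted into the container, points at a path that does exist there, which is exactly why the note above insists on absolute /etc/letsencrypt targets.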

I use these files in my deployment of ssi.sh and have included them here for consistency.


Google Cloud Platform

There is a tutorial for creating an instance in Google Cloud Platform with sish fully setup that can be found here. It can be accessed through Google Cloud Shell.


Open in Google Cloud Shell


How it works

SSH can normally forward local and remote ports. This service implements an SSH server that only handles forwarding and nothing else. The service supports multiplexing connections over HTTP/HTTPS with WebSocket support. Just assign a remote port as port 80 to proxy HTTP traffic and 443 to proxy HTTPS traffic. If you use any other remote port, the server will listen to the port for TCP connections, but only if that port is available.

You can choose your own subdomain instead of relying on a randomly assigned one by setting the --bind-random-subdomains option to false and then selecting a subdomain by prepending it to the remote port specifier:

ssh -p 2222 -R foo:80:localhost:8080 ssi.sh

If the selected subdomain is not taken, it will be assigned to your connection.


Supported forwarding types

HTTP forwarding

sish can forward any number of HTTP connections through SSH. It also logs connections to the client that forwarded them, and provides a web interface to see the full requests and responses made to each forwarded connection. Each web interface can be unique to the forwarded connection or use a unified access token. To make use of HTTP forwarding, ports [80, 443] are used to tell sish that an HTTP connection is being forwarded and that HTTP virtualhosting should be defined for the service. For example, let's say I'm developing an HTTP webservice on my laptop at port 8080 that uses websockets and I want to show one of my coworkers who is not near me. I can forward the connection like so:

ssh -R hereiam:80:localhost:8080 ssi.sh

And then share the link https://hereiam.ssi.sh with my coworker. They should be able to access the service seamlessly over HTTPS, with full websocket support working fine. Let's say hereiam.ssi.sh isn't available, then sish will generate a random subdomain and give that to me.


TCP forwarding

Any TCP based service can be used with sish for TCP and alias forwarding. TCP forwarding will establish a remote port on the server that you deploy sish to and will forward all connections to that port through the SSH connection to your local device. For example, if I were to run an SSH server on my laptop on port 22 and wanted to be able to access it from anywhere at ssi.sh:2222, I could use an SSH command on my laptop like so to forward the connection:

ssh -R 2222:localhost:22 ssi.sh

I can use the forwarded connection to then access my laptop from anywhere:

ssh -p 2222 ssi.sh

TCP alias forwarding

Let's say I instead don't want the service to be accessible by the rest of the world; you can then use a TCP alias. A TCP alias is a type of forwarded TCP connection that only exists inside of sish. You can gain access to the alias by using SSH with the -W flag, which will forward the SSH process's stdin/stdout to the forwarded TCP connection. In combination with authentication, this guarantees your remote service is safe from the rest of the world because you need to log in to sish before you can access it. Changing the example above for this would mean running the following command on my laptop:

ssh -R mylaptop:22:localhost:22 ssi.sh

sish won't publish port 22 or 2222 to the rest of the world anymore; instead it'll retain a pointer saying that TCP connections made from within SSH, after a user has authenticated, to mylaptop:22 should be forwarded to the forwarded TCP tunnel. Then I can use the forwarded connection to access my laptop from anywhere using:

ssh -o ProxyCommand="ssh -W %h:%p ssi.sh" mylaptop

With newer SSH versions, the shorthand for this is:

ssh -J ssi.sh mylaptop

Authentication

If you want to use this service privately, it supports both public key and password authentication. To enable authentication, set --authentication=true as one of your CLI options and be sure to configure --authentication-password or --authentication-keys-directory to your liking. The directory provided by --authentication-keys-directory is watched for changes and will reload the authorized keys automatically. The authorized cert index is regenerated on directory modification, so removed public keys will also automatically be removed. Files in this directory can either be single key per file, or multiple keys per file separated by newlines, similar to authorized_keys. Password auth can be disabled by setting --authentication-password="" as a CLI option.

One of my favorite ways of using this for authentication is like so:

sish@sish0:~/sish/pubkeys# curl https://github.com/antoniomika.keys > antoniomika

This will load my public keys from GitHub, place them in the directory that sish is watching, and then load the pubkey. As soon as this command is run, I can SSH normally and it will authorize me.


Custom domains

sish supports allowing users to bring custom domains to the service, but SSH key auth is required to be enabled. To use this feature, you must set up TXT and CNAME/A records for the domain/subdomain you would like to use for your forwarded connection. The CNAME/A record must point to the domain or IP that is hosting sish. The TXT record must be a key=val string that looks like:

sish=SSHKEYFINGERPRINT  

Where SSHKEYFINGERPRINT is the fingerprint of the key used for logging into the server. You can set multiple TXT records and sish will check all of them to ensure at least one is a match. You can retrieve your key fingerprint by running:

ssh-keygen -lf ~/.ssh/id_rsa
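As a concrete sketch of building the TXT record value (using a throwaway key generated on the spot purely for illustration; in practice, substitute the key you actually log in to sish with):

```shell
# Generate a disposable ed25519 key pair (illustration only).
rm -f /tmp/sish_demo_key /tmp/sish_demo_key.pub
ssh-keygen -t ed25519 -f /tmp/sish_demo_key -N "" -q

# ssh-keygen -lf prints "<bits> <fingerprint> <comment> (<type>)";
# the second field is the SHA256 fingerprint that goes in the record.
printf 'sish=%s\n' "$(ssh-keygen -lf /tmp/sish_demo_key.pub | awk '{print $2}')"
```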

If you trust the users connecting to sish and would like to allow any domain to be used with sish (bypassing verification), there are a few added flags to aid in this. This is especially useful when adding multiple wildcard certificates to sish in order to not need to automatically provision Let's Encrypt certs. To disable verification, set --bind-any-host=true, which will allow any subdomain/domain combination to be used. To only allow subdomains of a certain subset of domains, you can set --bind-hosts to a comma separated list of domains that are allowed to be bound.

To add certificates for sish to use, configure the --https-certificate-directory flag to point to a dir that is accessible by sish. In the directory, sish will look for a combination of files that look like name.crt and name.key. name can be arbitrary in either case, it just needs to be unique to the cert and key pair to allow them to be loaded into sish.


Load balancing

sish can load balance any type of forwarded connection, but this needs to be enabled when starting sish using the --http-load-balancer, --tcp-load-balancer, and --alias-load-balancer flags. Let's say you have a few edge nodes (raspberry pis) that are running a service internally but you want to be able to balance load across these devices from the outside world. By enabling load balancing in sish, this happens automatically when a device with the same forwarded TCP port, alias, or HTTP subdomain connects to sish. Connections will then be evenly distributed to whatever nodes are connected to sish that match the forwarded connection.


Whitelisting IPs

Whitelisting IP ranges or countries is also possible. Whole CIDR ranges can be specified with the --whitelisted-ips option that accepts a comma-separated string like "192.30.252.0/22,185.199.108.0/22". If you want to whitelist a single IP, use the /32 range.

To whitelist countries, use --whitelisted-countries with a comma-separated string of countries in ISO format (for example, "pt" for Portugal). You'll also need to set --geodb to true.
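Putting both options together, a sish invocation might look like the following sketch (the flag values are placeholders; combine these with the rest of your usual flags):

```shell
sish \
  --whitelisted-ips="192.30.252.0/22,185.199.108.0/22,203.0.113.7/32" \
  --whitelisted-countries="pt" \
  --geodb=true
```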


DNS Setup

To use sish, you need to add a wildcard DNS record that is used for multiplexed subdomains. Adding an A record with * as the subdomain to the IP address of your server is the simplest way to achieve this configuration.
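As an illustration, the wildcard setup in BIND-style zone-file notation might look like this (the domain and IP are placeholders for your own):

```
sish.example.com.    300  IN  A  203.0.113.10
*.sish.example.com.  300  IN  A  203.0.113.10
```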


Demo - At this time, the demo instance has been set to require auth due to abuse

There is a demo service (and my private instance) currently running on ssi.sh that doesn't require any authentication. This service provides default logging (errors, connection IP/username, and pubkey fingerprint). I do not log any of the password authentication data or the data sent within the service/tunnels. My deploy uses the exact deploy steps that are listed above. This instance is for testing and educational purposes only. You can deploy this extremely easily on any host (Google Cloud Platform provides an always-free instance that this should run perfectly on). If the service begins to accrue a lot of traffic, I will enable authentication and then you can reach out to me to get your SSH key whitelisted (make sure it's on GitHub and you provide me with your GitHub username).


Notes
  1. This is by no means production ready in any way. This was hacked together and solves a fairly specific use case.
    • You can help it get production ready by submitting PRs/reviewing code/writing tests/etc
  2. This is a fairly simple implementation, I've intentionally cut corners in some places to make it easier to write.
  3. If you have any questions or comments, feel free to reach out via email me@antoniomika.me or on freenode IRC #sish

Upgrading to v1.0

There are numerous breaking changes in sish between pre-1.0 and post-1.0 versions. The largest changes are found in the mapping of command flags and configuration params. Those have changed drastically, but it should be easy to find the new counterpart. The other change is in the SSH keys supported for host key auth. sish continues to support most modern keys, but by default, if a host key is not found, it will create an OpenSSH ED25519 key to use. Previous versions of sish would AES-encrypt the PEM block of this private key, but we have since moved to using the native OpenSSH private key format to allow for easy interop between OpenSSH tools. For this reason, you will either have to manually convert an AES-encrypted key or generate a new one.
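For the "generate a new one" path, a minimal sketch (writing to /tmp here; a real deployment would use whatever --private-key-location points to, e.g. deploy/keys/ssh_key):

```shell
# (a) Generate a fresh ed25519 host key in the native OpenSSH format:
rm -f /tmp/sish-keys/ssh_key /tmp/sish-keys/ssh_key.pub
mkdir -p /tmp/sish-keys
ssh-keygen -t ed25519 -f /tmp/sish-keys/ssh_key -N "" -q

# (b) Alternatively, rewrite an existing unencrypted key in place into
# the OpenSSH private key format (-o selects the new format; encrypted
# keys need their old passphrase via -P):
ssh-keygen -p -P "" -N "" -o -f /tmp/sish-keys/ssh_key -q
```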


CLI Flags
sish is a command line utility that implements an SSH server that can handle HTTP(S)/WS(S)/TCP multiplexing, forwarding and load balancing. It can handle multiple vhosting and reverse tunneling endpoints for a large number of clients.

Usage:
  sish [flags]

Flags:
      --admin-console                               Enable the admin console accessible at http(s)://domain/_sish/console?x-authorization=admin-console-token
  -j, --admin-console-token string                  The token to use for admin console access if it's enabled (default "S3Cr3tP4$$W0rD")
      --alias-load-balancer                         Enable the alias load balancer (multiple clients can bind the same alias)
      --append-user-to-subdomain                    Append the SSH user to the subdomain. This is useful in multitenant environments
      --append-user-to-subdomain-separator string   The token to use for separating username and subdomain selection in a virtualhost (default "-")
      --authentication                              Require authentication for the SSH service
  -k, --authentication-keys-directory string        Directory where public keys for public key authentication are stored. sish will watch this directory and automatically load new keys and remove keys from the authentication list (default "deploy/pubkeys/")
  -u, --authentication-password string              Password to use for ssh server password authentication (default "S3Cr3tP4$$W0rD")
      --banned-aliases string                       A comma separated list of banned aliases that users are unable to bind
  -o, --banned-countries string                     A comma separated list of banned countries. Applies to HTTP, TCP, and SSH connections
  -x, --banned-ips string                           A comma separated list of banned ips that are unable to access the service. Applies to HTTP, TCP, and SSH connections
  -b, --banned-subdomains string                    A comma separated list of banned subdomains that users are unable to bind (default "localhost")
      --bind-any-host                               Bind any host when accepting an HTTP listener
      --bind-hosts string                           A comma separated list of other hosts a user can bind. Requested hosts should be subdomains of a host in this list
      --bind-random-aliases                         Force bound alias tunnels to use random aliases instead of user provided ones (default true)
      --bind-random-aliases-length int              The length of the random alias to generate if an alias is unavailable or if random aliases are enforced (default 3)
      --bind-random-ports                           Force TCP tunnels to bind a random port, where the kernel will randomly assign it (default true)
      --bind-random-subdomains                      Force bound HTTP tunnels to use random subdomains instead of user provided ones (default true)
      --bind-random-subdomains-length int           The length of the random subdomain to generate if a subdomain is unavailable or if random subdomains are enforced (default 3)
      --cleanup-unbound                             Cleanup unbound (unforwarded) SSH connections after a set timeout (default true)
      --cleanup-unbound-timeout duration            Duration to wait before cleaning up an unbound (unforwarded) connection (default 5s)
  -c, --config string                               Config file (default "config.yml")
      --debug                                       Enable debugging information
  -d, --domain string                               The root domain for HTTP(S) multiplexing that will be appended to subdomains (default "ssi.sh")
      --force-requested-aliases                     Force the aliases used to be the one that is requested. Will fail the bind if it exists already
      --force-requested-ports                       Force the ports used to be the one that is requested. Will fail the bind if it exists already
      --force-requested-subdomains                  Force the subdomains used to be the one that is requested. Will fail the bind if it exists already
      --geodb                                       Use a geodb to verify country IP address association for IP filtering
  -h, --help                                        help for sish
  -i, --http-address string                         The address to listen for HTTP connections (default "localhost:80")
      --http-load-balancer                          Enable the HTTP load balancer (multiple clients can bind the same domain)
      --http-port-override int                      The port to use for http command output. This does not affect ports used for connecting, it's for cosmetic use only
      --https                                       Listen for HTTPS connections. Requires a correct --https-certificate-directory
  -t, --https-address string                        The address to listen for HTTPS connections (default "localhost:443")
  -s, --https-certificate-directory string          The directory containing HTTPS certificate files (name.crt and name.key). There can be many crt/key pairs (default "deploy/ssl/")
      --https-ondemand-certificate                  Enable retrieving certificates on demand via Let's Encrypt
      --https-ondemand-certificate-accept-terms     Accept the Let's Encrypt terms
      --https-ondemand-certificate-email string     The email to use with Let's Encrypt for cert notifications. Can be left blank
      --https-port-override int                     The port to use for https command output. This does not affect ports used for connecting, it's for cosmetic use only
      --idle-connection                             Enable connection idle timeouts for reads and writes (default true)
      --idle-connection-timeout duration            Duration to wait for activity before closing a connection for all reads and writes (default 5s)
      --load-templates                              Load HTML templates. This is required for admin/service consoles (default true)
      --load-templates-directory string             The directory and glob parameter for templates that should be loaded (default "templates/*")
      --localhost-as-all                            Enable forcing localhost to mean all interfaces for tcp listeners (default true)
      --log-to-client                               Enable logging HTTP and TCP requests to the client
      --log-to-file                                 Enable writing log output to file, specified by log-to-file-path
      --log-to-file-compress                        Enable compressing log output files
      --log-to-file-max-age int                     The maximum number of days to store log output in a file (default 28)
      --log-to-file-max-backups int                 The maximum number of rotated log files to keep (default 3)
      --log-to-file-max-size int                    The maximum size of output log files in megabytes (default 500)
      --log-to-file-path string                     The file to write log output to (default "/tmp/sish.log")
      --log-to-stdout                               Enable writing log output to stdout (default true)
      --ping-client                                 Send ping requests to the underlying SSH client. This is useful to ensure that SSH connections are kept open or close cleanly (default true)
      --ping-client-interval duration               Duration representing an interval to ping a client to ensure it is up (default 5s)
      --ping-client-timeout duration                Duration to wait for activity before closing a connection after sending a ping to a client (default 5s)
  -n, --port-bind-range string                      Ports or port ranges that sish will allow to be bound when a user attempts to use TCP forwarding (default "0,1024-65535")
  -l, --private-key-location string                 The location of the SSH server private key. sish will create a private key here if it doesn't exist, using the --private-key-passphrase to encrypt it if supplied (default "deploy/keys/ssh_key")
  -p, --private-key-passphrase string               Passphrase to use to encrypt the server private key (default "S3Cr3tP4$$phrAsE")
      --proxy-protocol                              Use the proxy-protocol while proxying connections in order to pass-on IP address and port information
      --proxy-protocol-listener                     Use the proxy-protocol to resolve ip addresses from user connections
      --proxy-protocol-policy string                What to do with the proxy protocol header. Can be use, ignore, reject, or require (default "use")
      --proxy-protocol-timeout duration             The duration to wait for the proxy proto header (default 200ms)
      --proxy-protocol-use-timeout                  Use a timeout for the proxy-protocol read
  -q, --proxy-protocol-version string               What version of the proxy protocol to use. Can either be 1, 2, or userdefined. If userdefined, the user needs to add a command to SSH called proxyproto:version (ie proxyproto:1) (default "1")
      --redirect-root                               Redirect the root domain to the location defined in --redirect-root-location (default true)
  -r, --redirect-root-location string               The location to redirect requests to the root domain to instead of responding with a 404 (default "https://github.com/antoniomika/sish")
      --service-console                             Enable the service console for each service and send the info to connected clients
  -m, --service-console-token string                The token to use for service console access. Auto generated if empty for each connected tunnel
  -a, --ssh-address string                          The address to listen for SSH connections (default "localhost:2222")
      --tcp-aliases                                 Enable the use of TCP aliasing
      --tcp-load-balancer                           Enable the TCP load balancer (multiple clients can bind the same port)
      --time-format string                          The time format to use for both HTTP and general log messages (default "2006/01/02 - 15:04:05")
      --verify-dns                                  Verify DNS information for hosts and ensure it matches a connecting users sha256 key fingerprint (default true)
      --verify-ssl                                  Verify SSL certificates made on proxied HTTP connections (default true)
  -v, --version                                     version for sish
  -y, --whitelisted-countries string                A comma separated list of whitelisted countries. Applies to HTTP, TCP, and SSH connections
  -w, --whitelisted-ips string                      A comma separated list of whitelisted ips. Applies to HTTP, TCP, and SSH connections


HttpDoom - A Tool For Response-Based Inspection Of Websites Across A Large Amount Of Hosts For Quickly Gaining An Overview Of HTTP-based Attack Surface



Validate large HTTP-based attack surfaces in a very fast way. Heavily inspired by Aquatone.


Why?

When I used Aquatone to fly over some hosts, I ran into performance issues with the screenshot feature and missed extension capabilities, like validating front-end technologies with a plugin-like system. Also, my codebase is mainly C# and Rust, and maintaining a tool written in another language can lead to a lot of issues.

With these ideas in mind, HttpDoom is born.


Installing

In the current release cycle there is no runtime-independent build (only devel builds are available), so in order to install HttpDoom you must have the .NET 5 runtime (or SDK), AKA dotnet, installed on your host, with the .NET toolchain available on your Linux or macOS system (automatic installation for Windows is not supported at this time; your PR to the installation script is welcome. WSL works fine):

$ ./installer.sh

The installer script also updates HttpDoom to new releases (removing the current installation).


How does this work?

The description (--help) of the CLI is all you need to know:

HttpDoom:
HttpDoom is a tool for response-based inspection of websites across a large
amount of hosts for quickly gaining an overview of HTTP-based attack
surface.

Usage:
HttpDoom [options]

Options:
-d, --debug Print debugging information
-f, --follow-redirect HTTP client follow any automatic
redirects (default is false)
-m, --max-redirects Max automatic redirect depth when is
enable (default is 3)
-s, --screenshot Take screenshots from the alive host
with ChromeDriver (default is false)
-r, --screenshot-resolution Set screenshot resolution (default
is 1366x768)
-F, --capture-favicon Download the application favicon
-h, --headers <headers> Set default headers to every request
(default is just a random User-Agent)
-t, --http-timeout <http-timeout> Timeout in milliseconds for HTTP
requests (default is 5000)
-T, --threads <threads> Number of concurrent threads
(default is 20)
-o, --output-directory <output-directory> Path to save the output directory
-p, --ports <ports> Set of ports to check (default is
80, 443, 8080 and 8433)
-P, --proxy <proxy> Proxy to use for HTTP requests
-w, --word-list <word-list> List of hosts to flyover against
(REQUIRED)
--version Show version information
-?, -h, --help Show help and usage information

But it is fast?

Let's take a look at the result of a flyover against 5000 hosts on the default HttpDoom ports (80, 443, 8080 and 8433), running the very first working release with 2 threads (provided by a generic Amazon EC2 instance), against the same settings on Aquatone 1.7.0:

HttpDoom:

...
[+] Flyover is done! Enumerated #31128 responses in 2.49 minute(s)
[+] Got a total of #176 alive hosts!
...

Aquatone:

...
Writing session file...Time:
- Started at : 2020-12-20T08:27:43Z
- Finished at : 2020-12-20T08:34:35Z
- Duration : 6m52s
...

Note: The results of these tests can vary a lot based on specific conditions of your host. Run the test locally and check which tool offers the best performance.


Output

By default, all the necessary directories are created, with randomly chosen names (you can override this with -o; if in doubt, see --help).

Within the main directory, a general.json file is created containing all the results in a single file (to facilitate the search or ingestion in some visual tool), which looks like this:

[
{
"Domain": "google.com",
"Addresses": [
"2800:3f0:4001:81a::200e",
"172.217.28.14"
],
"Requested": "https://google.com/",
"Port": 443,
"Content": "\u003CHTML\u003E\u003CHEAD\u003E\u003Cmeta http-equiv=\u0022content-type\u0022 content=\u0022text/html;charset=utf-8\u0022\u003E\n\u003CTITLE\u003E301 Moved\u003C/TITLE\u003E\u003C/HEAD\u003E\u003CBODY\u003E\n\u003CH1\u003E301 Moved\u003C/H1\u003E\nThe document has moved\n\u003CA HREF=\u0022https://www.google.com/\u0022\u003Ehere\u003C/A\u003E.\r\n\u003C/BODY\u003E\u003C/HTML\u003E\r\n",
"ScreenshotPath": "C:\\Users\\REDACTED\\AppData\\Local\\Temp\\c14obxml.kfy\\Screenshots\\0086aea9-c4d4-4bbf-89d8-728e5d2ff184.png",
"FaviconPath": "C:\\Users\\REDACTED\\AppData\\Local\\Temp\\c14obxml.kfy\\Favicons\\172d671c-636d-443b-b5b4-30ed6e10b8aa.ico",
"Headers": [
{
"Key": "Location",
"Value": [
"https://www.google.com/"
]
},
{
"Key": "Date",
"Value": [
"Tue, 02 Feb 2021 15:59:46 GMT"
]
},
{
"Key": "Cache-Control",
"Value": [
"public, max-age=2592000"
]
},
{
"Key": "Server",
"Value": [
"gws"
]
},
{
"Key": "X-XSS-Protection",
"Value": [
"0"
]
},
{
"Key": "X-Frame-Options",
"Value": [
"SAMEORIGIN"
]
},
{
"Key": "Alt-Svc",
"Value": [
"h3-29=\u0022:443\u0022; ma=2592000",
"h3-T051=\u0022:443\u0022; ma=2592000",
"h3-Q050=\u0022:443\u0022; ma=2592000",
"h3-Q046=\u0022:443\u0022; ma=2592000",
"h3-Q043=\u0022:443\u0022; ma=2592000",
"quic=\u0022:443\u0022; ma=2592000"
]
}
],
"Cookies": [],
"StatusCode": 301
},
// ...
]
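Because general.json is a single JSON array, it is easy to ingest into other tooling. A minimal Python sketch (a hypothetical helper, assuming the field names shown in the sample above):

```python
import json
from collections import Counter

def summarize(results):
    """Tally status codes and collect requested URLs from parsed HttpDoom results."""
    codes = Counter(r["StatusCode"] for r in results)
    urls = [r["Requested"] for r in results]
    return codes, urls

def summarize_file(path="general.json"):
    """Load a general.json produced by HttpDoom and summarize it."""
    with open(path) as fh:
        return summarize(json.load(fh))
```

For instance, running summarize_file() against the sample output above would count one 301 response for https://google.com/.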

A directory called Individual Results is also created, indexing the results individually, named after the URI used for the request. Screenshots are saved alongside it if you run HttpDoom with the -s option, and favicons (if the site has one) with the -F option:

.
├── Favicons
│   ├── 31be8e61-d90b-4b40-bcef-640fb31588e7.ico
│   └── 4e097b93-12f2-4f20-9582-547cc6d20312.ico
├── Individual Results
│   ├── http:google.com:80.json
│   └── https:google.com:443.json
├── Screenshots
│   ├── 1d395ce1-b329-4379-8d9e-2868ed41e67d.png
│   └── a9f90f23-4d5c-4f13-ba3e-5d8f88aa3926.png
└── general.json

Note: The pattern of the Individual Results files is scheme:address:port. But : can be an invalid character depending on the operating system you run HttpDoom on. For more details, check the documentation of Path.GetInvalidFileNameChars() on MSDN.
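If you need to reproduce that naming scheme safely on another platform, a small sanitizer can be sketched as follows (a hypothetical helper, not part of HttpDoom; the character class mirrors the common Windows-invalid set):

```python
import re

def safe_result_name(scheme, address, port):
    """Build an 'Individual Results'-style file name, replacing characters
    that are invalid on common file systems (notably ':' on Windows)."""
    raw = f"{scheme}:{address}:{port}.json"
    return re.sub(r'[<>:"/\\|?*]', "_", raw)
```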


Roadmap

The project is focused on being a really useful tool.

  • 0x00: Make the Saturday project work;
  • 0x01: Baking the CLI options very similar to Aquatone;
  • 0x02: Fix issues with large (5K+) hosts wordlists;
  • 0x03: Well, this is not really "threads" but it works like them; maybe it needs better polishing;
  • 0x04: Screenshots, because why not;
  • 0x05: Create the community-driven fingerprint engine to enumerate vulnerabilities on headers and bodies of the HTTP responses;

Spraygen - Password List Generator For Password Spraying



Password list generator for password spraying - prebaked with goodies


Version 1.4

Generates permutations of Months, Seasons, Years, Sports Teams (NFL, NBA, MLB, NHL), Sports Scores, "Password", and even Iterable Keyspaces of a specified size.

All permutations are generated with common attributes appended/prepended (such as "!" or "#"), or custom separators (such as "." or "_").

Users can extend the attributes and separators using comma delimited lists of characters.

Spraygen also accepts single words or external wordlists that allow you to generate tuned custom wordlists in addition to what is already provided.

You could use tools like crunch, a fancy bash loop over SecLists, or whatever have you but that takes time...this one is made for spraying, so get to it!
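The kind of permutation Spraygen automates can be sketched in a few lines with itertools (a simplified illustration of the idea, not Spraygen's actual code; names are hypothetical):

```python
from itertools import product

def spray_candidates(words, years, separators=("", ".", "_"), attributes=("", "!", "#")):
    """Yield word<separator>year permutations with each attribute
    appended and prepended, the way a spraying list is typically built."""
    out = set()
    for word, sep, year, attr in product(words, separators, years, attributes):
        base = f"{word}{sep}{year}"
        out.add(base + attr)   # attribute appended
        out.add(attr + base)   # attribute prepended
    return sorted(out)
```

For example, spray_candidates(["Summer"], ["2021"]) includes Summer2021!, Summer.2021 and !Summer_2021.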

python3 spraygen.py -h
_
( \_
( \_
( \_
( \_ ___
( Password \ | |
( Spray |คคคคคคคค|___|
( _ / |
( _ / /~~~~~~~~~\
( _ / ( Spray )
(_/ | This |
| |
| Get |
| Creds |
|_________|

Original Art by Alex Chudnovsky (Unaffiliated)
Spraygen tool by 3ndG4me
Version 1.4

usage: spraygen.py [-h] [--year_start YEAR_START] [--year_end YEAR_END] [-s separators] [-a attributes] [-w wordlist] [-n single word] [--mode {all,nosep,noattr,years,plain,custom}]
[--type {all,iterative,sports,nfl,nba,mlb,nhl,months,seasons,password,custom}] [--iter {ascii,num,spec,asciinum,asciispec,numspec,full}] [--size SIZE] [--min_length MIN_LENGTH] [--max_length MAX_LENGTH]
[-o output file] [-p] [--sort {nosort,asc,desc,random}] [-v]

Parse Spray List Arguments.

optional arguments:
-h, --help show this help message and exit
--year_start YEAR_START
starting year for a range of years
--year_end YEAR_END ending year for a range of years
-s separators a comma delimited list of one or more separators
-a attributes a comma delimited list of one or more attributes
-w wordlist path to a custom wordlist
-n single word single custom word to generate a custom wordlist with
--mode {all,nosep,noattr,years,plain,custom}
Mode for list generation. Can be all, no separators, no attributes, only years, plain, or custom (will only use parameters passed into -s or -a).
--type {all,iterative,sports,nfl,nba,mlb,nhl,months,seasons,password,custom}
Type of list to generate. Can be all, iterative, sports, nfl, nba, mlb, nhl, months, seasons, password, or custom. Choosing 'all' executes all options except for 'iterative' which must be run
manually.
--iter {ascii,num,spec,asciinum,asciispec,numspec,full}
Keyspace mode for iterative list generation. Only works when --type is set to 'iterative'. Can be ascii, num, spec, asciinum, asciispec, numspec, or full. Will generate all permutations of the
selected keyspace with a given length set with the --size parameter.
--size SIZE Length of passwords generated by a set keyspace. Only works when --type is set to 'iterative' and an --iter keyspace mode is set.
--min_length MIN_LENGTH
Minimum length of passwords to include in the list. (Default: 1)
--max_length MAX_LENGTH
Maximum length of passwords to include in the list (Default: 999)
-o output file name of a file to create and write the final output to
-p prints the output line by line as plaintext
--sort {nosort,asc,desc,random}
Sort final output. Sorting methods supported are nosort, asc, desc, random.
-v prints the current version of spraygen and exits

Basic Usage
  1. Install dependencies pip3 install -r requirements.txt
  2. Run python3 spraygen.py -p - this will generate all default built in wordlists with all permutations and print them to the screen

Credits
  • @MarkoH17 - for the boolean python3.8 backwards compatibility fix


Cypheroth - Automated, Extensible Toolset That Runs Cypher Queries Against Bloodhound's Neo4j Backend And Saves Output To Spreadsheets



Automated, extensible toolset that runs cypher queries against Bloodhound's Neo4j backend and saves output to spreadsheets.


Description

This is a bash script that automates running cypher queries against Bloodhound data stored in a Neo4j database.

I found myself re-running the same queries through the Neo4j web interface on multiple assessments and figured there must be an easier way.

The list of cypher queries to run is fully extensible. The formatting example below shows how to add your own.

Please share any additional useful queries so I can add them to this project!

Fully tested to be working in Bash on Linux, macOS, and Windows


Demo


Prereqs
  • The cypher-shell command comes bundled with Neo4j, and is required for this script to function
    • If Neo4j is installed and cypher-shell is not found, you may have an outdated version of Neo4j
    • The latest version can always be found at this location
    • On Kali, upgrade to the latest version using Neo4j's Debian repository
  • Optional: If the ssconvert command is present, the script will combine all .csv output to sheets within a .xls file
    • Install the gnumeric toolset with apt or brew to gain access to ssconvert

On Windows we recommend using WSL to run this script, while the neo4j database runs on Windows. You will just need to install the cypher-shell package in WSL (Linux).


Usage

Flags:

  -u Neo4J Username (Required)
-p Neo4J Password (Required)
-d Fully Qualified Domain Name (Required) (Case Sensitive)
-a Bolt address (Optional) (Default: localhost:7687)
-t Query Timeout (Optional) (Default: 30s)
-v Verbose mode (Optional) (Default:FALSE)
-h Help text and usage example (Optional)

Example with Defaults:

./cypheroth.sh -u neo4j -p BloodHound -d TESTLAB.LOCAL

Example with All Options:

./cypheroth.sh -u neo4j -p hunter2 -d BigTech.corp -a 10.0.0.1:7687 -t 5m -v true

Files are added to a subdirectory named after the FQDN.


Cypher Queries

There are nearly 60 queries in the script currently. This is a sample of the information you'll receive:

  • Full User Property List
  • Full Computer Property List
  • Full Domain Property List
  • Full OU Property List
  • Full GPO Property List
  • Full Group Property List
  • Computers with Admins
  • Computers without Admins
  • Kerberoastable users and computers where they are admins

To add additional queries, edit the queries array within cypheroth.sh and add a line using the following format:

Description;Cypher Query;Output File

If adding a query that requires the Domain value to be set, save it as $DOMAIN.

Example 1:

All Usernames;MATCH (u:User) RETURN u.name;usernames.csv

Example 2:

All Domain Admins;MATCH (u:User) MATCH (g:Group {name:'DOMAIN ADMINS@$DOMAIN'}) RETURN u.displayname;domainAdmins.csv
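In other words, each entry is split on ; into its three fields, with $DOMAIN expanded at run time. A hypothetical Python equivalent of that parsing step (Cypheroth itself does this in bash):

```python
def parse_query(entry, domain):
    """Split a 'Description;Cypher Query;Output File' entry and expand $DOMAIN.
    Assumes the query itself contains no ';', per the entry format."""
    description, cypher, outfile = entry.split(";")
    return description, cypher.replace("$DOMAIN", domain), outfile
```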

Analyze several domains

If you need to analyze several domains, you can run multiple instances of Cypheroth in parallel, each working on its own domain. For example, the following script runs 10 in parallel:

#!/usr/bin/env bash
DOMAINS=(domA.example.net domB.example.net [...])
parallel -j10 --lb ./cypheroth.sh <args> -d {} ::: "${DOMAINS[@]}"

Troubleshooting

If you are running an outdated version of cypher-shell you may receive the following error:

DateTime is not supported as a return type in Bolt protocol version 1.
Please make sure driver supports at least protocol version 2.
Driver upgrade is most likely required.

To fix, update Neo4j to the latest version.


Author

Chris Farrell (@seajay)


Acknowledgments



Modded-Ubuntu - Run Ubuntu GUI On Your Termux With Much Features



Run Ubuntu GUI on your termux with much features.


Features
  • Fixed Audio Output
  • Lightweight {Requires at least 4GB Storage}
  • Katoolin3 tool for installing kali tools
  • 2 Browsers (Chromium & Mozilla Firefox)
  • Supports Bangla Fonts
  • VLC Media Player
  • Visual Studio Code
  • Easy for Beginners

Installation
  • First Clone the Repository & Run the setup File

    • pkg update -y && pkg upgrade -y
    • pkg install git wget -y
    • git clone git://github.com/modded-ubuntu/modded-ubuntu.git
    • cd modded-ubuntu
    • bash setup.sh
  • Then Restart your Termux & Type the following commands

    • ubuntu
    • bash user.sh
  • Type your ubuntu root username. It must be lowercase with no spaces.

  • Then Again Restart your Termux & Type the following commands

    • ubuntu
    • bash gui.sh
  • You have to note your VNC password !!

  • The Ubuntu image is now successfully installed.

    • Type vncstart to run Vncserver
    • Type vncstop to stop Vncserver
  • Install VNC VIEWER Apk on your Device. Google Play Store

  • Open VNC VIEWER & Click on + Button & Enter the Address localhost:1 & Name it anything you like

  • Set the Picture Quality to High for better Quality

  • Click on Connect & Input the Password

  • Enjoy :D


NOTE :
  • Type ubuntu to run Ubuntu CLI.

  • Type vncstart to run Vncserver

  • Type vncstop to stop Vncserver

  • Type bash remove.sh to remove Ubuntu Modded Os


Video Tutorial :



Credits :
This tool uses the Ubuntu image provided by the termux package `proot-distro`.

Full credit for the Ubuntu image goes to them.

Termux Proot Distro - https://github.com/termux/proot-distro

Maintainers

OUR TEAM : TERMUX HACKER BD

If you like our work then don't forget to give a Star :)


KubiScan - A Tool To Scan Kubernetes Cluster For Risky Permissions



A tool for scanning Kubernetes cluster for risky permissions in Kubernetes's Role-based access control (RBAC) authorization model. The tool was published as part of the "Securing Kubernetes Clusters by Eliminating Risky Permissions" research https://www.cyberark.com/threat-research-blog/securing-kubernetes-clusters-by-eliminating-risky-permissions/.

Overview

KubiScan helps cluster administrators identify permissions that attackers could potentially exploit to compromise the clusters. This can be especially helpful on large environments where there are lots of permissions that can be challenging to track. KubiScan gathers information about risky roles\clusterroles, rolebindings\clusterrolebindings, users and pods, automating traditional manual processes and giving administrators the visibility they need to reduce risk.


What can it do?
  • Identify risky Roles\ClusterRoles
  • Identify risky RoleBindings\ClusterRoleBindings
  • Identify risky Subjects (Users, Groups and ServiceAccounts)
  • Identify risky Pods\Containers
  • Dump tokens from pods (all or by namespace)
  • Get associated RoleBindings\ClusterRoleBindings to Role, ClusterRole or Subject (user, group or service account)
  • List Subjects with specific kind ('User', 'Group' or 'ServiceAccount')
  • List rules of RoleBinding or ClusterRoleBinding
  • Show Pods that have access to secret data through a volume or environment variables
  • Get bootstrap tokens for the cluster

Usage

Container

With ~/.kube/config file

This should be executed within the Master node where the config file is located:
docker run -it --rm -e CONF_PATH=~/.kube/config -v /:/tmp cyberark/kubiscan

  • CONF_PATH - the cluster config file's path

Inside the container the command kubiscan is equivalent to python3 /KubiScan/KubiScan.py.
Notice that in this case the whole file system is mounted. This is because the config files contain paths to other places in the filesystem that differ between environments.


With service account token (good from remote)

Some functionality requires a privileged service account with the following permissions:

  • resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings", "pods", "secrets"]
    verbs: ["get", "list"]
  • resources: ["pods/exec"]
    verbs: ["create", "get"]

But most of the functionalities do not, so you can use the following settings for a limited service account, which can be created by running:

kubectl apply -f - << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubiscan-sa
namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kubiscan-clusterrolebinding
subjects:
- kind: ServiceAccount
name: kubiscan-sa
namespace: default
apiGroup: ""
roleRef:
kind: ClusterRole
name: kubiscan-clusterrole
apiGroup: ""
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kubiscan-clusterrole
rules:
- apiGroups: ["*"]
resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings", "pods"]
verbs: ["get", "list"]
EOF

Save the service account's token to a file:
kubectl get secrets $(kubectl get sa kubiscan-sa -o json | jq -r '.secrets[0].name') -o json | jq -r '.data.token' | base64 -d > token

Run the container from anywhere you want:
docker run -it --rm -v $PWD/token:/token cyberark/kubiscan

In the shell you will be able to use kubiscan like this:
kubiscan -ho <master_ip:master_port> -t /token <command>

For example:
kubiscan -ho 192.168.21.129:8443 -t /token -rs

Notice that you can also use the certificate authority (ca.crt) to verify the SSL connection:
docker run -it --rm -v $PWD/token:/token -v <ca_path>/ca.crt:/ca.crt cyberark/kubiscan

Inside the container:
kubiscan -ho <master_ip:master_port> -t /token -c /ca.crt <command>

To remove the privileged service account, run the following commands:
kubectl delete clusterroles kubiscan-clusterrole
kubectl delete clusterrolebindings kubiscan-clusterrolebinding
kubectl delete sa kubiscan-sa


Directly with Python3

Prerequisites:

Example for installation on Ubuntu:

apt-get update
apt-get install -y python3 python3-pip
pip3 install kubernetes
pip3 install PTable

Run alias kubiscan='python3 /<KubiScan_folder>/KubiScan.py' to use kubiscan.

After installing all of the above requirements you can run it in two different ways:


From the Master node:

On the Master node where ~/.kube/config exist and all the relevant certificates, simply run:
kubiscan <command>
For example: kubiscan -rs will show all the risky subjects (users, service accounts and groups).


From a remote host:

To use this tool from a remote host, you will need a privileged service account like we explained in the container section.
After you have the token inside a file you can run:
kubiscan -ho <master_ip:master_port> -t /token <command>


Examples

To see all the examples, run python3 KubiScan.py -e or from within the container kubiscan -e.


Demo

A small example of KubiScan usage:



Risky Roles YAML

There is a file named risky_roles.yaml. This file contains templates for risky roles with priority.
Although the kind in each role is Role, these templates will be compared against any Role\ClusterRole in the cluster.
When each of these roles is checked against a role in the cluster, it checks if the role in the cluster contains the rules from the risky role. If it does, it will be marked as risky.
We added all the roles we found to be risky, but because everyone can define the term "risky" differently, you can modify the file by adding\removing roles you think are more\less risky.
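That containment check can be modelled as a subset test over (resource, verb) pairs (a simplified sketch of the idea, not KubiScan's actual implementation; names are hypothetical):

```python
def is_risky(cluster_rules, template_rules):
    """Flag a cluster Role/ClusterRole when it grants every (resource, verb)
    pair required by a risky template from risky_roles.yaml."""
    def pairs(rules):
        return {(res, verb)
                for rule in rules
                for res in rule["resources"]
                for verb in rule["verbs"]}
    return pairs(template_rules) <= pairs(cluster_rules)
```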


References:

For more comments, suggestions or questions, you can contact Eviatar Gerzi (@g3rzi) and CyberArk Labs.



Reproxy - Simple Edge Server / Reverse Proxy



Reproxy is a simple edge HTTP(s) server / reverse proxy supporting various providers (docker, static, file). One or more providers supply information about requested server, requested url, destination url and health check url. Distributed as a single binary or as a docker container.

  • Automatic SSL termination with Let's Encrypt
  • Support of user-provided SSL certificates
  • Simple but flexible proxy rules
  • Static, command line proxy rules provider
  • Dynamic, file-based proxy rules provider
  • Docker provider with an automatic discovery
  • Optional traffic compression
  • User-defined limits and timeouts
  • Single binary distribution
  • Docker container distribution
  • Built-in static assets server

The server can be set as an FQDN, i.e. s.example.com, or * (catch all). The requested url can be a regex, for example ^/api/(.*), and the destination url may have regex matched groups in it, i.e. http://d.example.com:8080/$1. For the example above, http://s.example.com/api/something?foo=bar will be proxied to http://d.example.com:8080/something?foo=bar.

For convenience, requests with a trailing / and without regex groups are expanded to /(.*), and destinations in those cases are expanded to /$1. I.e. /api/ -> http://127.0.0.1/service will be translated to ^/api/(.*) -> http://127.0.0.1/service/$1
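The matching and expansion rules described above can be illustrated with Python's re module (an illustration of the documented behaviour, not reproxy's Go code; function names are hypothetical):

```python
import re

def expand_rule(src, dest):
    """Apply the convenience expansion: a trailing '/' without regex
    groups becomes '/(.*)' and the destination gains '/$1'."""
    if src.endswith("/") and "(" not in src:
        src = "^" + src.rstrip("/") + "/(.*)"
        dest = dest.rstrip("/") + "/$1"
    return src, dest

def proxy_dest(src, dest, path):
    """Resolve a request path against a rule, expanding $1-style groups."""
    m = re.match(src, path)
    if not m:
        return None
    out = dest
    for i, group in enumerate(m.groups(), start=1):
        out = out.replace(f"${i}", group)
    return out
```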

Both HTTP and HTTPS are supported. For HTTPS, a static certificate can be used as well as automated ACME (Let's Encrypt) certificates. An optional assets server can be used to serve static files.

Starting reproxy requires at least one provider defined. The rest of the parameters are strictly optional and have sane defaults.

Example with a static provider: reproxy --static.enabled --static.rule="example.com/api/(.*),https://api.example.com/$1"

Example with an automatic docker discovery: reproxy --docker.enabled --docker.auto


Install

The latest stable version has the :vX.Y.Z tag (with the :latest alias) and the current master has the :master tag.


Providers

Proxy rules are supplied by various providers. Currently file, docker and static are included. Each provider may define multiple routing rules for both proxied requests and static (assets) content. Users can set multiple providers at the same time.

See examples of various providers in examples


Static

This is the simplest provider defining all mapping rules directly in the command line (or environment). Multiple rules supported. Each rule is 3 or 4 comma-separated elements server,sourceurl,destination,[ping-url]. For example:

  • *,^/api/(.*),https://api.example.com/$1, - proxy all request to any host/server with /api prefix to https://api.example.com
  • example.com,/foo/bar,https://api.example.com/zzz,https://api.example.com/ping - proxy all requests to example.com and with /foo/bar url to https://api.example.com/zzz. Uses https://api.example.com/ping for the health check

The last (4th) element defines an optional ping url used for health reporting, i.e. *,^/api/(.*),https://api.example.com/$1,https://api.example.com/ping. See the Health check section for more details.
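Parsing such a rule amounts to splitting on commas into 3 or 4 fields, the optional 4th being the ping url (a hypothetical helper for illustration; it assumes the route and destination themselves contain no commas):

```python
def parse_static_rule(rule):
    """Split 'server,sourceurl,destination[,ping-url]' into a dict.
    A trailing comma (empty ping field) is treated as no ping url."""
    parts = [p.strip() for p in rule.split(",")]
    if len(parts) not in (3, 4):
        raise ValueError("rule must have 3 or 4 comma-separated elements")
    server, src, dest = parts[:3]
    ping = parts[3] if len(parts) == 4 and parts[3] else None
    return {"server": server, "route": src, "dest": dest, "ping": ping}
```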


File

reproxy --file.enabled --file.name=config.yml

Example of config.yml:

default: # the same as * (catch-all) server
- { route: "^/api/svc1/(.*)", dest: "http://127.0.0.1:8080/blah1/$1" }
- {
route: "/api/svc3/xyz",
dest: "http://127.0.0.3:8080/blah3/xyz",
"ping": "http://127.0.0.3:8080/ping",
}
srv.example.com:
- { route: "^/api/svc2/(.*)", dest: "http://127.0.0.2:8080/blah2/$1/abc" }

This is a dynamic provider and file change will be applied automatically.


Docker

Docker provider supports a fully automatic discovery (with --docker.auto) with no extra configuration and by default redirects all requests like https://server/<container_name>/(.*) to the internal IP of the given container and the exposed port. Only active (running) containers will be detected.

This default can be changed with labels:

  • reproxy.server - server (hostname) to match. Also can be a list of comma-separated servers.
  • reproxy.route - source route (location)
  • reproxy.dest - destination path. Note: this is not full url, but just the path which will be appended to container's ip:port
  • reproxy.port - destination port for the discovered container
  • reproxy.ping - ping path for the destination container.
  • reproxy.enabled - enable (yes, true, 1) or disable (no, false, 0) container from reproxy destinations.

Please note: without --docker.auto, a destination container has to have at least one of the reproxy.* labels to be considered as a potential destination.

With --docker.auto, all containers with exposed port will be considered as routing destinations. There are 3 ways to restrict it:

  • Exclude some containers explicitly with --docker.exclude, i.e. --docker.exclude=c1 --docker.exclude=c2 ...
  • Allow only a particular docker network with --docker.network
  • Set the label reproxy.enabled=false or reproxy.enabled=no or reproxy.enabled=0

This is a dynamic provider and any change in container's status will be applied automatically.


SSL support

SSL mode (none by default) can be set to auto (ACME/LE certificates), static (existing certificate) or none. If auto is turned on, an SSL certificate will be issued automatically for all discovered server names. The user can override this by setting --ssl.fqdn value(s)


Logging

By default no request log is generated. This can be turned on by setting --logger.enabled. The log (auto-rotated) uses the Apache Combined Log Format

Users can also turn on stdout logging with --logger.stdout. It won't affect the file logging but will output some minimal info about processed requests, something like this:

2021/04/16 01:17:25.601 [INFO]  GET - /echo/image.png - xxx.xxx.xxx.xxx - 200 (155400) - 371.661251ms
2021/04/16 01:18:18.959 [INFO] GET - /api/v1/params - xxx.xxx.xxx.xxx - 200 (74) - 1.217669m

Assets Server

The user may turn the assets server on (off by default) to serve static files. As long as --assets.location is set, every non-proxied request under assets.root is treated as a request for static files. The assets server can be used without any proxy providers; in this mode reproxy acts as a simple web server for static content.

In addition to the common assets server, multiple custom static servers are supported. Each provider has a different way to define such a static rule, and some providers may not support it at all. For example, multiple static servers make sense with the static (command line) provider and the file provider, and can even be useful with the docker provider.

  1. static provider - if the source element is prefixed by assets: it will be treated as a file-server. For example *,assets:/web,/var/www, will serve all /web/* requests with a file server on top of the /var/www directory.
  2. file provider - setting optional field assets: true
  3. docker provider - reproxy.assets=web-root:location, i.e. reproxy.assets=/web:/var/www.

More options
  • --gzip enables gzip compression for responses.
  • --max=N allows to set the maximum size of request (default 64k)
  • --header sets extra header(s) added to each proxied request
  • --timeout.* various timeouts for both server and proxy transport. See timeout section in All Application Options

Ping and health checks

reproxy provides 2 endpoints for this purpose:

  • /ping responds with pong and indicates that reproxy is up and running
  • /health returns 200 OK status if all destination servers responded to their ping request with 200 or 417 Expectation Failed if any of servers responded with non-200 code. It also returns json body with details about passed/failed services.

All Application Options
  -l, --listen=                     listen on host:port (default: 127.0.0.1:8080) [$LISTEN]
-m, --max= max response size (default: 64000) [$MAX_SIZE]
-g, --gzip enable gz compression [$GZIP]
-x, --header= proxy headers [$HEADER]
--signature enable reproxy signature headers [$SIGNATURE]
--dbg debug mode [$DEBUG]

ssl:
--ssl.type=[none|static|auto] ssl (auto) support (default: none) [$SSL_TYPE]
--ssl.cert= path to cert.pem file [$SSL_CERT]
--ssl.key= path to key.pem file [$SSL_KEY]
--ssl.acme-location= dir where certificates will be stored by autocert manager (default: ./var/acme) [$SSL_ACME_LOCATION]
--ssl.acme-email= admin email for certificate notifications [$SSL_ACME_EMAIL]
--ssl.http-port= http port for redirect to https and acme challenge test (default: 80) [$SSL_HTTP_PORT]
--ssl.fqdn= FQDN(s) for ACME certificates [$SSL_ACME_FQDN]

assets:
-a, --assets.location= assets location [$ASSETS_LOCATION]
--assets.root= assets web root (default: /) [$ASSETS_ROOT]

logger:
--logger.stdout enable stdout logging [$LOGGER_STDOUT]
--logger.enabled enable access and error rotated logs [$LOGGER_ENABLED]
--logger.file= location of access log (default: access.log) [$LOGGER_FILE]
--logger.max-size= maximum size in megabytes before it gets rotated (default: 100) [$LOGGER_MAX_SIZE]
--logger.max-backups= maximum number of old log files to retain (default: 10) [$LOGGER_MAX_BACKUPS]

docker:
--docker.enabled enable docker provider [$DOCKER_ENABLED]
--docker.host= docker host (default: unix:///var/run/docker.sock) [$DOCKER_HOST]
--docker.network= docker network [$DOCKER_NETWORK]
--docker.exclude= excluded containers [$DOCKER_EXCLUDE]
--docker.auto enable automatic routing (without labels) [$DOCKER_AUTO]

file:
--file.enabled enable file provider [$FILE_ENABLED]
--file.name= file name (default: reproxy.yml) [$FILE_NAME]
--file.interval= file check interval (default: 3s) [$FILE_INTERVAL]
--file.delay= file event delay (default: 500ms) [$FILE_DELAY]

static:
--static.enabled enable static provider [$STATIC_ENABLED]
--static.rule= routing rules [$STATIC_RULES]

timeout:
--timeout.read-header= read header server timeout (default: 5s) [$TIMEOUT_READ_HEADER]
--timeout.write= write server timeout (default: 30s) [$TIMEOUT_WRITE]
--timeout.idle= idle server timeout (default: 30s) [$TIMEOUT_IDLE]
--timeout.dial= dial transport timeout (default: 30s) [$TIMEOUT_DIAL]
--timeout.keep-alive= keep-alive transport timeout (default: 30s) [$TIMEOUT_KEEP_ALIVE]
--timeout.resp-header= response header transport timeout (default: 5s) [$TIMEOUT_RESP_HEADER]
--timeout.idle-conn= idle connection transport timeout (default: 90s) [$TIMEOUT_IDLE_CONN]
--timeout.tls= TLS handshake transport timeout (default: 10s) [$TIMEOUT_TLS]
--timeout.continue= expect continue transport timeout (default: 1s) [$TIMEOUT_CONTINUE]

Help Options:
-h, --help Show this help message


Status

The project is under active development and may have breaking changes till v1 released.



BetterXencrypt - A Better Version Of Xencrypt - Xencrypt It Self Is A Powershell Runtime Crypter Designed To Evade AVs



A better version of Xencrypt. Xencrypt itself is a PowerShell runtime crypter designed to evade AVs. Because Xencrypt is no longer FUD and is easily caught by AMSI, I recoded the stub and now it is FUD again. The original Xencrypt, as the screenshot proof shows, was tested on Windows 8; when tested on the newest Windows 10 it is no longer FUD, which is why I wanted to make it FUD again and make everyone happy :D


This tool tested on Windows 10 v20H2

Proof-Of-FUDness (if you don't trust my word)

kinda lazy to fire up my Windows VM and retest it again


Features
  • Bypasses AMSI, Behavior Monitoring, and all modern AVs in use on MetaDefender (don't wanna test it on VirusTotal; MetaDefender is more than enough)
  • Compresses and encrypts powershell scripts
  • Has a minimal and often even negative (thanks to the compression) overhead
  • Randomizes variable names to further obfuscate the decrypter stub
  • Super easy to modify to create your own crypter variant
  • Supports recursive layering (crypter crypting the crypted output), tested up to 500 layers.
  • Supports Import-Module as well as standard running, as long as the input script also supports it
  • All features in a single file so you can take it with you anywhere!

Thanks To
  • Me for not dying when creating this tool
  • Xentropy and SecForce for creating the original Xencrypt
  • Ed Wilson AKA Microsoft Scripting Guy for the great Powershell scripting tutorials
  • and the last one is Emeric Nasi for the research on bypassing AV dynamics

Usage

It's better to run the BetterXencrypt script in Linux PowerShell, since I never tried it on Windows PowerShell. (Surprised that Linux has PowerShell? Take a look at this)

Import-Module ./betterxencrypt.ps1
Invoke-BetterXencrypt -InFile invoke-mimikatz.ps1 -OutFile xenmimi.ps1

You will now have an encrypted xenmimi.ps1 file in your current working directory. You can use it in the same way as you would the original script, so in this case:

Import-Module ./xenmimi.ps1
Invoke-Mimikatz

It also supports recursive layering via the -Iterations flag.

Invoke-BetterXencrypt -InFile invoke-mimikatz.ps1 -OutFile xenmimi.ps1 -Iterations 100

Warning though, the files can get big and generating the output file can take a very long time depending on the scripts and number of iterations requested.



Overlord - Overlord - Red Teaming Infrastructure Automation



Overlord provides a python-based console CLI which is used to build Red Teaming infrastructure in an automated way. The user provides inputs by using the tool's modules (e.g. C2, Email Server, HTTP web delivery server, Phishing server etc.) and the full infra / modules and scripts are generated automatically on a cloud provider of choice. It currently supports AWS and Digital Ocean. The tool is still under development and was inspired by, and uses, the Red-Baron Terraform implementation found on Github.

A demo infrastructure was set up in our blog post https://blog.qsecure.com.cy/posts/overlord/.

For the full documentation of the tool visit the Wiki tab at https://github.com/qsecure-labs/overlord/wiki.


Installation
git clone https://github.com/qsecure-labs/overlord.git
cd overlord/config
chmod +x install.sh
sudo ./install.sh

Acknowledgments

This project could not have been created without the awesome work of Marcello Salvati @byt3bl33d3r on the RedBaron project. That is the reason why we reference the name RedBaron in our project as well.

As Marcello stated in his acknowledgments, further thanks to:

  1. @_RastaMouse's two-part blog post series on 'Automated Red Team Infrastructure Deployment with Terraform', Part 1 and 2
  2. @bluscreenofjeff's amazing wiki on Red Team Infrastructure
  3. @spotheplanet's blog post on Red team infrastructure

Disclaimer

Overlord comes without warranty and is meant to be used by penetration testers during approved red teaming assessments and/or social engineering assessments. Overlord's developers and QSecure decline all responsibility in case the tool is used for malicious purposes or in any illegal context.


